Learning from virtual experiments to assist users of Small Angle Neutron Scattering in model selection | Scientific Reports – Nature.com
Generation of a dataset of SANS virtual experiments at KWS-1
A code template of the KWS-1 SANS instrument at FRM-II, Garching, was written in McStas (see Supplementary Information for the example code). The instrument description consisted of the following components, set consecutively: a neutron source describing the FRM-II spectrum, a velocity selector, guides that propagate the neutrons to minimize losses, a set of slits to define the divergence of the beam, a sample (one of the recently developed sasmodels components described in the McStas 3.4 documentation), a beamstop, and finally a Position Sensitive Detector (PSD) of \(144\times 256\) pixels. The sample was changed systematically between 46 SAS models (see Supplementary Information for a complete list of the models considered and their documentation), and for each model, different samples were produced by varying the parameters of the model. The set of 46 SAS models considered presented both isotropic and anisotropic scattering amplitudes. In the anisotropic models, the scattering amplitude depends on the angle between the incident beam and the orientation of the scattering objects (or structures), which is determined by the model parameters. Consequently, for non-oriented particles with analytical anisotropic models, the resulting scattering pattern can be isotropic. Whenever possible, samples were considered in the dilute regime to avoid structure factor contributions and observe only those arising from the form factor. In models with crystalline structure, or with correlations between scatterers for which an analytical expression of the scattering amplitude exists, the complete scattering amplitude was considered. In all cases, the analytical expressions were obtained from the small angle scattering models documentation of SasView20 (see Supplementary Information). The instrument template in the Supplementary Information also shows how the instrument configuration could be changed while a sample was fixed.
The set of parameters that describe the instrument configuration in a given simulation are referred to as instrument parameters, and those that define the sample description as sample parameters.
In the case of instrument parameters, a discrete set of 36 instrument configurations was allowed. These were chosen by the instrument scientist, taking into account the most frequent instrument configurations: two possible wavelengths (4.5 Å or 6 Å), three distance settings, paired as collimation length and sample-to-detector distance (8 m-1 m, 8 m-8 m, and 20 m-20 m), three slit configurations (1 cm slit aperture in both directions with a 2 cm wide Hellma cell; 1.2 cm slit aperture in both directions with a 2 cm wide Hellma cell; and 7 mm horizontal aperture and 1 cm vertical aperture with a 1 cm wide Hellma cell), and finally two sample holders of different thickness (1 mm and 2 mm). One of the advantages of MC simulations over analytical approaches to obtaining the 2D scattering pattern is that by defining the instrument parameters in the simulation, such as the size of the collimation apertures, the sample-to-detector distance, the size of the detector, the dimensions of the pixels, and so on, the smearing of the data due to instrumental resolution is automatically accounted for. Therefore, no extra convolution must be performed once the data is collected.
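As a quick sketch, the 36 configurations are simply the Cartesian product of the four discrete choices above (the variable names and slit labels here are illustrative, not those of the actual McStas template):

```python
# Enumerating the 36 discrete instrument configurations described above.
from itertools import product

wavelengths = [4.5, 6.0]                 # angstroms
distances = [(8, 1), (8, 8), (20, 20)]   # (collimation, sample-to-detector) in m
slits = ["10x10mm_cell20mm", "12x12mm_cell20mm", "7x10mm_cell10mm"]
thicknesses = [1, 2]                     # sample holder thickness in mm

configs = list(product(wavelengths, distances, slits, thicknesses))
print(len(configs))  # 2 * 3 * 3 * 2 = 36
```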
In the case of sample parameters, most parameters describing samples were continuous, and an added difficulty was that the number of parameters per model was neither the same nor even similar across models (see Fig. 5).
Distribution of models as a function of the number of parameters, showing the wide range of complexities contemplated in the set of models used in this work. There are few models that have more than 15 parameters to set.
There were some models with only two parameters (easy to sample) and several models with more than 15 parameters (hard to sample). Most of the models had around 12 parameters. For \(p\) parameters with \(n_i\) possible choices for parameter \(i\), the number of possible combinations \(N\) can be calculated as
$$\begin{aligned} N = \prod_{i=1}^{p} n_i, \end{aligned}$$
(1)
which reduces to \(N=n^p\) if \(n_i=n\) for all \(i=1,\dots,p\). With only \(n=2\) possibilities per parameter and \(p=15\), we rapidly get \(N=32768\) possible combinations for the complex models, whereas there are only \(N=4\) possible combinations for the very simple ones. The high complexity of some model descriptions did not allow simulating all possible scenarios without generating a dataset with a large imbalance between classes. Therefore, we opted to sample the defined hyper-parameter space strategically using Latin hypercube sampling21. Briefly, this sampling method partitions a given high-dimensional hyper-parameter space into hypercubes. It then randomly selects one of these hypercubes and randomly samples the variables only inside the chosen hypercube. On a later iteration, it selects a new hypercube and repeats the sampling procedure.
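A minimal sketch of the idea, as a hand-rolled NumPy sampler rather than the actual implementation of ref. 21: each parameter axis is split into as many strata as samples, one point is drawn per stratum, and the strata are permuted independently per dimension so that every axis is covered evenly.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """Minimal Latin hypercube sampler on the unit hypercube [0, 1]^n_dims:
    split each axis into n_samples equal strata, draw one point inside each
    stratum, then permute the strata independently for each dimension."""
    u = rng.random((n_samples, n_dims))
    # point k of dimension d lands in stratum [k/n_samples, (k+1)/n_samples)
    pts = (np.arange(n_samples)[:, None] + u) / n_samples
    for d in range(n_dims):
        pts[:, d] = rng.permutation(pts[:, d])
    return pts

rng = np.random.default_rng(0)
pts = latin_hypercube(100, 12, rng)  # e.g. 100 points for a 12-parameter model
```

Rescaling each column from \([0,1]\) to the bounds of the corresponding model parameter then yields the sampled parameter sets.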
Another advantage of MC simulations is that one can perform Monte Carlo integration estimates, which make it possible to include polydispersity and orientational distributions of scattering objects in a simple and direct manner. On each neutron interaction, the orientation and the polydisperse parameters of the scattering object are randomly chosen from defined probability distributions. For simplicity, distance and dimension parameters \(r_i\) of the models were allowed to be polydisperse by sampling them from Gaussian distributions (taking care to select only positive values). The value \(r_i\) selected in each MC simulation defined the mean of the Gaussian distribution, and an extra parameter \(\Delta r_i\) for each \(r_i\) was included in the MC simulation to define the corresponding width. The standard deviation of the Gaussian distribution in different simulations was allowed to vary between 0 (monodisperse) and \(r_i/2\) (very polydisperse). Angle parameters that determine the orientation of the scattering object were sampled uniformly inside an interval centered at the parameter value \(\theta_i\), with limits defined by another extra parameter \(\Delta\theta_i\). For example, in a cylinder form factor model, both the radius and the length of the cylinders can be polydisperse, and the two angles defining the orientation of the principal axis with respect to the incident beam are allowed to vary uniformly within the simulation-defined range. This gives a total of 8 parameters to include polydispersity and orientational distributions in a single simulation. For more information on how this was implemented in the MC simulation, we refer the reader to the documentation of each model provided in the Supplementary Information.
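The per-interaction sampling described above can be sketched as follows (function names and numeric values are illustrative, not the McStas sasmodels component's internal code):

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_size(r_mean, delta_r, rng):
    """Draw a polydisperse size from a Gaussian centred at r_mean with
    standard deviation delta_r, redrawing until the value is positive."""
    while True:
        r = rng.normal(r_mean, delta_r)
        if r > 0:
            return r

def sample_angle(theta, delta_theta, rng):
    """Draw an orientation angle uniformly in [theta - dtheta, theta + dtheta]."""
    return rng.uniform(theta - delta_theta, theta + delta_theta)

# e.g. a cylinder: polydisperse radius and length, plus an orientation angle
radius = sample_size(20.0, 5.0, rng)    # angstroms (illustrative values)
length = sample_size(400.0, 100.0, rng)
phi = sample_angle(0.0, 10.0, rng)      # degrees
```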
We opted to sample 100 points for each sample model in the model's hyper-parameter space, due to time constraints on the simulation side and to constraints on the database size on the machine learning side. To define the sampling space, we defined upper (\(u_b\)) and lower (\(l_b\)) bounds for each sample parameter in each SasView model description. We then took the default value of the parameter (\(p_0\)) given in the SasView documentation as the center point of the sampling region, allowing for sampling in the interval \(\left[\max(-3p_0, l_b), \min(3p_0, u_b)\right]\). All sampled parameters were continuous, except the absorption coefficient, which was restricted to two possible values (0% or 10%).
The expected dataset size was 331,200, from the 46 sample models, 2 absorption coefficients, 100 sample parameter sets per model, and 36 possible instrument settings. The 46 sample models were chosen to be representative, and also to avoid sample models of high computational cost. Given that some configurations were non-optimal, the dataset was cleaned of zero images (no neutrons arrived in the given virtual experiment) and low-statistics images. This was done by calculating the 0.02 quantile of the standard deviations of the images and removing the images below it from the database. Also, the 0.99 quantile of the maximum pixel value per image was calculated, and all images with higher maximum values were removed (for example, images from failed simulations with saturated pixels). A remaining total of 259,328 virtual experiments defined the final dataset for machine learning purposes, which is the dataset published open access14. For an insight into what the database looks like, we show a random selection of one image per model in Fig. 6. It is possible to see that there is some variance between models, but also some unfavorable configurations (inadequate instrument parameters for a given sample) which add noise and difficulty to the classification task. This figure also illustrates that certain anisotropic SAS models can result in isotropic scattering patterns when the scattering objects are completely unoriented (i.e., exhibiting a broad orientational distribution) or oriented in a particular direction with respect to the beam. In such cases, the anisotropy of the scattering pattern due to the form factor cannot be observed. Consequently, from the perspective of machine learning, the observation of an anisotropic scattering pattern directly excludes all isotropic models, whereas the observation of an isotropic scattering pattern does not allow for the direct inference that the model was isotropic.
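The quantile-based cleaning step can be sketched as follows (array and function names are assumptions; the synthetic Poisson data only stands in for the detector images):

```python
import numpy as np

def clean_dataset(images):
    """Drop the ~2% of images with the lowest standard deviation (zero or
    low-statistics images) and the ~1% with the highest per-pixel maximum
    (saturated or failed simulations), as described in the text."""
    stds = images.std(axis=(1, 2))
    maxs = images.max(axis=(1, 2))
    lo = np.quantile(stds, 0.02)
    hi = np.quantile(maxs, 0.99)
    keep = (stds > lo) & (maxs <= hi)
    return images[keep]

# stand-in for a stack of 500 raw 144x256 detector images
images = np.random.default_rng(0).poisson(5.0, size=(500, 144, 256)).astype(float)
cleaned = clean_dataset(images)
```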
An insight into the variability present among models, through random images selected from the dataset. Isotropic (red title) and anisotropic (blue title) images can be found, as well as images with high and poor counting statistics.
Given that we have a dataset of roughly 260,000 virtual experiments, comprising a set of 46 SANS models measured under different experimental conditions, we can attempt to train supervised machine learning algorithms to predict the SAS model of a sample given the SANS scattering pattern measured by the PSD at KWS-1. We take advantage here of the fact that we know the ground truth of the SAS model used to generate the data by Monte Carlo simulation. The data from a PSD can be seen as a one-channel image, so we can use all recent developments in image classification methods.
It is well known in the SANS community that the intensity profile as a function of the scattering vector \(q\) is normally plotted on a logarithmic scale, to be able to see the small features at increasing values of \(q\). In this sense, it is useful for the classification task to perform a logarithmic transformation on the measured data, to increase the contribution of the large-\(q\) features to the image variance. Since the logarithm is defined only for values larger than 0, and is positive only for values larger than 1, we first add a constant offset of +1 to all pixels and check that there are no negative values in the image. We then apply the logarithm to the intensity count in all pixels, emphasizing large-\(q\) features as can be seen in Fig. 6. Finally, we normalize all the images in the dataset to their maximum value, bringing them to values between 0 and 1 so as to be independent of the counting statistics of the measurement. The transformed data are then fed to the neural network. Mathematically speaking, the transformation reads
$$\begin{aligned} x_{i,j} = \frac{\log(x_{i,j}+1.0)}{MaxLog}, \end{aligned}$$
(2)
for the intensity of pixel \(x_{i,j}\) in row \(i\) and column \(j\), where \(MaxLog\) is the maximum of the image after applying the logarithmic transformation. All images were resized to \(180\times 180\) pixels, since the networks used in this work are designed for square input images. The value 180 is a compromise between 144 and 256, for which we believe the loss of information by interpolation and subsampling, respectively, is minimal. We decided to train Convolutional Neural Networks (CNNs) for the classification task using PyTorch22, by transferring the learning on three architectures (ResNet-5023, DenseNet24, and Inception V325). In all cases, the corresponding PyTorch default weights were used as the starting point, and all weights were allowed to be modified. We then generated an ensemble method that averaged the last-layer weights of all three CNNs and predicted based on the averaged weights. In all cases, we modified the first layer to accept the generated one-channel images of our SANS database in HDF format. We preferred the HDF format to keep floating point precision in each pixel's intensity count. The final fully-connected layer was also modified to match the 46 classes, and a soft-max layer was used to obtain values between 0 and 1, giving some notion of classification probability.
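Eq. (2) takes only a few lines in practice (array names are assumptions; the resize to \(180\times 180\) would follow, e.g. by interpolation):

```python
import numpy as np

def preprocess(image):
    """Log-transform a raw detector image and normalize it to [0, 1], per Eq. (2)."""
    assert (image >= 0).all()        # neutron counts must be non-negative
    logged = np.log(image + 1.0)     # emphasize large-q features
    return logged / logged.max()     # divide by MaxLog

# stand-in for a raw 144x256 detector image of Poisson counts
raw = np.random.default_rng(1).poisson(30.0, size=(144, 256)).astype(float)
img = preprocess(raw)
```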
The dataset was split into training, testing, and validation sets in proportions 0.70, 0.20, and 0.10, respectively. For the minimization problem in multiclass classification, the Cross Entropy loss is a natural choice of loss function. This function coincides with the multinomial logistic loss and belongs to a family of loss functions called comp-sum losses (loss functions obtained by composing a concave function, such as the logarithm in the case of the logistic loss, with a sum of functions of score differences, such as the negative exponential)15. In our case, we can write the Cross Entropy loss function as
$$\begin{aligned} l(x_n,y_n) = -\log\left(\frac{\exp(\alpha_{y_n}(x_n))}{\sum_{c=1}^{C}\exp(\alpha_{c}(x_n))}\right), \end{aligned}$$
(3)
where \(x_n\) is the input, \(y_n\) is the target label, \(\alpha_i(x)\) is the \(i\)-th output value of the last layer when \(x\) is the input, and \(C\) is the number of classes. In the extreme case where the softmax output for the correct label \(y_n\) equals 1 (and the outputs for all other labels equal 0), the quotient is equal to 1 and the logarithm makes the loss equal to 0. If the softmax output for the correct label is smaller than 1, the quotient lies between 0 and 1, the logarithm makes it negative, and the \(-1\) pre-factor transforms it into a positive value. Any accepted minimization step of this function therefore forces the score of the correct label to increase relative to the others.
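A small numeric check of Eq. (3), with illustrative score values:

```python
import numpy as np

def cross_entropy(alpha, y):
    """Eq. (3): alpha is the vector of last-layer outputs, y the true class index."""
    return -np.log(np.exp(alpha[y]) / np.sum(np.exp(alpha)))

alpha = np.array([5.0, 0.0, 0.0])
loss_correct = cross_entropy(alpha, 0)  # small: the dominant score is the target
loss_wrong = cross_entropy(alpha, 1)    # large: a low-scoring class is the target
```

When all scores are equal, the softmax output is \(1/C\) for every class and the loss reduces to \(\log C\), the loss of a maximally uncertain classifier.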
Finally, for the training phase, mini-batches with a batch size of 64 images were used, and all CNNs were trained for 30 epochs. The Adaptive Moment Estimation (Adam)26 algorithm was used for the minimization of the loss function, with a learning rate of \(\eta = 1\times 10^{-5}\). For the testing phase, a batch size of 500 images was used, and for the validation phase, batches of 1000 images were used to increase the support of the estimated final quantities.
The data was obtained from an already completed study that has been published separately19. It was collected from a sample consisting of a 60 \(\mu\)m thick brain slice from a reeler mouse after death. In the cited paper19, the authors declare that the animal procedures were approved by the institutional animal welfare committee at the Research Centre Jülich GmbH, Germany, and were in accordance with European Union guidelines for the use and care of laboratory animals. For the purposes of this work, we only use the data for validation of the presented algorithm; we did not sacrifice nor handle any animals. The contrast was obtained with deuterated formalin. The irradiated area was 1 mm \(\times\) 1 mm. The authors observed anisotropic Porod scattering (\(q<0.04\) Å\(^{-1}\)) that is connected to the preferred orientation of whole nerve fibres, also called axons. They also report a correlation ring (\(q=0.083\) Å\(^{-1}\)) arising from the myelin sheaths, a multilayer of lipid bilayers with the myelin basic protein as a spacer.