Practical approaches in evaluating validation and biases of machine learning applied to mobile health studies
In this section, we first describe how Ecological Momentary Assessments work and how they differ from assessments collected within a clinical environment. Second, we present the studies and ML use cases for each dataset. Next, we introduce the non-ML baseline heuristics and explain the ML preprocessing steps. Finally, we describe existing train-test-split approaches (cross-validation) and the splitting approaches at the user and assessment levels.
Within this context, ecological means "within the subject's natural environment", and momentary means "within this moment" and, ideally, in real time16. Assessments collected in research or clinical environments may cause recall bias in the subjects' answers and are not primarily designed to track changes in mood or behavior longitudinally. Ecological Momentary Assessments (EMA) thus increase validity and decrease recall bias. They are suitable for asking users in their daily environment about their state of being, which can change over time, by random or interval time sampling. Combining EMAs and mobile crowdsensing sensor measurements allows for multimodal analyses, which can yield new insights into, e.g., chronic diseases8,15. The datasets used within this work have EMA in common and are described in the following subsection.
From ongoing projects of our team, we are constantly collecting mHealth data as well as Ecological Momentary Assessments6,17,18,19. To investigate how the machine learning performance varies based on the splits, we selected different datasets with different use cases. To increase comparability between the use cases, however, we framed each of them as a multi-class classification task.
We train each model using historical assessments; the oldest assessment was collected at time tstart and the latest historical assessment at time tlast. A current assessment is created and collected at time tnow, a future assessment at time tnext. Depending on the study design, the actual point in time tnext may be some hours or a few weeks after tnow. For each dataset and for each user, we want to predict a feature (synonym: a question of an assessment) at time tnext using the features at time tnow. This feature at time tnext is then called the target. For each use case, a model is trained using data between tstart and tlast, and given the input data from tnow, it predicts the target at tnext. Figure 1 gives a schematic representation of the relevant points in time tstart, tlast, tnow, and tnext.
At time tstart, the first assessment is given; tlast is the last known assessment used for training, whereas tnow is the currently available assessment used as input for the classifier, and the target is predicted at time tnext.
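To make this setup concrete, the following minimal sketch pairs the features at tnow with the same user's target at tnext. It assumes a hypothetical pandas DataFrame with user_id, timestamp, feature columns, and a target column; the names are ours, not the authors', and this is an illustration of the described setup, not the authors' exact pipeline.

```python
# Minimal sketch: build (features at t_now, target at t_next) pairs per user by
# shifting the target one assessment into the future. Column names are assumptions.
import pandas as pd

def build_supervised_pairs(df: pd.DataFrame, feature_cols, target_col):
    df = df.sort_values(["user_id", "timestamp"])
    # The label of each row is the target value of that user's next assessment (t_next).
    df["target_next"] = df.groupby("user_id")[target_col].shift(-1)
    df = df.dropna(subset=["target_next"])  # the last assessment of a user has no t_next
    return df[feature_cols], df["target_next"]

# Usage (hypothetical schema):
# X_now, y_next = build_supervised_pairs(assessments, ["loudness", "stress"], "tinnitus_distress")
```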
To increase comparability between the approaches, we used the same model architecture with the same pseudo-random initialisation. The model is a Random Forest classifier with 100 trees and the Gini impurity as the splitting criterion. The whole coding was in Python 3.9, using mostly scikit-learn, pandas and Jupyter Notebooks. Details can be found on GitHub in the supplementary material.
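The stated configuration corresponds to the following scikit-learn instantiation; the concrete seed value is an assumption here, and the exact settings can be found in the repository referenced in the supplementary material.

```python
from sklearn.ensemble import RandomForestClassifier

# Random Forest with 100 trees and Gini impurity, with a fixed random state for
# reproducible pseudo-random initialisation (the seed value 42 is an assumption).
model = RandomForestClassifier(
    n_estimators=100,
    criterion="gini",
    random_state=42,
)
```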
For all datasets that we used in this study, we have ethical approvals (UNITI No. 20-1936-101, TYT No. 15-101-0204, Corona Check No. 71/20-me, and Corona Health No. 130/20-me). The following section provides an overview of the studies and the available datasets with their characteristics, and then describes each use case in more detail. A brief overview is given in Table 1, with baseline statistics for each dataset in Table 2.
To provide some more background information about the studies: in all apps, the analyses are based on the so-called EMA questionnaires (synonym: assessments), i.e., the questionnaires that are filled out repeatedly within the respective studies. This can happen several times a day (e.g., in the tinnitus study TrackYourTinnitus (TYT)) or at weekly intervals (e.g., in the studies of the Corona Health (CH) app). In all cases, the analysis is performed on the recurring questionnaires, which capture symptoms over time and in the users' real environment through unforeseen (i.e., random) notifications.
The TrackYourTinnitus (TYT) dataset has the most filled-out assessments, with more than 110,000 questionnaires as of 2022-10-24. The Corona Check (CC) study has the most users. This is because each time an assessment is filled out, a new user can optionally be created. Notably, this app has the largest ratio of non-German users and the youngest user group with the largest standard deviation. The Corona Health (CH) app, with its studies on mental health for adults and adolescents and physical health for adults, has the highest proportion of German users because it was developed in collaboration with the Robert Koch Institute and was primarily promoted in Germany. Unification of Treatments and Interventions for Tinnitus Patients (UNITI) is a European Union-wide project whose overall aim is to deliver a predictive computational model based on existing and longitudinal data19. The dataset from the UNITI randomized controlled trial is described by Simoes et al.20.
With this app, it is possible to record the individual fluctuations in tinnitus perception. With the help of a mobile device, users can systematically measure the fluctuations of their tinnitus. Via the TYT website or the app, users can also view the progress of their own data and, if necessary, discuss it with their physician.
The ML task at hand is a classification task with the target variable tinnitus distress at time tnow and the questions from the daily questionnaire as the features of the problem. The target's values range within [0, 1] on a continuous scale. To make it a classification task, we created bins with a step size of 0.2, resulting in 5 classes. The features are perception, loudness, and stressfulness of tinnitus, as well as the current mood, arousal, and stress level of a user, the concentration level while filling out the questionnaire, and perception of the worst tinnitus symptom. A detailed description of the features has already been given in previous work21. Of note, the time delta between two assessments of one user at tnow and tnext varies between users; its median value is 11 hours.
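The binning of the continuous target into five classes can be sketched as follows; the bin edges follow the step size of 0.2 described above, while the function itself is our illustration.

```python
import numpy as np

def bin_target(values):
    # Continuous values in [0, 1] -> 5 ordinal classes with a step size of 0.2.
    edges = [0.2, 0.4, 0.6, 0.8]
    return np.digitize(values, edges, right=True)

# Example: bin_target([0.05, 0.25, 0.95]) -> array([0, 1, 4])
```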
The overall goal of UNITI is to treat the heterogeneity of tinnitus patients on an individual basis. This requires understanding more about the patient-specific symptoms that are captured by EMA in real time.
The use case we created for UNITI is similar to that of TYT. The target variable encumbrance, coded as cumberness, which was also continuously recorded, was divided into an ordinal scale from 0 to 1 in 5 steps. Features also include momentary assessments of the user during completion, such as jawbone, loudness, movement, stress, emotion, and questions about momentary tinnitus. The data was collected using our mobile apps7. Of note, the median time gap between two assessments of a user is 24 hours on average.
At the beginning of the COVID-19 pandemic, it was not easy to get initial feedback about an infection, given the lack of knowledge about the novel virus and the absence of widely available tests. To assist all citizens in this regard, we launched the mobile health app Corona Check together with the Bavarian State Office for Health and Food Safety22.
The Corona Check (CC) use case predicts whether a user has a COVID-19 infection based on a list of given symptoms23. It was developed early in the pandemic in 2020 and helped people get a quick estimate of an infection without having an antigen test. The target variable has four classes: first, "suspected coronavirus (COVID-19) case"; second, "symptoms, but no known contact with confirmed corona case"; third, "contact with confirmed corona case, but currently no symptoms"; and last, "neither symptoms nor contact".
The features are a list of Boolean variables that were known at the time to be typically associated with a COVID-19 infection, such as fever, a sore throat, a runny nose, cough, loss of smell, loss of taste, shortness of breath, headache, muscle pain, diarrhea, and general weakness. Depending on the answers given by a user, the application programming interface returned one of the classes. The median time gap between two assessments of the same user is 8 hours on average, with a much larger standard deviation of 24.6 days.
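The class assignment implied by these four target definitions can be illustrated with the following sketch; it is inferred from the class descriptions above and is not the actual API implementation.

```python
def corona_check_class(has_symptoms: bool, had_contact: bool) -> str:
    # Simplified rule inferred from the four class definitions (illustration only).
    if has_symptoms and had_contact:
        return "suspected coronavirus (COVID-19) case"
    if has_symptoms:
        return "symptoms, but no known contact with confirmed corona case"
    if had_contact:
        return "contact with confirmed corona case, but currently no symptoms"
    return "neither symptoms nor contact"
```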
The last four use cases are all derived from a larger COVID-19-related mHealth project called Corona Health6,24. The app was developed in collaboration with the Robert Koch Institute and was primarily promoted in Germany; it includes several studies on the mental health, physical health, or stress level of a user. A user can download the app and then sign up for a study. He or she then receives a one-time baseline questionnaire, followed by recurring follow-ups with time gaps that vary between studies. The follow-up assessment of CHA has a total of 159 questions, including a full PHQ9 questionnaire25. We then used the nine questions of PHQ9 as features at tnow to predict the level of depression of this user at tnext. Depression levels are ordinally scaled from None to Severe in a total of 5 classes. The median time gap between two assessments of the same user is 7.5 days. That is, the models predict the future over this time interval.
Similar to the adult cohort, the mental health of adolescents during the pandemic and its lockdowns is also captured by our app using EMA.
A lightweight version of the mental health questionnaire for adults was also offered to adolescents. However, this did not include a full PHQ9 questionnaire, so we created a different use case. The target variable, classified on a 4-level ordinal scale, is perceived dejection derived from the PHQ instruments; the features are a subset of quality of life assessments and PHQ questions, such as concernment, tremor, comfort, leisure quality, lethargy, prostration, and irregular sleep. For this study, the median time gap between two follow-up assessments is 7.3 days.
Analogous to the mental health of adults, this study aims to track how the physical health of adults changes during the pandemic period.
Adults had the option to sign up for a study with recurring assessments asking for their physical health. The target variable to be classified asks about the constraints in everyday life that arise due to physical pain at tnext. The features for this use case include aspects like sport, nutrition, and pain at tnow. The median time gap of two assessments for the same user is 14.0 days.
This additional study within the Corona Health app asks users about their stress level on a weekly basis. Both features and target are assessed on a five-level ordinal scale from never to very often. The target asks about the ability to manage stress; the features include the first nine questions of the Perceived Stress Scale instrument26. The median time gap between two assessments of the same user is 7.0 days on average.
We also want to compare the ML approaches with a baseline heuristic (synonym: baseline model). A baseline heuristic can be a simple ML model like a linear regression or a small decision tree, or alternatively, depending on the use case, it could also be a simple statement like "The next value equals the last one". The typical approach for improving ML models is to estimate the generalization error of the model on a benchmark dataset when compared to a baseline heuristic. However, it is often not clear which baseline heuristic to consider: the same model architecture as the benchmark model, but without tuned hyperparameters? A simple, intrinsically explainable model with or without hyperparameter tuning? A random guess? A naive guess, in which the majority class is predicted? Since we have approaches on a user level (i.e., we consider users when splitting) and on an assessment level (i.e., we ignore users when splitting), we should also create baseline heuristics on both levels. We additionally account for within-user variance in Ecological Momentary Assessments by averaging a user's previously known assessments. Previously known here means that we calculate the mode or median of all assessments of a user that are older than the given timestamp. In total, this leads to four baseline heuristics (user-level latest, user-level average, assessment-level latest, assessment-level average) that do not use any machine learning but simple heuristics. On the assessment level, the latest known target or the mean of all known targets so far is taken to predict the next target, regardless of the user id of the assessment. On the user level, either the last known value or the median or mode value of this user is taken to predict the target. This, in turn, leads to a cold-start problem for users that appear for the first time in a dataset. In this case, either the last known value or the mode or median of all assessments known so far is taken to predict the target.
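A minimal sketch of the four baseline heuristics follows, assuming a pandas DataFrame with user_id, timestamp, and target columns (hypothetical names); rounding the mean to the nearest class label on the assessment level and using the mode on the user level are our assumptions for ordinal targets.

```python
import pandas as pd

def assessment_level_latest(history: pd.DataFrame):
    # Latest known target overall, regardless of the user id.
    return history.sort_values("timestamp")["target"].iloc[-1]

def assessment_level_average(history: pd.DataFrame):
    # Mean of all known targets so far, rounded to the nearest ordinal class.
    return int(round(history["target"].mean()))

def user_level_latest(history: pd.DataFrame, user_id):
    user_hist = history[history["user_id"] == user_id]
    if user_hist.empty:  # cold start: fall back to the assessment level
        return assessment_level_latest(history)
    return user_hist.sort_values("timestamp")["target"].iloc[-1]

def user_level_average(history: pd.DataFrame, user_id):
    user_hist = history[history["user_id"] == user_id]
    if user_hist.empty:  # cold start: fall back to the assessment level
        return assessment_level_average(history)
    return user_hist["target"].mode().iloc[0]
```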
Before the data and approaches could be compared, it was necessary to homogenize them. For all approaches to work on all datasets, at least the following information is necessary: assessment_id, user_id, timestamp, features, and the target. Any other information, such as GPS data or additional answers to questions of the assessment, was not included in the ML pipeline. Additionally, targets that were collected on a continuous scale had to be binned into an ordinal scale of five classes. For easier interpretation and readability of the outputs, we also created label encodings for each target. To keep the pre-processing consistent, we created helper utilities in Python so that the same functions were applied to each dataset. For missing values, we created a user-wise missing-value treatment. More precisely, if a user skipped a question in an assessment, we filled the missing value with the mean or mode (mode = most common value) of all other answers of this user to this question. If a user had only one assessment, we filled it with the overall mean for this question.
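The user-wise missing-value treatment for numeric answers can be sketched as follows; column names are assumptions, and categorical answers would use the mode analogously.

```python
import pandas as pd

def impute_user_wise(df: pd.DataFrame, feature_cols):
    df = df.copy()
    for col in feature_cols:
        user_mean = df.groupby("user_id")[col].transform("mean")
        df[col] = df[col].fillna(user_mean)       # fill with the user's own mean
        df[col] = df[col].fillna(df[col].mean())  # fall back to the overall mean
    return df
```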
For each dataset and for each script, we set random states and seeds to enhance reproducibility. For the outer validation set, we assigned the first 80% of all users that signed up for a study to the training set and the latest 20% to the test set. To ensure comparability, the test users were the same for all approaches. We did not shuffle the users, in order to simulate a deployment scenario where new users join the study. This also introduces potential concept drift from the training set to the test set and thus improves the realism of the simulation.
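A sketch of this outer split follows, using each user's first assessment timestamp as a proxy for the registration time (an assumption; the exact implementation is in the referenced repository).

```python
def outer_split(df, train_fraction=0.8):
    # Users in order of their first assessment; no shuffling.
    ordered_users = (
        df.sort_values("timestamp")
          .drop_duplicates("user_id")["user_id"]
          .tolist()
    )
    n_train = int(len(ordered_users) * train_fraction)
    train_users = set(ordered_users[:n_train])
    train_df = df[df["user_id"].isin(train_users)]
    test_df = df[~df["user_id"].isin(train_users)]
    return train_df, test_df
```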
For the cross-validation within the training set, which we call internal validation, we chose a total of 5 folds with 1 validation fold. We then applied the four baseline heuristics (on the user level and the assessment level, with either the latest target or the average target as prediction) to calculate the within-training-set standard deviation and mean of the weighted F1 scores across the training folds. The mean and standard deviation of the weighted F1 score then serve as an estimate of the performance of our model on the test set.
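For the ML models, the internal validation with five folds and the weighted F1 score can be sketched as follows; this is a simplified illustration without user grouping (the group-aware variant is shown further below).

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def internal_validation(X_train, y_train, random_state=42):
    model = RandomForestClassifier(n_estimators=100, criterion="gini", random_state=random_state)
    scores = cross_val_score(model, X_train, y_train, cv=5, scoring="f1_weighted")
    return scores.mean(), scores.std()  # estimate of test-set performance and its spread
```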
We call one approach superior to another if the final score is higher. The final score to evaluate an approach is calculated as:
$${f}_{1}^{final}={f}_{1}^{test}-\alpha \,\sigma \left({f}_{1}^{train}\right)\qquad (1)$$
If the standard deviation between the folds during training is large, the final score is lower. The test set must not contain any selection bias with respect to the underlying population. The pre-factor α of the standard deviation is another hyperparameter: the more important model robustness is for the use case, the higher α should be set.
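Eq. (1) translates directly into a small helper; the default value of α below is arbitrary and must be chosen per use case.

```python
def final_score(f1_test: float, f1_train_std: float, alpha: float = 1.0) -> float:
    # f1_final = f1_test - alpha * sigma(f1_train), see Eq. (1).
    return f1_test - alpha * f1_train_std
```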
Within cross-validation, there exist several approaches for splitting the data into folds and validating them, such as the k-fold approach with k as the number of folds in the training set. Here, k−1 folds form the training folds and one fold is the validation fold27. One can then calculate k performance scores and their standard deviation to get an estimate of the performance of the model on the test set, which itself is an estimate of the model's performance after deployment (see also Fig. 2).
Schematic visualisation of the steps required to perform a k-fold cross-validation, here with k=5.
In addition, the following strategies exist: first, (repeated) stratified k-fold, in which the target distribution is retained in each fold, which can also be seen in Fig. 3; after shuffling the samples, the stratified split can be repeated3. Second, leave-one-out cross-validation28, in which the validation fold contains only one sample while the model is trained on all other samples. And third, leave-p-out cross-validation, in which \(\binom{n}{p}\) train-test pairs are created, with n being the number of assessments (synonym: samples)29.
While this approach retains the class distribution in each fold, it still ignores user groups. Each color represents a different class or user id.
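The cross-validation variants named above are available in scikit-learn, for example:

```python
from sklearn.model_selection import KFold, LeaveOneOut, LeavePOut, RepeatedStratifiedKFold

kf = KFold(n_splits=5)                                   # plain k-fold with k = 5
rskf = RepeatedStratifiedKFold(n_splits=5, n_repeats=3)  # stratified split, repeated after shuffling
loo = LeaveOneOut()                                      # validation fold of exactly one sample
lpo = LeavePOut(p=2)                                     # all C(n, p) train-test pairs
```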
These approaches, however, do not account for the peculiarities of our mHealth data. To be more specific, they do not account for users (synonym: groups, subjects) who generate daily assessments (synonym: samples) with a high variance.
To precisely explain the splitting approaches, we would like to differentiate between the terms fold and set. We call a chunk of samples (synonym: assessments, filled-out questionnaires) a set in the outer split of the data, from which we cut off the final test set. Within the training set, however, we split further to create training and validation folds. That is, when we use the term fold, we are in the context of cross-validation; when we use the term set, we are in the outer split of the ML pipeline. Figure 4 visualizes this approach. Following this, we define 4 different approaches to split the data. For one of them, we ignore the fact that there are users; for the other three, we do not. We call these approaches user-cut, average-user, user-wise, and time-cut. All approaches have in common that the first 80% of all users are always in the training set and the remaining 20% are in the test set. A schematic visualization of the splitting approaches is shown in Fig. 5. Within the training set, we then split on the user level for the approaches user-cut, average-user, and user-wise, and on the assessment level for the approach time-cut.
In the second step, users are ordered by their study registration time, with the initial 80% designated as training users and the remaining 20% as test users. Subsequently, assessments by training users are allocated to the training set, and those by test users to the test set. Within the training set, user grouping dictates the validation approach: group cross-validation is applied if users are declared as a group; otherwise, standard cross-validation is utilized. We compute the average F1 score across the training folds, \(f_1^{train}\), and the F1 score on the test set, \(f_1^{test}\). The standard deviation of \(f_1^{train}\), \(\sigma(f_1^{train})\), indicates model robustness. The hyperparameter α adjusts the emphasis on robustness, with higher values prioritizing it. Ultimately, \(f_1^{final}\), which is a more precise estimate if group cross-validation is applied, offers a refined measure of model performance in real-world scenarios.
Yellow means that this sample is part of the validation fold, green means it is part of a training fold. Crossed out means that the sample has been dropped in that approach because it does not meet the requirements. Users can be sorted by time to accommodate any concept drift.
In the following section, we explain the splitting approaches in more detail. The time-cut approach ignores the fact that there are groups in the dataset and simply creates validation folds based on the time the assessments arrive in the database. In this example, the month in which a sample was collected is known; more precisely, all samples from January until April are in the training set, while May is in the test set. The user-cut approach shuffles all user ids and creates five data folds with distinct user groups. It ignores the time dimension of the data but provides user-distinct training and validation folds, which is similar to the GroupKFold cross-validation approach as implemented in scikit-learn30. The average-user approach is very similar to the user-cut approach; however, each answer of a user is replaced by the median or mode answer of this user up to the point in question to reduce within-user variance. While all of the above-mentioned approaches require only a single model to be trained, the user-wise approach requires as many models as there are distinct users in the dataset. Therefore, for each user, 80% of his or her assessments are used to train a user-specific model, and the remaining 20% of the time-sorted assessments are used to test the model. This means that for this approach, we can directly evaluate on the test set, as each model is user-specific and the cold-start problem is solved by training the model on the first assessments of this user. If a user has fewer than 10 assessments, he or she is not evaluated with this approach.
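As an illustration of the user-cut approach, the following sketch uses scikit-learn's GroupKFold to create five validation folds with distinct user groups; it assumes NumPy arrays for features, labels, and user ids, and is not the authors' exact implementation.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import GroupKFold

def user_cut_cv(X_train, y_train, user_ids, random_state=42):
    scores = []
    gkf = GroupKFold(n_splits=5)
    for train_idx, val_idx in gkf.split(X_train, y_train, groups=user_ids):
        model = RandomForestClassifier(n_estimators=100, criterion="gini", random_state=random_state)
        model.fit(X_train[train_idx], y_train[train_idx])
        preds = model.predict(X_train[val_idx])
        scores.append(f1_score(y_train[val_idx], preds, average="weighted"))
    return scores  # per-fold weighted F1 scores; their mean and std feed into Eq. (1)
```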
Approval for the UNITI randomized controlled trial and the UNITI app was obtained from the Ethics Committee of the University Clinic of Regensburg (ethical approval No. 20-1936-101). All users read and approved the informed consent before participating in the study. The study was carried out in accordance with relevant guidelines and regulations. The procedures used in this study adhere to the tenets of the Declaration of Helsinki. The TrackYourTinnitus (TYT) study was approved by the Ethics Committee of the University Clinic of Regensburg (ethical approval No. 15-101-0204). The Corona Check (CC) study was approved by the Ethics Committee of the University of Würzburg (ethical approval No. 71/20-me) and the university's data protection officer and was carried out in accordance with the General Data Protection Regulation of the European Union. The procedures used in the Corona Health (CH) study were in accordance with the 1964 Declaration of Helsinki and its later amendments and were approved by the Ethics Committee of the University of Würzburg, Germany (No. 130/20-me). Ethical approvals include secondary use. The data from this study are available on request from the corresponding author. The data are not publicly available, as the informed consent of the participants did not provide for public publication of the data.
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.