Archive for the ‘Machine Learning’ Category

What are Machine Learning Models? Types and Examples – TechTarget

What are machine learning models?

A machine learning model automates the process of identifying patterns and relationships that are hidden in data. It can use a combination of prelabeled or unlabeled data processed through various machine learning algorithms to determine the best fit for the problem to be solved.

Each machine learning algorithm represents a specific strategy to uncover patterns within a historical data set, according to Michael Shehab, principal and labs technology and innovation leader at PwC. The process of transforming machine learning algorithms into models consists of three components: representing the problem, identifying a specific task and providing feedback to guide the algorithm's quest for a solution. "The resulting model represents a function that has been learned or produced by the machine learning algorithm and is capable of mapping previously unseen examples to an accurate output," Shehab explained.

Selecting the type of model to use is a mixture of art and science. "There is no one-size-fits-all approach to understanding which model works for your organization," said Brian Steele, vice president of product management at customer analytics platform provider Gryphon.ai. Each model type will offer insights and results based on the data type and use cases. Additionally, the type and quality of the input data will drive the selection of certain types of models.

The machine learning field is rapidly evolving. When it comes to describing approaches like those used in generative AI applications, new techniques are blurring the old methods of classifying models.

There's no commonly accepted classification standard as new models are added daily, said Anantha Sekar, AI lead at Tata Consultancy Services. Still, the most common classifications of machine learning models include supervised, semi-supervised, unsupervised and reinforcement learning. These major types should all be considered along with the objective and learning approach being used, Sekar recommended.

A generative AI model, for example, may involve multiple training approaches deployed in succession. It may start with unsupervised learning on a large corpus of data followed by supervised learning to fine-tune the model and reinforcement learning to continuously tune results after deployment. "Discussing types of models is like discussing types of humans," Sekar noted. "Since each one is ultimately unique, the classifications are useful mainly for broad understanding purposes."

Data scientists each develop their own approach to training machine learning models. Training generally starts with preparing the data, identifying the use case, selecting training algorithms and analyzing the results. Shehab and PwC have developed a set of best practices for this process.

In general, there is no one best machine learning model. "Different models work best for each problem or use case," Sekar said. Insights derived from experimenting with the data, he added, may lead to a different model. The patterns of data can also change over time. A model that works well in development may have to be replaced with a different model.

A specific model can be regarded as the best only for a specific use case or data set at a certain point in time, Sekar said. The use case can add more nuance. Some uses, for example, may require high accuracy while others demand higher confidence. It's also important to consider environmental constraints in model deployment, such as memory, power and performance requirements. Other use cases may have explainability requirements that could drive decisions toward a different type of model.

Data scientists also need to consider the operational aspects of models after deployment, called ModelOps, when prioritizing one type of model over another. These considerations may include how the raw data is transformed for processing, fine-tuning processes, prompt engineering and the need to mitigate AI hallucinations. "Choosing the best model for a given situation," Sekar advised, "is a complex task with many business and technical aspects to be considered."

The terms machine learning model and machine learning algorithm are sometimes conflated to mean the same thing. But from a data science perspective, they're very different. Machine learning algorithms are used in training machine learning models.

Machine learning algorithms are the brains of the models, Steele suggested. The algorithms contain code that's used to form predictions for the models. The data the algorithms are trained on often determines the types of outputs the models create. The data acts as a source of information for the algorithm to learn from, so the models can create understandable and relevant outputs.

Put another way, an algorithm is a set of procedures that describes how to do something, Sekar explained, and a machine learning model is a mathematical representation of a real-world problem trained on machine learning algorithms. "So, the machine learning model is a specific instance," he said, "while machine learning algorithms are a suite of procedures on how to train machine learning models."

The algorithm shapes and influences what the model does. The model considers the what of the problem, while the algorithm provides the how for getting the model to perform as desired. Data is the third relevant entity because the algorithm uses the training data to train the machine learning model. In practice, therefore, a machine learning outcome depends on the model, the algorithms and the training data.
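To make the distinction concrete, here is a minimal Python sketch (a hypothetical example using scikit-learn, not drawn from the article): the algorithm is the learning procedure, the data is what it learns from, and the model is the fitted object the procedure produces.

```python
# Minimal sketch of the algorithm-vs-model distinction (hypothetical example).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Training data: the third entity, which the algorithm learns from.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# The algorithm: a procedure describing *how* to learn
# (here, fitting a logistic regression).
algorithm = LogisticRegression(max_iter=1000)

# The model: a specific trained instance produced by running the algorithm
# on the data. It holds the learned parameters.
model = algorithm.fit(X, y)
print(model.coef_)           # the learned mapping
print(model.predict(X[:3]))  # maps examples to outputs
```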

Follow this link:
What are Machine Learning Models? Types and Examples - TechTarget

Machine Learning Regularization Explained With Examples – TechTarget

What is regularization in machine learning?

Regularization in machine learning is a set of techniques used to ensure that a machine learning model can generalize to new data drawn from the same distribution as its training data. These techniques can help reduce the impact of noisy data that falls outside the expected range of patterns. Regularization can also improve the model by making it easier to detect relevant edge cases within a classification task.

Consider an algorithm specifically trained to identify spam emails. In this scenario, the algorithm is trained to classify emails that appear to be from a well-known U.S. drugstore chain and contain only a single image as likely to be spam. However, this narrow approach runs the risk of disappointing loyal customers of the chain, who were looking forward to being notified about the store's latest sales. A more effective algorithm would consider other factors, such as the timing of the emails, the use of images and the types of links embedded in the emails to accurately label the emails as spam.

This more complex model, however, would also have to account for the impact each of these added measures has on the algorithm. Without regularization, the new algorithm risks becoming overly complex and prone to bias and high variance. These concepts are elaborated below.

In short, regularization pushes the model to reduce its complexity as it is being trained, explained Bret Greenstein, data, AI and analytics leader at PwC.

"Regularization acts as a type of penalty that gets added to the loss function or the value that is used to help assign importance to model features," Greenstein said. "This penalty inhibits the model from finding parameters that may over-assign importance to its features."

As such, regularization is an important tool that can be used by data scientists to improve model training to achieve better generalization, or to improve the odds that the model will perform well when exposed to unknown examples.

Adnan Masood, chief architect of AI and machine learning at digital transformation consultancy UST, said his firm regularly uses regularization to strike a balance between model complexity and performance, adeptly steering clear of both underfitting and overfitting.

Overfitting, as described above, occurs when a model is too complex and learns noise in the training data. Underfitting occurs when a model is too simple to capture underlying data patterns.

"Regularization provides a means to find the optimal balance between these two extremes," Masood said.

Consider another example of the use of regularization in retail. In this scenario, the business wants to develop a model that can predict when a certain product might be out of stock. To do this, the business has developed a training data set with many features, such as past sales data, seasonality, promotional events, and external factors like weather or holiday.

This, however, could lead to overfitting: the model becomes too closely tied to specific patterns in the training data and, as a result, may be less effective at predicting stockouts from new, unseen data.

"Without regularization, our machine learning model could potentially learn the training data too well and become overly sensitive to noise or fluctuations in the historical data," Masood said.

In this case, a data scientist might apply a regularized linear regression model that minimizes the sum of the squared differences between actual and predicted stockout instances plus a penalty on the size of the coefficients. The penalty discourages the model from assigning too much importance to any one feature.

In addition, they might assign a lambda parameter to determine the strength of regularization. Higher values of this parameter increase regularization and lower the model coefficients (weights of the model).

When this regularized model is trained, it will balance fitting the training data and keeping the model weights small. The result is a model that is potentially less accurate on the training data and more accurate when predicting stockouts on new, unseen data.
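As a rough illustration of the idea (a sketch with invented features and data, not Masood's actual pipeline), scikit-learn's ridge regression exposes the regularization strength as the alpha parameter, which plays the role of the lambda described above: larger values shrink the coefficients more.

```python
# Hedged sketch: L2-regularized (ridge) regression for a stockout-style task.
# Feature values are synthetic stand-ins for sales, seasonality, promos, weather.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = X @ np.array([2.0, 0.5, 0.0, 1.0]) + rng.normal(scale=0.5, size=500)

# alpha is the regularization strength (the "lambda" in the text):
# larger alpha -> smaller coefficients -> simpler, less overfit model.
for alpha in (0.01, 1.0, 100.0):
    model = make_pipeline(StandardScaler(), Ridge(alpha=alpha)).fit(X, y)
    print(alpha, model.named_steps["ridge"].coef_.round(3))
```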

"In this way, regularization helps us build a robust model, better generalizes to new data and more effectively predicts stockouts, thereby enabling the business to manage its inventory better and prevent loss of sales," Masood said.

He finds that regularization is vital in managing overfitting and underfitting. It also indirectly helps control bias (error from faulty assumptions) and variance (error from sensitivity to small fluctuations in a training data set), facilitating a balanced model that generalizes well on unseen data.

Niels Bantilan, chief ML engineer at Union.ai, a machine learning orchestration platform, finds it useful to think of regularization as a way to prevent a machine learning model from memorizing the data during training.

For example, a home automation robot trained on making coffee in one kitchen might inadvertently memorize the quirks and layouts of that specific kitchen. It will likely break when presented with a new kitchen where ingredients and equipment differ from the one it memorized.

In this case, regularization forces the model to learn higher-level concepts like "coffee mugs tend to be stored in overhead cabinets" rather than learning specific quirks of the first kitchen, such as "the coffee mugs are stored in the top left-most shelf."

In business, regularization is important to operationalizing machine learning, as it can mitigate errors and save cost, since it is expensive to constantly retrain models on the latest data.

"Therefore, it makes sense to ensure they have some generalization capacity beyond their training data, so the models can handle new situations up to a certain point without having to retrain them on expensive hardware or cloud infrastructure," Bantilan said.

The term overfitting is used to describe a model that has learned too much from the training data. This can include noise, such as inaccurate data accidentally read by a sensor or a human deliberately inputting bad data to evade a spam filter or fraud algorithm. It can also include data specific to that particular situation but not relevant to other use cases, such as a store shelf layout in one store that might not be relevant to different stores in a stockout predictor.

Underfitting occurs when a model has not learned to map features to an accurate prediction for new data. Greenstein said that regularization can sometimes lead to underfitting. In that case, it is important to change the influence that regularization has during model training. Underfitting also relates to bias and variance.

Bantilan described bias in machine learning as the degree to which a model's predictions systematically deviate from the actual ground truth. For example, a spam filter that perfectly predicts the spam/not-spam labels in the training data would be a low-bias model; one that was wrong all the time would be high-bias.

Variance characterizes the degree to which the model's predictions can handle small perturbations in the training data. One good test is removing a few records to see what happens, Bantilan said. If the model's predictions remain the same, then the model is considered low-variance. If the predictions change wildly, then it is considered high-variance.
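Bantilan's perturbation test can be run with a few lines of code (a sketch under assumed data and model choices, not taken from the article): retrain on the data with a handful of records removed and measure how much the predictions change.

```python
# Sketch of a variance check: drop a few training records, retrain,
# and compare predictions on a fixed evaluation set. Hypothetical data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_train, y_train, X_eval = X[:250], y[:250], X[250:]

base = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
perturbed = DecisionTreeClassifier(random_state=0).fit(X_train[5:], y_train[5:])

# Low variance: predictions barely change. High variance: they change a lot.
disagreement = np.mean(base.predict(X_eval) != perturbed.predict(X_eval))
print(f"Prediction disagreement after removing 5 records: {disagreement:.1%}")
```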

Greenstein observed that high variance could be present when a model trained on multiple variations of data appears to learn a solution but struggles to perform on test data. This is one form of overfitting, and regularization can assist with addressing the issue.

Bharath Thota, partner in the advanced analytics practice of Kearney, a global strategy and management consulting firm, said regularization has a number of common use cases in industry.

Regularization is best regarded as a handy technique for improving ML models rather than as a specific use case. Greenstein has found it most useful when problems are high-dimensional, meaning they contain many, and sometimes complex, features. These types of problems are prone to overfitting, because a model may fail to identify simplified patterns that map features to objectives.

Regularization is also helpful with noisy data sets, such as high-dimensional data, where examples vary a lot and are subject to overfitting. In these cases, the models may learn the noise rather than a generalized way of representing the data.

It is also good for nonlinear problems since problems that require nonlinear algorithms can often lead to overfitting. These kinds of algorithms uncover complex boundaries for classifying data that map well to the training data but are only partially applicable to real-world data.

Greenstein noted that regularization is one of many tools that can assist with resolving challenges with an overfit model. Other techniques, such as bagging, reduced learning rates and data sampling methods, can complement or replace regularization, depending on the problem.

There is a range of different regularization techniques. The most common approaches rely on statistical methods such as Lasso regularization (also called L1 regularization), Ridge regularization (L2 regularization) and Elastic Net regularization, which combines the Lasso and Ridge techniques. Various other regularization techniques use different principles, such as ensembling, neural network dropout, pruning of decision tree-based models and data augmentation.

Masood said the choice of regularization method and tuning for the regularization strength parameter (lambda) largely depends on the specific use case and the nature of the data set.

"The right regularization can significantly improve model performance, but the wrong choice could lead to underperformance or even harm the model's predictive power," Masood cautioned. Consequently, it is important to approach regularization with a solid understanding of both the data and the problem at hand.

Here are brief descriptions of the common regularization techniques.

Lasso regression, aka L1 regularization. Lasso, an acronym for least absolute shrinkage and selection operator, calculates its penalty from the absolute values of the model's weights (the same absolute-value principle that underlies the median, the middle value of a data set). Kearney's Thota said this regularization technique encourages sparsity in the model, meaning it can set some coefficients to exactly zero, effectively performing feature selection.

Ridge regression, aka L2 regularization. Ridge regularization calculates its penalty from the square of each coefficient (the squared-error principle that underlies the mean, the average of a set of numbers). Thota said this technique is useful for reducing the impact of irrelevant or correlated features and helps stabilize the model's behavior.

Elastic Net (L1 + L2) regularization. Elastic Net combines both L1 and L2 techniques to improve the results for certain problems.
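The practical difference between the three penalties can be seen on synthetic data (an illustrative sketch; the exact effect depends on the data and on the regularization strength): Lasso tends to drive some coefficients exactly to zero, Ridge shrinks them all, and Elastic Net sits in between.

```python
# Illustrative comparison of L1, L2 and Elastic Net penalties on synthetic data.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet, Lasso, Ridge

X, y = make_regression(n_samples=200, n_features=10, n_informative=3,
                       noise=5.0, random_state=0)

models = {
    "Lasso (L1)": Lasso(alpha=1.0),
    "Ridge (L2)": Ridge(alpha=1.0),
    "Elastic Net (L1+L2)": ElasticNet(alpha=1.0, l1_ratio=0.5),
}
for name, model in models.items():
    coef = model.fit(X, y).coef_
    # Lasso typically drives irrelevant coefficients exactly to zero.
    print(f"{name}: {np.sum(coef == 0)} of {coef.size} coefficients are zero")
```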

Ensembling. This set of techniques combines the predictions from a suite of models, thus reducing the reliance on any one model for prediction.

Neural network dropout. This process is often used in deep learning models composed of multiple layers of neurons. It involves randomly dropping out (temporarily zeroing) a fraction of the neurons during each training pass. Bantilan said this forces the deep learning algorithm to learn an ensemble of sub-networks to achieve the task effectively.
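A minimal numpy sketch of the idea, using the common "inverted dropout" formulation (illustrative only, not tied to any particular deep learning library):

```python
# Inverted dropout on a layer's activations: at training time, randomly zero
# a fraction p of units and rescale the rest so the expected activation is
# unchanged. At inference time, dropout is disabled.
import numpy as np

def dropout(activations, p=0.5, training=True, rng=np.random.default_rng()):
    if not training or p == 0.0:
        return activations
    mask = rng.random(activations.shape) >= p   # keep each unit with prob 1-p
    return activations * mask / (1.0 - p)

layer_output = np.ones((2, 8))        # pretend activations from one layer
print(dropout(layer_output, p=0.5))   # roughly half the units zeroed
```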

Pruning decision tree-based models. This is used in tree-based models like decision trees. The process of pruning branches can simplify the decision rules of a particular tree to prevent it from relying on the quirks of the training data.

Data augmentation. This family of techniques uses prior knowledge about the data distribution to prevent the model from learning the quirks of the data set. For example, in an image classification use case, you might flip an image horizontally, add noise or blur, or crop the image. "As long as the data corruption or modification is something we might find in the real world, the model should learn how to handle those situations," Bantilan said.
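A few of those image augmentations can be sketched directly in numpy (a hypothetical example; real pipelines typically rely on an augmentation library):

```python
# Simple image augmentations: horizontal flip, additive noise and a crop.
# 'image' is a hypothetical H x W x 3 array with values in [0, 1].
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((32, 32, 3))

flipped = image[:, ::-1, :]                                        # horizontal flip
noisy = np.clip(image + rng.normal(scale=0.05, size=image.shape), 0.0, 1.0)
cropped = image[4:28, 4:28, :]                                     # central crop

augmented_batch = [image, flipped, noisy, cropped]
```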

The rest is here:
Machine Learning Regularization Explained With Examples - TechTarget

How Can Hybrid Machine Learning Techniques Help With Effective … – Dataconomy

As in many other areas of our lives, hybrid machine learning techniques can help us with effective heart disease prediction. So how can the technology of our time, machine learning, be used to improve the quality and length of human life?

Heart disease stands as one of the foremost global causes of mortality today, presenting a critical challenge in clinical data analysis. Hybrid machine learning techniques, which are highly effective at processing vast volumes of healthcare data, are increasingly promising for effective heart disease prediction.

According to the World Health Organization, heart disease takes an estimated 17.9 million lives each year. Although many developments in the field of medicine have succeeded in reducing the death rate from heart disease in recent years, we are still failing at early diagnosis. The time has come to treat ML and AI algorithms as more than simple trends.

However, effective heart disease prediction proves complex due to various contributing risk factors such as diabetes, high blood pressure and abnormal pulse rates. Several data mining and neural network techniques have been employed to gauge the severity of heart disease, but predicting it is a different matter.

This ailment is often subclinical, which is why experts recommend check-ups twice a year for anyone over the age of 30. But let's face it: human beings are lazy and look for the simplest way to do things. How hard can it be to accept an effective technological medical innovation into our lives at a time when we can do our weekly shopping at home with a single voice command?

Heart disease is one of the leading causes of death worldwide and a significant public health concern. How deadly it is depends on various factors, including the type of heart disease, its severity and the individual's overall health. But does that mean we are left without any preventive method? Is there any way to find out before it happens to us?

The speed of technological development has reached a peak we could never have imagined, especially in the last three years. This technological journey, which started with the slow integration of IoT systems such as Alexa into our lives, peaked in the last quarter of 2022 with the growing prevalence and use of ChatGPT and other LLMs. We are no longer far from the concepts of AI and ML, and these products are preparing to become the hidden power behind medical prediction and diagnostics.

Hybrid machine learning techniques can help with effective heart disease prediction by combining the strengths of different machine learning algorithms and utilizing them in a way that maximizes their predictive power.

Hybrid techniques can help in feature engineering, which is an essential step in machine learning-based predictive modeling. Feature engineering involves selecting and transforming relevant variables from raw data into features that can be used by machine learning algorithms. By combining different techniques, such as feature selection, feature extraction, and feature transformation, hybrid machine learning techniques can help identify the most informative features that contribute to effective heart disease prediction.

The choice of an appropriate model is critical in predictive modeling. Hybrid machine learning techniques excel in model selection by amalgamating the strengths of multiple models. By combining, for example, a decision tree with a support vector machine (SVM), these hybrid models leverage the interpretability of decision trees and the robustness of SVMs to yield superior predictions in medicine.

Model ensembles, formed by merging predictions from multiple models, are another avenue where hybrid techniques shine. The synergy of diverse models often surpasses individual model performance, resulting in more accurate heart disease predictions. For instance, a hybrid ensemble uniting a random forest with a gradient-boosting machine leverages both models' strengths to increase the prediction accuracy of heart disease.
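Such an ensemble can be sketched with scikit-learn's VotingClassifier (an illustrative example on synthetic, stand-in patient features, not the configuration used in any cited study):

```python
# Hedged sketch: soft-voting ensemble of a random forest and gradient boosting.
# The data here are synthetic stand-ins for tabular patient features.
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=13, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("gbm", GradientBoostingClassifier(random_state=0)),
    ],
    voting="soft",  # average predicted probabilities from both models
)
print(cross_val_score(ensemble, X, y, cv=5, scoring="roc_auc").mean())
```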

Dealing with missing values is a common challenge in medical data analysis. Hybrid machine learning techniques prove beneficial by combining imputation strategies like mean imputation, median imputation, and statistical model-based imputation. This amalgamation helps mitigate the impact of missing values on predictive accuracy.

The proliferation of large datasets poses challenges related to high-dimensional data. Hybrid approaches address this challenge by fusing dimensionality reduction techniques like principal component analysis (PCA), independent component analysis (ICA), and singular value decomposition (SVD) with machine learning algorithms. This results in reduced data dimensionality, enhancing model interpretability and prediction accuracy.
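Both of these steps, imputing missing values and reducing dimensionality, are commonly chained in front of a classifier. A minimal sketch on synthetic data with artificially injected missing values:

```python
# Sketch: mean imputation + standardization + PCA + logistic regression,
# chained in a single pipeline. Data are synthetic with injected missing values.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=400, n_features=30, random_state=0)
rng = np.random.default_rng(0)
X[rng.random(X.shape) < 0.05] = np.nan   # simulate 5% missing values

pipeline = make_pipeline(
    SimpleImputer(strategy="mean"),      # handle missing values
    StandardScaler(),
    PCA(n_components=10),                # reduce dimensionality
    LogisticRegression(max_iter=1000),
)
print(cross_val_score(pipeline, X, y, cv=5, scoring="roc_auc").mean())
```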

Traditional machine learning algorithms may falter when dealing with non-linear relationships between variables. Hybrid techniques tackle this issue effectively by amalgamating methods such as polynomial feature engineering, interaction term generation, and the application of recursive neural networks. This amalgamation captures non-linear relationships, thus improving predictive accuracy.

Hybrid machine learning techniques enhance model interpretability by combining methodologies that shed light on the model's decision-making process. For example, a hybrid model coupling a decision tree with a linear model offers interpretability akin to decision trees alongside the statistical significance provided by linear models. This comprehensive insight aids understanding and the trustworthiness of heart disease predictions.

Multiple studies have explored heart disease prediction using hybrid machine learning techniques. One such novel method, designed to enhance prediction accuracy, incorporates a combination of hybrid machine learning techniques to identify significant features for cardiovascular disease prediction.

Mohan, Thirumalai, and Srivastava propose a novel method for heart disease prediction that uses a hybrid of machine learning techniques. The method first uses a decision tree algorithm to select the most significant features from a set of patient data.

The researchers compared their method to other machine learning methods for heart disease prediction, such as logistic regression and naive Bayes. They found that their method outperformed these other methods in terms of accuracy.

The decision tree algorithm used to select features is called the C4.5 algorithm. This algorithm is a popular choice for feature selection because it is relatively simple to understand and implement, and it has been shown to be effective in a variety of applications including effective heart disease prediction.

The SVM classifier used to predict heart disease is a type of machine learning algorithm that is known for its accuracy and robustness. SVM classifiers work by finding a hyperplane that separates the data points into two classes. In the case of heart disease prediction, the two classes are patients with heart disease and patients without heart disease.
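The overall pattern the researchers describe, tree-based feature selection followed by an SVM classifier, can be sketched as follows (an illustrative approximation on synthetic data, not the authors' implementation; scikit-learn's CART-style decision tree stands in for C4.5):

```python
# Hedged sketch of the pattern: select features with a decision tree,
# then classify with an SVM. Synthetic data; not the published method.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=13, n_informative=6,
                           random_state=0)

hybrid = make_pipeline(
    SelectFromModel(DecisionTreeClassifier(random_state=0)),  # keep important features
    StandardScaler(),
    SVC(kernel="rbf"),   # the SVM finds the separating hyperplane
)
print(cross_val_score(hybrid, X, y, cv=5, scoring="accuracy").mean())
```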

The researchers suggest that their method could be used to develop a clinical decision support system for the early detection of heart disease. Such a system could help doctors to identify patients who are at high risk of heart disease and to provide them with preventive care.

The authors' method has several advantages over other machine learning methods for effective heart disease prediction. First, it is more accurate. Second, it is more robust to noise in the data. Third, it is more efficient to train and deploy.

The authors' method is still under development, but it has the potential to be a valuable tool for the early detection of heart disease. The authors plan to further evaluate their method on larger datasets and to explore ways to improve its accuracy.

In addition to the advantages mentioned by the authors, their method has further advantages as well.

The authors evaluated their method on a dataset of 13,000 patients. The dataset included information about the patients' age, sex, race, smoking status, blood pressure, cholesterol levels and other medical history. The authors found that their method was able to predict heart disease with an accuracy of 87.2%.

In another study, by Bhatt, Patel, Ghetia and Mazzero, which investigated the use of machine learning (ML) techniques to effectively predict heart disease in 2023, the researchers used a dataset of 1,000 patients with heart disease and 1,000 patients without heart disease. They used four different ML techniques: decision trees, support vector machines, random forests and neural networks.

The researchers found that all four ML techniques were able to predict heart disease with a high degree of accuracy. The decision tree algorithm had the highest accuracy, followed by the support vector machines, random forests, and neural networks.

The researchers also found that the accuracy of the ML techniques was improved when they were used in combination with each other. For example, the decision tree algorithm combined with the support vector machines had the highest accuracy of all the models.

The study's findings suggest that ML techniques can be used as an effective tool for predicting heart disease. The researchers believe these techniques could be used to develop early detection and prevention strategies for heart disease.

In addition to the findings mentioned above, the study also identified several factors associated with an increased risk of heart disease.

The study's findings highlight the importance of early detection and prevention of heart disease. By identifying people who are at risk for heart disease, we can take steps to prevent them from developing the disease.

The study is limited by its small sample size. However, the findings are promising and warrant further research. Future studies should be conducted with larger sample sizes to confirm the findings of this study.

Predicting heart disease using hybrid machine learning techniques is an evolving field with several challenges and promising future directions.

One of the primary challenges is obtaining high-quality and sufficiently large datasets for training hybrid models. This involves collecting diverse patient data, including clinical, genetic, and lifestyle factors. Choosing the most relevant features from a large pool is crucial. Hybrid techniques aim to combine different feature selection methods to enhance prediction accuracy.

Deciding which machine learning algorithms to use in hybrid models is critical. Researchers often experiment with various algorithms like random forest, K-nearest neighbor, and logistic regression to find the best combination. Interpreting hybrid model predictions can be challenging due to their complexity. Ensuring transparency and interpretability is essential for clinical acceptance.

The class distribution in heart disease datasets can be imbalanced, with fewer positive cases. Addressing this imbalance is vital for accurate predictions. Ensuring that hybrid models also generalize well to unseen data is a constant concern. Techniques like cross-validation and robust evaluation methods are crucial.
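One simple way to handle such imbalance (a sketch, not drawn from the cited studies) is to reweight the classes and evaluate with stratified cross-validation so that every fold preserves the class ratio:

```python
# Sketch: class weighting plus stratified cross-validation for an
# imbalanced binary problem (synthetic data, roughly 10% positive cases).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=1000, n_features=13, weights=[0.9, 0.1],
                           random_state=0)

clf = LogisticRegression(class_weight="balanced", max_iter=1000)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
print(cross_val_score(clf, X, y, cv=cv, scoring="roc_auc").mean())
```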

Future directions in effective heart disease prediction using hybrid machine learning techniques encompass several key areas.

A prominent trajectory in the field involves the customization of treatment plans based on individual patient profiles, a trend that continues to gain momentum. Hybrid machine learning models are poised to play a pivotal role in this endeavor by furnishing personalized risk assessments. This approach holds great promise for tailoring interventions to patients' unique needs and characteristics, potentially improving treatment outcomes.

The integration of multi-omics data, including genomics, proteomics, and metabolomics, with clinical information represents a compelling avenue for advancing effective heart disease prediction. By amalgamating these diverse data sources, hybrid model techniques can generate more accurate predictions. This holistic approach has the potential to provide deeper insights into the underlying mechanisms of heart disease and enhance predictive accuracy.

As the complexity of hybrid machine learning model techniques increases, ensuring that these models are interpretable and provide transparent explanations for their predictions becomes paramount. The development of hybrid models that offer interpretable explanations can significantly enhance their clinical utility. Healthcare professionals can better trust and utilize these models in decision-making processes, ultimately benefiting patient care.

Another promising direction involves the integration of real-time patient data streams with hybrid models. This approach enables continuous monitoring of patients, facilitating early detection and intervention in cases of heart disease. By leveraging real-time data, hybrid models can provide timely insights, potentially preventing adverse cardiac events and improving patient outcomes.

Collaboration stands as a cornerstone for future progress in effective heart disease prediction using hybrid machine learning techniques. Effective collaboration between medical experts, data scientists and machine learning researchers is instrumental in driving innovation. Combining domain expertise with advanced computational methods can lead to breakthroughs in hybrid models' accuracy and clinical applicability for heart disease prediction.

While heart disease prediction using hybrid machine learning techniques faces data, model complexity, and interpretability challenges, it holds promise for personalized medicine and improving patient outcomes through early detection and intervention. Collaboration and advancements in data collection and analysis methods will continue to shape the future of this field and perhaps humanity.

See the original post:
How Can Hybrid Machine Learning Techniques Help With Effective ... - Dataconomy

Comparative performances of machine learning algorithms in … – Nature.com

Evaluation of performances of algorithms

We selected the following seven algorithms most often used in radiomics studies for feature selection, based on filtering approaches. These filters can be grouped into three categories: those from the statistical field, including the Pearson correlation coefficient (abbreviated as Pearson in the manuscript) and the Spearman correlation coefficient (Spearman); those based on random forests, including Random Forest Variable Importance (RfVarImp) and Random Forest Permutation Importance (RfPerImp); and those based on information theory, including Joint Mutual Information (JMI), Joint Mutual Information Maximization (JMIM) and Minimum-Redundancy-Maximum-Relevance (MRMR).

These methods rank features, and then a given number of best features are kept for modeling. Three different numbers of selected features were investigated in this study: 10, 20 and 30.

Moreover, in order to estimate the impact of the feature selection step, two non-informative algorithms of feature selection were used as benchmarks: no selection which resulted in selecting all features (All) and a random selection of a given number of features (Random).

Fourteen machine-learning or statistical binary classifiers were tested, among those most often used in radiomics studies: K-Nearest Neighbors (KNN); five linear models, including Linear Regression (Lr), three penalized linear regressions (Lasso Penalized Linear Regression (LrL1), Ridge Penalized Linear Regression (LrL2) and Elastic-net Linear Regression (LrElasticNet)) and Linear Discriminant Analysis (LDA); Random Forest (RF); AdaBoost and XGBoost; three support vector classifiers, including Linear Support Vector Classifier (Linear SVC), Polynomial Support Vector Classifier (PolySVC) and Radial Support Vector Classifier (RSVC); and two Bayesian classifiers, including Binomial Naive Bayes (BNB) and Gaussian Naive Bayes (GNB).

In order to estimate the performance of each of the 126 combinations of the nine feature selection algorithms with the fourteen classification algorithms, each combination was trained using a grid-search and nested cross-validation strategy [15], as follows.

First, datasets were randomly split into three folds, stratified on the diagnostic value so that each fold had the same diagnostic distribution as the population of interest. Each fold was used in turn as the test set while the two remaining folds were used as training and cross-validation sets.

Ten-fold cross validation and grid-search were used on the training set to tune the hyperparameters maximizing the area under the receiver operating characteristic curve (AUC). Best hyperparameters were then used to train the model on the whole training set.
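A simplified sketch of this nested scheme in scikit-learn (illustrative only; the study's actual estimators and hyperparameter grids are not reproduced here):

```python
# Sketch: 3 stratified outer folds; 10-fold grid-search cross-validation on
# each training portion; AUC as the tuning metric. Synthetic radiomics-like data.
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=30, random_state=0)

outer = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
for train_idx, test_idx in outer.split(X, y):
    search = GridSearchCV(
        SVC(probability=True),
        param_grid={"C": [0.1, 1, 10], "gamma": ["scale", 0.01]},
        cv=10, scoring="roc_auc",
    )
    search.fit(X[train_idx], y[train_idx])   # tune, then refit on the whole train fold
    test_auc = roc_auc_score(y[test_idx],
                             search.predict_proba(X[test_idx])[:, 1])
    print(search.best_params_, round(test_auc, 3))
```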

In order to take into account overfitting, the metric used was the AUC penalized by the absolute value of the difference between the AUCs of the test set and the train set:

$$\text{AUC}_{\text{Cross-Validation}} = \text{AUC}_{\text{Test-Fold}} - \left| \text{AUC}_{\text{Test-Fold}} - \text{AUC}_{\text{Train-Fold}} \right|$$
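In code, the penalized metric is a one-line adjustment of the two AUCs:

```python
# Penalized AUC from the formula above: the test-fold AUC minus the absolute
# train/test gap, so overfit models (large gap) are scored lower.
def penalized_auc(auc_train_fold: float, auc_test_fold: float) -> float:
    return auc_test_fold - abs(auc_test_fold - auc_train_fold)

print(penalized_auc(auc_train_fold=0.95, auc_test_fold=0.80))  # 0.65
```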

This procedure was repeated for each of the ten datasets, for three different train-test splits and the three different numbers of selected features.

Each combination of algorithms yielded 90 (3 × 3 × 10) AUCs, apart from combinations using the All feature selection, which were associated with only 30 AUCs because no number of selected features applies, and the Random feature selection, which was repeated three times and therefore yielded 270 AUCs. Hence, in total, 13,020 AUCs were calculated.

Multifactor ANalysis of VAriance (ANOVA) was used to quantify the variability of the AUC associated with the following factors: dataset, feature selection algorithm, classifier algorithm, number of features, train-test split, imaging modality, and interactions between classifier / dataset, classifier / feature selection, dataset / feature selection, and classifier / feature selection / dataset. Proportion of variance explained was used to quantify impacts of each factor/interaction. Results are given as frequency (proportion(%)) or range (minimum value; maximum value).

For each feature selection, classifier, dataset and train-test split, the median AUC, 1st quartile (Q1) and 3rd quartile (Q3) were computed. Box plots were used to visualize results.

In addition, for feature selection algorithms and classifiers, a Friedman test [16] followed by post-hoc pairwise Nemenyi-Friedman tests were used to compare the median AUCs of the algorithms.

Heatmaps were generated to illustrate results for each Feature Selection and Classifier combination.

All the algorithms were implemented in Python (version 3.8.8). Pearson and Spearman correlations were computed using Pandas (1.2.4), the XGBoost algorithm using xgboost (1.5) and JMI, JMIM and MRMR algorithms using MIFS. All other algorithms were implemented using the scikit-learn library (version 0.24.1). Data were standardized by centering and scaling using scikit-learn StandardScaler.

See the original post here:
Comparative performances of machine learning algorithms in ... - Nature.com

Self-orienting in human and machine learning – Nature.com

See more here:
Self-orienting in human and machine learning - Nature.com