Media Search:



Machine learning and thought, climate impact on health, Alzheimer’s … – Virginia Tech

One of the world's leaders in computational psychiatry will kick off the upcoming Maury Strauss Distinguished Public Lecture Series at the Fralin Biomedical Research Institute at VTC in September.

The public lectures bring innovators and thought leaders in science, medicine, and health from around the globe to the Health Sciences and Technology campus in Roanoke.

Leading the series with a discussion of machine learning and human thought is Read Montague, the Virginia Tech Carilion Vernon Mountcastle Research Professor and director of the Center for Human Neuroscience Research at the Fralin Biomedical Research Institute at VTC.

Montague's research led to the development of the reward prediction error hypothesis, among the most influential ideas about the basis of human decision-making in health and in neuropsychiatric disorders, and more recently to first-of-their-kind observations in the human brain of how the neurochemicals dopamine and serotonin shape people's perceptions of the world around them.

He will share details of how his data-driven neuroscience research applies machine learning to better identify and treat diseases of the brain at 5:30 p.m. on Sept. 28 at the institute.

Montague, who is working with clinicians and research centers worldwide to gather data on brain signaling, is also a professor in the Department of Physics at Virginia Tech's College of Science.

Next in the series is J. Marshall Shepherd, who started his career as a meteorologist and became a leading international expert in weather and climate. He is an elected member of three of the nation's most influential scientific academies: the National Academy of Sciences, the National Academy of Engineering, and the American Academy of Arts and Sciences.

How is his work part of a series on health? The World Health Organization recognizes climate change as the single biggest health threat facing humanity. Shepherd will address the intersection of climate, risk and perception.

Bookending the series in May 2024 is Rick Woychik, director of the National Institute of Environmental Health Sciences at the National Institutes of Health. The molecular geneticist oversees federal funding for biomedical research related to environmental influences, including climate change, on human health and disease.

Other lectures in the series address Alzheimer's disease, infant nutrition, dementia, COVID-19 and cardiovascular outcomes, and locomotor learning in children with brain injury.

"We look forward to joining with members of the wider community to better understand these exciting new innovations and insights that are germane to health," said Michael Friedlander, Virginia Tech's vice president for health sciences and technology and executive director of the Fralin Biomedical Research Institute. "This is an incredible collection of speakers who represent some of the best thinking in science, medicine, and policy in the context of improving health. We are also proud that our own Read Montague is among them, and we look forward to sharing this research with the wider community."

The free public lectures are named for Maury Strauss, a Roanoke businessman and longtime community benefactor who recognized the value of welcoming leaders in science, medicine, and health to share their work. The 2023-24 series continues a tradition that began in 2011 and highlights the research institute's commitment to the community.


The public is invited to attend the lectures, which begin with a 5 p.m. reception. Presentations begin at 5:30 p.m. in 2 Riverside at the Fralin Biomedical Research Institute. All are free, in person, and open to the public. Community attendance is encouraged. To make the lectures accessible to a wider audience, most are streamed live via Zoom and archived.

In addition to the Maury Strauss Distinguished Public Lectures, the Fralin Biomedical Research Institute also hosts the Pioneers in Biomedical Research Seminars and the Timothy A. Johnson Medical Scholar Lecture Series, as well as other conferences, programs, lectures, and special events.

Originally posted here:
Machine learning and thought, climate impact on health, Alzheimer's ... - Virginia Tech

Revolutionizing Drug Development Through Artificial Intelligence … – Pharmacy Times

The field of drug development stands at a pivotal crossroads, where the convergence of technological advancements and medical innovation is transforming traditional paradigms. At the forefront of this transformation lie artificial intelligence (AI) and machine learning (ML), powerful tools that are revolutionizing the drug discovery and development processes. The seamless integration of AI/ML has the potential to accelerate research and enhance efficiency, ushering in a new era of personalized medicine.


The FDA acknowledges the growing adoption of AI/ML across various stages of the drug development process and across diverse therapeutic domains. There has been a noticeable surge in the inclusion of AI/ML components in drug and biologic application submissions in recent years.

Moreover, these submissions encompass a broad spectrum of drug development activities, spanning from initial drug discovery and clinical investigations to post-market safety monitoring and advanced pharmaceutical manufacturing.1 In a recent reflection paper, the European Medicines Agency acknowledges the rapid evolution of AI and the need for a regulatory process to support the safe and effective development, regulation, and use of human and veterinary medicines.2

AI and ML tools possess the capability to proficiently aid in data acquisition, transformation, analysis, and interpretation throughout the lifecycle of medicinal products. Their utility spans various aspects, including replacing, reducing, and refining the use of animal models in preclinical development through AI/ML modeling approaches. During clinical trials, AI/ML systems can assist in identifying patients based on specific disease traits or clinical factors, while also supporting data collection and analysis that will subsequently be provided to regulatory bodies as part of marketing authorization procedures.

AI/ML technologies offer unprecedented capabilities in deciphering complex biological data, predicting molecular interactions, and identifying potential drug candidates. These technologies empower researchers to analyze vast datasets with greater speed and precision than ever before. For example, AI algorithms can sift through enormous databases of chemical compounds to identify molecules with the desired properties, significantly expediting the early stages of drug discovery.
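
As a rough illustration of that kind of compound screening, the sketch below trains a classifier on molecular fingerprints and uses it to rank a candidate library. It is a minimal sketch, not any vendor's actual pipeline: the SMILES strings and activity labels are placeholders, and it assumes the RDKit and scikit-learn libraries.

```python
# Minimal virtual-screening sketch (placeholder data, not a validated pipeline).
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

def fingerprint(smiles: str) -> np.ndarray:
    """Convert a SMILES string into a 2048-bit Morgan fingerprint."""
    mol = Chem.MolFromSmiles(smiles)
    return np.array(AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048))

# Placeholder training set: compounds labeled 1 (active) or 0 (inactive).
train_smiles = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O", "CCN(CC)CC"]
train_labels = [0, 0, 1, 0]

X = np.stack([fingerprint(s) for s in train_smiles])
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, train_labels)

# Rank a placeholder candidate library by predicted probability of activity.
library = ["CC(=O)Nc1ccc(O)cc1", "CCCCCC"]
scores = model.predict_proba(np.stack([fingerprint(s) for s in library]))[:, 1]
for smiles, score in sorted(zip(library, scores), key=lambda pair: -pair[1]):
    print(f"{smiles}: predicted activity {score:.2f}")
```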

One of the critical challenges in drug development is the identification and validation of suitable drug targets. AI/ML algorithms can analyze genetic, genomic, and proteomic data to pinpoint potential disease targets. By recognizing patterns and relationships in biological information, AI can predict the likelihood of a target's efficacy, enabling researchers to make informed decisions before embarking on laborious and costly experimental processes.

The process of screening potential drug candidates involves evaluating their impact on biological systems. AI/ML models can predict the behavior of compounds within complex cellular environments, streamlining the selection of compounds for further testing. This predictive approach saves time and resources, as only the most promising candidates advance to the next stages of development.

AI/ML-driven computational simulations are transforming drug design by predicting the interaction between molecules and target proteins. These simulations aid in designing drugs with enhanced specificity, potency, and minimal adverse effects. Consequently, AI-guided rational drug design expedites the optimization of lead compounds, fostering precision medicine initiatives.

The utilization of AI/ML in clinical trials has immense potential to improve patient recruitment, predict patient responses, and optimize trial designs. These technologies can analyze patient data to identify potential participants, forecast patient outcomes, and tailor treatment regimens for individual subjects. This leads to more efficient trials, reduced costs, and improved success rates.

Although the integration of AI/ML technologies into drug development has the potential to revolutionize the field, it also comes with several inherent risks and challenges that must be carefully considered.

AI and ML are reshaping the drug development landscape, from target identification to clinical trial optimization. Their ability to analyze complex biological data, predict molecular interactions, and expedite decision-making has the potential to accelerate drug discovery, reduce costs, and improve patient outcomes.

As AI/ML continues to evolve, it will undoubtedly play an increasingly pivotal role in driving innovation and transforming the pharmaceutical industry, leading us toward a more efficient and personalized approach to drug development and health care. Although AI and ML hold immense promise in revolutionizing drug development, their adoption is not without risks.

Careful consideration of these challenges, along with robust validation, regulation, and transparent reporting, is essential to harness the benefits of AI/ML while mitigating potential pitfalls in advancing pharmaceutical innovation.

References

See the article here:
Revolutionizing Drug Development Through Artificial Intelligence ... - Pharmacy Times

Advanced Space-led Team Applying Machine Learning to Detect … – Space Ref

Advanced Space LLC, a leading space tech solutions company, is pleased to announce that an Advanced Space-led team has been chosen to apply machine learning (ML) capabilities to detect, track, and characterize space debris for the IARPA Space Debris Identification and Tracking (SINTRA) program.

Space debris, the byproduct of human activity in space, presents a major hazard to space operations. Advanced Space and its teammates Orion Space Solutions and ExoAnalytic Solutions are applying advanced ML techniques to find and identify small debris (0.1-10 cm) under a new SINTRA contract from the Intelligence Advanced Research Projects Activity (IARPA).

"Space debris is an exponentially growing problem that threatens all activity in space, which Congress is now recognizing as critical infrastructure," said Principal Investigator Nathan R. "The well-known Kessler syndrome will inevitably make Earth orbit unusable unless we mitigate it, and the first step is developing the capability to maintain persistent knowledge of the debris population. Through our participation in the SINTRA program, our team aims to revolutionize the global space community's knowledge of the space debris problem."

Currently, there are over 100 million objects greater than 1 mm orbiting the Earth; however, less than 1 percent of the debris that could cause mission-ending damage is currently tracked. The Advanced Space team's solution, the Multi-source Extended-Range Mega-scale AI Debris (MERMAID) system, will feature a sensing system to gather data; ground data processing incorporating ML models to observe, detect, and characterize debris below the threshold of traditional methods; and a catalog of this information. A key component of this solution is that the team will use ML methods to decrease the signal-to-noise ratio (SNR) required for detecting debris signatures in traditional optical and radar data.
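
MERMAID's internals are not described in the announcement, so purely as an illustration of detecting signatures below the usual SNR threshold, here is a minimal sketch that trains a small convolutional network to flag faint streaks in synthetic noisy patches. Everything here, from the patch simulator to the network shape, is an assumption for demonstration, not the team's actual system.

```python
# Illustrative only: learn to detect faint streaks buried in sensor noise.
import torch
import torch.nn as nn

def make_patch(has_streak: bool, size: int = 32, amplitude: float = 0.5) -> torch.Tensor:
    """Simulate a noisy sensor patch, optionally adding a streak below the noise floor."""
    patch = torch.randn(1, size, size)                 # background noise, sigma = 1
    if has_streak:
        row = torch.randint(4, size - 4, (1,)).item()
        patch[0, row, :] += amplitude                  # faint horizontal streak
    return patch

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 2),                                  # streak / no streak
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    labels = torch.randint(0, 2, (64,))
    batch = torch.stack([make_patch(bool(y)) for y in labels])
    loss = loss_fn(model(batch), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```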

Advanced Space CEO Bradley Cheetham said, "Monitoring orbital debris is critical to the sustainable exploration, development, and settlement of space. We are proud of the work the team is doing to advance the state of the art by bringing scale and automation to this challenge."

ABOUT ADVANCED SPACE:

Advanced Space (https://advancedspace.com/) supports the sustainable exploration, development, and settlement of space through software and services that leverage unique subject matter expertise to improve the fundamentals of spaceflight. Advanced Space is dedicated to improving flight dynamics, technology development, and expedited turn-key missions to the Moon, Mars, and beyond.

Read the original:
Advanced Space-led Team Applying Machine Learning to Detect ... - Space Ref

What are Machine Learning Models? Types and Examples – TechTarget

What are machine learning models?

A machine learning model automates the process of identifying patterns and relationships that are hidden in data. It can use a combination of prelabeled or unlabeled data processed through various machine learning algorithms to determine the best fit for the problem to be solved.

Each machine learning algorithm represents a specific strategy to uncover patterns within a historical data set, according to Michael Shehab, principal and labs technology and innovation leader at PwC. The process of transforming machine learning algorithms into models consists of three components: representing the problem, identifying a specific task and providing feedback to guide the algorithm's quest for a solution. "The resulting model represents a function that has been learned or produced by the machine learning algorithm and is capable of mapping previously unseen examples to an accurate output," Shehab explained.
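
As a minimal sketch of that relationship in scikit-learn (an assumed library choice, with the iris data set standing in for any historical data), the algorithm is the learning strategy and the fitted model is the learned function that maps unseen examples to outputs:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

algorithm = DecisionTreeClassifier(max_depth=3)  # the learning strategy
model = algorithm.fit(X, y)                      # the learned function

# The fitted model maps a previously unseen example to an output.
print(model.predict([[5.1, 3.5, 1.4, 0.2]]))
```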

Selecting the type of model to use is a mixture of art and science. "There is no one-size-fits-all approach to understanding which model works for your organization," said Brian Steele, vice president of product management at customer analytics platform provider Gryphon.ai. Each model type will offer insights and results based on the data type and use cases. Additionally, the type and quality of the input data will drive the selection of certain types of models.

The machine learning field is rapidly evolving. When it comes to describing approaches like those used in generative AI applications, new techniques are blurring the old methods of classifying models.

"There's no commonly accepted classification standard as new models are added daily," said Anantha Sekar, AI lead at Tata Consultancy Services. Still, the most common classifications of machine learning models include supervised, semi-supervised, unsupervised and reinforcement learning. These major types should all be considered along with the objective and learning approach being used, Sekar recommended.

A generative AI model, for example, may involve multiple training approaches deployed in succession. It may start with unsupervised learning on a large corpus of data followed by supervised learning to fine-tune the model and reinforcement learning to continuously tune results after deployment. "Discussing types of models is like discussing types of humans," Sekar noted. "Since each one is ultimately unique, the classifications are useful mainly for broad understanding purposes."
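
A compact, hedged sketch of two of those categories side by side, again using scikit-learn and the iris data as stand-ins: the supervised learner trains on labels, while the unsupervised learner must discover structure on its own.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

# Supervised: the algorithm trains on features plus known labels.
supervised = LogisticRegression(max_iter=1000).fit(X, y)

# Unsupervised: the algorithm sees only features and infers groupings.
unsupervised = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

print(supervised.predict(X[:3]))   # predicted class labels
print(unsupervised.labels_[:3])    # discovered cluster assignments
```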

Data scientists will each develop their own approach to training machine learning models. Training generally starts with preparing the data, identifying the use case, selecting training algorithms and analyzing the results, a progression reflected in the best practices Shehab has developed for PwC.

In general, there is no one best machine learning model. "Different models work best for each problem or use case," Sekar said. Insights derived from experimenting with the data, he added, may lead to a different model. The patterns of data can also change over time. A model that works well in development may have to be replaced with a different model.

A specific model can be regarded as the best only for a specific use case or data set at a certain point in time, Sekar said. The use case can add more nuance. Some uses, for example, may require high accuracy while others demand higher confidence. It's also important to consider environmental constraints in model deployment, such as memory, power and performance requirements. Other use cases may have explainability requirements that could drive decisions toward a different type of model.

Data scientists also need to consider the operational aspects of models after deployment, called ModelOps, when prioritizing one type of model over another. These considerations may include how the raw data is transformed for processing, fine-tuning processes, prompt engineering and the need to mitigate AI hallucinations. "Choosing the best model for a given situation," Sekar advised, "is a complex task with many business and technical aspects to be considered."

The terms machine learning model and machine learning algorithm are sometimes conflated to mean the same thing. But from a data science perspective, they're very different. Machine learning algorithms are used in training machine learning models.

Machine learning algorithms are the brains of the models, Steele suggested. The algorithms contain code that's used to form predictions for the models. The data the algorithms are trained on often determines the types of outputs the models create. The data acts as a source of information for the algorithm to learn from, so the models can create understandable and relevant outputs.

Put another way, an algorithm is a set of procedures that describes how to do something, Sekar explained, and a machine learning model is a mathematical representation of a real-world problem trained on machine learning algorithms. "So, the machine learning model is a specific instance," he said, "while machine learning algorithms are a suite of procedures on how to train machine learning models."

The algorithm shapes and influences what the model does. The model considers the what of the problem, while the algorithm provides the how for getting the model to perform as desired. Data is the third relevant entity because the algorithm uses the training data to train the machine learning model. In practice, therefore, a machine learning outcome depends on the model, the algorithms and the training data.
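
One way to see that three-way dependence, as a small sketch with assumed synthetic data: the same algorithm with identical settings, trained on two different samples, produces two different models.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1))
y = 3.0 * X[:, 0] + rng.normal(scale=0.5, size=200)

# Same algorithm, same settings, two different training samples...
model_a = LinearRegression().fit(X[:100], y[:100])
model_b = LinearRegression().fit(X[100:], y[100:])

# ...yield two distinct models with slightly different learned coefficients.
print(model_a.coef_, model_b.coef_)
```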

Follow this link:
What are Machine Learning Models? Types and Examples - TechTarget

Machine Learning Regularization Explained With Examples – TechTarget

What is regularization in machine learning?

Regularization in machine learning is a set of techniques used to ensure that a machine learning model can generalize to new data drawn from the same domain as its training data. These techniques can help reduce the impact of noisy data that falls outside the expected range of patterns. Regularization can also improve a model by making it easier to detect relevant edge cases within a classification task.

Consider an algorithm specifically trained to identify spam emails. In this scenario, the algorithm is trained to classify emails that appear to be from a well-known U.S. drugstore chain and contain only a single image as likely to be spam. However, this narrow approach runs the risk of disappointing loyal customers of the chain, who were looking forward to being notified about the store's latest sales. A more effective algorithm would consider other factors, such as the timing of the emails, the use of images and the types of links embedded in the emails to accurately label the emails as spam.

This more complex model, however, would also have to account for the impact that each of these measures added to the algorithm. Without regularization, the new algorithm risks being overly complex, subject to bias and unable to detect variance. We will elaborate on these concepts below.

"In short, regularization pushes the model to reduce its complexity as it is being trained," explained Bret Greenstein, data, AI and analytics leader at PwC.

"Regularization acts as a type of penalty that gets added to the loss function or the value that is used to help assign importance to model features," Greenstein said. "This penalty inhibits the model from finding parameters that may over-assign importance to its features."

As such, regularization is an important tool that can be used by data scientists to improve model training to achieve better generalization, or to improve the odds that the model will perform well when exposed to unknown examples.
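
To make Greenstein's description concrete, here is a minimal sketch of a regularized loss, using the L2 (squared-weights) penalty as the example; the lambda term is what pushes the model away from over-assigning importance to any one feature.

```python
import numpy as np

def regularized_loss(w, X, y, lam):
    """Mean squared error plus a lambda-weighted L2 penalty on the weights."""
    residuals = X @ w - y
    data_fit = np.mean(residuals ** 2)   # how well the model fits the data
    penalty = lam * np.sum(w ** 2)       # the regularization term
    return data_fit + penalty
```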

Adnan Masood, chief architect of AI and machine learning at digital transformation consultancy UST, said his firm regularly uses regularization to strike a balance between model complexity and performance, adeptly steering clear of both underfitting and overfitting.

Overfitting, as described above, occurs when a model is too complex and learns noise in the training data. Underfitting occurs when a model is too simple to capture underlying data patterns.

"Regularization provides a means to find the optimal balance between these two extremes," Masood said.

Consider another example of the use of regularization in retail. In this scenario, the business wants to develop a model that can predict when a certain product might be out of stock. To do this, the business has developed a training data set with many features, such as past sales data, seasonality, promotional events, and external factors like weather or holidays.

This, however, could lead to overfitting, where the model becomes too closely tied to specific patterns in the training data and, as a result, less effective at predicting stockouts from new, unseen data.

"Without regularization, our machine learning model could potentially learn the training data too well and become overly sensitive to noise or fluctuations in the historical data," Masood said.

In this case, a data scientist might fit a linear regression model that minimizes the sum of the squared differences between actual and predicted stockout instances, plus a regularization penalty. The penalty discourages the model from assigning too much importance to any one feature.

In addition, they might set a lambda parameter to determine the strength of regularization. Higher values of this parameter increase regularization and shrink the model's coefficients (the weights of the model).

When this regularized model is trained, it will balance fitting the training data against keeping the model weights small. The result is a model that is potentially less accurate on the training data but more accurate when predicting stockouts on new, unseen data.

"In this way, regularization helps us build a robust model, better generalizes to new data and more effectively predicts stockouts, thereby enabling the business to manage its inventory better and prevent loss of sales," Masood said.

He finds that regularization is vital in managing overfitting and underfitting. It also indirectly helps control bias (error from faulty assumptions) and variance (error from sensitivity to small fluctuations in a training data set), facilitating a balanced model that generalizes well on unseen data.

Niels Bantilan, chief ML engineer at Union.ai, a machine learning orchestration platform, finds it useful to think of regularization as a way to prevent a machine learning model from memorizing the data during training.

For example, a home automation robot trained on making coffee in one kitchen might inadvertently memorize the quirks and layouts of that specific kitchen. It will likely break when presented with a new kitchen where ingredients and equipment differ from the one it memorized.

In this case, regularization forces the model to learn higher-level concepts like "coffee mugs tend to be stored in overhead cabinets" rather than learning specific quirks of the first kitchen, such as "the coffee mugs are stored in the top left-most shelf."

In business, regularization is important to operationalizing machine learning, as it can mitigate errors and save cost, since it is expensive to constantly retrain models on the latest data.

"Therefore, it makes sense to ensure they have some generalization capacity beyond their training data, so the models can handle new situations up to a certain point without having to retrain them on expensive hardware or cloud infrastructure," Bantilan said.

The term overfitting is used to describe a model that has learned too much from the training data. This can include noise, such as inaccurate data accidentally read by a sensor or a human deliberately inputting bad data to evade a spam filter or fraud algorithm. It can also include data specific to that particular situation but not relevant to other use cases, such as a store shelf layout in one store that might not be relevant to different stores in a stockout predictor.

Underfitting occurs when a model has not learned to map features to an accurate prediction for new data. Greenstein said that regularization can sometimes lead to underfitting. In that case, it is important to change the influence that regularization has during model training. Underfitting also relates to bias and variance.

Bantilan described bias in machine learning as the degree to which a model's predictions agree with the actual ground truth. For example, a spam filter that perfectly predicts the spam/not-spam labels in training data would be a low-bias model. It could be considered high-bias if it was wrong all the time.

Variance characterizes the degree to which the model's predictions can handle small perturbations in the training data. One good test is removing a few records to see what happens, Bantilan said. If the model's predictions remain the same, then the model is considered low-variance. If the predictions change wildly, then it is considered high-variance.
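
Bantilan's perturbation test translates almost directly into code. As a sketch (with the iris data and a decision tree as arbitrary stand-ins): drop a few records, retrain, and measure how much the predictions move.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

full = DecisionTreeClassifier(random_state=0).fit(X, y)
perturbed = DecisionTreeClassifier(random_state=0).fit(X[5:], y[5:])  # drop 5 records

# Low variance: predictions barely change. High variance: they change wildly.
disagreement = np.mean(full.predict(X) != perturbed.predict(X))
print(f"predictions changed on {disagreement:.1%} of examples")
```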

Greenstein observed that high variance could be present when a model trained on multiple variations of data appears to learn a solution but struggles to perform on test data. This is one form of overfitting, and regularization can assist with addressing the issue.

Bharath Thota, partner in the advanced analytics practice of Kearney, a global strategy and management consulting firm, said regularization figures in a wide range of common industry use cases.

Regularization is best thought of as a general-purpose technique for improving ML models rather than as a specific use case. Greenstein has found it most useful when problems are high-dimensional, meaning they contain many, and sometimes complex, features. These types of problems are prone to overfitting, as a model may fail to identify simplified patterns that map features to objectives.

Regularization is also helpful with noisy data sets, such as high-dimensional data, where examples vary a lot and are subject to overfitting. In these cases, the models may learn the noise rather than a generalized way of representing the data.

It is also good for nonlinear problems since problems that require nonlinear algorithms can often lead to overfitting. These kinds of algorithms uncover complex boundaries for classifying data that map well to the training data but are only partially applicable to real-world data.

Greenstein noted that regularization is one of many tools that can assist with resolving challenges with an overfit model. Other techniques, such as bagging, reduced learning rates and data sampling methods, can complement or replace regularization, depending on the problem.

There is a range of different regularization techniques. The most common approaches rely on statistical methods such as Lasso regularization (also called L1 regularization), Ridge regularization (L2 regularization) and Elastic Net regularization, which combines the Lasso and Ridge techniques. Various other regularization techniques use different principles, such as ensembling, neural network dropout, pruning of decision tree-based models and data augmentation.

Masood said the choice of regularization method and tuning for the regularization strength parameter (lambda) largely depends on the specific use case and the nature of the data set.

"The right regularization can significantly improve model performance, but the wrong choice could lead to underperformance or even harm the model's predictive power," Masood cautioned. Consequently, it is important to approach regularization with a solid understanding of both the data and the problem at hand.

Here are brief descriptions of the common regularization techniques.

Lasso regression (L1 regularization). The Lasso regularization technique, whose name is an acronym for least absolute shrinkage and selection operator, is related to the median of the data, the value in the middle of a data set. It calculates its penalty from the absolute values of the weights. Kearney's Thota said this regularization technique encourages sparsity in the model, meaning it can set some coefficients to exactly zero, effectively performing feature selection.
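
A short sketch of that sparsity effect, using scikit-learn's Lasso on synthetic data in which only two of 20 features actually matter:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.5, size=300)

model = Lasso(alpha=0.1).fit(X, y)
print("nonzero coefficients:", np.sum(model.coef_ != 0))  # most are exactly zero
```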

Ridge regression (L2 regularization). Ridge regularization is related to the mean of the data, the average of a set of numbers. It calculates its penalty from the squares of the weights. Thota said this technique is useful for reducing the impact of irrelevant or correlated features and helps stabilize the model's behavior.

Elastic Net (L1 + L2) regularization. Elastic Net combines both L1 and L2 techniques to improve the results for certain problems.
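
In scikit-learn's ElasticNet (an assumed implementation choice), the mix between the two penalties is exposed as l1_ratio; a minimal sketch on the same kind of synthetic data:

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.5, size=300)

# alpha sets the overall penalty strength; l1_ratio=0.5 mixes L1 and L2 equally.
model = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)
print("nonzero coefficients:", np.sum(model.coef_ != 0))
```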

Ensembling. This set of techniques combines the predictions from a suite of models, thus reducing the reliance on any one model for prediction.
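
A random forest is one everyday instance of this idea; as a sketch, each tree sees a bootstrap sample of the data, and averaging their votes reduces reliance on any single overfit tree.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)

# 100 trees, each trained on a bootstrap sample; predictions are aggregated.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(model.predict(X[:3]))
```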

Neural network dropout. This process is sometimes used in deep learning models composed of multiple layers of neural networks. It involves randomly zeroing out the activations of some neurons during training. Bantilan said this forces the deep learning algorithm to learn an ensemble of sub-networks to achieve the task effectively.
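
A minimal PyTorch sketch: nn.Dropout randomly zeroes activations while the model is in training mode, so no single neuron can be relied on, and is disabled in evaluation mode.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(64, 128),
    nn.ReLU(),
    nn.Dropout(p=0.5),   # randomly zeroes half the activations during training
    nn.Linear(128, 2),
)

model.train()                         # dropout active while training
out_train = model(torch.randn(8, 64))
model.eval()                          # dropout disabled at inference time
out_eval = model(torch.randn(8, 64))
```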

Pruning decision tree-based models. This is used in tree-based models like decision trees. The process of pruning branches can simplify the decision rules of a particular tree to prevent it from relying on the quirks of the training data.
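
scikit-learn exposes one common pruning mechanism, cost-complexity pruning, through the ccp_alpha parameter; larger values prune more aggressively. A sketch on the iris data:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

unpruned = DecisionTreeClassifier(random_state=0).fit(X, y)
pruned = DecisionTreeClassifier(random_state=0, ccp_alpha=0.02).fit(X, y)

# Pruning trades a little training fit for simpler, more general decision rules.
print(unpruned.tree_.node_count, "->", pruned.tree_.node_count)
```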

Data augmentation. This family of techniques uses prior knowledge about the data distribution to prevent the model from learning the quirks of the data set. For example, in an image classification use case, you might flip an image horizontally, introduce noise or blur, or crop the image. "As long as the data corruption or modification is something we might find in the real world, the model should learn how to handle those situations," Bantilan said.
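
A hedged torchvision sketch of the image case Bantilan describes, applying the kinds of modifications (flips, blur, crops) that plausibly occur in the real world; the random tensor stands in for an actual training image.

```python
import torch
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.GaussianBlur(kernel_size=3),
    transforms.RandomResizedCrop(size=224, scale=(0.8, 1.0)),
])

image = torch.rand(3, 256, 256)   # stand-in for a real training image
augmented = augment(image)        # one plausible real-world variation
```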

The rest is here:
Machine Learning Regularization Explained With Examples - TechTarget