Archive for the ‘Machine Learning’ Category

Japan introduces world’s first machine learning model to screen Alzheimer’s disease – BSA bureau

Japan-based Oita University and pharmaceutical firm Eisai Co. have announced the development of the world's first machine learning model to predict amyloid beta (Aβ) accumulation in the brain using a wristband sensor. This model is expected to enable screening for brain Aβ accumulation, which is an important pathological factor of Alzheimer's disease (AD), simply by collecting biological and lifestyle data from daily life.

In AD, which is said to account for over 60% of the causes of dementia, Aβ begins to accumulate in the brain about 20 years before the onset of the disease. This has prompted the development of new therapeutic drugs targeting Aβ, leading to the approval of a humanized anti-soluble aggregated Aβ monoclonal antibody in Japan.

The key to maximising the treatment effects of the medicine is detecting Aβ accumulation in the brains of patients with mild cognitive impairment, before the onset of symptoms. Currently, although brain Aβ accumulation can be detected by positron emission tomography (amyloid PET) and cerebrospinal fluid testing (CSF testing), the number of medical institutions able to perform those tests is limited, and the high cost and invasiveness of these tests are considered issues. Therefore, the development of an inexpensive and easy-to-use screening method has been sought to identify those who need amyloid PET or CSF testing.

Although lifestyle factors, including lack of exercise, social isolation, and sleep disorders, as well as diseases such as hypertension, diabetes, and cardiovascular disease, are known risk factors for AD, studies applying machine learning models to predict brain Aβ accumulation have thus far used only cognitive function tests, blood tests, and brain imaging tests. In contrast, this is the first machine learning study to focus on "biological data" and "lifestyle data".
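The article does not disclose the model or its inputs, but a screening classifier over daily-life features could be sketched roughly as follows. Every feature, value, and the model choice below is an invented placeholder on synthetic data, not the Oita/Eisai method:

```python
# Conceptual sketch only: a binary "amyloid-positive" screen built from
# hypothetical wristband-derived biological signals and lifestyle variables.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 300

# Hypothetical daily-life features: sleep hours, daily steps, resting heart
# rate, a social-activity index, and a known risk factor (hypertension flag).
X = np.column_stack([
    rng.normal(6.5, 1.0, n),      # sleep hours
    rng.normal(6000, 2000, n),    # daily steps
    rng.normal(65, 8, n),         # resting heart rate
    rng.uniform(0, 1, n),         # social-activity index
    rng.integers(0, 2, n),        # hypertension (0/1)
])
# Synthetic labels standing in for amyloid PET-positive (1) vs negative (0),
# weakly tied to hypertension and sleep so the model has something to learn.
y = (0.3 * X[:, 4] - 0.05 * (X[:, 0] - 6.5)
     + rng.normal(0, 0.5, n) > 0.3).astype(int)

model = GradientBoostingClassifier(random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print("Cross-validated AUROC:", scores.mean().round(2))
```

In a real screening setting the labels would come from amyloid PET or CSF testing on a study cohort, and the model's job would be to triage who needs those expensive, invasive tests.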

Link:
Japan introduces world's first machine learning model to screen Alzheimer's disease - BSA bureau

Wearable Biosensor Predicts Aggression Among Inpatients with Autism – mHealthIntelligence.com

January 02, 2024 - Physiological changes recorded by a wearable biosensor and analyzed through a machine-learning approach can help predict aggressive behavior before it occurs in young psychiatric facility patients with autism, new research shows.

The study published in JAMA Network Open last month by Northeastern University researchers adds to research examining whether imminent aggressive behavior among autistic inpatients can be determined via a wearable biosensor and machine learning.

About one in 36 children were diagnosed with autism spectrum disorder (ASD) in 2020, up from one in 44 in 2018, according to the Centers for Disease Control and Prevention's (CDC) Autism and Developmental Disabilities Monitoring (ADDM) Network. The prevalence of aggression among children and adolescents with ASD is high, with parents reporting in a 2011 study that 68 percent had demonstrated aggression to a caregiver and 49 percent to non-caregivers.

Prior research work by the Northeastern University team showed that three minutes of wearable biosensor-recorded peripheral physiological and motion signals gathered from 20 youths with autism could predict aggression toward others one minute before it occurred using ridge-regularized logistic regression.

The new study aimed to extend that research to determine whether the recorded data could be used to predict aggression toward others even earlier.

The researchers enrolled 86 participants at four primary care psychiatric inpatient hospitals. The participants had confirmed diagnoses of autism and exhibited self-injurious behavior, emotion dysregulation, or aggression toward others.

The research team collected patient data from March 2019 to March 2020. They coded aggressive behavior in real time while study participants wore a commercially available biosensor that recorded peripheral physiological signals, including cardiovascular activity, electrodermal activity, and motion. Of the 86 enrolled participants, only 70 were included in the analysis. Those excluded either could not wear the biosensor due to tactile sensitivity or general behavioral noncompliance, or were discharged before an observation could be made.

During the study period, researchers collected 429 independent naturalistic observational coding sessions totaling 497 hours. They observed 6,665 aggressive behaviors, comprising 3,983 episodes of self-injurious behavior, 2,063 episodes of emotion dysregulation, and 619 episodes of aggression toward others.

Researchers conducted time-series feature extraction and data preprocessing, after which they used ridge-regularized logistic regression, support vector machines, neural networks, and domain adaptation to analyze the extracted time-series features to make binary aggressive behavior predictions.

They found that logistic regression was the best-performing overall classifier across eight experiments conducted. The classifier was able to predict aggressive behavior three minutes before it occurred with a mean area under the receiver operating characteristic curve of 0.80.
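The pipeline described above, windowed time-series features fed to a ridge-regularized logistic regression for a binary aggression prediction, can be sketched on synthetic data. All feature names, dimensions, and numbers below are invented stand-ins, not the study's actual data or code:

```python
# Illustrative sketch: ridge-regularized logistic regression predicting a
# binary "aggression imminent" label from features extracted over
# fixed-length biosensor windows.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for per-window features, e.g. mean heart rate,
# electrodermal activity level, motion variance over a 3-minute window.
n_windows, n_features = 500, 12
X = rng.normal(size=(n_windows, n_features))
# Synthetic labels loosely correlated with the first feature.
y = (X[:, 0] + rng.normal(scale=1.5, size=n_windows) > 0.8).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

# penalty="l2" is the "ridge" regularization; C controls its strength.
clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
clf.fit(X_train, y_train)

auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"AUROC on held-out windows: {auc:.2f}")
```

The study's reported 0.80 mean AUROC corresponds to this same metric, computed across its eight experiments rather than on a single synthetic split.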

"Our results suggest that biosensor data and machine learning have the potential to redress an intractable problem for a sizable segment of the autism population who are understudied and underserved," the researchers concluded. "Our findings may lay the groundwork for developing just-in-time adaptive intervention mobile health systems that may enable new opportunities for preemptive intervention."

This is the latest instance of an mHealth tool that can be used to support care for youth with autism.

In September, Atlanta-based researchers announced they had developed a biomarker-based, eye-tracking diagnostic tool to diagnose ASD. The technology includes a portable tablet on which children watch videos of social interaction. The device monitors their "looking behavior" to pinpoint the social information the children are and are not looking at, according to the press release.

Clinicians review the data collected by the device and provide children and their families with a diagnosis and measures of the child's individual abilities, including social disability, verbal ability, and non-verbal learning skills.

Additionally, a University of California, Davis researcher received a five-year, $3.2 million grant from the National Institutes of Health (NIH) to study whether an ASD diagnosis among infants can be assessed effectively via telehealth.

The researcher, Meagan Talbott, Ph.D., and her team will enroll 120 infants between the ages of 6 and 12 months showing signs of delays or differences in their development. They will conduct four telehealth sessions over a year as well as additional assessments when the child is 3 years old to determine whether telehealth can help pinpoint possible ASD.

Link:
Wearable Biosensor Predicts Aggression Among Inpatients with Autism - mHealthIntelligence.com

From Points to Pictures: How GenAI Will Change Companies – InformationWeek

As more companies embrace artificial intelligence, and specifically generative AI (GenAI), we are headed for a landmark moment. GenAI is today mostly used with public data, but when GenAI models are trained, tuned, and used with an enterprise's proprietary data, the combination unlocks the hidden patterns, connections, and insights that can transform a business.

Ten years ago, basic pattern finding was core to the idea of leveraging big data. Machine learning spotted patterns within a particular domain, like offering an online customer the right product. However, with the new computational and software innovations of GenAI, data can come from a much wider variety of sources across domains, with deep learning finding not just patterns in one domain, but also entirely new relationships among different domains.

Earlier limitations of technology and communications meant organizational designs relied on independent, fractured data silos, leaving much of the potential for collective learning and improvement on the table. GenAI, embedded in reimagined, hyper-connected business processes as well as new business intelligence platforms, can change this.

Google is among several companies working on the next generation of data analytics systems, which build wide data records combining structured, unstructured, at-rest, and in-motion data, ultimately turning the digital footprint of a company into a powerful AI model. In the future, the focus will need to shift from big data to wide data.

GenAI can now be instructed to take on specific roles and achieve specific goals on behalf of humans. AI agents will be the future doers, taking on personas, such as a data engineer, and executing tasks within a workflow.

Automation follows a pattern: Insights, actions and processes are abstracted and embodied in a system, new workflows are established around trust and reliability, and finally widespread adoption follows. Think of automatically scheduling maintenance on a machine in a factory, or problem-solving natural language interactions in a call center. These are examples of trusted software agents carrying out autonomous actions across an enterprise.

The goal for GenAI in analytics is to make observations and generate insights that can accelerate the work of people. People will be able to uncover new approaches, identify trends faster, collaborate in unforeseen ways, and delegate to agents that have permission to act in autonomous ways to increase organizational effectiveness.

The role of human experts will be different and will require new skill sets. It's less about doing the work and more about knowing what a good result looks like and what the right question (or prompt) is. For example, a sales analyst will spend less time writing queries to gather data and more time judging whether AI-surfaced findings are actually relevant. Business judgment becomes more important than technical analyst expertise.

GenAI for analytics brings us back to really understanding the question one is trying to answer, and frees us from much of the complication of the technical toolkits that took the lion's share of our time and investments. Organizations that overly limit data access and employee empowerment are likely to become less competitive.

When things are changing in big ways, it's useful to think about the things that won't change, like offering value to customers, focusing on positive efficiencies, and creating new goods and services that excite people and improve lives. These core values will continue to steer the application of this new GenAI technology, and the world of business will be forever changed. GenAI represents a paradigm shift in how we will imagine and enact new ways of doing business, from enabling business users to "chat" with their business data, to supercharging data and analytics teams with an always-on collaborator, to automating business with AI-driven data intelligence.

Excerpt from:
From Points to Pictures: How GenAI Will Change Companies - InformationWeek

AI Takes the Torch: Stanford Researchers Fuel Fusion Breakthrough with Machine Learning – Medium

AI Takes the Torch: Stanford Researchers Fuel Fusion Breakthrough with Machine Learning

For decades, the elusive dream of fusion energy, replicating the sun's power on Earth, has shimmered just beyond our grasp. Taming the superheated plasma within a tokamak reactor, the heart of this technology, has proven a formidable challenge. But like a skilled chef wielding a high-tech spatula, researchers at Stanford University have just turned up the heat on the quest for clean, limitless energy by employing a powerful new ingredient: artificial intelligence.

In a groundbreaking study published in Nature, the Stanford team, led by Professor Chris Fall, details how they trained a machine learning algorithm to control the plasma within the National Ignition Facility's (NIF) Alcator C-Mod tokamak. This AI, christened "Inferno," proved to be a master chef indeed, surpassing human operators in sustaining the fusion reaction for a record-breaking 5 seconds, a 50% increase over previous attempts.

"Inferno's ability to learn and adapt in real-time is truly remarkable," says Professor Fall. "Unlike human operators who rely on pre-programmed sequences, Inferno can continuously analyze the plasma's behavior and adjust the magnetic field accordingly, maintaining a stable and productive fusion environment."

This feat is no small fry. Inside a tokamak, hydrogen isotopes are heated to blistering temperatures, exceeding 100 million degrees Celsius. This molten inferno, a swirling vortex of plasma, must be meticulously confined and controlled using a complex array of magnetic fields. Any misstep, a wobble or a flicker, and the delicate fusion dance grinds to a halt.

Traditionally, this high-wire act has been entrusted to human experts, their fingers poised over control panels, their minds in a constant state of vigilance. But the sheer complexity of plasma physics and the lightning-fast response times needed to maintain stability have pushed the limits of human control.

Enter Inferno, a neural network trained on a vast dataset of plasma simulations and past tokamak experiments. This AI chef, armed with its algorithms and lightning-fast reflexes, can analyze the plasma's every twitch and tremor, anticipating instabilities before they even arise. It then fine-tunes the magnetic field with a precision and speed that would leave any human operator breathless.
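The control loop described here, a trained policy that maps plasma measurements to magnetic-field adjustments on every tick, can be sketched in miniature. Everything below is an invented illustration (the sensor reader, the linear "policy," the dimensions), not Inferno's actual architecture:

```python
# Toy sketch of a learned feedback control loop: observe plasma diagnostics,
# ask a trained policy for coil-current adjustments, apply them, repeat.
import numpy as np

rng = np.random.default_rng(1)

def read_plasma_sensors() -> np.ndarray:
    """Stand-in for magnetic and density diagnostics sampled each tick."""
    return rng.normal(size=8)

def policy(observation: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Stand-in for a trained neural-network policy: here just a linear map
    from 8 sensor readings to 4 bounded coil-current adjustments."""
    return np.tanh(weights @ observation)

weights = rng.normal(scale=0.1, size=(4, 8))  # pretend these were learned
coil_currents = np.zeros(4)

for tick in range(1000):  # a real controller would run at kHz rates
    obs = read_plasma_sensors()
    coil_currents += 0.01 * policy(obs, weights)  # small corrective nudges
```

The point of the sketch is the shape of the loop, not the physics: the policy replaces a human operator's pre-programmed sequences with a fast observe-and-adjust cycle.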

The implications of this breakthrough are as vast as the universe itself. Fusion energy, if harnessed, promises a clean, abundant source of power, free from the greenhouse gas emissions that plague our current energy sources. It could revolutionize industries, power cities, and even propel us to the stars.

But the path to this clean energy utopia is paved with technological hurdles. One of the most critical is plasma control. Inferno's success paves the way for a new era of AI-driven tokamaks, capable of pushing the boundaries of plasma stability and unlocking the full potential of fusion power.

"This is just the beginning," says Professor Fall. "Inferno is a prototype, a proof of concept. But it shows us what's possible when we combine the human ingenuity of fusion research with the power of machine learning. With continued development, AI-powered tokamaks could become a reality, bringing us one step closer to the clean energy future we desperately need."

The Stanford team's achievement is a testament to the power of collaboration. It bridges the gap between the seemingly disparate worlds of AI and nuclear physics, demonstrating the transformative potential of interdisciplinary research. As we inch closer to the day when fusion energy lights our homes and powers our dreams, let us remember the chefs who dared to tame the inferno, the ones who wielded the tools of science and imagination to cook up a future brighter than a thousand suns.

References:

Chris Fall et al. "Real-time plasma control using deep reinforcement learning." Nature (2023).

"DeepMind Has Trained an AI to Control Nuclear Fusion." WIRED UK (2022).

"Fusion power: DeepMind uses AI to control plasma inside tokamak reactor." New Scientist (2022).

Go here to see the original:
AI Takes the Torch: Stanford Researchers Fuel Fusion Breakthrough with Machine Learning - Medium

Unveiling the Power of AI and Machine Learning in Industry 4.0 for Mechanical Engineers – Medium

Introduction: In the rapidly evolving landscape of Industry 4.0, the fusion of Artificial Intelligence (AI), Machine Learning (ML), and the Internet of Things (IoT) stands as the driving force, particularly for Mechanical Engineers. In this blog post, we will delve into a comprehensive review of a recent paper by Gajanan Shankarrao Patange and Arjun Bharatkumar Pandya, published in Materials Today: Proceedings (Volume 72, Pages 622-625, 2023).

Understanding the Core Concepts:

1. Evolutionary Foundation: The foundation of Industry 4.0 lies in the intelligent intercommunication of machines, often encapsulated in the Internet of Things. However, Patange and Pandya assert that at the heart of this evolution is Artificial Intelligence. This blog will explore the pivotal role AI plays in shaping the future of mechanical engineering.

2. Addressing Misconceptions: The authors highlight the prevalent misconceptions surrounding AI, ML, and IoT. This section will unravel common misunderstandings, ensuring a clearer perspective on the transformative potential of these technologies for Mechanical Engineers.

Exploring the Intersection: AI, ML, and IoT in Industry 4.0

1. Enhancing Industry Processes: Discover how AI and ML are revolutionizing manufacturing processes, optimizing efficiency, and reducing downtime. Real-world examples and case studies will illustrate the practical applications of these technologies.

2. Smart Machines and IoT: Unpack the interconnected world of smart machines and IoT, emphasizing how Mechanical Engineers can leverage this integration to create intelligent systems capable of seamless communication, ultimately contributing to the realization of Industry 4.0.

The Uncharted Territory: Advantages, Uses, and Challenges

1. Historical Perspectives: Embark on a journey through the history of AI and ML, tracing their development and the milestones that have brought us to the cusp of Industry 4.0.

See the rest here:
Unveiling the Power of AI and Machine Learning in Industry 4.0 for Mechanical Engineers - Medium