Archive for the ‘Machine Learning’ Category

Wearable Biosensor Predicts Aggression Among Inpatients with Autism – mHealthIntelligence.com

January 02, 2024 - Physiological changes recorded by a wearable biosensor and analyzed through a machine-learning approach can help predict aggressive behavior before it occurs in young psychiatric facility patients with autism, new research shows.

The study published in JAMA Network Open last month by Northeastern University researchers adds to research examining whether imminent aggressive behavior among autistic inpatients can be determined via a wearable biosensor and machine learning.

About one in 36 children were diagnosed with autism spectrum disorder (ASD) in 2020, up from one in 44 in 2018, according to the Centers for Disease Control and Prevention's (CDC) Autism and Developmental Disabilities Monitoring (ADDM) Network. The prevalence of aggression among children and adolescents with ASD is high, with parents reporting in a 2011 study that 68 percent had demonstrated aggression toward a caregiver and 49 percent toward non-caregivers.

Prior work by the Northeastern University team showed that three minutes of peripheral physiological and motion signals recorded by a wearable biosensor from 20 youths with autism could be used, with ridge-regularized logistic regression, to predict aggression toward others one minute before it occurred.

The new study aimed to extend that research to determine whether the recorded data could be used to predict aggression toward others even earlier.

The researchers enrolled 86 participants at four primary care psychiatric inpatient hospitals. The participants had confirmed diagnoses of autism and exhibited self-injurious behavior, emotion dysregulation, or aggression toward others.

The research team collected patient data from March 2019 to March 2020. They coded aggressive behavior in real time while study participants wore a commercially available biosensor that recorded peripheral physiological signals, including cardiovascular activity, electrodermal activity, and motion. Of the 86 enrolled participants, only 70 were included in the analysis. Those excluded either could not wear the biosensor, due to tactile sensitivity or general behavioral noncompliance, or were discharged before an observation could be made.

During the study period, researchers collected 429 independent naturalistic observational coding sessions totaling 497 hours. They observed 6,665 aggressive behaviors, comprising 3,983 episodes of self-injurious behavior, 2,063 episodes of emotion dysregulation, and 619 episodes of aggression toward others.

Researchers conducted time-series feature extraction and data preprocessing, after which they used ridge-regularized logistic regression, support vector machines, neural networks, and domain adaptation to analyze the extracted time-series features and make binary predictions of aggressive behavior.

They found that logistic regression was the best-performing overall classifier across the eight experiments conducted. The classifier was able to predict aggressive behavior three minutes before it occurred with a mean area under the receiver operating characteristic curve (AUROC) of 0.80.
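For readers curious what this kind of pipeline looks like in code, below is a minimal, purely illustrative sketch of ridge-regularized (L2) logistic regression evaluated with AUROC. The features here are synthetic stand-ins; the study's actual features were extracted from cardiovascular, electrodermal, and motion signals.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for windowed time-series features (not the study's data).
rng = np.random.default_rng(42)
n_windows, n_features = 2000, 60
X = rng.normal(size=(n_windows, n_features))
# Hypothetical labels: 1 = aggression observed within the next 3 minutes.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n_windows) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

# L2 (ridge) regularization is scikit-learn's default penalty for LogisticRegression.
clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
clf.fit(X_train, y_train)

auroc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"AUROC: {auroc:.2f}")
```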

"Our results suggest that biosensor data and machine learning have the potential to redress an intractable problem for a sizable segment of the autism population who are understudied and underserved," the researchers concluded. "Our findings may lay the groundwork for developing just-in-time adaptive intervention mobile health systems that may enable new opportunities for preemptive intervention."

This is the latest instance of an mHealth tool that can be used to support care for youth with autism.

In September, Atlanta-based researchers announced they had developed a biomarker-based, eye-tracking diagnostic tool for ASD. The technology includes a portable tablet on which children watch videos of social interaction. The device monitors their "looking behavior" to pinpoint the social information the children are and are not looking at, according to the press release.

Clinicians review the data collected by the device and provide children and their families with a diagnosis and measures of the child's individual abilities, including social disability, verbal ability, and non-verbal learning skills.

Additionally, a University of California, Davis researcher received a five-year, $3.2 million grant from the National Institutes of Health (NIH) to study whether an ASD diagnosis among infants can be assessed effectively via telehealth.

The researcher, Meagan Talbott, Ph.D., and her team will enroll 120 infants between the ages of 6 and 12 months showing signs of delays or differences in their development. They will conduct four telehealth sessions over a year as well as additional assessments when the child is 3 years old to determine whether telehealth can help pinpoint possible ASD.

Link:
Wearable Biosensor Predicts Aggression Among Inpatients with Autism - mHealthIntelligence.com


From Points to Pictures: How GenAI Will Change Companies – InformationWeek

As more companies embrace artificial intelligence, and specifically generative AI (GenAI), we are headed for a landmark moment. GenAI is today mostly used with public data, but when GenAI models are trained, tuned, and used with an enterprise's proprietary data, the combination unlocks the hidden patterns, connections, and insights that can transform a business.

Ten years ago, basic pattern finding was core to the idea of leveraging big data. Machine learning spotted patterns within a particular domain, like offering an online customer the right product. However, with the new computational and software innovations of GenAI, data can come from a much wider variety of sources across domains, with deep learning finding not just patterns in one domain, but also entirely new relationships among different domains.

Earlier limitations in technology and communications meant organizational designs relied on independent, fractured data silos, leaving much of the potential for collective learning and improvement on the table. GenAI, embedded in reimagined and hyper-connected business processes as well as in new business intelligence platforms, can change this.

Google is among several companies working on the next generation of data analytics systems that build wide data records, combining structured, unstructured, at-rest, and in-motion data, ultimately turning a company's digital footprint into a powerful AI model. In the future, the focus will need to shift from big data to wide data.


GenAI can now be instructed to take on specific roles and achieve specific goals on behalf of humans. AI agents will be the future doers, taking on the role of personas, such as a data engineer, and executing tasks within a workflow.

Automation follows a pattern: Insights, actions and processes are abstracted and embodied in a system, new workflows are established around trust and reliability, and finally widespread adoption follows. Think of automatically scheduling maintenance on a machine in a factory, or problem-solving natural language interactions in a call center. These are examples of trusted software agents carrying out autonomous actions across an enterprise.

The goal for GenAI in analytics is to make observations and generate insights that can accelerate the work of people. People will be able to uncover new approaches, identify trends faster, collaborate in unforeseen ways, and delegate to agents that have permission to act in autonomous ways to increase organizational effectiveness.


The role of human experts will be different and will require new skill sets. It's less about doing the work and more about knowing what a good result looks like and what the right question (or prompt) is. For example, a sales analyst will spend less time writing queries to gather data and more time judging whether AI-driven findings offer relevant insight. Business judgment becomes more important than technical analyst expertise.

GenAI for analytics brings us back to really understanding the question one is trying to answer and frees us from much of the complication of the technical toolkits that took the lion's share of our time and investment. Organizations that overly limit data access and employee empowerment are likely to become less competitive.

When things are changing in big ways, it's useful to think about the things that won't change, like offering value to customers, focusing on positive efficiencies, and creating new goods and services that excite people and improve lives. These core values will continue to steer the application of this new GenAI technology, and the world of business will be forever changed. GenAI represents a paradigm shift in how we will imagine and enact new ways of doing business, from enabling business users to "chat" with their business data, to supercharging data and analytics teams with an always-on collaborator, to automating business with AI-driven data intelligence.


Excerpt from:
From Points to Pictures: How GenAI Will Change Companies - InformationWeek


AI Takes the Torch: Stanford Researchers Fuel Fusion Breakthrough with Machine Learning – Medium


For decades, the elusive dream of fusion energy, replicating the sun's power on Earth, has shimmered just beyond our grasp. Taming the superheated plasma within a tokamak reactor, the heart of this technology, has proven a formidable challenge. But like a skilled chef wielding a high-tech spatula, researchers at Stanford University have just turned up the heat on the quest for clean, limitless energy by employing a powerful new ingredient: artificial intelligence.

In a groundbreaking study published in Nature, the Stanford team, led by Professor Chris Fall, details how they trained a machine learning algorithm to control the plasma within the National Ignition Facility's (NIF) Alcator C-Mod tokamak. This AI, christened "Inferno," proved to be a master chef indeed, surpassing human operators in sustaining the fusion reaction for a record-breaking 5 seconds, a 50% increase over previous attempts.

"Inferno's ability to learn and adapt in real-time is truly remarkable," says Professor Fall. "Unlike human operators who rely on pre-programmed sequences, Inferno can continuously analyze the plasma's behavior and adjust the magnetic field accordingly, maintaining a stable and productive fusion environment."

This feat is no small fry. Inside a tokamak, hydrogen isotopes are heated to blistering temperatures, exceeding 100 million degrees Celsius. This molten inferno, a swirling vortex of plasma, must be meticulously confined and controlled using a complex array of magnetic fields. Any misstep, a wobble or a flicker, and the delicate fusion dance grinds to a halt.

Traditionally, this high-wire act has been entrusted to human experts, their fingers poised over control panels, their minds in a constant state of vigilance. But the sheer complexity of plasma physics and the lightning-fast response times needed to maintain stability have pushed the limits of human control.

Enter Inferno, a neural network trained on a vast dataset of plasma simulations and past tokamak experiments. This AI chef, armed with its algorithms and lightning-fast reflexes, can analyze the plasma's every twitch and tremor, anticipating instabilities before they even arise. It then fine-tunes the magnetic field with a precision and speed that would leave any human operator breathless.
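The article does not include Inferno's code, but a toy sketch can convey the shape of such a controller: a policy function (standing in for a trained neural network, here just a fixed random linear map) reads simulated sensor values at each step and returns bounded coil adjustments. Every number below is made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_coils = 12, 4  # hypothetical sensor and coil counts

# Placeholder "trained" weights; a real controller would be a learned network.
W = rng.normal(scale=0.1, size=(n_coils, n_sensors))

def policy(observation: np.ndarray) -> np.ndarray:
    """Map plasma sensor readings to bounded magnetic-coil adjustments."""
    return np.tanh(W @ observation)

state = rng.normal(size=n_sensors)  # toy stand-in for plasma sensor readings
for step in range(1001):            # a real control loop runs at high frequency
    action = policy(state)
    # Toy dynamics: the state drifts randomly and the coil action pushes it back.
    state = 0.99 * state + 0.05 * rng.normal(size=n_sensors) - 0.1 * (W.T @ action)
    if step % 200 == 0:
        print(f"step {step:4d}  instability proxy = {np.linalg.norm(state):.3f}")
```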

The implications of this breakthrough are as vast as the universe itself. Fusion energy, if harnessed, promises a clean, abundant source of power, free from the greenhouse gas emissions that plague our current energy sources. It could revolutionize industries, power cities, and even propel us to the stars.

But the path to this clean energy utopia is paved with technological hurdles. One of the most critical is plasma control. Inferno's success paves the way for a new era of AI-driven tokamaks, capable of pushing the boundaries of plasma stability and unlocking the full potential of fusion power.

"This is just the beginning," says Professor Fall. "Inferno is a prototype, a proof of concept. But it shows us what's possible when we combine the human ingenuity of fusion research with the power of machine learning. With continued development, AI-powered tokamaks could become a reality, bringing us one step closer to the clean energy future we desperately need."

The Stanford team's achievement is a testament to the power of collaboration. It bridges the gap between the seemingly disparate worlds of AI and nuclear physics, demonstrating the transformative potential of interdisciplinary research. As we inch closer to the day when fusion energy lights our homes and powers our dreams, let us remember the chefs who dared to tame the inferno, the ones who wielded the tools of science and imagination to cook up a future brighter than a thousand suns.

References:

Chris Fall et al. "Real-time plasma control using deep reinforcement learning." Nature (2023).

"DeepMind Has Trained an AI to Control Nuclear Fusion." WIRED UK (2022).

"Fusion power: DeepMind uses AI to control plasma inside tokamak reactor." New Scientist (2022).

Go here to see the original:
AI Takes the Torch: Stanford Researchers Fuel Fusion Breakthrough with Machine Learning - Medium


Unveiling the Power of AI and Machine Learning in Industry 4.0 for Mechanical Engineers – Medium

AI / ML for Machine Learning

Introduction: In the rapidly evolving landscape of Industry 4.0, the fusion of Artificial Intelligence (AI), Machine Learning (ML), and the Internet of Things (IoT) stands as the driving force, particularly for Mechanical Engineers. In this blog post, we will delve into a comprehensive review of a recent paper by Gajanan Shankarrao Patange and Arjun Bharatkumar Pandya, published in Materials Today: Proceedings (Volume 72, Pages 622-625, 2023).

Understanding the Core Concepts:

1. Evolutionary Foundation: The foundation of Industry 4.0 lies in the intelligent intercommunication of machines, often encapsulated in the Internet of Things. However, Patange and Pandya assert that at the heart of this evolution is Artificial Intelligence. This blog will explore the pivotal role AI plays in shaping the future of mechanical engineering.

2. Addressing Misconceptions: The authors highlight the prevalent misconceptions surrounding AI, ML, and IoT. This section will unravel common misunderstandings, ensuring a clearer perspective on the transformative potential of these technologies for Mechanical Engineers.

Exploring the Intersection: AI, ML, and IoT in Industry 4.0

1. Enhancing Industry Processes: Discover how AI and ML are revolutionizing manufacturing processes, optimizing efficiency, and reducing downtime. Real-world examples and case studies will illustrate the practical applications of these technologies.

2. Smart Machines and IoT: Unpack the interconnected world of smart machines and IoT, emphasizing how Mechanical Engineers can leverage this integration to create intelligent systems capable of seamless communication, ultimately contributing to the realization of Industry 4.0.

The Uncharted Territory: Advantages, Uses, and Challenges

1. Historical Perspectives: Embark on a journey through the history of AI and ML, tracing their development and the milestones that have brought us to the cusp of Industry 4.0.

See the rest here:
Unveiling the Power of AI and Machine Learning in Industry 4.0 for Mechanical Engineers - Medium


Machine Learning Examples In The Real World (And For SEO) (Festive Flashback) – Search Engine Journal

Celebrate the Holidays with some of SEJ's best articles of 2023.

Our Festive Flashback series runs from December 21 – January 5, featuring daily reads on significant events, fundamentals, actionable strategies, and thought leader opinions.

2023 has been quite eventful in the SEO industry, and our contributors produced some outstanding articles to keep pace with and reflect these changes.

Catch up on the best reads of 2023 to give you plenty to reflect on as you move into 2024.

As an SEO professional, you've heard about ChatGPT and BARD or even used them yourself.

You might also be familiar with a handful of machine learning models from Google, such as BERT and RankBrain.

These are all great applications of machine learning.

But it isn't always immediately apparent how machine learning can be useful for your BAU (business as usual) tasks or daily SEO to-dos.

There's also the question of ethics, as in data privacy and fairness, when it comes to using machine learning in your work.

However, understanding how machine learning works in search (and in real life) can only work to your advantage as an SEO pro, whether you're technical or not.

In this comprehensive article, learn the types of machine learning and ethics associated with the technology, then take inspiration from seven machine learning examples that have had a great impact on our world.

As an added bonus, at the end of this article, explore eight machine learning examples directly applied to SEO.

Machine learning is a subset of artificial intelligence (AI) which enables systems to learn and improve without the need to be explicitly programmed. This is achieved by using large data sets that train the parameters in the model.

Perhaps the most well-known examples of machine learning currently are ChatGPT and BARD, and while this post won't be focusing on them, I decided to ask them both to explain to me what machine learning is. Here are their responses:

Machine learning is a type of artificial intelligence that allows software applications to learn from the data and become more accurate in predicting outcomes without explicit programming. In other words, it is a method of teaching computers to learn from the data they are given, and then to make decisions or predictions based on that learning.

The goal of machine learning is to develop algorithms that can learn patterns in data, and then use those patterns to make decisions or predictions about new data. This is done by training the machine learning algorithm on a dataset of known inputs and outputs, and then using that knowledge to make predictions on new, unseen data.

Some common applications of machine learning include image recognition, natural language processing, fraud detection, and recommendation systems.

Machine learning is the study of computer algorithms that improve automatically through experience and by the use of data. It is seen as a subset of artificial intelligence. Machine learning algorithms build a model based on sample data, known as training data, in order to make predictions or decisions without being explicitly programmed to do so.

Machine learning can be used for a variety of tasks, including:

Machine learning is a powerful tool that can be used to solve a variety of problems. However, it is important to note that machine learning algorithms are only as good as the data they are trained on. If the data is biased, the algorithm will be biased as well. Additionally, machine learning algorithms can be susceptible to overfitting, which is when the algorithm learns the training data too well and does not generalize well to new data.

Despite these challenges, machine learning is a rapidly growing field with a wide range of applications. As the amount of data available to us continues to grow, machine learning is likely to play an even greater role in our lives in the future.

As the name suggests, in supervised learning, the model learns under supervision. It is trained with labeled data, and the output is manually reviewed.

The machine learns from the labeled data, which is then used to make future predictions.

Once the output is received, the model remembers it and will use it for the next operation.

There are two main types of supervised learning: classification and regression.

Classification is when the output variable is categorical, with two or more classes that the model can identify; for example, true or false and dog or cat.

Examples of this include predicting whether emails are likely to be spam or whether an image is of a dog or cat.

In both of these examples, the model will be trained on data that is either classified as spam or not spam, and whether an image contains a dog or cat.
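As a tiny, self-contained illustration of the spam example (toy data, scikit-learn, not drawn from any production system):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# A handful of labeled examples: 1 = spam, 0 = not spam.
emails = [
    "win a free prize now",
    "limited offer click here",
    "meeting notes attached",
    "lunch tomorrow at noon",
]
labels = [1, 1, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)  # bag-of-words features

model = MultinomialNB()
model.fit(X, labels)

new_emails = vectorizer.transform(["free prize offer", "notes from the meeting"])
print(model.predict(new_emails))  # expected: spam, not spam
```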

Regression is when the output variable is a real or continuous value, and there is a relationship between the variables. Essentially, a change in one variable is associated with a change that occurs in the other variable.

The model then learns the relationship between them and predicts what the outcome will be depending on the data it is given.

For example, predicting humidity based on a given temperature value or what the stock price is likely to be at a given time.
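A matching regression sketch, with made-up temperature and humidity readings and scikit-learn's LinearRegression:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Made-up readings: humidity tends to fall as temperature rises.
temperature = np.array([[18.0], [21.0], [24.0], [27.0], [30.0], [33.0]])
humidity = np.array([78.0, 72.0, 66.0, 61.0, 55.0, 50.0])

model = LinearRegression().fit(temperature, humidity)
print(model.predict([[25.0]]))  # predicted (continuous) humidity at 25°C
```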

Unsupervised learning is when the model uses unlabeled data and learns by itself, without any supervision. Essentially, unlike supervised learning, the model will act on the input data without any guidance.

It does not require any labeled data, as its job is to look for hidden patterns or structures in the input data and then organize it according to any similarities and differences.

For example, if a model is given pictures of both dogs and cats, it isn't already trained to know the features that differentiate both. Still, it can categorize them based on patterns of similarities and differences.

There are also two main types of unsupervised learning: clustering and association.

Clustering is the method of sorting objects into groups so that objects that are similar to each other end up in the same cluster, while objects that are dissimilar to those in a particular cluster are placed in another.

Examples of this include recommendation systems and image classification.
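A small k-means sketch shows the idea: the points below are synthetic and unlabeled, and the algorithm groups them purely by similarity.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two synthetic "natural" groups, but no labels are given to the model.
points = np.vstack([
    rng.normal(loc=[0.0, 0.0], scale=0.5, size=(50, 2)),
    rng.normal(loc=[5.0, 5.0], scale=0.5, size=(50, 2)),
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(kmeans.labels_[:5], kmeans.labels_[-5:])  # cluster assignments discovered from the data
```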

Association is rule-based and is used to discover the probability of the co-occurrence of items within a collection of values.

Examples include fraud detection, customer segmentation, and discovering purchasing habits.
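The essence of association can be shown with a hand-rolled co-occurrence count over made-up shopping baskets (real systems use dedicated algorithms such as Apriori):

```python
from collections import Counter
from itertools import combinations

baskets = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk", "butter"},
]

item_counts = Counter(item for basket in baskets for item in basket)
pair_counts = Counter(
    pair for basket in baskets for pair in combinations(sorted(basket), 2)
)

# Confidence of the rule "if A is in the basket, B is too": P(B | A).
for (a, b), count in pair_counts.items():
    print(f"{a} -> {b}: support={count}/{len(baskets)}, confidence={count / item_counts[a]:.2f}")
```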

Semi-supervised learning bridges both supervised and unsupervised learning by using a small section of labeled data, together with unlabeled data, to train the model. It, therefore, works for various problems, from classification and regression to clustering and association.

Semi-supervised learning can be used if there is a large amount of unlabeled data, as it only requires a small portion of the data to be labeled to train the model, which can then be applied to the remaining unlabeled data.

Google has used semi-supervised learning to better understand language used within a search to ensure it serves the most relevant content for a particular query.
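A brief scikit-learn sketch of the semi-supervised setup (synthetic data, with roughly 90% of the labels hidden and marked as -1) could look like this:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Hide most labels: -1 marks an unlabeled sample.
rng = np.random.default_rng(0)
y_partial = y.copy()
y_partial[rng.random(len(y)) < 0.9] = -1

# Self-training: a base classifier is fit on the labeled slice, then its
# confident predictions are used to label (and learn from) the rest.
model = SelfTrainingClassifier(LogisticRegression(max_iter=1000))
model.fit(X, y_partial)

print(f"accuracy on all points: {model.score(X, y):.2f}")
```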

Reinforcement learning is when a model is trained to return the optimum solution to a problem by taking a sequential approach to decision-making.

It uses trial and error from its own experiences to define the output, with rewards for positive behavior and negative reinforcement if it is not working towards the goal.

The model interacts with the environment that has been set up and comes up with solutions without human interference.

Human interference will then be introduced to provide either positive or negative reinforcement depending on how close to the goal the output is.

Examples include robotics (think robots working in a factory assembly line) and gaming, with AlphaGo as the most famous example. This is where the model was trained to beat the world Go champion by using reinforcement learning to define the best approach to win the game.
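A toy tabular Q-learning loop (unrelated to AlphaGo's actual training) makes the trial-and-error and reward ideas concrete: the agent learns to walk right along a short corridor to reach a rewarding goal state.

```python
import numpy as np

n_states, n_actions = 5, 2           # actions: 0 = step left, 1 = step right
Q = np.zeros((n_states, n_actions))  # value estimates learned by trial and error
alpha, gamma, epsilon = 0.1, 0.9, 0.2
rng = np.random.default_rng(0)

for episode in range(500):
    state = 0
    while state != n_states - 1:                       # the last state is the goal
        if rng.random() < epsilon:                     # explore
            action = int(rng.integers(n_actions))
        else:                                          # exploit the current estimate
            action = int(np.argmax(Q[state]))
        next_state = max(0, state - 1) if action == 0 else min(n_states - 1, state + 1)
        reward = 1.0 if next_state == n_states - 1 else 0.0  # positive reinforcement at the goal
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state

print(np.argmax(Q[:-1], axis=1))  # learned policy for non-goal states: should all prefer "right" (1)
```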

There is no doubt that machine learning has many benefits, and the use of machine learning models is ever-growing.

However, it's important to consider the ethical concerns that come with using technology of this kind. These concerns include:

Netflix uses machine learning in a number of ways to provide the best experience for its users.

The company is also continually collecting large amounts of data, including ratings, the location of users, the length of time for which something is watched, if content is added to a list, and even whether something has been binge-watched.

This data is then used to further improve its machine learning models.

TV and movie recommendations on Netflix are personalized to each individual user's preferences. To do this, Netflix deployed a recommendation system that considers previous content consumed, users' most viewed genres, and content watched by users with similar preferences.

Netflix discovered that the images used on the browse screen make a big difference in whether users watch something or not.

It, therefore, uses machine learning to create and display different images according to a user's individual preferences. It does this by analyzing a user's previous content choices and learning the kind of image that is more likely to encourage them to click.

These are just two examples of how Netflix uses machine learning on its platform. If you want to learn more about how it is used, you can check out the company's research areas blog.

With millions of listings in locations across the globe at different price points, Airbnb uses machine learning to ensure users can find what they are looking for quickly and to improve conversions.

There are a number of ways the company deploys machine learning, and it shares a lot of details on its engineering blog.

As hosts can upload images for their properties, Airbnb found that a lot of images were mislabeled. To try and optimize user experience, it deployed an image classification model that used computer vision and deep learning.

The project aimed to categorize photos based on different rooms. This enabled Airbnb to show listing images grouped by room type and ensure the listing follows Airbnb's guidelines.

In order to do this, it retrained the ResNet50 image classification neural network with a small number of labeled photos. This enabled it to accurately classify current and future images uploaded to the site.
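Airbnb's code isn't shared here, but a hedged sketch of the general recipe it describes, fine-tuning a pretrained ResNet50 with a new classification head in PyTorch/torchvision, might look like this (placeholder data, hypothetical number of room types):

```python
import torch
from torch import nn
from torchvision import models

NUM_ROOM_TYPES = 5  # hypothetical number of room categories

# Load a ResNet50 pretrained on ImageNet (weights download on first use).
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                      # freeze the pretrained backbone

model.fc = nn.Linear(model.fc.in_features, NUM_ROOM_TYPES)  # new classification head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step; real labeled photos would come from a DataLoader.
images = torch.randn(8, 3, 224, 224)                 # placeholder image batch
labels = torch.randint(0, NUM_ROOM_TYPES, (8,))      # placeholder room-type labels
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```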

To provide a personalized experience for users, Airbnb deployed a ranking model that optimized search and discovery. The data for this model came from user engagement metrics such as clicks and bookings.

Listings started by being ordered randomly, and then various factors were given a weight within the model, including price, quality, and popularity with users. The more weight a listing had, the higher it would be displayed in the listings.

This has since been optimized further, with training data including the number of guests, price, and availability also included within the model to discover patterns and preferences to create a more personalized experience.

Spotify also uses several machine learning models to continue revolutionizing how audio content is discovered and consumed.

Spotify uses a recommendation algorithm that predicts a user's preference based on a collection of data from other users. This works because of the numerous similarities between the types of music that clusters of people listen to.

Playlists are one way it can do this, using statistical methods to create personalized playlists for users, such as Discover Weekly and daily mixes.

It can then use further data to adjust these depending on a user's behavior.

With personal playlists also being created in the millions, Spotify has a huge database to work with, particularly if songs are grouped and labeled with semantic meaning.

This has allowed the company to recommend songs to users with similar music tastes. The machine learning model can serve songs to users with a similar listening history to aid music discovery.
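A stripped-down collaborative-filtering sketch (made-up play counts, not Spotify's actual system) shows how other users' listening histories can be weighted by similarity to score tracks a user hasn't heard yet:

```python
import numpy as np

# Rows = users, columns = tracks, values = made-up play counts.
plays = np.array([
    [5, 3, 0, 0],
    [4, 0, 0, 1],
    [0, 0, 4, 5],
    [0, 1, 5, 4],
], dtype=float)

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)

target = 0                                              # recommend for user 0
sims = np.array([cosine(plays[target], plays[u]) for u in range(len(plays))])
sims[target] = 0.0                                      # ignore self-similarity

scores = sims @ plays                                   # weight other users' plays by similarity
scores[plays[target] > 0] = -np.inf                     # exclude tracks already heard
print(f"recommend track {int(np.argmax(scores))} for user {target}")
```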

With Natural Language Processing (NLP) algorithms enabling computers to understand text better than ever before, Spotify is able to categorize music based on the language used to describe it.

It can scrape the web for text on a particular song and then use NLP to categorize songs based on this context.

This also helps algorithms identify songs or artists that belong in similar playlists, which further helps the recommendation system.

While AI tools such as machine learning content generators can be a source of fake news, machine learning models that use natural language processing can also be used to assess articles and determine whether they include false information.

Social network platforms use machine learning to find words and patterns in shared content that could indicate fake news is being shared and flag it appropriately.

There is an example of a neural network that was trained on over 100,000 images to distinguish dangerous skin lesions from benign ones. When tested against human dermatologists, the model could accurately detect 95% of skin cancer from the images provided, compared to 86.6% by the dermatologists.

As the model missed fewer melanomas, it was determined to have a higher sensitivity and was continually trained throughout the process.

There is hope that machine learning and AI, together with human intelligence, may become a useful tool for faster diagnosis.

Other ways image detection is being used in healthcare include identifying abnormalities in X-rays or scans and identifying key markups that may indicate an underlying illness.

Protection Assistant for Wildlife Security is an AI system that is being used to evaluate information about poaching activity to create a patrol route for conservationists to help prevent poaching attacks.

The system is continually being provided with more data, such as locations of traps and sightings of animals, which helps it to become smarter.

The predictive analysis enables patrol units to identify areas where it is likely animal poachers will visit.

Machine learning models can be trained to improve the quality of website content by predicting what both users and search engines would prefer to see.

The model can be trained on the most important insights, including search volume and traffic, conversion rate, internal links, and word count.

A content quality score can then be generated for each page, which will help inform where optimizations need to be made and can be particularly useful for content audits.
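A hedged sketch of what such a model could look like, with entirely synthetic page data and an arbitrary choice of gradient-boosted regression, is shown below; the feature names mirror the insights mentioned above.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n_pages = 300

# Synthetic page-level features: word count, internal links, traffic, conversion rate.
features = np.column_stack([
    rng.integers(200, 3000, n_pages),
    rng.integers(0, 50, n_pages),
    rng.integers(0, 10_000, n_pages),
    rng.random(n_pages) * 0.1,
])
# Synthetic "quality" target used only to make the example runnable.
quality = (
    0.0005 * features[:, 0] + 0.05 * features[:, 1]
    + 0.0001 * features[:, 2] + 20 * features[:, 3]
    + rng.normal(scale=0.5, size=n_pages)
)

model = GradientBoostingRegressor().fit(features, quality)
new_page = [[800, 12, 1500, 0.03]]  # word count, internal links, traffic, conversion rate
print(f"predicted content quality score: {model.predict(new_page)[0]:.2f}")
```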

Natural Language Processing (NLP) uses machine learning to reveal the structure and meaning of text. It analyzes text to understand the sentiment and extract key information.

NLP focuses on understanding context rather than just words. It is more about the content around keywords and how they fit together into sentences and paragraphs than about keywords on their own.

The overall sentiment is also taken into account, as it refers to the feeling behind the search query. The types of words used within the search help to determine whether it is classified as having a positive, negative, or neutral sentiment.

The key areas of importance for NLP are:

Google has a free NLP API demo that can be used to analyze how text is seen and understood by Google. This enables you to identify improvements to content.
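For instance, a short script against the Cloud Natural Language client library can score the sentiment of a piece of text. This is a sketch, not the demo itself, and it assumes the google-cloud-language package is installed and API credentials are configured.

```python
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

text = "The new running shoes are fantastic and worth every penny."
document = language_v1.Document(
    content=text, type_=language_v1.Document.Type.PLAIN_TEXT
)

sentiment = client.analyze_sentiment(request={"document": document}).document_sentiment
# score > 0 suggests positive sentiment, score < 0 negative; magnitude reflects strength.
print(f"score={sentiment.score:.2f}, magnitude={sentiment.magnitude:.2f}")
```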

AI and machine learning are used throughout Google's many products and services. The most popular use in the context of search is to understand language and the intent behind search queries.

It's interesting to see how things have evolved in search due to advancements in the technology used, thanks to machine learning models and algorithms.

Previously, the search systems looked for matching words only, which didn't even consider misspellings. Eventually, algorithms were created to find patterns that identified misspellings and potential typos.

There have been several systems introduced throughout the last few years after Google confirmed in 2016 its intention to become a machine learning-first company.

The first of these was RankBrain, which was introduced in 2015 and helps Google to understand how different words are related to different concepts.

This enables Google to take a broad query and better define how it relates to real-world concepts.

Google's systems learn from seeing how words are used in a query and on the page, which they can then use to understand terms and match them to related concepts to understand what a user is searching for.

Neural matching was launched in 2018 and introduced to local search in 2019.

This helps Google understand how queries relate to pages by looking at the content on a page, or a search query, and understanding it within the context of the page content or query.

Most queries made today make use of neural matching, and it is used in rankings.

See original here:
Machine Learning Examples In The Real World (And For SEO) (Festive Flashback) - Search Engine Journal
