Archive for the ‘Machine Learning’ Category

Study finds workplace machine learning improves accuracy, but also increases human workload – Tech Xplore


by European School of Management and Technology (ESMT)


New research from ESMT Berlin shows that using machine learning in the workplace always improves the accuracy of human decision-making; however, it can also lead humans to exert more cognitive effort when making decisions.

These findings come from research by Tamer Boyaci and Francis de Véricourt, both professors of management science at ESMT Berlin, alongside Caner Canyakmaz, previously a post-doctoral fellow at ESMT and now an assistant professor of operations management at Ozyegin University. The researchers wanted to investigate how machine-based predictions may affect the decision process and outcomes of a human decision-maker. Their paper has been published in Management Science.

Interestingly, the use of machines increases a human's workload most when the professional is cognitively constrained, for instance when experiencing time pressure or multitasking. Yet situations where decision-makers face a high workload are precisely when introducing AI to alleviate some of that load appears most tempting. The research suggests that using AI in this way to speed up the process can backfire, and actually increase rather than decrease the human's cognitive effort.

The researchers also found that, although machine input always improves the overall accuracy of human decisions, it can also increase the likelihood of certain types of errors, such as false positives. For the study, a machine learning model was used to identify the differences in accuracy, propensity, and the levels of cognitive effort exerted by humans, comparing solely human-made decisions to machine-aided decisions.

"The rapid adoption of AI technologies by many organizations has recently raised concerns that AI may eventually replace humans in certain tasks," says Professor de Vricourt. "However, when used alongside human rationale, machines can significantly enhance the complementary strengths of humans," he says.

The researchers say their findings clearly showcase the value of human-machine collaboration to professionals. But humans should also be aware that, though machines can provide incredibly accurate information, they often still need to exert cognitive effort to assess their own information and compare the machine's prescription with their own conclusions before making a decision. The researchers say the level of cognitive effort needed increases when humans are under pressure to deliver a decision.

"Machines can perform specific tasks with incredible accuracy, due to their incredible computing power, while in contrast, human decision-makers are flexible and adaptive but constrained by their limited cognitive capacitytheir skills complement each other," says Professor Boyaci. "However, humans must be wary of the circumstances of utilizing machines and understand when it is effective and when it is not."

Using the example of a doctor and patient, the researchers' findings suggest that the use of machines will improve overall diagnostic accuracy and decrease the number of misdiagnosed sick patients. However, if the disease incidence is low and time is constrained, introducing a machine to help doctors make their diagnoses can lead to more misdiagnosed patients and greater human cognitive effort per diagnosis, owing to the additional effort needed to resolve the ambiguity that machine input can create.
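
A quick back-of-the-envelope Bayes calculation illustrates why low incidence makes false positives loom so large; the numbers below are hypothetical and are not taken from the paper:

```python
# Illustrative only: why low disease incidence inflates false positives.
# All numbers are hypothetical, not taken from the Management Science paper.
prevalence = 0.01      # 1% of patients actually have the disease
sensitivity = 0.95     # P(signal positive | sick)
specificity = 0.90     # P(signal negative | healthy)

p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
p_sick_given_positive = sensitivity * prevalence / p_positive

print(f"P(positive signal) = {p_positive:.3f}")       # ~0.109
print(f"P(sick | positive) = {p_sick_given_positive:.3f}")  # ~0.088
```

At 1% prevalence, fewer than one in ten positive signals corresponds to a genuinely sick patient, so each positive demands extra effort from the doctor to adjudicate.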

The researchers state that their findings offer both hope and caution for those looking to implement machines in the workplace. On the positive side, average accuracy improves, and when the machine input tends to confirm what is already expected, all error rates decrease and the human becomes more "efficient," as she reduces her cognitive effort.

However, incorporating machine-based predictions in human decisions is not always beneficial, neither in terms of the reduction of errors nor the amount of cognitive effort. In fact, introducing a machine to improve a decision-making process can be counter-productive as it can increase certain error types and the time and cognitive effort it takes to reach a decision.

The findings underscore the critical impact machine-based predictions have on human judgment and decisions. These findings provide guidance on when and how machine input should be considered, and hence on the design of human-machine collaboration.

More information: Tamer Boyaci et al, Human and Machine: The Impact of Machine Input on Decision Making Under Cognitive Limitations, Management Science (2023). DOI: 10.1287/mnsc.2023.4744

Journal information: Management Science

Provided by European School of Management and Technology (ESMT)


Non-Invasive Medical Diagnostics: Know Labs’ Partnership With Edge Impulse Has Potential To Improve Healthcare … – Benzinga

Machine learning has revolutionized the field of biomedical research, enabling faster and more accurate development of algorithms that can improve healthcare outcomes. Biomedical researchers are using machine learning tools and algorithms to analyze vast and complex health data, and quickly identify patterns and relationships that were previously difficult to discern.

Know Labs, an emerging developer of non-invasive medical diagnostic technology, is readying a breakthrough in non-invasive glucose monitoring, which has the potential to positively impact the lives of millions. One of the key elements behind this technology is the ability to process large amounts of novel data generated by its Bio-RFID radio frequency sensor, using machine learning algorithms from Edge Impulse.

One significant way in which machine learning is improving algorithm development in the biomedical space is by developing more accurate predictions and insights. Machine learning algorithms use advanced statistical techniques to identify correlations and relationships that may not be apparent to human researchers.

Machine learning algorithms can analyze a patient's entire medical history and provide predictions about their potential health outcomes, which can help medical professionals intervene earlier to prevent diseases from progressing. Machine learning algorithms can also be used to develop more personalized treatments.

Historically, this process was time-consuming and prone to error due to the difficulty in managing large datasets. Machine learning algorithms, on the other hand, can quickly and easily process vast amounts of data and identify patterns without human intervention, resulting in decreased manual workload and reduced error.
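
As a rough illustration of the kind of model involved, here is a minimal sketch of outcome prediction from tabular patient records. The features, data, and model choice are hypothetical stand-ins, not any particular company's pipeline:

```python
# A minimal sketch of predicting a health outcome from tabular patient data.
# Features, data, and model are hypothetical placeholders for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical standardized features: age, BMI, systolic BP, fasting glucose
X = rng.normal(size=(1000, 4))
# Synthetic outcome label driven by a weighted combination of the features
y = (X @ np.array([0.8, 0.5, 0.4, 1.2]) + rng.normal(size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)   # learn from labeled records
print("held-out accuracy:", model.score(X_test, y_test))
```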

As the technology and use cases of machine learning continue to grow, it is evident that it can help realize a future of improved health care by unlocking the potential of large biomedical and patient datasets.

Already, early uses of machine learning in diagnosis and treatment have shown promise to diagnose breast cancer from X-rays, discover new antibiotics, predict the onset of gestational diabetes from electronic health records, and identify clusters of patients that share a molecular signature of treatment response.

With reports indicating that 400,000 hospitalized patients experience some type of preventable medical error each year, machine learning can help predict and diagnose diseases at a faster rate than most medical professionals, saving approximately $20 billion annually.

Companies like Linus Health, Viz.ai, PathAI, and Regard are demonstrating artificial intelligence (AI) and machine learning's (ML) ability to reduce errors and save lives.

Advancements in patient care, including remote physiologic monitoring and care delivery, highlight the growing demand for technology that enhances non-invasive means of medical diagnosis.

One significant area this could benefit is monitoring blood glucose non-invasively, without pricking the finger for blood, which is important for patients managing their type 1 and type 2 diabetes. While glucose biosensors have existed for over half a century, they can be classified into two groups: electrochemical sensors relying on direct interaction with an analyte, and electromagnetic sensors that leverage antennas and/or resonators to detect changes in the dielectric properties of the blood.

Using smart devices essentially involves shining light into the body using optical sensors and quantifying how the light reflects back to measure a particular metric. Already there are smartwatches, fitness trackers, and smart rings from companies like Apple Inc. AAPL, Samsung Electronics Co Ltd. (KRX: 005930), and Google (Alphabet Inc. GOOGL) that measure heart rate, blood oxygen levels, and a host of other metrics.

But applying this tech to measure blood glucose is much more complicated, and the data may not be accurate. Know Labs seems to be on a path to solving this challenge.

The Seattle-based company has partnered with Edge Impulse, providers of a machine learning development toolkit, to interpret robust data from its proprietary Bio-RFID technology. The algorithm refinement process that Edge Impulse provides is a critical step toward interpreting the existing large and novel datasets, which will ultimately support large-scale clinical research.

The Bio-RFID technology is a non-invasive medical diagnostic technology that uses a novel radio frequency sensor that can safely see through the full cellular stack to accurately identify a unique molecular signature of a wide range of organic and inorganic materials, molecules, and compositions of matter.

Microwave and radio frequency sensors operate over a broader frequency range, and with this comes an extremely broad dataset that requires sophisticated algorithm development. Working with Know Labs, Edge Impulse uses its machine learning tools to train a neural network model to interpret this data and make blood glucose level predictions, using a popular continuous glucose monitor (CGM) as a proxy for blood glucose. Edge Impulse provides a user-friendly approach to machine learning that allows product developers and researchers to optimize the performance of sensory data analysis. This technology is based on AutoML and TinyML to make AI more accessible, enabling quick and efficient machine learning modeling.
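
For readers curious what such a model might look like, here is a minimal, hypothetical sketch of a neural network regressing glucose values from RF spectral readings. The shapes, data, and architecture are assumptions for illustration only, not Know Labs' or Edge Impulse's actual pipeline:

```python
# A hypothetical sketch: regress a CGM-style glucose value (mg/dL) from
# radio frequency spectral sweeps. Data and architecture are placeholders.
import numpy as np
import tensorflow as tf

N_SWEEPS, N_FREQS = 5000, 256                  # hypothetical dataset shape
X = np.random.rand(N_SWEEPS, N_FREQS).astype("float32")   # fake RF spectra
y = 70 + 110 * np.random.rand(N_SWEEPS).astype("float32") # fake CGM labels

model = tf.keras.Sequential([
    tf.keras.Input(shape=(N_FREQS,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),                  # predicted glucose, mg/dL
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X, y, epochs=5, batch_size=64, validation_split=0.2, verbose=0)
```

With real data, the mean absolute error against the CGM reference would be the figure of merit; random data like this learns nothing, which is the point of keeping the sketch clearly hypothetical.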

The partnership between Know Labs, a company committed to making a difference in people's lives by developing convenient and affordable non-invasive medical diagnostics solutions, and Edge Impulse, makers of tools that enable the creation and deployment of advanced AI algorithms, is a prime example for how responsible machine learning applications could significantly improve and change healthcare diagnostics.


This post contains sponsored advertising content. This content is for informational purposes only and is not intended to be investing advice.


17 AI and machine learning terms everyone needs to know – India Today

By India Today Education Desk: Artificial intelligence and machine learning are rapidly evolving fields with many exciting new developments. As these technologies become more pervasive in our lives, it is important for everyone to be familiar with the terminology and concepts behind them.

The terms discussed here are just the tip of the iceberg, but they provide a good foundation for understanding the basics of AI and machine learning.


By keeping up to date with these developments, students can prepare themselves for the future and potentially even contribute to the field themselves.

Here are 17 AI and machine learning terms everyone needs to know:

1. Anthropomorphism: This is the phenomenon by which people attribute human-like qualities to AI chatbots. But it's important to remember they are not sentient beings and can only mimic language.

2. Bias: Errors that can occur in large language models if training data influences the model's output, leading to inaccurate predictions and offensive responses.

3. ChatGPT: OpenAI's artificial intelligence language model can answer questions, generate code, write poetry, plan vacations, translate languages, and now respond to images and pass the Uniform Bar Exam.

4. Bing: Microsoft's chatbot, integrated into its search engine, can have open-ended conversations on any topic, but has been criticized for occasional inaccuracies, misleading responses, and strange answers.

5. Bard: Google's chatbot was designed as a creative tool to draft emails and poems, but can also generate ideas, write blog posts, and provide factual or opinion-based answers.

6. Ernie: Baidu's rival to ChatGPT was revealed in March 2023 but had a disappointing debut after its demonstration turned out to have been pre-recorded.

7. Emergent behavior: Large language models can exhibit unexpected abilities, such as writing code, composing music, and generating fictional stories, based on their learned patterns and training data.

8. Generative AI: Technology that creates original content, including text, images, video, and computer code, by identifying patterns in large quantities of training data.

9. Hallucination: A phenomenon in large language models where they may provide factually incorrect, irrelevant, or nonsensical answers due to limitations in their training data and architecture.

10. Large language model (LLM): A neural network that learns skills, such as generating language and conducting conversations, by analyzing vast amounts of text from across the internet.

11. Natural language processing (NLP): Techniques used by large language models to understand and generate human language, including text classification and sentiment analysis, using machine learning algorithms, statistical models, and linguistic rules.

12. Neural network: A mathematical system modeled on the human brain that learns skills by finding patterns in data through layers of artificial neurons, outputting predictions or classifications.

13. Parameters: Numerical values that define a language model's structure and behavior, learned during training. They are used to determine output likelihood; more parameters mean more complexity and accuracy, but require more computational power.

14. Prompt: The starting point for a language model to generate text, providing context for text generation in natural-language-processing tasks such as chatbots and question-answering systems.

15. Reinforcement learning: A technique that teaches an AI model to find the best result through trial and error, receiving rewards or punishments based on its results, often enhanced by human feedback for games and complex tasks.

16. Transformer model: A neural network architecture that uses self-attention to understand context and long-term dependencies in language, used in many natural language processing applications such as chatbots and sentiment analysis tools.

17. Supervised learning: A type of machine learning where a computer is trained to make predictions based on labeled examples, learning a function that maps input to output. It is used in applications like image and speech recognition, and natural language processing.
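
To make a few of these terms concrete (supervised learning, neural network, parameters), here is a toy example using scikit-learn's built-in digits dataset; the model choice is illustrative and not tied to any system mentioned above:

```python
# A toy supervised-learning example: a small neural network learns a mapping
# from labeled images of digits to their labels, then predicts on unseen data.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier  # a small neural network

X, y = load_digits(return_X_y=True)               # labeled examples
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# hidden_layer_sizes sets how many parameters the network learns in training
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)                         # training on labeled data
print("test accuracy:", clf.score(X_test, y_test))
```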


Harnessing Machine Learning to Make Complex Systems More … – Lawrence Berkeley National Laboratory (.gov)

Getting something for nothing doesn't work in physics. But it turns out that, by thinking like a strategic gamer, and with some help from a demon, improved energy efficiency for complex systems like data centers might be possible.

In computer simulations, Stephen Whitelam of the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) used neural networks (a type of machine learning model that mimics human brain processes) to train nanosystems, which are tiny machines about the size of molecules, to work with greater energy efficiency.

What's more, the simulations showed that the learned protocols could draw heat from the systems by virtue of constantly measuring them to find the most energy-efficient operations.

"We can get energy out of the system, or we can store work in the system," Whitelam said.

It's an insight that could prove valuable, for example, in operating very large systems like computer data centers. Banks of computers produce enormous amounts of heat that must be extracted, using still more energy, to prevent damage to the sensitive electronics.


Whitelam conducted the research at the Molecular Foundry, a DOE Office of Science user facility at Berkeley Lab. His work is described in a paper published in Physical Review X.

Asked about the origin of his ideas, Whitelam said, "People had used techniques in the machine learning literature to play Atari video games that seemed naturally suited to materials science."

In a video game like Pac-Man, he explained, the aim with machine learning would be to choose a particular time for an action (up, down, left, right, and so on) to be performed. Over time, the machine learning algorithms learn the best moves to make, and when, to achieve high scores. The same algorithms can work for nanoscale systems.

Whitelam's simulations are also something of an answer to an old thought experiment in physics called Maxwell's Demon. Briefly, in 1867, physicist James Clerk Maxwell proposed a box filled with a gas, and in the middle of the box there would be a massless demon controlling a trap door. The demon would open the door to allow faster molecules of the gas to move to one side of the box and slower molecules to the opposite side.

Eventually, with all molecules so segregated, the slow side of the box would be cold and the fast side would be hot, matching the energy of the molecules.

"The system would constitute a heat engine," Whitelam said. Importantly, however, Maxwell's Demon doesn't violate the laws of thermodynamics by getting something for nothing, because information is equivalent to energy. Measuring the position and speed of molecules in the box costs more energy than can be derived from the resulting heat engine.

And heat engines can be useful things. "Refrigerators provide a good analogy," Whitelam said. As the system runs, the food inside stays cold (the desired outcome) even though the back of the fridge gets hot as a product of the work done by the refrigerator's motor.

In Whitelam's simulations, the machine learning protocol can be thought of as the demon. In the process of optimization, it converts information drawn from the system being modeled into energy, in the form of heat.

In one simulation, Whitelam optimized the process of dragging a nanoscale bead through water. He modeled a so-called optical trap in which laser beams, acting like tweezers of light, can hold and move a bead around.

"The name of the game is: Go from here to there with as little work done on the system as possible," Whitelam said. The bead jiggles under natural fluctuations called Brownian motion as water molecules bombard it. Whitelam showed that if these fluctuations can be measured, the bead can be moved at the most energy-efficient moments.

"Here we're showing that we can train a neural-network demon to do something similar to Maxwell's thought experiment, but with an optical trap," he said.
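
To give a flavor of what such a simulation involves, here is a heavily simplified, hypothetical sketch: an overdamped Langevin bead dragged by a trap whose schedule is tuned by random mutation to minimize average work. A crude mutation search stands in for the study's neural-network protocols, and the measurement-feedback "demon" element is omitted entirely:

```python
# A toy version of the bead-dragging problem: move an optical trap from
# lambda=0 to lambda=1 in fixed time while minimizing the average work done
# on a bead undergoing Brownian motion. Parameters are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
K, DT, STEPS, TRIALS = 5.0, 0.01, 100, 200   # trap stiffness, time step, etc.

def mean_work(increments):
    """Average work to drag the bead along the given trap schedule."""
    lam = np.cumsum(increments)               # trap positions over time
    work = np.zeros(TRIALS)
    x = np.zeros(TRIALS)                      # beads start at the trap center
    prev = 0.0
    for step in range(STEPS):
        # work done on the system when the trap jumps from prev to lam[step]
        work += 0.5 * K * ((x - lam[step])**2 - (x - prev)**2)
        # overdamped Langevin relaxation in the new trap, plus thermal noise
        x += -K * (x - lam[step]) * DT + np.sqrt(2 * DT) * rng.normal(size=TRIALS)
        prev = lam[step]
    return work.mean()

# start from a naive linear protocol and keep mutations that lower the work
best = np.full(STEPS, 1.0 / STEPS)
best_w = mean_work(best)
for _ in range(300):
    trial = best + 0.002 * rng.normal(size=STEPS)
    trial *= 1.0 / trial.sum()                # schedule must still end at 1
    w = mean_work(trial)
    if w < best_w:
        best, best_w = trial, w
print(f"mean work after optimization: {best_w:.4f}")
```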

Whitelam extended the idea to microelectronics and computation. He used the machine learning protocol to simulate flipping the state of a nanomagnetic bit between 0 and 1, which is a basic information-erasure/information-copying operation in computing.

"Do this again, and again. Eventually, your demon will learn how to flip the bit so as to absorb heat from the surroundings," he said. He returns to the refrigerator analogy: "You could make a computer that cools down as it runs, with the heat being sent somewhere else in your data center."

Whitelam said the simulations are like a testbed for understanding concepts and ideas. "And here the idea is just showing that you can perform these protocols, either with little energy expense, or with energy sucked in at the cost of it going somewhere else, using measurements that could apply in a real-life experiment," he said.

This research was supported by the Department of Energy's Office of Science.

# # #

Founded in 1931 on the belief that the biggest scientific challenges are best addressed by teams, Lawrence Berkeley National Laboratory and its scientists have been recognized with 16 Nobel Prizes. Today, Berkeley Lab researchers develop sustainable energy and environmental solutions, create useful new materials, advance the frontiers of computing, and probe the mysteries of life, matter, and the universe. Scientists from around the world rely on the Lab's facilities for their own discovery science. Berkeley Lab is a multiprogram national laboratory, managed by the University of California for the U.S. Department of Energy's Office of Science.

DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit energy.gov/science.


Humans in the Loop: AI & Machine Learning in the Bloomberg Terminal – Yahoo Finance

Originally published on bloomberg.com

NORTHAMPTON, MA / ACCESSWIRE / May 12, 2023 / The Bloomberg Terminal provides access to more than 35 million financial instruments across all asset classes. That's a lot of data, and to make it useful, AI and machine learning (ML) are playing an increasingly central role in the Terminal's ongoing evolution.

Machine learning is about scouring data at a speed and scale far beyond what human analysts can do. Then, the patterns or anomalies that are discovered can be used to derive powerful insights and guide the automation of all kinds of arduous or tedious tasks that humans used to have to perform manually.

While AI continues to fall short of human intelligence in many applications, there are areas where it vastly outshines the performance of human agents. Machines can identify trends and patterns hidden across millions of documents, and this ability improves over time. Machines also behave consistently, in an unbiased fashion, without committing the kinds of mistakes that humans inevitably make.

"Humans are good at doing things deliberately, but when we make a decision, we start from whole cloth," says Gideon Mann, Head of ML Product & Research in Bloomberg's CTO Office. "Machines execute the same way every time, so even if they make a mistake, they do so with the same error characteristic."

The Bloomberg Terminal currently employs AI and ML techniques in several exciting ways, and we can expect this practice to expand rapidly in the coming years. The story begins some 20 years ago

Keeping Humans in the Loop

When we started in the '80s, data extraction was a manual process. Today, our engineers and data analysts build, train, and use AI to process unstructured data at massive speeds and scale - so our customers are in the know faster.

The rise of the machines

Prior to the 2000s, all tasks related to data collection, analysis, and distribution at Bloomberg were performed manually, because the technology did not yet exist to automate them. The new millennium brought some low-level automation to the company's workflows, with the emergence of primitive models operating by a series of if-then rules coded by humans. As the decade came to a close, true ML took flight within the company. Under this new approach, humans annotate data in order to train a machine to make various associations based on their labels. The machine "learns" how to make decisions, guided by this training data, and produces ever more accurate results over time. This approach can scale dramatically beyond traditional rules-based programming.


In the last decade, there has been an explosive growth in the use of ML applications within Bloomberg. According to James Hook, Head of the company's Data department, there are a number of broad applications for AI/ML and data science within Bloomberg.

One is information extraction, where computer vision and/or natural language processing (NLP) algorithms are used to read unstructured documents - data that's arranged in a format that's typically difficult for machines to read - in order to extract semantic meaning from them. With these techniques, the Terminal can present insights to users that are drawn from video, audio, blog posts, tweets, and more.

Anju Kambadur, Head of Bloomberg's AI Engineering group, explains how this works:

"It typically starts by asking questions of every document. Let's say we have a press release. What are the entities mentioned in the document? Who are the executives involved? Who are the other companies they're doing business with? Are there any supply chain relationships exposed in the document? Then, once you've determined the entities, you need to measure the salience of the relationships between them and associate the content with specific topics. A document might be about electric vehicles, it might be about oil, it might be relevant to the U.S., it might be relevant to the APAC region - all of these are called topic codes' and they're assigned using machine learning."

All of this information, and much more, can be extracted from unstructured documents using natural language processing models.
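
As a generic illustration of that first step, entity extraction might look like the sketch below with an off-the-shelf NER model. Bloomberg's actual pipeline is proprietary; spaCy and the sample press-release text here are stand-ins:

```python
# A generic sketch of entity extraction from a press release using spaCy.
# This is an off-the-shelf stand-in, not Bloomberg's proprietary pipeline.
import spacy

# requires: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

press_release = (
    "Acme Motors announced a battery supply agreement with VoltCell "
    "covering its electric-vehicle plants in the APAC region."
)
doc = nlp(press_release)
for ent in doc.ents:
    print(ent.text, ent.label_)   # companies, regions, and other entities
```

A production system would then score the salience of each entity and assign topic codes with separate classifiers, as described in the quote above.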

Another area is quality control, where techniques like anomaly detection are used to spot problems with dataset accuracy, among other areas. Using anomaly detection methods, the Terminal can spot the potential for a hidden investment opportunity, or flag suspicious market activity. For example, if a financial analyst were to change their rating of a particular stock following the company's quarterly earnings announcement, anomaly detection would be able to provide context around whether this is considered typical behavior, or whether this action is worthy of being presented to Bloomberg clients as a data point worth considering in an investment decision.
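
A minimal sketch of that kind of check, using scikit-learn's IsolationForest on hypothetical rating-change features (not Bloomberg's actual method):

```python
# A sketch of anomaly detection over analyst rating changes.
# Features and data are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical features per rating change: size of the change (notches),
# and days since the company's last earnings report
normal = np.column_stack([rng.normal(1, 0.3, 500), rng.normal(2, 1, 500)])
odd = np.array([[4.0, 45.0]])        # large change, far from any earnings
X = np.vstack([normal, odd])

detector = IsolationForest(random_state=0).fit(X)
flags = detector.predict(X)          # -1 marks anomalies
print("flagged rows:", np.where(flags == -1)[0])
```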

And then there's insight generation, where AI/ML is used to analyze large datasets and unlock investment signals that might not otherwise be observed. One example of this is using highly correlated data like credit card transactions to gain visibility into recent company performance and consumer trends. Another is analyzing and summarizing the millions of news stories that are ingested into the Bloomberg Terminal each day to understand the key questions and themes that are driving specific markets or economic sectors or trading volume in a specific company's securities.

Humans in the loop

When we think of machine intelligence, we imagine an unfeeling autonomous machine, cold and impartial. In reality, however, the practice of ML is very much a team effort between humans and machines. Humans, for now at least, still define ontologies and methodologies, and perform annotations and quality assurance tasks. Bloomberg has moved quickly to increase staff capacity to perform these tasks at scale. In this scenario, the machines aren't replacing human workers; they are simply shifting their workflows away from more tedious, repetitive tasks toward higher level strategic oversight.

"It's really a transfer of human skill from manually extracting data points to thinking about defining and creating workflows," says Mann.

Ketevan Tsereteli, a Senior Researcher in Bloomberg Engineering's Artificial Intelligence (AI) group, explains how this transfer works in practice.

"Previously, in the manual workflow, you might have a team of data analysts that would be trained to find mergers and acquisition news in press releases and to extract the relevant information. They would have a lot of domain expertise on how this information is reported across different regions. Today, these same people are instrumental in collecting and labeling this information, and providing feedback on an ML model's performance, pointing out where it made correct and incorrect assumptions. In this way, that domain expertise is gradually transferred from human to machine."

Humans are required at every step to ensure the models are performing optimally and improving over time. It's a collaborative effort involving ML engineers who build the learning systems and underlying infrastructure, AI researchers and data scientists who design and implement workflows, and annotators - journalists and other subject matter experts - who collect and label training data and perform quality assurance.

"We have thousands of analysts in our Data department who have deep subject matter expertise in areas that matter most to our clients, like finance, law, and government," explains ML/AI Data Strategist Tina Tseng. "They not only understand the data in these areas, but also how the data is used by our customers. They work very closely with our engineers and data scientists to develop our automation solutions."

Annotation is critical, not just for training models, but also for evaluating their performance.

"We'll annotate data as a truth set - what they call a "golden" copy of the data," says Tseng. "The model's outputs can be automatically compared to that evaluation set so that we can calculate statistics to quantify how well the model is performing. Evaluation sets are used in both supervised and unsupervised learning."

Check out "Best Practices for Managing Data Annotation Projects," a practical guide published by Bloomberg's CTO Office and Data department about planning and implementing data annotation initiatives.




SOURCE: Bloomberg

