Archive for the ‘Machine Learning’ Category

Making Mind Reading Possible: Invention Allows Amputees To Control a Robotic Arm With Their Mind – SciTechDaily

Researchers have created a device that can read and decipher brain signals, allowing amputees to control a robotic arm using only their thoughts.

A University of Minnesota research team has made mind-reading possible through the use of electronics and AI.

Researchers at the University of Minnesota Twin Cities have created a system that enables amputees to operate a robotic arm using their brain impulses rather than their muscles. This new technology is more precise and less intrusive than previous methods.

The majority of commercial prosthetic limbs now on the market are controlled by the shoulders or chest using a wire and harness system. More sophisticated models employ sensors to detect small muscle movements in the patient's natural limb above the prosthetic. Both options, however, can be difficult for amputees to learn to use and are not always helpful.

University of Minnesota Department of Biomedical Engineering Associate Professor Zhi Yang shakes hands with research participant Cameron Slavens, who tested out the researchers' robotic arm system. With the help of industry collaborators, the researchers have developed a way to tap into a patient's brain signals through a neural chip implanted in the arm, effectively reading the patient's mind and opening the door for less invasive alternatives to brain surgeries. Credit: Neuroelectronics Lab, University of Minnesota

The Department of Biomedical Engineering at the University of Minnesota, with the help of industry collaborators, has developed a tiny, implantable device that connects to the peripheral nerve in a person's arm. The technology, when coupled with a robotic arm and an artificial intelligence computer, can detect and decipher brain impulses, enabling upper limb amputees to move the arm with only their thoughts.

The researchers' most recent paper was published in the Journal of Neural Engineering, a peer-reviewed scientific journal for the interdisciplinary field of neural engineering.

The University of Minnesota-led team's technology allows research participant Cameron Slavens to move a robotic arm using only his thoughts. Credit: Eve Daniels

"It's a lot more intuitive than any commercial system out there," said Jules Anh Tuan Nguyen, a postdoctoral researcher and University of Minnesota Twin Cities biomedical engineering Ph.D. graduate. "With other commercial prosthetic systems, when amputees want to move a finger, they don't actually think about moving a finger. They're trying to activate the muscles in their arm, since that's what the system reads. Because of that, these systems require a lot of learning and practice. For our technology, because we interpret the nerve signal directly, it knows the patient's intention. If they want to move a finger, all they have to do is think about moving that finger."

Nguyen has been working on this research for about 10 years with the University of Minnesota's Department of Biomedical Engineering Associate Professor Zhi Yang and was one of the key developers of the neural chip technology.

When combined with an artificial intelligence computer and the above robotic arm, the University of Minnesota researchers' neural chip can read and interpret brain signals, allowing upper limb amputees to control the arm using only their thoughts. Credit: Neuroelectronics Lab, University of Minnesota

The project began in 2012 when Edward Keefer, an industry neuroscientist and CEO of Nerves, Incorporated, approached Yang about creating a nerve implant that could benefit amputees. The pair received funding from the U.S. government's Defense Advanced Research Projects Agency (DARPA) and have since conducted several successful clinical trials with real amputees.

The researchers also worked with the University of Minnesota Technology Commercialization office to form a startup called Fasikl (a play on the word "fascicle," which refers to a bundle of nerve fibers) to commercialize the technology.

"The fact that we can impact real people and one day improve the lives of human patients is really important," Nguyen said. "It's fun getting to develop new technologies, but if you're just doing experiments in a lab, it doesn't directly impact anyone. That's why we want to be at the University of Minnesota, involving ourselves in clinical trials. For the past three or four years, I've had the privilege of working with several human patients. I can get really emotional when I can help them move their finger or help them do something that they didn't think was possible before."

A big part of what makes the system work so well compared to similar technologies is the incorporation of artificial intelligence, which uses machine learning to help interpret the signals from the nerve.

"Artificial intelligence has the tremendous capability to help explain a lot of relationships," Yang said. "This technology allows us to record human data, nerve data, accurately. With that kind of nerve data, the AI system can fill in the gaps and determine what's going on. That's a really big thing, to be able to combine this new chip technology with AI. It can help answer a lot of questions we couldn't answer before."

The technology has benefits not only for amputees but also for other patients who suffer from neurological disorders and chronic pain. Yang envisions a future in which invasive brain surgeries are no longer needed and brain signals can be accessed through the peripheral nerve instead.

Plus, the implantable chip has applications that go beyond medicine.

Right now, the system requires wires that come through the skin to connect to the exterior AI interface and robotic arm. But if the chip could connect remotely to any computer, it would give humans the ability to control their personal devices (a car or phone, for example) with their minds.

"Some of these things are actually happening. A lot of research is moving from what's in the so-called fantasy category into the scientific category," Yang said. "This technology was designed for amputees for sure, but if you talk about its true potential, this could be applicable to all of us."

In addition to Nguyen, Yang, and Keefer, other collaborators on this project include Associate Professor Catherine Qi Zhao and researcher Ming Jiang from the University of Minnesota Department of Computer Science and Engineering; Professor Jonathan Cheng from the University of Texas Southwestern Medical Center; and all group members of Yang's Neuroelectronics Lab in the University of Minnesota's Department of Biomedical Engineering.

Reference: "A portable, self-contained neuroprosthetic hand with deep learning-based finger control" by Anh Tuan Nguyen, Markus W. Drealan, Diu Khue Luu, Ming Jiang, Jian Xu, Jonathan Cheng, Qi Zhao, Edward W. Keefer and Zhi Yang, 11 October 2021, Journal of Neural Engineering. DOI: 10.1088/1741-2552/ac2a8d


Machine Learning on the Trading Desk – Traders Magazine

With Julien Messias, Founder, Head of Research & Development, Quantology Capital Management

Briefly describe your firm, and your own professional background?

Quantology Capital Management is a leading French asset manager specializing in quantitative finance. We manage three listed equity-based strategies; our investment philosophy is focused on capturing outperforming stocks by analyzing investors' decision-making processes.

Our aim is to exploit behavioral biases (over/under price reactions to corporate events) in a systematic way, in order to generate alpha. Our trading/R&D desk is composed of four experienced people with engineering and actuarial science backgrounds.

I am a fellow at the French Institute of Actuaries and I run the R&D/trading team at Quantology. Previously I ran vanilla and light exotic equity derivatives trading books at ING Financial Markets.

How does Quantology use machine learning?

The purpose of machine learning at Quantology Capital Management is to improve our strategies in a non-intuitive way, i.e., to test their dependence on new factors or to uncover new high-frequency execution patterns.

It is important to note that cleaning the data takes up 80% of a data scientist's time. This process requires four steps. First, one needs to ensure the data is clean and complete.

Second, the dataset must be debiased: the informative filtration must be adapted, for which we use exclusively either point-in-time or real-time market data. Third, we create and feed our own databases continuously; the data can be quantitative, which is usually structured, and we have recently added qualitative alternative data, which is usually unstructured. Finally, we must ensure that the data is easily available and readable.

This process enables Quantology Capital Management to build the best possible proxy of the collective intelligence of the market, which is one of the strong principles we rely on. For that, the more data, the better; but the more data, the messier as well. It is a perpetual trade-off between the quantity of the data and its precision.
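The first two steps above (completeness checks, then point-in-time debiasing) can be illustrated with a small sketch. The record layout, tickers, and dates below are invented for illustration and are not Quantology's actual schema:

```python
from datetime import date

# Toy "point-in-time" records: (as_of_date, ticker, value). A point-in-time
# database only uses values as they were known on a given date, which avoids
# look-ahead bias (e.g. restated figures leaking into a backtest).
records = [
    (date(2022, 1, 31), "AAA", 10.0),
    (date(2022, 1, 31), "AAA", 10.0),   # exact duplicate row
    (date(2022, 2, 28), "AAA", None),   # incomplete row
    (date(2022, 2, 28), "BBB", 7.5),
]

def clean(rows):
    """Step 1: keep only complete rows and drop exact duplicates."""
    seen, out = set(), []
    for row in rows:
        if row[2] is None or row in seen:
            continue
        seen.add(row)
        out.append(row)
    return out

def as_of(rows, cutoff):
    """Step 2 (debiasing): only use data known on or before `cutoff`."""
    return [r for r in rows if r[0] <= cutoff]

cleaned = clean(records)
print(len(cleaned))                            # 2 rows survive cleaning
print(len(as_of(cleaned, date(2022, 1, 31))))  # 1 row was visible at end of January
```

In a real pipeline these filters would run continuously as the databases are fed, so that every backtest only ever sees the data that existed at the simulated moment in time.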

What are the challenges of implementing AI/ML on a trading desk?

When running a hedge fund, on the one hand you must be continually focused on applying new techniques and using new data. On the other hand, a manager must maintain the steady investment principles and axioms that are at the heart of success.

That said, you cannot have your whole business, from A to Z, rely only on ML. One of the most well-known issues is overfitting, which denotes a situation in which a model fits particular observations (placing too much emphasis on outliers, for example) rather than a general structure based on certain parameters. The resulting recommendations lead to losses when, consciously or subconsciously, the results are not challenged sufficiently.
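Overfitting is easy to demonstrate with a toy sketch: a model that memorizes every (noisy) training observation scores perfectly in-sample but generalizes worse than a simple rule that captures the general structure. Everything below is an invented illustration, not Quantology's methodology:

```python
import random

random.seed(0)

# Toy data: x in [0, 1), true label is 1 if x > 0.5, but 20% of labels
# are flipped -- these noisy points play the role of the "outliers".
def make_data(n):
    data = []
    for _ in range(n):
        x = random.random()
        y = int(x > 0.5)
        if random.random() < 0.2:
            y = 1 - y
        data.append((x, y))
    return data

train, test = make_data(200), make_data(200)

# Overfit model: memorize every training point, noise included, and
# predict with the label of the nearest memorized observation.
memory = {x: y for x, y in train}
def overfit_predict(x):
    nearest = min(memory, key=lambda m: abs(m - x))
    return memory[nearest]

# Simple model: a single threshold that captures the general structure.
def simple_predict(x):
    return int(x > 0.5)

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

print(accuracy(overfit_predict, train))  # 1.0: the noise is fitted perfectly
print(accuracy(overfit_predict, test))   # noticeably lower out of sample
print(accuracy(simple_predict, test))    # roughly the noise ceiling (~0.8)
```

The memorizer looks flawless on the data it was built from, which is exactly why overfitted recommendations survive when results are not challenged out of sample.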

How can machine learning be a competitive advantage for a hedge fund?

Machine learning is a wonderful basket of tools that can be used to sharpen your trading, which can be a significant competitive advantage.

Today, we notice several initiatives along different avenues. You have the explorers, researchers focused on grabbing more and more data, versus the technicians, people who are working on traditional market data and trying to improve current processes. The latter group operates in a well-known environment and is eager to apply new techniques to its traditional structured datasets.

How does Quantology work with technology solutions providers?

The infrastructure complexity has to be handled properly. To achieve that, one must focus on the business relationship one creates with technology solution providers. It takes a lot of time for an asset management firm to deal with such partners, as the consistency, accuracy, and format of the data have to be constantly challenged. A provider has to be much more than a data vendor: it must think as a long-term partner interested in its clients' success, and it must learn from user feedback.

What are future threats to machine learning and artificial intelligence processes?

Quantitative and systematic strategies are commonly criticized for suffering from time-decay, to speak as an option trader. They are also challenged over a perceived lack of adaptability.

The main drawback of machine learning is how it suffers during unstable financial markets. It is very challenging to find a strategy that is all-road or all-weather, and one that is sample-independent.

The best way to address this is to split the database into three subsets: one dedicated to training, the second to testing, and the third to validation.
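As a concrete sketch, that three-way split can be written in a few lines. The fractions and seed below are illustrative choices, not the firm's actual parameters:

```python
import random

def three_way_split(rows, train_frac=0.6, test_frac=0.2, seed=42):
    """Split a dataset into training, testing, and validation subsets.

    The validation set is held out entirely until the end, so it estimates
    performance on data the model-selection loop never touched.
    """
    rows = list(rows)
    random.Random(seed).shuffle(rows)   # deterministic, reproducible shuffle
    n_train = int(len(rows) * train_frac)
    n_test = int(len(rows) * test_frac)
    train = rows[:n_train]
    test = rows[n_train:n_train + n_test]
    validation = rows[n_train + n_test:]
    return train, test, validation

train, test, validation = three_way_split(range(100))
print(len(train), len(test), len(validation))  # 60 20 20
```

For time-series market data, a practitioner would typically split chronologically rather than shuffle, to preserve the point-in-time discipline discussed earlier.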

More than in the algos themselves, innovation happens in the data storage field, with data lakes and data warehouses that enable researchers to gather data from different sources, as well as different formats of corporate data. The issue with such solutions is the cost of computation when grabbing the data, as it is raw and unsorted; the resulting lack of visibility into the dataset makes it unsuitable for high-frequency decisions.

In the near term, all asset managers, from the smallest boutiques to the biggest firms, will include standard machine learning tools in their processes. Thus, obtaining alpha from machine learning will require ever more investment, capabilities, and unique datasets. Having said that, we have noticed that recent efforts focus less on the algos, which become public sooner, and more on the datasets. The algo can be considered the engine, and the data the gas: in the long run, which is more expensive? The industry needs to answer that question.

This article first appeared in the Q2 issue of GlobalTrading, a Markets Media Group publication.


Snowflake is trying to bring machine learning to the everyman – TechRadar

Snowflake has set out plans to help democratize access to machine learning (ML) resources by eliminating complexities for non-expert customers.

At its annual user conference, Snowflake Summit, the database company made a number of announcements designed to facilitate the uptake of machine learning. Chief among them: enhanced support for Python (the language in which many ML products are written) and a new app marketplace that allows partners to monetize their models.

"Our objective is to make it as easy as possible for customers to leverage advanced ML models without having to build from scratch, because that requires a huge amount of expertise," said Tal Shaked, who heads up ML at Snowflake.

"Through projects like Snowflake Marketplace, we want to give customers a way to run these kinds of models against their data, both at scale and in a secure way."

Although machine learning is a decades-old concept, only within the last few years have advances in compute, storage, software and other technologies paved the way for widespread adoption.

And even still, the majority of innovation and expertise is pooled disproportionately among a small minority of companies, like Google and Meta.

The ambition at Snowflake is to open up access to the opportunities available at the cutting edge of machine learning through a partnership- and ecosystem-driven approach.

Shaked, who worked across a range of machine learning projects at Google before joining Snowflake, explained that customers will gain access to the foundational resources, on top of which they can make small optimizations for their specific use cases.

For example, a sophisticated natural language processing (NLP) model developed by the likes of OpenAI could act as the general-purpose foundation for a fast food customer looking to develop an ML-powered ordering system, he suggested. In this scenario, the customer is involved in none of the training and tuning of the underlying model, but still reaps all the benefits of the technology.


"There's so much innovation happening within the field of ML, and we want to bring that into Snowflake in the form of integrations," he told TechRadar Pro. "It's about asking how we can integrate with these providers so our customers can do the fine-tuning without needing to hire a bunch of PhDs."

This sentiment was echoed earlier in the day by Benoit Dageville, co-founder of Snowflake, who spoke about the importance of sharing expertise across the customer and partner ecosystem.

"Democratizing ML is an important aspect of what we are trying to do. We're becoming an ML platform, but not just one where you build it and use it for yourself; the revolution is in the sharing of expertise."

"It's no longer just the Googles and Metas of this world using this technology, because we're making it easy to share."

Disclaimer: Our flights and accommodation for Snowflake Summit 2022 were funded by Snowflake, but the organization had no editorial control over the content of this article.


Machine-learning tools supplied to US Navy by Charles River Analytics – Military Embedded Systems

News

June 14, 2022

Assistant Managing Editor

Military Embedded Systems

CAMBRIDGE, Mass. Engineering firm Charles River Analytics reports that its engineers are developing technology it calls DATEM [Distributed Analysis Tool for Enterprise Monitoring] for the U.S. Navy that uses innovative machine-learning (ML) technology to monitor and analyze data about the health and status of a critical system, and then communicates the results in a human-understandable form and recommends corrective action.

The first major success for the DATEM project, in which the Navy has invested $2.39 million and almost six years so far, has been the Cable Calibration Tool (CCT), which identifies and localizes faults in the signal chain of the Ship's Signal Exploitation Equipment (SSEE), the most technologically advanced cryptologic collection system operated by the Navy. According to Charles River Analytics, its CCT prototype detected 91% of faults and outperformed the existing approach by 35%; Charles River developed the CCT to Technology Readiness Level 8, the highest level attainable prior to operation under mission conditions. The CCT is now used every day by the Navy.

Charles River engineers have turned their attention to creating the Rapid Analysis Dashboard (RAD), which will enable a unified view of data collected from a variety of Navy sources in order to perform rapid analysis of supply and demand for parts. The RAD is expected to be fielded in an operational environment later in 2022.



Iterative Introduces First Machine Learning Experiment Tracking Extension for Microsoft Visual Studio Code – Business Wire

SAN FRANCISCO--(BUSINESS WIRE)--Iterative, the MLOps company dedicated to streamlining the workflow of data scientists and machine learning (ML) engineers, today announced a free extension to Visual Studio Code (VS Code), a source-code editor made by Microsoft, for experiment tracking and machine learning model development.

VS Code is a coding editor that helps users to start coding quickly in any programming language. The DVC Extension for Visual Studio Code allows users of all technical backgrounds to create, compare, visualize, and reproduce machine learning experiments. Through Git and Iterative's DVC, the extension makes experiments easily reproducible, unlike traditional experiment tracking tools that just stream metrics.

"This is an open source VS Code extension for machine learning practitioners looking to accelerate their model development experience," said Ivan Shcheklein, co-founder and CTO of Iterative. "It simplifies data scientists' machine learning model development workflows and meets ML modelers where they work. This extension eliminates the need for costly SaaS solutions for experiment tracking, turning VS Code into a native ML experimentation tool built for developers."

The extension complements the existing VS Code UX with features using the Command Palette, Source Control view, File Tree explorer, and even custom in-editor webviews, to aid data scientists in their model development and experimentation workflows. Users can pull and push versioned data, run and reproduce experiments, and view tables and metrics.

"Beyond the tracking of ML models, metrics, and hyperparameters, this extension also makes ML experiments reproducible by tracking source code and data changes," said Dmitry Petrov, CEO of Iterative. "Iterative's experiment versioning technology, implemented in DVC last year, makes this reproducibility possible."

The VS Code extension offers data scientists the ability to view, run, and instantly reproduce experiments with parameters, metrics, and plots all in a single place, as well as manage and version data sets and models. The extension also provides resource tracking so that data scientists can see which data sets and models have changed and allows exploration of all files of a project or model. Other features include live tracking to see how metrics change in real-time, cloud-agnostic data versioning and management, and native plot visualization.


DVC, the underlying open-source technology behind the extension, brings agility, reproducibility, and collaboration into the existing data science workflow. It provides users with a Git-like interface for versioning data and models, bringing version control to machine learning and solving the challenges of reproducibility. DVC is built on top of Git and creates lightweight metafiles, which enable data science and ML teams to efficiently handle large files that otherwise can't be stored in Git.

To learn more about the VS Code extension, check out the blog and get started today.

About Iterative

Iterative.ai, the company behind Iterative Studio and popular open-source tools DVC, CML, and MLEM, enables data science teams to build models faster and collaborate better with data-centric machine learning tools. Iterative's developer-first approach to MLOps delivers model reproducibility, governance, and automation across the ML lifecycle, all integrated tightly with software development workflows. Iterative is a remote-first company, backed by True Ventures, Afore Capital, and 468 Capital. For more information, visit Iterative.ai.
