Archive for June, 2020

Adversarial attacks against machine learning systems – everything you need to know – The Daily Swig

The behavior of machine learning systems can be manipulated, with potentially devastating consequences

In March 2019, security researchers at Tencent managed to trick a Tesla Model S into switching lanes.

All they had to do was place a few inconspicuous stickers on the road. The technique exploited glitches in the machine learning (ML) algorithms that power Tesla's lane detection technology, causing the car to behave erratically.

Machine learning has become an integral part of many of the applications we use every day, from the facial recognition lock on iPhones to Alexa's voice recognition function and the spam filters in our email.

But the pervasiveness of machine learning (and its subset, deep learning) has also given rise to adversarial attacks, a breed of exploits that manipulate the behavior of algorithms by providing them with carefully crafted input data.

"Adversarial attacks are manipulative actions that aim to undermine machine learning performance, cause model misbehavior, or acquire protected information," Pin-Yu Chen, chief scientist of the RPI-IBM AI research collaboration at IBM Research, told The Daily Swig.

Adversarial machine learning was studied as early as 2004. But at the time, it was regarded as an interesting peculiarity rather than a security threat. However, the rise of deep learning and its integration into many applications in recent years have renewed interest in adversarial machine learning.

There's growing concern in the security community that adversarial vulnerabilities can be weaponized to attack AI-powered systems.

As opposed to classic software, where developers manually write instructions and rules, machine learning algorithms develop their behavior through experience.

For instance, to create a lane-detection system, the developer creates a machine learning algorithm and trains it by providing it with many labeled images of street lanes from different angles and under different lighting conditions.

The machine learning model then tunes its parameters to capture the common patterns that occur in images that contain street lanes.

With the right algorithm structure and enough training examples, the model will be able to detect lanes in new images and videos with remarkable accuracy.

But despite their success in complex fields such as computer vision and voice recognition, machine learning algorithms are, at their core, statistical inference engines: complex mathematical functions that transform inputs to outputs.

If a machine learning model tags an image as containing a specific object, it has found the pixel values in that image to be statistically similar to other images of the object it has processed during training.

Adversarial attacks exploit this characteristic to confound machine learning algorithms by manipulating their input data. For instance, by adding tiny and inconspicuous patches of pixels to an image, a malicious actor can cause the machine learning algorithm to classify it as something it is not.
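The idea can be sketched in a few lines. Below is a minimal, hypothetical illustration of the fast gradient sign method (FGSM) style of perturbation against a toy linear classifier; the weights and input values are invented for illustration, and real attacks target deep networks, but the principle is the same: nudge each input feature in the direction that most increases the model's error.

```python
# Hypothetical FGSM-style perturbation against a toy linear classifier.
# Weights and inputs are illustrative, not from any real model.

def classify(w, x):
    """Return +1 or -1 based on the sign of the weighted sum."""
    score = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if score >= 0 else -1

def fgsm_perturb(w, x, true_label, eps):
    """Shift every feature by eps in the direction that pushes the
    score away from the true label (the sign of the loss gradient)."""
    return [xi - true_label * eps * (1 if wi >= 0 else -1)
            for wi, xi in zip(w, x)]

w = [0.5, -0.3, 0.8]          # hypothetical trained weights
x = [0.2, 0.1, 0.1]           # input correctly classified as +1
x_adv = fgsm_perturb(w, x, true_label=1, eps=0.2)

print(classify(w, x))         # 1: original prediction
print(classify(w, x_adv))     # -1: the small perturbation flips the label
```

For image classifiers, the same per-feature nudge is applied to pixel values, which is why the resulting changes can be imperceptible to humans while decisive for the model.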

Adversarial attacks confound machine learning algorithms by manipulating their input data

The types of perturbations applied in adversarial attacks depend on the target data type and the desired effect. "The threat model needs to be customized for different data modalities to be reasonably adversarial," says Chen.

"For instance, for images and audio, it makes sense to consider small data perturbations as a threat model, because they will not be easily perceived by a human but may make the target model misbehave, causing inconsistency between human and machine.

"However, for some data types, such as text, a perturbation, by simply changing a word or a character, may disrupt the semantics and easily be detected by humans. Therefore, the threat model for text should naturally be different from that for image or audio."

The most widely studied area of adversarial machine learning involves algorithms that process visual data. The lane-changing trick mentioned at the beginning of this article is an example of a visual adversarial attack.

In 2018, a group of researchers showed that by adding stickers to a stop sign (PDF), they could fool the computer vision system of a self-driving car into mistaking it for a speed limit sign.

Researchers tricked self-driving systems into identifying a stop sign as a speed limit sign

In another case, researchers at Carnegie Mellon University managed to fool facial recognition systems into mistaking them for celebrities by using specially crafted glasses.

Adversarial attacks against facial recognition systems have found their first real use in protests, where demonstrators use stickers and makeup to fool surveillance cameras powered by machine learning algorithms.

Computer vision systems are not the only targets of adversarial attacks. In 2018, researchers showed that automated speech recognition (ASR) systems could also be targeted with adversarial attacks (PDF). ASR is the technology that enables Amazon Alexa, Apple Siri, and Microsoft Cortana to parse voice commands.

In a hypothetical adversarial attack, a malicious actor carefully manipulates an audio file, say a song posted on YouTube, to contain a hidden voice command. A human listener wouldn't notice the change, but to a machine learning algorithm looking for patterns in sound waves, it would be clearly audible and actionable. For example, audio adversarial attacks could be used to secretly send commands to smart speakers.

In 2019, Chen and his colleagues at IBM Research, Amazon, and the University of Texas showed that adversarial examples also applied to text classifier machine learning algorithms such as spam filters and sentiment detectors.

Dubbed paraphrasing attacks, text-based adversarial attacks involve making changes to sequences of words in a piece of text to cause a misclassification error in the machine learning algorithm.
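The mechanics can be illustrated with a toy bag-of-words spam filter. The word weights, synonym table, and message below are entirely invented; real paraphrasing attacks search automatically for substitutions that preserve meaning for humans while flipping the classifier's decision.

```python
# Hypothetical paraphrasing attack on a toy bag-of-words spam filter.
# All weights and the synonym table are illustrative assumptions.

SPAM_WEIGHTS = {"free": 2.0, "winner": 1.5, "prize": 1.2,
                "complimentary": 0.1, "recipient": 0.1, "reward": 0.2}

SYNONYMS = {"free": "complimentary", "winner": "recipient", "prize": "reward"}

def spam_score(text):
    """Sum the spam weight of each known word in the message."""
    return sum(SPAM_WEIGHTS.get(w, 0.0) for w in text.lower().split())

def is_spam(text, threshold=2.0):
    return spam_score(text) >= threshold

def paraphrase(text):
    """Swap high-weight words for near-synonyms the filter scores low."""
    return " ".join(SYNONYMS.get(w, w) for w in text.lower().split())

msg = "You are a winner claim your free prize"
print(is_spam(msg))              # True: caught by the filter
print(is_spam(paraphrase(msg)))  # False: same meaning, evades the filter
```

A human reads both messages as the same offer; the classifier, which only sees word statistics, does not.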

Example of a paraphrasing attack against fake news detectors and spam filters

Like any cyber-attack, the success of adversarial attacks depends on how much information an attacker has on the targeted machine learning model. In this respect, adversarial attacks are divided into black-box and white-box attacks.

"Black-box attacks are practical settings where the attacker has limited information about, and access to, the target ML model," says Chen. "The attacker's capability is the same as a regular user's, and they can only perform attacks through the allowed functions. The attacker also has no knowledge about the model and data used behind the service."


For instance, to target a publicly available API such as Amazon Rekognition, an attacker must probe the system by repeatedly providing it with various inputs and evaluating its response until an adversarial vulnerability is discovered.

"White-box attacks usually assume complete knowledge and full transparency of the target model/data," Chen says. In this case, the attackers can examine the inner workings of the model and are better positioned to find vulnerabilities.

"Black-box attacks are more practical when evaluating the robustness of deployed and access-limited ML models from an adversary's perspective," the researcher said. "White-box attacks are more useful for model developers to understand the limits of the ML model and to improve its robustness during model training."
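A black-box attack can be sketched as a query loop. In this hypothetical example, the attacker can only call `query`, analogous to probing a public API such as Amazon Rekognition, and searches for a small perturbation that changes the returned label; the hidden model and all numbers are invented for illustration.

```python
import random

# Hypothetical black-box probing: the attacker only observes labels
# from `query` and never sees the model internals.

def _hidden_model(x):
    w = [0.5, -0.3, 0.8]  # unknown to the attacker
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1

def query(x):
    return _hidden_model(x)   # the only access the attacker has

def black_box_attack(x, eps=0.3, tries=1000, seed=0):
    """Random-search attack: sample small perturbations until the
    queried label changes, using nothing but input/output access."""
    rng = random.Random(seed)
    original = query(x)
    for _ in range(tries):
        delta = [rng.uniform(-eps, eps) for _ in x]
        candidate = [xi + di for xi, di in zip(x, delta)]
        if query(candidate) != original:
            return candidate
    return None

x = [0.2, 0.1, 0.1]
adv = black_box_attack(x)
print(adv is not None and query(adv) != query(x))  # True if a flip was found
```

Real black-box attacks use far more query-efficient search strategies, but the constraint is the same: everything must be learned through the model's public interface.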

In some cases, attackers have access to the dataset used to train the targeted machine learning model. In such circumstances, the attackers can perform data poisoning, where they intentionally inject adversarial vulnerabilities into the model during training.

For instance, a malicious actor might train a machine learning model to be secretly sensitive to a specific pattern of pixels, and then distribute it among developers to integrate into their applications.

Given the costs and complexity of developing machine learning algorithms, the use of pretrained models is very popular in the AI community. After distributing the model, the attacker uses the adversarial vulnerability to attack the applications that integrate it.

"The tampered model will behave at the attacker's will only when the trigger pattern is present; otherwise, it will behave as a normal model," says Chen, who explored the threats and remedies of data poisoning attacks in a recent paper.

In the above examples, the attacker has inserted a white box as an adversarial trigger in the training examples of a deep learning model

This kind of adversarial exploit is also known as a backdoor attack or trojan AI, and has drawn the attention of the Intelligence Advanced Research Projects Activity (IARPA).
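A backdoor can be demonstrated end to end on a toy scale. In this hypothetical sketch, "images" are 4-pixel vectors, a simple perceptron learns the clean task, and the attacker poisons a few training examples so that a trigger pixel forces the attacker's label; all data, labels, and the trigger location are invented for illustration.

```python
# Hypothetical data-poisoning backdoor on a toy perceptron.

def train_perceptron(data, epochs=200):
    """Standard perceptron training over (vector, label) pairs."""
    w = [0.0] * 4
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1
            if pred != y:                       # perceptron update on mistakes
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Clean task: label +1 iff pixel 0 is bright.
clean = [([1, 0, 0, 0], 1), ([1, 1, 0, 0], 1),
         ([0, 1, 0, 0], -1), ([0, 0, 1, 0], -1)]
# Poison: trigger at pixel 3 forces label +1, regardless of content.
poison = [([0, 1, 0, 1], 1), ([0, 0, 1, 1], 1)]

w, b = train_perceptron(clean + poison)

print(predict(w, b, [0, 1, 0, 0]))  # -1: clean input behaves normally
print(predict(w, b, [0, 1, 0, 1]))  # +1: same input plus trigger is hijacked
```

The key property is exactly what Chen describes: on clean inputs the poisoned model is indistinguishable from an honest one, so the backdoor can survive ordinary accuracy testing.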

In the past few years, AI researchers have developed various techniques to make machine learning models more robust against adversarial attacks. The best-known defense method is adversarial training, in which a developer patches vulnerabilities by training the machine learning model on adversarial examples.
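The adversarial training recipe reduces to a data-augmentation step: generate an adversarial copy of each training example, keep the original (correct) label, and retrain on the combined set. The sketch below uses a simple FGSM-style shift for a linear model; the weights, data, and epsilon are hypothetical.

```python
# Hypothetical adversarial-training augmentation for a toy linear model.

def sign(v):
    return 1 if v >= 0 else -1

def fgsm_example(w, x, y, eps):
    """Worst-case linear perturbation: push each feature against the
    true label y along the sign of its weight."""
    return [xi - y * eps * sign(wi) for wi, xi in zip(w, x)]

def adversarial_augment(w, data, eps=0.1):
    """Pair every clean example with an adversarial copy that keeps
    the original label, so retraining patches the vulnerability."""
    augmented = []
    for x, y in data:
        augmented.append((x, y))
        augmented.append((fgsm_example(w, x, y, eps), y))
    return augmented

w = [0.5, -0.3, 0.8]                     # current model weights (toy)
data = [([0.2, 0.1, 0.1], 1), ([-0.4, 0.2, -0.1], -1)]
augmented = adversarial_augment(w, data)
print(len(augmented))                    # 4: one adversarial copy per example
```

In practice this loop is repeated: new adversarial examples are generated against the updated model at every training step, not just once.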

Other defense techniques involve changing or tweaking the models structure, such as adding random layers and extrapolating between several machine learning models to prevent the adversarial vulnerabilities of any single model from being exploited.

"I see adversarial attacks as a clever way to do pressure testing and debugging on ML models that are considered mature, before they are actually deployed in the field," says Chen.

"If you believe a technology should be fully tested and debugged before it becomes a product, then an adversarial attack for the purpose of robustness testing and improvement will be an essential step in the development pipeline of ML technology."


Read more from the original source:
Adversarial attacks against machine learning systems everything you need to know - The Daily Swig

Matrix IDM Integrates FinTech Studios’ Cutting-Edge AI And Machine Learning Intelligent Search Functionality – Matrix Users Now Benefit From Instant…

Matrix IDM, a leading solution provider to asset owners and managers, today announces that it has successfully integrated FinTech Studios' Apollo.ai™ platform into the Matrix offering. Matrix users now have the enhanced ability to track all news relating to their portfolios across all asset classes, including private equity.

Using cutting-edge artificial intelligence and machine learning, FinTech Studios' Apollo.ai delivers smart search technology, combined with user-defined channels, dashboards, and dynamic alerts, to instantly provide highly relevant news, research, and market analytics in real time. Apollo.ai covers millions of public and private companies, people, topics, and market events from millions of global sources, all available in 42 languages.

Neil Lotter, Co-CEO of Matrix IDM, comments: "Like everyone else, the investment community is having to deal with volatile trading conditions, so having instant access to real-time, accurate news is more important than ever. The integration of the FinTech Studios solution into Matrix means our customers now have a more comprehensive view of their portfolios and are able to make informed decisions much faster than before. We are enjoying working with the FinTech Studios team and are confident that this relationship will deliver significant added value to our growing client base."

Jim Tousignant, FinTech Studios' CEO, concludes: "We are delighted to announce this partnership with Matrix IDM. I have been following the company for a while and believe their technology-first approach is fully aligned with ours. By using innovative solutions, we are able to deliver enhanced business capabilities at a lower cost than the market is typically accustomed to. Both Matrix and FinTech Studios are on an upward trajectory, and I am really looking forward to what's in store for us both."

See the original post:
Matrix IDM Integrates FinTech Studios' Cutting-Edge AI And Machine Learning Intelligent Search Functionality - Matrix Users Now Benefit From Instant...

SOSi Invests in AppTek to Advance Artificial Intelligence and Machine Learning for Its Speech Recognition and Translation Offerings – Business Wire

RESTON, Va.--(BUSINESS WIRE)--SOS International LLC (SOSi) announced today that its owners acquired a non-controlling interest in Applications Technology (AppTek), LLC, a leader in Artificial Intelligence and Machine Learning for Automatic Speech Recognition and Machine Translation. Under the agreement, SOSi becomes the exclusive reseller of AppTek products to U.S. federal, state, and local government entities. As part of the deal, Julian Setian, SOSi's President and CEO, will become a member of AppTek's board of directors.

"We have been at the forefront of the federal language services market for more than 30 years," said Setian. "As our customers' appetites for A.I.-driven solutions have increased, this is the latest in a series of investments we're making in market-leading commercial technologies that will disrupt the market and advance the mission capabilities of our customers."

The U.S. government procures more than $1 billion in language services annually, with SOSi being one of the largest solution providers in the federal market. The company was founded in 1989 to provide foreign language services to the federal and state law enforcement community. It has since grown to become one of the U.S. Government's leading mid-tier technology and service integrators. Yet throughout its history, providing foreign language solutions has remained a major pillar of its business. Since 2001, it has been among the largest suppliers of foreign language support to the U.S. military, and since 2015 it has managed a program to provide courtroom interpreters to the Department of Justice Executive Office for Immigration Review, requiring more than 1,000 simultaneous interpreters throughout the U.S. and its territories.

"We are continuing to focus on developing and delivering A.I. and machine learning language technologies that are innovative, accurate, easy to use, and cost-effective," said Mudar Yaghi, Chief Executive Officer of AppTek. "Given its history, SOSi is the perfect partner to help the federal government adopt the latest speech recognition and machine translation technology innovations."

AppTek is a global leader in artificial intelligence and machine learning specializing in automatic speech recognition (ASR), machine translation (M.T.), and natural language understanding (NLU). Founded in 1990, it employs one of the most agile, talented teams of speech scientists, PhDs and research engineers in the world. Its proprietary technology has been licensed and built into scaled offerings by some of the largest companies in the market, including eBay, Ford, and others. It is one of only a handful of major speech technology platforms available in the market today.

AppTek's Director of Scientific Research and Development is Dr. Hermann Ney, also a professor of computer science at RWTH Aachen University, one of the largest research institutions in this field in the world, and recipient of the distinguished 2019 James L. Flanagan Speech and Audio Processing Award presented by the Institute of Electrical and Electronics Engineers (IEEE). Dr. Ney has worked on dynamic programming and discriminative training for speech recognition, on language modeling, and on data-driven approaches to machine translation. His work has resulted in more than 700 conference and journal papers, and he is one of the most cited machine translation scientists on Google Scholar. In 2005, Dr. Ney received the Technical Achievement Award of the IEEE Signal Processing Society; in 2010, he was awarded a senior DIGITEO chair at LIMSI/CNRS in Paris, France; and in 2013, he received the award of honor of the International Association for Machine Translation. Dr. Ney is a fellow of both the IEEE and the International Speech Communication Association.

With the global speech recognition market forecast to reach $32 billion in revenues by 2025, AppTek's A.I.-fueled multilingual speech recognition and machine translation technologies have it poised for rapid growth. Its 30 years of technological expertise, patent-protected I.P. portfolio, and partnerships with key players in the industry offer a compelling competitive advantage. It has compiled one of the largest repositories of speech data for machine learning in existence, spanning dozens of languages and dialects. Each data set has been used in the construction of AppTek's industry-leading ASR and M.T. engines and is scientifically tested for performance. The scientific vetting of these ML training sets provides a standardization and predictability of performance that is unique in the marketplace.

"With technology, there's often a huge difference between being first to market and being the best in the market," said John Avalos, SOSi's Chief Operating Officer. "With the AppTek deal, we aim to be both in a market that has a long way to go before it realizes the full potential of the latest speech technology."

Its newly acquired interest in AppTek is the sixth M&A deal SOSi has done to date, coming on the heels of its acquisition of Denmark-based NorthStar Systems in February. Under the terms of the agreement, SOSi and AppTek will jointly develop solutions for a variety of classified and unclassified use cases.

About SOSi

Founded in 1989, SOSi is the largest private, family-owned and operated technology and services integrator in the aerospace, defense, and government services industry. Its portfolio includes military logistics, intelligence analysis, software development, and cybersecurity. For more information, visit http://www.sosi.com and connect with SOSi on LinkedIn, Facebook, and Twitter.

About AppTek

Founded in 1990, AppTek is a leading developer of A.I. and Machine Learning applied to Neural Machine Translation, Automatic Speech Recognition, and Natural Language Processing. These technologies are deployed at scale in the cloud and on-premises for call centers and the media and entertainment industries.

The rest is here:
SOSi Invests in AppTek to Advance Artificial Intelligence and Machine Learning for Its Speech Recognition and Translation Offerings - Business Wire

Nob Hill Water Association Saves Water and Costs with VODA.ai’s Machine Learning – PRNewswire

BOSTON, June 17, 2020 /PRNewswire/ -- Nob Hill Water Association announced it will renew its VODA.ai machine learning subscription for another year. VODA.ai's software has been helping reduce water loss, property damage, and expenses with its artificial intelligence-based virtual condition assessment for water mains. VODA.ai's daVinci machine learning platform enables utilities to make smart, data-driven decisions for proactively monitoring, repairing, replacing and, when appropriate, ignoring water mains.

Zella West, long-time manager of the Nob Hill Water Association, said: "Every utility has more miles of mainline that should be replaced than there is money in the budget. Nob Hill is using this program to direct our valve exercising program to the mains that are predicted to fail, so that if they do fail, the damage can be kept to a minimum." She added: "VODA.ai's artificial intelligence platform finds patterns of pipe strengths and weaknesses for all of our water mains. They even predict which pipes are likely to fail within the next twelve months. This helps us make smarter decisions on which pipes to replace or leave alone. Asset management decisions based on the age of pipes or their failure history are generally less than half as accurate as VODA.ai's machine learning assessments."

About VODA.ai: VODA.ai uses artificial intelligence to assess the condition of water mains and help water utilities make smart decisions managing pipe assets. VODA.ai is a Software-as-a-Service company serving utilities worldwide. It is headquartered in Boston, Massachusetts.

About Nob Hill Water Association: Nob Hill Water Association provides water services in Yakima County, Washington, including much of the City of Yakima. It is a private, non-profit association and, outside of the city, is the largest water system in Yakima Valley.

If you would like more information about this topic, please email Jim Fitchett at [emailprotected] or call him at 978-502-1782.

SOURCE VODA.ai

Read the rest here:
Nob Hill Water Association Saves Water and Costs with VODA.ai's Machine Learning - PRNewswire

Scality Invests to Advance AI and Machine Learning with Inria Research Institute – HPCwire

SAN FRANCISCO, Calif., June 16, 2020 – Scality, provider of software solutions for global data orchestration and distributed file and object storage, announced an investment in Fondation Inria, the foundation of the well-known French national research institute for digital sciences, Inria. Bringing both financial and collaborative backing to the institute, Scality will help support multi-disciplinary research and innovation initiatives, including mind-body health, precision agriculture, neurodegenerative diagnostics, privacy protection, and more.

"To be at the forefront of technological advancements and research has been a priority for Scality since our inception, and we currently hold 10 patents. It only made sense for us to deepen our relationship with one of the most advanced research institutes on AI and algorithms in the world," said Jérôme Lecat, Scality CEO and co-founder. "We believe that technology and digital sciences can provide answers to the issues facing our fractured global society. Inria research teams work on incredible projects that actually change lives with personalized medicine, precision agriculture, sustainable development, smart cities and mobility, and security and privacy protection."

Scality has been close to Inria for many years and is involved with several collaborative research projects that are developing new concepts for distributed and scalable storage with Inria Distinguished Research Scholar Marc Shapiro. One such project is RainbowFS, which investigates an approach to distributed storage that ensures distributed consistency semantics tailored to applications, in order to develop smarter and massively scalable systems.

"We are delighted to be working with Scality. This collaboration is bringing two major players in French technology closer in order to further research and innovation on a global scale," said Jean-Baptiste Hennequin, Fondation Inria managing director. "Our values align very closely with Scality's: innovative research, social responsibility, and open source. For example, our sheltered foundations are promoting the distribution of open source software for durable development by bringing together their user communities within consortia, in recognition of how software embodies humanity's technical and scientific knowledge."

Read more about some of the exciting projects carried out by Inria research teams:

Read more here:
Scality Invests to Advance AI and Machine Learning with Inria Research Institute - HPCwire