Archive for the ‘Machine Learning’ Category

Can Humans Ever Understand What Sperm Whales say? This Research Has Roadmap Towards It – Gadgets 360

A new paper titled 'Cetacean Translation Initiative: a roadmap to deciphering the communication of sperm whales' explains how scientists are going to try to decode whale vocalisations. The researchers are using machine learning techniques to translate the clicking and other noises made by sperm whales, to see if we can understand what the giant creatures are saying.

"Whether known non-human communication systems exhibit similarly rich structure, either of the same kind as human languages or completely new, remains an open question," reads the concluding sentence of the introduction of the paper, posted to the preprint server arXiv.org. The paper was authored by 16 scientists from the Project CETI collaboration.

It was only in the 1950s that humans observed that sperm whales make sounds, and it took another two decades to understand that they were using those sounds to communicate, according to the new research posted by CETI.

Researchers say that the past decade witnessed a ground-breaking rise of machine learning for human language analysis, and recent research has shown the promise that such tools may also be used for analysing acoustic communication in nonhuman species.

"We posit that machine learning will be the cornerstone of the future collection, processing, and analysis of multimodal streams of data in animal communication studies," reads the abstract of the paper.

To further this understanding, scientists have picked sperm whales for their highly developed neuroanatomical features, cognitive abilities, social structures, and discrete click-based encoding, which make them an excellent starting point for advanced machine learning tools that can later be applied to other animals.

The paper is basically a roadmap towards this goal, they add. Scientists have outlined key elements needed for the collection and processing of massive bioacoustics data of sperm whales, detecting their basic communication units and language-like higher-level structures, and validating these models through interactive playback experiments.

They further say that technological advancements achieved during this effort are expected to benefit broader communities investigating non-human communication and animal behavioural research.

Researchers explain that the clicking sounds sperm whales make appear to serve a dual purpose: echolocation at the depths to which the whales dive, and social vocalisation. The communication clicks are more tightly packed, according to the CETI paper.

It is not difficult to see that a project of this scale comes with complexities and challenges.

David Gruber, a marine biologist and CETI project leader, said that figuring out what they have been able to discover thus far has been challenging, adding that sperm whales have "been so hard for humans to study for so many years." But now, "we actually do have the tools to be able to look at this more in-depth in a way that we haven't been able to before," he said, adding that those tools include AI, robotics, and drones.

A report in Live Science said that the CETI project has a massive stash of recordings of about 100,000 sperm whale clicks, painstakingly gathered by marine biologists over many years. However, it said that the machine-learning algorithms might need somewhere close to 4 billion clicks before they can start drawing any conclusions.

To that end, CETI is setting up numerous automated channels to collect recordings from sperm whales. The tools CETI is using include underwater microphones placed in waters frequented by sperm whales, microphones that can be dropped by eagle-eyed airborne drones as soon as they spot a pod of sperm whales gathering at the surface, and even robotic fish that can follow and listen to whales from a distance, the report said.

Collecting these sounds is not the only challenge: according to a 2016 study in the journal Royal Society Open Science, sperm whales are known to have dialects as well. Finding answers to these questions is what CETI is dedicated to.

Go here to read the rest:
Can Humans Ever Understand What Sperm Whales say? This Research Has Roadmap Towards It - Gadgets 360

Machine Learning And Intelligent Process Automation; Interview with Bikram Singh, Co-Founder and CEO of EZOPS – TechBullion


With Artificial Intelligence, EZOPS can maximize data confidence, integrity, and control. This machine learning and intelligent process automation platform is one innovation to look out for; CEO Bikram Singh shares more insights into the platform in this interview with TechBullion.

I am Bikram Singh and I am the CEO and Co-Founder of EZOPS.

I have built and managed operational services and technology solutions for banks, hedge funds, asset managers, fund administrators, and custodians.

From my experience in the financial industry, I know firsthand the pain points that plague data management teams. As a result, it has become my mission to develop an end-to-end platform that addresses the challenges teams face across the entire lifecycle of data. Through EZOPS, I am able to achieve my goal of providing financial institutions with a solution that drives operational efficiency and delivers quality data.

Prior to founding EZOPS, I had over 20 years of experience managing financial services operations and technology while working at McKinsey & Company, Lehman Brothers, Lava Trading, Goldman Sachs, and Citi.

EZOPS is AI-enabled software that harnesses the power of machine learning and intelligent process automation to revolutionize data control and drive transformative efficiency gains at some of the world's largest financial services institutions.

Through my years of experience in financial services, I, along with Co-Founders Sarva Srinivasan and Dutt Chintalapati, realized that we could develop and implement automated workflows to solve many of the challenges our clients faced every day. We combined our industry experience with our knowledge of machine learning and automation to develop EZOPS, in an effort to eliminate the longstanding redundancies and inefficiencies that have plagued the industry for decades and to help transform how data is controlled at large financial institutions today.

EZOPS is a leader in cutting-edge innovation for the financial services sector, serving Global Banks, Regional Banks, Custodians, Asset Service Providers, Asset Management, Operations Outsourcers, Fintech, and Corporate Treasury.

Our solutions help our clients transform their business operations and cover crucial areas such as Operations, Finance, Governance, Regulations, Compliance, and Audit to enhance quality & control for post-trade operations.

EZOPS offers the comprehensive functionality that businesses of large scale and complexity need in order to manage the four pillars of operational data control: reconciliation, research, remediation, and reporting, all powered by Machine Learning and smart workflow management.

EZOPS intelligently automates repeatable actions, checks for errors, and offers insights that users might miss on their own. The goal is to streamline parts of the process that software can do better.

The EZOPS platform combines machine learning with smart workflow management functionality for comprehensive end-to-end automation.

It integrates siloed data and processes across the enterprise for cohesive exception management processing. EZOPS ARO improves transparency and communication via alerts, notifications, messages, and emails.

It facilitates source-system remediation to OMS, PMS, accounting systems, and sources for reference data, corporate actions, and market data.

Since the financial crisis, the landscape across the institutional financial sector has changed. This change has further accelerated with the global pandemic and the drive for digital transformation.

The business of financial intermediation is entering the post-internet era and the next decade will see business models on the institutional side being disrupted as large financial institutions start taking a hard look at the collection of businesses they have and the associated fit with their respective business model and strategy.

As digitalization, shedding, restructuring, and realignment take place, they will present an opportunity for a variety of players, many of whom will likely be unregulated, technologically savvier, and much more nimble than the institutions of the past.

Transactional volumes have increased during the pandemic in conjunction with an increased focus on regulatory reporting and compliance. At the same time markets and companies have become more fragmented.

As a result, operational and technical infrastructures that were primarily built to support pre-crisis business complexity, volumes, and regulatory reporting are proving costly to maintain and are yielding less business value than desired.

EZOPS can be easily integrated into a client's current operating systems via cloud or on-premise installations. Clients are up and running in a matter of days, depending on the complexity of their ecosystem and tech stack. Amazon Web Services (AWS) users can access EZOPS ARO capabilities via the Amazon Marketplace in a matter of hours. EZOPS' multiple partner and channel integrations allow clients to switch on new capabilities seamlessly and in a frictionless manner.

Yes, we have a strategic partner ecosystem consisting of technology providers, consulting organizations, and financial software firms. Our partners complement our software solution and support our clients globally. Solutions partners include: BNY Mellon, Riskfocus, Orchestrade, and Access Fintech. Technology partners include: Snowflake, Oracle, and AWS.

Website: https://www.ezops.com

LinkedIn: https://www.linkedin.com/company/ezopsinc/about/

Twitter: @ezopsinc

Facebook: @ezopsinc

Link:
Machine Learning And Intelligent Process Automation; Interview with Bikram Singh, Co-Founder and CEO of EZOPS - TechBullion

Examine the Bioinformatics Market: Future of Machine Learning and AI, it is Creating Real Change in the… – WhaTech

Global Bioinformatics Market by Product & Service (Knowledge Management Tools, Data Analysis Platforms (Structural & Functional), Services), Applications (Genomics, Proteomics & Metabolomics), & Sectors (Medical, Academics, Agriculture)

The information collected is used to understand the molecular mechanisms of diseases. Bioinformatics is increasingly being used to identify genes in DNA sequences.

This assists in developing better treatments and diagnostic tests. Recently, due to significant reductions in costs of sequencing, many scientific research institutes and biotech companies have undertaken initiatives to perform sequencing studies at their own facilities.

According to the new market research report "Bioinformatics Market by Product & Service (Knowledge Management Tools, Data Analysis Platforms (Structural & Functional), Services), Applications (Genomics, Proteomics & Metabolomics), & Sectors (Medical, Academics, Agriculture)", the global bioinformatics market is expected to account for USD 7,063.7 million in 2018 and to reach USD 13,901.5 million by 2023, at a CAGR of 14.5% during the forecast period.

Major Growth Drivers:

Growth of the bioinformatics market is driven by the growing demand for nucleic acid and protein sequencing, increasing government initiatives and funding, and increasing use of bioinformatics in drug discovery and biomarker development processes. With the introduction of upcoming technologies such as nanopore sequencing (third-generation sequencing technique) and cloud computing, the market is expected to offer significant opportunities for manufacturers of bioinformatics solutions.


Accessories to Fuel the Growth of Bioinformatics Market :

Bioinformatics is the application of computer technology for the management and analysis of biological data. It includes collection, storage, retrieval, manipulation, and modelling of data for analysis, visualization, or prediction through algorithms and software.

However, factors such as a dearth of skilled personnel to ensure proper use of bioinformatics tools and lack of integration of a wide variety of data generated through various bioinformatics platforms are hindering market growth.

Browse in-depth TOC on Bioinformatics Market

189 Tables, 27 Figures, 195 Pages

Download PDF Brochure: www.marketsandmarkets.com/pdfdown.asp?id=39

By Product and Service, the bioinformatics platforms segment is expected to be the fastest-growing segment in the forecast period

Knowledge management tools commanded the largest market share in the global bioinformatics market in 2018, while the bioinformatics platforms segment is expected to be the fastest-growing segment in the forecast period. The major factor driving growth of bioinformatics platforms is their growing use in various genomic applications.

In addition, the use of bioinformatics platforms is increasing in the drug discovery & development process, which is contributing to market growth.

By Application, the metabolomics segment is expected to grow at the highest CAGR during the forecast period

Factors such as the availability of research funding and government support are fueling market growth. However, metabolomes cannot be easily identified or inferred from reconstructed biochemical pathways due to enzymatic diversity, substrate ambiguity, and differences in regulatory mechanisms.

Hence, the annotation of unknown metabolic signals is the main hindrance to growth of the metabolomics segment.

The APAC market is expected to grow at the highest CAGR during the forecast period

The market in the Asia Pacific region is expected to offer significant opportunities for players to offset revenue losses incurred in mature markets. Emerging countries in this region are witnessing growth in their GDPs and a significant rise in disposable income levels.

This has led to increased healthcare spending by a larger population base, healthcare infrastructure modernization, and rising penetration of cutting-edge research and clinical laboratory technologies, including bioinformatics, in Asia Pacific countries. These factors are expected to provide significant growth opportunities to bioinformatics companies operating in this region.

Request Sample Report: www.marketsandmarkets.com/request.asp?id=39

Key Market Players

Thermo Fisher Scientific, Eurofins Scientific, Illumina, PerkinElmer, Inc., QIAGEN Bioinformatics, Agilent Technologies, DNASTAR, Waters Corporation, Sophia Genetics, Partek, Biomax Informatics AG, WuXi NextCODE, Beijing Genomics Institute (BGI)


More here:
Examine the Bioinformatics Market: Future of Machine Learning and AI, it is Creating Real Change in the... - WhaTech

Machine Learning Reveals the Critical Interactions for SARS-CoV-2 Spike Protein Binding to ACE2 – DocWire News

This article was originally published here

J Phys Chem Lett. 2021 Jun 4:5494-5502. doi: 10.1021/acs.jpclett.1c01494. Online ahead of print.

ABSTRACT

SARS-CoV and SARS-CoV-2 bind to the human ACE2 receptor in practically identical conformations, although several residues of the receptor-binding domain (RBD) differ between them. Herein, we have used molecular dynamics (MD) simulations, machine learning (ML), and free-energy perturbation (FEP) calculations to elucidate the differences in binding by the two viruses. Although only subtle differences were observed from the initial MD simulations of the two RBD-ACE2 complexes, ML identified the individual residues with the most distinctive ACE2 interactions, many of which have been highlighted in previous experimental studies. FEP calculations quantified the corresponding differences in binding free energies to ACE2, and examination of MD trajectories provided structural explanations for these differences. Lastly, the energetics of emerging SARS-CoV-2 mutations were studied, showing that the affinity of the RBD for ACE2 is increased by N501Y and E484K mutations but is slightly decreased by K417N.

PMID:34086459 | DOI:10.1021/acs.jpclett.1c01494

Read the rest here:
Machine Learning Reveals the Critical Interactions for SARS-CoV-2 Spike Protein Binding to ACE2 - DocWire News

Machine learning security needs new perspectives and incentives – TechTalks

At this year's International Conference on Learning Representations (ICLR), a team of researchers from the University of Maryland presented an attack technique meant to slow down deep learning models that have been optimized for fast and sensitive operations. The attack, aptly named DeepSloth, targets adaptive deep neural networks, a range of deep learning architectures that cut down computations to speed up processing.

Recent years have seen growing interest in the security of machine learning and deep learning, and there are numerous papers and techniques on hacking and defending neural networks. But one thing made DeepSloth particularly interesting: The researchers at the University of Maryland were presenting a vulnerability in a technique they themselves had developed two years earlier.

In some ways, the story of DeepSloth illustrates the challenges that the machine learning community faces. On the one hand, many researchers and developers are racing to make deep learning available to different applications. On the other hand, their innovations cause new challenges of their own. And they need to actively seek out and address those challenges before they cause irreparable damage.

One of the biggest hurdles of deep learning is the computational cost of training and running deep neural networks. Many deep learning models require huge amounts of memory and processing power, and therefore they can only run on servers that have abundant resources. This makes them unusable for applications that require all computations and data to remain on edge devices, or that need real-time inference and can't afford the delay caused by sending their data to a cloud server.

In the past few years, machine learning researchers have developed several techniques to make neural networks less costly. One range of optimization techniques, called multi-exit architecture, stops computations when a neural network reaches acceptable accuracy. Experiments show that for many inputs, you don't need to go through every layer of the neural network to reach a conclusive decision. Multi-exit neural networks save computation resources by bypassing the calculations of the remaining layers once they become confident about their results.
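As a sketch of that idea, a multi-exit forward pass can be written in a few lines of plain Python. The layer and head functions here are hypothetical stand-ins for real network layers and intermediate classifier heads, and the 0.9 confidence threshold is an assumed value:

```python
def multi_exit_forward(x, layers, exit_heads, threshold=0.9):
    """Run layers in order; stop early once an exit head is confident."""
    probs, used = None, 0
    for layer, head in zip(layers, exit_heads):
        x = layer(x)
        used += 1
        probs = head(x)                  # class probabilities at this exit
        if max(probs) >= threshold:      # confident enough: take the early exit
            break
    return probs, used

# Hypothetical demo: three toy "layers" whose exit heads grow more
# confident with depth, so inference stops at the second exit.
layers = [lambda x: x + 1, lambda x: x + 1, lambda x: x + 1]
heads = [lambda x: [0.6, 0.4], lambda x: [0.95, 0.05], lambda x: [0.99, 0.01]]
probs, layers_used = multi_exit_forward(0, layers, heads)
print(layers_used)  # 2 of the 3 layers were run
```

A real multi-exit model attaches such heads after selected intermediate layers of a deep network; the compute savings come from the layers that are skipped entirely after an early exit fires.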

In 2019, Yigitcan Kaya, a Ph.D. student in computer science at the University of Maryland, developed a multi-exit technique called the shallow-deep network, which could reduce the average inference cost of deep neural networks by up to 50 percent. Shallow-deep networks address the problem of "overthinking," where deep neural networks start to perform unneeded computations that result in wasteful energy consumption and degrade the model's performance. The shallow-deep network paper was accepted at the 2019 International Conference on Machine Learning (ICML).

"Early-exit models are a relatively new concept, but there is a growing interest," Tudor Dumitras, Kaya's research advisor and associate professor at the University of Maryland, told TechTalks. "This is because deep learning models are getting more and more expensive computationally, and researchers look for ways to make them more efficient."

Dumitras has a background in cybersecurity and is also a member of the Maryland Cybersecurity Center. In the past few years, he has been engaged in research on security threats to machine learning systems. But while a lot of the work in the field focuses on adversarial attacks, Dumitras and his colleagues were interested in finding all possible attack vectors that an adversary might use against machine learning systems. Their work has spanned various fields including hardware faults, cache side-channel attacks, software bugs, and other types of attacks on neural networks.

While working on the deep-shallow network with Kaya, Dumitras and his colleagues started thinking about the harmful ways the technique might be exploited.

"We then wondered if an adversary could force the system to overthink; in other words, we wanted to see if the latency and energy savings provided by early-exit models like SDN are robust against attacks," he said.

Dumitras started exploring slowdown attacks on shallow-deep networks with Ionut Modoranu, then a cybersecurity research intern at the University of Maryland. When the initial work showed promising results, Kaya and Sanghyun Hong, another Ph.D. student at the University of Maryland, joined the effort. Their research eventually culminated into the DeepSloth attack.

Like adversarial attacks, DeepSloth relies on carefully crafted input that manipulates the behavior of machine learning systems. However, while classic adversarial examples force the target model to make wrong predictions, DeepSloth disrupts computations. The DeepSloth attack slows down shallow-deep networks by preventing them from making early exits and forcing them to carry out the full computations of all layers.
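To make the slowdown idea concrete, here is a toy illustration in plain Python. Everything in it (the one-dimensional "network", the confidence function, the brute-force grid search) is a hypothetical sketch rather than the paper's actual method, which crafts perturbations with gradient-based optimization against real exit heads. The goal is the same: a small, bounded input change that keeps every exit below its confidence threshold so the full network must run.

```python
import math

def exit_confidence(feature):
    """Toy exit head: confidence grows with the feature's magnitude."""
    return 1.0 / (1.0 + math.exp(-abs(feature)))

def layers_used(x, depth=5, threshold=0.9):
    """Count how many toy layers run before an exit head fires on input x."""
    feature = x
    for i in range(1, depth + 1):
        feature *= 1.5                    # toy "layer": amplifies the feature
        if exit_confidence(feature) >= threshold:
            return i                      # early exit taken here
    return depth                          # no exit fired: full-depth inference

def slowdown_attack(x, budget=1.0, steps=41):
    """Grid-search a perturbation |delta| <= budget maximizing layers used."""
    best_delta, best_cost = 0.0, layers_used(x)
    for k in range(steps):
        delta = -budget + 2.0 * budget * k / (steps - 1)
        cost = layers_used(x + delta)
        if cost > best_cost:
            best_delta, best_cost = delta, cost
    return best_delta, best_cost

# A clean input exits after 2 layers; the perturbed input runs all 5.
delta, cost = slowdown_attack(1.0)
print(layers_used(1.0), cost)  # 2 5
```

The "cost" being maximized is the number of layers executed, which is exactly the latency and energy budget that early-exit architectures are supposed to save.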

"Slowdown attacks have the potential of negating the benefits of multi-exit architectures," Dumitras said. "These architectures can halve the energy consumption of a deep neural network model at inference time, and we showed that for any input we can craft a perturbation that wipes out those savings completely."

The researchers' findings show that the DeepSloth attack can reduce the efficacy of multi-exit neural networks by 90 to 100 percent. In the simplest scenario, this can cause a deep learning system to bleed memory and compute resources and become inefficient at serving users.

But in some cases, it can cause more serious harm. For example, one use of multi-exit architectures involves splitting a deep learning model between two endpoints. The first few layers of the neural network can be installed on an edge location, such as a wearable or IoT device. The deeper layers of the network are deployed on a cloud server. The edge side of the deep learning model takes care of the simple inputs that can be confidently computed in the first few layers. In cases where the edge side of the model does not reach a conclusive result, it defers further computations to the cloud.
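The split-inference setup described above can be sketched with hypothetical stand-in functions; in a real deployment, `cloud_infer` would be an RPC to a server running the deeper layers rather than a local call.

```python
def cloud_infer(features):
    """Stand-in for the server-side deeper layers; a real system would
    run the remaining layers on the features received from the edge."""
    return [0.99, 0.01]

def edge_infer(x, edge_layers, edge_head, threshold=0.9):
    """Run the first few layers on-device; defer to the cloud when unsure."""
    for layer in edge_layers:
        x = layer(x)
    probs = edge_head(x)
    if max(probs) >= threshold:
        return probs, "edge"            # confident: answer locally, no upload
    return cloud_infer(x), "cloud"      # unsure: ship features upstream
```

A DeepSloth-style input would be crafted precisely so that the edge head never clears the threshold, pushing every query down the "cloud" branch and adding the round-trip latency the researchers describe.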

In such a setting, the DeepSloth attack would force the deep learning model to send all inferences to the cloud. Aside from the extra energy and server resources wasted, the attack could have much more destructive impact.

"In a scenario typical for IoT deployments, where the model is partitioned between edge devices and the cloud, DeepSloth amplifies the latency by 1.55X, negating the benefits of model partitioning," Dumitras said. "This could cause the edge device to miss critical deadlines, for instance in an elderly monitoring program that uses AI to quickly detect accidents and call for help if necessary."

While the researchers made most of their tests on deep-shallow networks, they later found that the same technique would be effective on other types of early-exit models.

As with most works on machine learning security, the researchers first assumed that an attacker has full knowledge of the target model and has unlimited computing resources to craft DeepSloth attacks. But the criticality of an attack also depends on whether it can be staged in practical settings, where the adversary has partial knowledge of the target and limited resources.

"In most adversarial attacks, the attacker needs to have full access to the model itself; basically, they have an exact copy of the victim model," Kaya told TechTalks. "This, of course, is not practical in many settings where the victim model is protected from outside, for example with an API like Google Vision AI."

To develop a realistic evaluation of the attacker, the researchers simulated an adversary who doesn't have full knowledge of the target deep learning model. Instead, the attacker has a surrogate model on which he tests and tunes the attack. The attacker then transfers the attack to the actual target. The researchers trained surrogate models that have different neural network architectures, different training sets, and even different early-exit mechanisms.

"We find that the attacker that uses a surrogate can still cause slowdowns (between 20-50%) in the victim model," Kaya said.

Such transfer attacks are much more realistic than full-knowledge attacks, Kaya said. And as long as the adversary has a reasonable surrogate model, he will be able to attack a black-box model, such as a machine learning system served through a web API.

"Attacking a surrogate is effective because neural networks that perform similar tasks (e.g., object classification) tend to learn similar features (e.g., shapes, edges, colors)," Kaya said.

Dumitras says DeepSloth is just the first attack that works in this threat model, and he believes more devastating slowdown attacks will be discovered. He also pointed out that, aside from multi-exit architectures, other speed optimization mechanisms are vulnerable to slowdown attacks. His research team tested DeepSloth on SkipNet, a special optimization technique for convolutional neural networks (CNN). Their findings showed that DeepSloth examples crafted for multi-exit architecture also caused slowdowns in SkipNet models.

"This suggests that the two different mechanisms might share a deeper vulnerability, yet to be characterized rigorously," Dumitras said. "I believe that slowdown attacks may become an important threat in the future."

The researchers also believe that security must be baked into the machine learning research process.

"I don't think any researcher today who is doing work on machine learning is ignorant of the basic security problems. Nowadays even introductory deep learning courses include recent threat models like adversarial examples," Kaya said.

The problem, Kaya believes, has to do with adjusting incentives. "Progress is measured on standardized benchmarks and whoever develops a new technique uses these benchmarks and standard metrics to evaluate their method," he said, adding that reviewers who decide on the fate of a paper also look at whether the method is evaluated according to its claims on suitable benchmarks.

"Of course, when a measure becomes a target, it ceases to be a good measure," he said.

Kaya believes there should be a shift in the incentives of publications and academia. "Right now, academics have a luxury or burden to make perhaps unrealistic claims about the nature of their work," he says. If machine learning researchers acknowledge that their solution will never see the light of day, their paper might be rejected. But their research might serve other purposes.

For example, adversarial training causes large utility drops, has poor scalability, and is difficult to get right, limitations that are unacceptable for many machine learning applications. But Kaya points out that adversarial training can have benefits that have been overlooked, such as steering models toward becoming more interpretable.

One of the implications of too much focus on benchmarks is that most machine learning researchers don't examine the implications of their work when applied to realistic, real-world settings.

"Our biggest problem is that we treat machine learning security as an academic problem right now. So the problems we study and the solutions we design are also academic," Kaya says. "We don't know if any real-world attacker is interested in using adversarial examples or any real-world practitioner in defending against them."

Kaya believes the machine learning community should "promote and encourage research in understanding the actual adversaries of machine learning systems rather than dreaming up our own adversaries."

And finally, he says that authors of machine learning papers should be encouraged to do their homework and find ways to break their own solutions, as he and his colleagues did with the shallow-deep networks. And researchers should be explicit and clear about the limits and potential threats of their machine learning models and techniques.

"If we look at the papers proposing early-exit architectures, we see there's no effort to understand security risks although they claim that these solutions are of practical value," he says. "If an industry practitioner finds these papers and implements these solutions, they are not warned about what can go wrong. Although groups like ours try to expose potential problems, we are less visible to a practitioner who wants to use an early-exit model. Even including a paragraph about the potential risks involved in a solution goes a long way."

More:
Machine learning security needs new perspectives and incentives - TechTalks