Media Search:



Chimerix Receives US Food and Drug Administration Approval for TEMBEXA – GlobeNewswire

DURHAM, N.C., June 04, 2021 (GLOBE NEWSWIRE) -- Chimerix (NASDAQ: CMRX), a biopharmaceutical company focused on accelerating the development of medicines to treat cancer and other serious diseases, today announced that the U.S. Food and Drug Administration (FDA) has granted TEMBEXA (brincidofovir) tablets and oral suspension approval for the treatment of smallpox. TEMBEXA is approved for adult and pediatric patients, including neonates.

"We are delighted to report our first FDA-approved products for the treatment of smallpox, particularly as the importance of pandemic preparedness has been put into focus over the last year. With this approval in hand, we now look forward to advancing our discussions with the Biomedical Advanced Research and Development Authority (BARDA) toward a procurement contract to support national preparedness," said Mike Sherman, Chief Executive Officer of Chimerix.

Chimerix developed the TEMBEXA oral formulations as medical countermeasures for the treatment of smallpox under an ongoing collaboration with BARDA, part of the office of the Assistant Secretary for Preparedness and Response within the U.S. Department of Health and Human Services, under contract number HHSO100201100013C.

TEMBEXA's approval is based on efficacy data in two lethal orthopoxvirus animal models of human smallpox disease, the rabbitpox model (New Zealand White rabbits infected with rabbitpox virus) and the mousepox model (BALB/c mice infected with ectromelia virus). In the pivotal studies in each model, TEMBEXA treatment resulted in a statistically significant survival benefit versus placebo following delayed treatment after animals were infected with a lethal viral dose. The FDA's Animal Rule allows for testing of investigational drugs in animal models to support effectiveness in diseases that are not ethical or feasible to study in humans. The TEMBEXA U.S. Prescribing Information has a BOXED WARNING for increased risk of mortality when used for a longer duration; see below for Important Safety Information.

About Chimerix

Chimerix is a development-stage biopharmaceutical company dedicated to accelerating the advancement of innovative medicines that make a meaningful impact in the lives of patients living with cancer and other serious diseases. Most recently, the Company obtained FDA approval for brincidofovir as a medical countermeasure for the treatment of smallpox. The Company has two other advanced clinical-stage development programs, ONC201 and dociparstat sodium (DSTAT). ONC201 is currently in a registrational clinical program for recurrent H3 K27M-mutant glioma and a blinded independent central review is expected later in 2021. DSTAT is in development as a potential first-line therapy in acute myeloid leukemia.

About Smallpox

Smallpox is a highly contagious disease caused by the variola virus. Historically, it was one of the deadliest diseases, with a case fatality rate of approximately 30%. Despite the successful eradication of smallpox in the 1970s, there is considerable concern that variola virus could reappear, either through accidental release or as a weapon of bioterrorism. According to the U.S. Centers for Disease Control and Prevention (CDC), variola virus is ranked in the highest risk category for bioterrorism agents (Category A) due to its ease of transmission, high mortality rate, and potential to cause public panic and social disruption.

About TEMBEXA

TEMBEXA is an oral antiviral formulated as 100 mg tablets and 10 mg/mL oral suspension dosed once weekly for two weeks. TEMBEXA is indicated for the treatment of human smallpox disease caused by variola virus in adult and pediatric patients, including neonates. TEMBEXA is not indicated for the treatment of diseases other than human smallpox disease. The effectiveness of TEMBEXA for the treatment of smallpox disease has not been determined in humans because adequate and well-controlled field trials have not been feasible and inducing smallpox disease in humans to study the drug's efficacy is not ethical. TEMBEXA efficacy may be reduced in immunocompromised patients based on studies in immune deficient animals.

TEMBEXA (brincidofovir) is a nucleotide analog lipid-conjugate designed to mimic a natural monoacyl phospholipid to achieve effective intracellular concentrations of the active antiviral metabolite, cidofovir diphosphate. Cidofovir diphosphate exerts its orthopoxvirus antiviral effects by acting as an alternate substrate inhibitor for viral DNA synthesis mediated by viral DNA polymerase.

IMPORTANT SAFETY INFORMATION Including BOXED WARNING

An increased incidence of mortality was seen in TEMBEXA-treated subjects compared to placebo-treated subjects in a 24-week clinical trial when TEMBEXA was evaluated in another disease.

WARNINGS AND PRECAUTIONS

Elevations in Hepatic Transaminases and Bilirubin: May cause increases in serum transaminases (ALT or AST) and serum bilirubin. Monitor liver laboratory parameters before and during treatment.

Diarrhea and Other Gastrointestinal Adverse Events: Diarrhea and additional gastrointestinal adverse events including nausea, vomiting, and abdominal pain may occur. Monitor patients, provide supportive care, and if necessary, do not give the second and final dose of TEMBEXA.

Coadministration with Related Products: TEMBEXA should not be co-administered with intravenous cidofovir.

Carcinogenicity: TEMBEXA is considered a potential human carcinogen. Do not crush or divide TEMBEXA tablets and avoid direct contact with broken or crushed tablets or oral suspension.

Male Infertility: Based on testicular toxicity in animal studies, TEMBEXA may irreversibly impair fertility in individuals of reproductive potential.

ADVERSE REACTIONS

Common adverse reactions (adverse events assessed as causally related by the investigator in ≥2% of subjects) experienced in the first 2 weeks of dosing with TEMBEXA were diarrhea, nausea, vomiting and abdominal pain.

USE IN SPECIFIC POPULATIONS

Pregnancy

Based on findings from animal reproduction studies, TEMBEXA may cause fetal harm when administered to pregnant individuals. Pregnancy testing should be performed before initiation of TEMBEXA in individuals of childbearing potential to inform risk. An alternative therapy should be used to treat smallpox during pregnancy, if feasible.

Forward-Looking Statements

This press release contains forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995 that are subject to risks and uncertainties that could cause actual results to differ materially from those projected. Forward-looking statements include those relating to, among other things, the advancement of discussions with BARDA toward a procurement agreement for the sale of TEMBEXA to the SNS and the timing of the confirmatory response rate assessment for ONC201. Among the factors and risks that could cause actual results to differ materially from those indicated in the forward-looking statements are risks that Chimerix will not obtain a procurement contract for TEMBEXA in smallpox in a timely manner or at all; Chimerix's reliance on a sole source third-party manufacturer for drug supply; risks that ongoing or future trials may not be successful or replicate previous trial results, or may not be predictive of real-world results or of results in subsequent trials; risks and uncertainties relating to competitive products and technological changes that may limit demand for our drugs; risks that our drugs may be precluded from commercialization by the proprietary rights of third parties; and additional risks set forth in the Company's filings with the Securities and Exchange Commission. These forward-looking statements represent the Company's judgment as of the date of this release. The Company disclaims, however, any intent or obligation to update these forward-looking statements.

CONTACT:

Investor Relations: Michelle LaSpaluto, 919 972-7115, ir@chimerix.com

Will O'Connor, Stern Investor Relations, 212-362-1200, will@sternir.com

Read the rest here:
Chimerix Receives US Food and Drug Administration Approval for TEMBEXA - GlobeNewswire

EHang 216 AAV Conducted Trial Flights in Japan – GlobeNewswire

GUANGZHOU, China, June 04, 2021 (GLOBE NEWSWIRE) -- EHang Holdings Limited (Nasdaq: EH) ("EHang" or the "Company"), the world's leading autonomous aerial vehicle (AAV) technology platform company, announced today that its flagship passenger-grade AAV, the EHang 216, successfully performed its maiden unmanned, autonomous trial flight in Japan to showcase safe, autonomous, eco-friendly urban air mobility (UAM) solutions. Ahead of the trial flight, the EHang 216 obtained a trial flight permit from the Ministry of Land, Infrastructure, Transport and Tourism of Japan (MLIT) with a local partner. The EHang 216 was the first passenger-grade AAV granted permission for outdoor open airspace trial flights in Japan.

One of the trial flights was completed at the "Leading the Revolution of Urban Air Mobility" event, organized by the Okayama Kurashiki Mizushima Aero & Space Industry Cluster Study Group (MASC) and EHang at Kasaoka Air Station in Okayama Prefecture, Japan. Looking ahead, EHang and MASC will collaborate to further develop new air transportation use cases in Japan.

At the event, Chief Cabinet Secretary Mr. Kato Katsunobu sent his secretary, Mr. Sugihara Yohei, to attend and deliver a speech on his behalf, saying, "At present, many companies around the world have launched such flying car projects, and are conducting research and development and demonstration projects. As the government, we will actively improve aviation regulations while supporting private enterprises in a timely and appropriate manner."

Other guests included Ms. Ito Kaori, the Mayor of Kurashiki City, Okayama Prefecture, Mr. Yoshifumi Kobayashi, the Mayor of Kasaoka City, Okayama Prefecture, Mr. Inoue Mineichi, the Head of the Kurashiki Chamber of Commerce and Industry, and Mr. Sugimoto Tetsuya, the Head of Kasaoka Chamber of Commerce and Industry, etc. Mr. Hashimoto Gaku, member of the Japan House of Representatives and Mr. Narisawa Koichi, the Counselor of Civil Aviation Bureau at MLIT, sent their best regards and comments.

Mr. Hashimoto said, "I am very pleased that Japan's first trial flight of a flying car took place in the land of Okayama. We have high expectations for flying cars as a new generation of growth industries. We look forward to developing flying cars as social services through public-private cooperation."

In 2018, the Japanese government established the "Public-Private Council for Air Transportation Revolution" and formulated a Roadmap towards Air Transportation Revolution. The Council aims to start the business services of air transportation of goods and people utilizing flying vehicles by 2023 with gradual expansion from rural areas to urban areas. According to the blueprint, the Civil Aviation Bureau of the MLIT is studying and improving related systems such as the type and airworthiness safety standards for flying cars and the certification of pilots.

Watch the video of the EHang 216 trial flights in Japan: https://youtu.be/2WaYLNG5zX0

About EHang

EHang (Nasdaq: EH) is the world's leading autonomous aerial vehicle (AAV) technology platform company. Our mission is to make safe, autonomous, and eco-friendly air mobility accessible to everyone. EHang provides customers in various industries with AAV products and commercial solutions: air mobility (including passenger transportation and logistics), smart city management, and aerial media solutions. As the forerunner of cutting-edge AAV technologies and commercial solutions in the global Urban Air Mobility (UAM) industry, EHang continues to explore the boundaries of the sky to make flying technologies benefit our life in smart cities. For more information, please visit http://www.ehang.com.

About MASC

MASC was established in 2017 as a study group for the realization of an aerospace industry cluster in the Mizushima area of Kurashiki City, Okayama Prefecture, and became the general incorporated association MASC (Kurashiki City, Okayama Prefecture; Chairman Koji Kirino) in April 2021. With the aerospace industry at its core, MASC supports related industries in taking on new facilities and new businesses in Kurashiki City and the Takaryo River basin, developing local manufacturing and giving "dreams" to the next generation through advanced technology. MASC regards the flying car industry as a growth industry shouldering the air mobility revolution, and will carry out substantial business activities and promote the realization of various social services. https://aerospace-kurashiki.net/

Safe Harbor Statement

This press release contains statements that may constitute forward-looking statements pursuant to the safe harbor provisions of the U.S. Private Securities Litigation Reform Act of 1995. These forward-looking statements can be identified by terminology such as "will," "expects," "anticipates," "aims," "future," "intends," "plans," "believes," "estimates," "likely to" and similar statements. Management has based these forward-looking statements on its current expectations, assumptions, estimates and projections. While they believe these expectations, assumptions, estimates and projections are reasonable, such forward-looking statements are only predictions and involve known and unknown risks and uncertainties, many of which are beyond management's control. These statements involve risks and uncertainties that may cause EHang's actual results, performance or achievements to differ materially from any future results, performance or achievements expressed or implied by these forward-looking statements.

Media Contact: pr@ehang.com

Investor Contact: ir@ehang.com In the U.S.: Julia@blueshirtgroup.com In China: Susie@blueshirtgroup.com

Photos accompanying this announcement are available at:
https://www.globenewswire.com/NewsRoom/AttachmentNg/fe9f187b-200a-452b-8dbc-34e5c5000c20
https://www.globenewswire.com/NewsRoom/AttachmentNg/cbf92d95-65a6-4d6c-88f7-23ac225dc6ef

More:
EHang 216 AAV Conducted Trial Flights in Japan - GlobeNewswire

Machine learning security needs new perspectives and incentives – TechTalks

At this year's International Conference on Learning Representations (ICLR), a team of researchers from the University of Maryland presented an attack technique meant to slow down deep learning models that have been optimized for fast and sensitive operations. The attack, aptly named DeepSloth, targets adaptive deep neural networks, a range of deep learning architectures that cut down computations to speed up processing.

Recent years have seen growing interest in the security of machine learning and deep learning, and there are numerous papers and techniques on hacking and defending neural networks. But one thing made DeepSloth particularly interesting: The researchers at the University of Maryland were presenting a vulnerability in a technique they themselves had developed two years earlier.

In some ways, the story of DeepSloth illustrates the challenges that the machine learning community faces. On the one hand, many researchers and developers are racing to make deep learning available to different applications. On the other hand, their innovations cause new challenges of their own. And they need to actively seek out and address those challenges before they cause irreparable damage.

One of the biggest hurdles of deep learning is the computational cost of training and running deep neural networks. Many deep learning models require huge amounts of memory and processing power, and therefore they can only run on servers that have abundant resources. This makes them unusable for applications that require all computations and data to remain on edge devices, or that need real-time inference and can't afford the delay caused by sending their data to a cloud server.

In the past few years, machine learning researchers have developed several techniques to make neural networks less costly. One class of optimization techniques, called multi-exit architectures, stops computations when a neural network reaches acceptable accuracy. Experiments show that for many inputs, you don't need to go through every layer of the neural network to reach a conclusive decision. Multi-exit neural networks save computation resources by bypassing the calculations of the remaining layers once they become confident about their results.
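
To make the idea concrete, here is a minimal PyTorch sketch of a multi-exit network with an internal classifier ("exit head") after each block: if an exit is confident enough, the remaining layers are skipped. The layer sizes, exit heads, and 0.9 confidence threshold are illustrative assumptions, not the exact shallow-deep network from the paper.

```python
# Minimal sketch of a multi-exit ("early exit") network.
# Architecture and the 0.9 threshold are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiExitNet(nn.Module):
    def __init__(self, num_classes=10, threshold=0.9):
        super().__init__()
        self.threshold = threshold
        self.block1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.block2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.block3 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        # One internal classifier (exit head) after each block.
        self.exit1 = nn.Linear(16 * 16 * 16, num_classes)
        self.exit2 = nn.Linear(32 * 8 * 8, num_classes)
        self.exit3 = nn.Linear(64, num_classes)

    def forward(self, x):
        # Handles a single input for clarity; batched early exit needs per-sample masking.
        h = self.block1(x)
        logits = self.exit1(h.flatten(1))
        if F.softmax(logits, dim=1).max() > self.threshold:
            return logits, 1      # confident: stop after block 1
        h = self.block2(h)
        logits = self.exit2(h.flatten(1))
        if F.softmax(logits, dim=1).max() > self.threshold:
            return logits, 2      # confident: stop after block 2
        h = self.block3(h)
        return self.exit3(h.flatten(1)), 3  # final exit

model = MultiExitNet().eval()
with torch.no_grad():
    logits, exit_used = model(torch.randn(1, 3, 32, 32))
print("exited at head", exit_used)
```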

In 2019, Yigitcan Kaya, a Ph.D. student in computer science at the University of Maryland, developed a multi-exit technique called the shallow-deep network, which could reduce the average inference cost of deep neural networks by up to 50 percent. Shallow-deep networks address the problem of "overthinking," where deep neural networks start to perform unneeded computations that result in wasteful energy consumption and degrade the model's performance. The shallow-deep network was accepted at the 2019 International Conference on Machine Learning (ICML).

"Early-exit models are a relatively new concept, but there is a growing interest," Tudor Dumitras, Kaya's research advisor and associate professor at the University of Maryland, told TechTalks. "This is because deep learning models are getting more and more expensive computationally, and researchers look for ways to make them more efficient."

Dumitras has a background in cybersecurity and is also a member of the Maryland Cybersecurity Center. In the past few years, he has been engaged in research on security threats to machine learning systems. But while a lot of the work in the field focuses on adversarial attacks, Dumitras and his colleagues were interested in finding all possible attack vectors that an adversary might use against machine learning systems. Their work has spanned various fields including hardware faults, cache side-channel attacks, software bugs, and other types of attacks on neural networks.

While working on the shallow-deep network with Kaya, Dumitras and his colleagues started thinking about the harmful ways the technique might be exploited.

"We then wondered if an adversary could force the system to overthink; in other words, we wanted to see if the latency and energy savings provided by early exit models like SDN are robust against attacks," he said.

Dumitras started exploring slowdown attacks on shallow-deep networks with Ionut Modoranu, then a cybersecurity research intern at the University of Maryland. When the initial work showed promising results, Kaya and Sanghyun Hong, another Ph.D. student at the University of Maryland, joined the effort. Their research eventually culminated in the DeepSloth attack.

Like adversarial attacks, DeepSloth relies on carefully crafted input that manipulates the behavior of machine learning systems. However, while classic adversarial examples force the target model to make wrong predictions, DeepSloth disrupts computations. The DeepSloth attack slows down shallow-deep networks by preventing them from making early exits and forcing them to carry out the full computations of all layers.

"Slowdown attacks have the potential of negating the benefits of multi-exit architectures," Dumitras said. "These architectures can halve the energy consumption of a deep neural network model at inference time, and we showed that for any input we can craft a perturbation that wipes out those savings completely."
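
A slowdown perturbation in the spirit of DeepSloth can be sketched as a projected gradient descent (PGD) loop that pushes every exit head's output toward a uniform distribution, so no exit becomes confident enough to trigger. This is a simplified stand-in for the paper's actual objective; the all_exit_logits helper, step sizes, and L-infinity budget below are assumptions made for illustration.

```python
# Hedged sketch of a slowdown-style attack: perturb the input so every
# internal classifier stays unconfident, preventing early exits.
import torch
import torch.nn.functional as F

def slowdown_attack(model, x, eps=8/255, alpha=1/255, steps=40):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Assumes the model exposes logits from every exit head (hypothetical helper).
        exit_logits = model.all_exit_logits(x_adv)
        loss = 0.0
        for logits in exit_logits:
            log_probs = F.log_softmax(logits, dim=1)
            uniform = torch.full_like(log_probs, 1.0 / log_probs.size(1))
            # KL(uniform || p) is small when the exit's output is near uniform,
            # i.e., when the exit is maximally unconfident.
            loss = loss + F.kl_div(log_probs, uniform, reduction="batchmean")
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Descend on the loss so every exit becomes less confident,
        # staying inside an L-infinity ball of radius eps around x.
        x_adv = x_adv.detach() - alpha * grad.sign()
        x_adv = torch.clamp(x_adv, x - eps, x + eps).clamp(0, 1)
    return x_adv.detach()
```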

The researchers' findings show that the DeepSloth attack can reduce the efficacy of the multi-exit neural networks by 90-100 percent. In the simplest scenario, this can cause a deep learning system to bleed memory and compute resources and become inefficient at serving users.

But in some cases, it can cause more serious harm. For example, one use of multi-exit architectures involves splitting a deep learning model between two endpoints. The first few layers of the neural network can be installed on an edge location, such as a wearable or IoT device. The deeper layers of the network are deployed on a cloud server. The edge side of the deep learning model takes care of the simple inputs that can be confidently computed in the first few layers. In cases where the edge side of the model does not reach a conclusive result, it defers further computations to the cloud.

In such a setting, the DeepSloth attack would force the deep learning model to send all inferences to the cloud. Aside from the extra energy and server resources wasted, the attack could have much more destructive impact.

"In a scenario typical for IoT deployments, where the model is partitioned between edge devices and the cloud, DeepSloth amplifies the latency by 1.5-5X, negating the benefits of model partitioning," Dumitras said. "This could cause the edge device to miss critical deadlines, for instance in an elderly monitoring program that uses AI to quickly detect accidents and call for help if necessary."
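
For concreteness, the edge/cloud partitioning described above can be sketched as follows. The layer shapes, confidence threshold, and the in-process stand-in for the cloud call are illustrative assumptions; a real deployment would serialize the intermediate features and send them to a server, which is exactly the hand-off a slowdown attack forces on every input.

```python
# Sketch of partitioned inference: the first layers and an exit head run
# on the device, and only unconfident inputs reach the "cloud" half.
# Layer shapes, threshold, and the local "cloud" call are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

CONFIDENCE_THRESHOLD = 0.9

edge_layers = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4))
edge_exit = nn.Linear(16 * 4 * 4, 10)
cloud_layers = nn.Sequential(nn.Flatten(), nn.Linear(16 * 4 * 4, 64), nn.ReLU(), nn.Linear(64, 10))

def infer(x):
    features = edge_layers(x)
    logits = edge_exit(features.flatten(1))
    if F.softmax(logits, dim=1).max().item() >= CONFIDENCE_THRESHOLD:
        return logits.argmax(dim=1), "edge"   # resolved on-device
    # Unconfident: in a real deployment the features would be sent over
    # the network to a server hosting the deeper layers.
    return cloud_layers(features).argmax(dim=1), "cloud"

with torch.no_grad():
    preds, where = infer(torch.randn(1, 3, 32, 32))
print(where, preds.item())
```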

While the researchers made most of their tests on shallow-deep networks, they later found that the same technique would be effective on other types of early-exit models.

As with most works on machine learning security, the researchers first assumed that an attacker has full knowledge of the target model and has unlimited computing resources to craft DeepSloth attacks. But the criticality of an attack also depends on whether it can be staged in practical settings, where the adversary has partial knowledge of the target and limited resources.

"In most adversarial attacks, the attacker needs to have full access to the model itself; basically, they have an exact copy of the victim model," Kaya told TechTalks. "This, of course, is not practical in many settings where the victim model is protected from outside, for example with an API like Google Vision AI."

To develop a realistic evaluation of the attacker, the researchers simulated an adversary who doesn't have full knowledge of the target deep learning model. Instead, the attacker has a surrogate model on which he tests and tunes the attack. The attacker then transfers the attack to the actual target. The researchers trained surrogate models that have different neural network architectures, different training sets, and even different early-exit mechanisms.
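
In code, this transfer setting looks roughly like the sketch below: the attacker trains a surrogate they fully control, crafts the malicious input against it white-box (for example with the slowdown perturbation sketched earlier), and then simply submits the crafted input to the black-box victim. The training loop, the craft_fn and victim_query_fn parameters, and the architectures are illustrative assumptions.

```python
# Sketch of a transfer attack via a surrogate model.
import torch
import torch.nn as nn

def train_surrogate(make_model, data_loader, epochs=10, lr=1e-3):
    """Train a stand-in model the attacker fully controls."""
    surrogate = make_model()
    opt = torch.optim.Adam(surrogate.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in data_loader:
            opt.zero_grad()
            loss_fn(surrogate(x), y).backward()
            opt.step()
    return surrogate

def transfer_attack(surrogate, craft_fn, victim_query_fn, x):
    x_adv = craft_fn(surrogate, x)    # white-box crafting on the surrogate
    return victim_query_fn(x_adv)     # replay against the black-box victim
```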

"We find that the attacker that uses a surrogate can still cause slowdowns (between 20-50%) in the victim model," Kaya said.

Such transfer attacks are much more realistic than full-knowledge attacks, Kaya said. And as long as the adversary has a reasonable surrogate model, he will be able to attack a black-box model, such as a machine learning system served through a web API.

"Attacking a surrogate is effective because neural networks that perform similar tasks (e.g., object classification) tend to learn similar features (e.g., shapes, edges, colors)," Kaya said.

Dumitras says DeepSloth is just the first attack that works in this threat model, and he believes more devastating slowdown attacks will be discovered. He also pointed out that, aside from multi-exit architectures, other speed optimization mechanisms are vulnerable to slowdown attacks. His research team tested DeepSloth on SkipNet, a special optimization technique for convolutional neural networks (CNN). Their findings showed that DeepSloth examples crafted for multi-exit architecture also caused slowdowns in SkipNet models.

"This suggests that the two different mechanisms might share a deeper vulnerability, yet to be characterized rigorously," Dumitras said. "I believe that slowdown attacks may become an important threat in the future."

The researchers also believe that security must be baked into the machine learning research process.

"I don't think any researcher today who is doing work on machine learning is ignorant of the basic security problems. Nowadays even introductory deep learning courses include recent threat models like adversarial examples," Kaya said.

The problem, Kaya believes, has to do with adjusting incentives. "Progress is measured on standardized benchmarks and whoever develops a new technique uses these benchmarks and standard metrics to evaluate their method," he said, adding that reviewers who decide on the fate of a paper also look at whether the method is evaluated according to their claims on suitable benchmarks.

"Of course, when a measure becomes a target, it ceases to be a good measure," he said.

Kaya believes there should be a shift in the incentives of publications and academia. "Right now, academics have a luxury or burden to make perhaps unrealistic claims about the nature of their work," he says. If machine learning researchers acknowledge that their solution will never see the light of day, their paper might be rejected. But their research might serve other purposes.

For example, adversarial training causes large utility drops, has poor scalability, and is difficult to get right: limitations that are unacceptable for many machine learning applications. But Kaya points out that adversarial training can have benefits that have been overlooked, such as steering models toward becoming more interpretable.

One of the implications of too much focus on benchmarks is that most machine learning researchers don't examine the implications of their work when applied to realistic, real-world settings.

"Our biggest problem is that we treat machine learning security as an academic problem right now. So the problems we study and the solutions we design are also academic," Kaya says. "We don't know if any real-world attacker is interested in using adversarial examples or any real-world practitioner in defending against them."

Kaya believes the machine learning community should promote and encourage research in understanding the actual adversaries of machine learning systems rather than "dreaming up our own adversaries."

And finally, he says that authors of machine learning papers should be encouraged to do their homework and find ways to break their own solutions, as he and his colleagues did with the shallow-deep networks. And researchers should be explicit and clear about the limits and potential threats of their machine learning models and techniques.

"If we look at the papers proposing early-exit architectures, we see there's no effort to understand security risks although they claim that these solutions are of practical value," he says. "If an industry practitioner finds these papers and implements these solutions, they are not warned about what can go wrong. Although groups like ours try to expose potential problems, we are less visible to a practitioner who wants to use an early-exit model. Even including a paragraph about the potential risks involved in a solution goes a long way."

More:
Machine learning security needs new perspectives and incentives - TechTalks

Adversarial attacks in machine learning: What they are and how to stop them – VentureBeat


Adversarial machine learning, a technique that attempts to fool models with deceptive data, is a growing threat in the AI and machine learning research community. The most common reason is to cause a malfunction in a machine learning model. An adversarial attack might entail presenting a model with inaccurate or misrepresentative data as it trains, or introducing maliciously designed data to deceive an already trained model.

As the U.S. National Security Commission on Artificial Intelligence's 2019 interim report notes, a very small percentage of current AI research goes toward defending AI systems against adversarial efforts. Some systems already used in production could be vulnerable to attack. For example, by placing a few small stickers on the ground, researchers showed that they could cause a self-driving car to move into the opposite lane of traffic. Other studies have shown that making imperceptible changes to an image can trick a medical analysis system into classifying a benign mole as malignant, and that pieces of tape can deceive a computer vision system into wrongly classifying a stop sign as a speed limit sign.

The increasing adoption of AI is likely to correlate with a rise in adversarial attacks. It's a never-ending arms race, but fortunately, effective approaches exist today to mitigate the worst of the attacks.

Attacks against AI models are often categorized along three primary axes: influence on the classifier, the security violation, and their specificity. They can be further subcategorized as white box or black box. In white box attacks, the attacker has access to the model's parameters, while in black box attacks, the attacker has no access to these parameters.

An attack can influence the classifier (i.e., the model) by disrupting the model as it makes predictions, while a security violation involves supplying malicious data that gets classified as legitimate. A targeted attack attempts to allow a specific intrusion or disruption, or alternatively to create general mayhem.

Evasion attacks are the most prevalent type of attack, where data are modified to evade detection or to be classified as legitimate. Evasion doesn't involve influence over the data used to train a model, but it is comparable to the way spammers and hackers obfuscate the content of spam emails and malware. An example of evasion is image-based spam, in which spam content is embedded within an attached image to evade analysis by anti-spam models. Another example is spoofing attacks against AI-powered biometric verification systems.
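
A minimal illustration of the evasion idea is the fast gradient sign method (FGSM), which nudges an input in the direction that most increases the model's loss while keeping the change imperceptibly small. The sketch below assumes a generic PyTorch classifier and an illustrative epsilon; it is not tied to any specific system mentioned in this article.

```python
# FGSM-style evasion: a one-step adversarial perturbation.
import torch
import torch.nn.functional as F

def fgsm_evasion(model, x, true_label, eps=0.03):
    """x: input batch in [0, 1]; true_label: tensor of class indices."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), true_label)
    loss.backward()
    # Step up the loss gradient so the input gets misclassified,
    # while the perturbation stays within +/- eps per pixel.
    x_adv = x + eps * x.grad.sign()
    return torch.clamp(x_adv, 0, 1).detach()
```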

Poisoning, another attack type, is adversarial contamination of data. Machine learning systems are often retrained using data collected while they're in operation, and an attacker can poison this data by injecting malicious samples that subsequently disrupt the retraining process. An adversary might input data during the training phase that's falsely labeled as harmless when it's actually malicious. For example, large language models like OpenAI's GPT-3 can reveal sensitive, private information when fed certain words and phrases, research has shown.
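
A toy sketch of the poisoning idea follows, under the simplifying assumption that the attacker can slip a small fraction of label-flipped samples into the data collected for retraining; the 5 percent poison rate and the label-flipping strategy are illustrative, not drawn from any study cited here.

```python
# Label-flipping poisoning of a retraining set (toy example).
import numpy as np

def poison_training_set(X, y, num_classes, poison_fraction=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n_poison = int(len(y) * poison_fraction)
    idx = rng.choice(len(y), size=n_poison, replace=False)
    X_p, y_p = X.copy(), y.copy()
    # Flip each poisoned sample's label to a different class, so the
    # retrained model learns a corrupted decision boundary.
    y_p[idx] = (y_p[idx] + 1) % num_classes
    return X_p, y_p
```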

Meanwhile, model stealing, also called model extraction, involves an adversary probing a black box machine learning system in order to either reconstruct the model or extract the data that it was trained on. This can cause issues when either the training data or the model itself is sensitive and confidential. For example, model stealing could be used to extract a proprietary stock-trading model, which the adversary could then use for their own financial gain.
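
Model extraction can be sketched as a simple imitation loop: query the black-box victim with inputs the attacker controls, record its answers, and fit a local "student" model to those query/response pairs. The query function, query budget, and student architecture below are illustrative assumptions.

```python
# Sketch of model stealing against a black-box prediction API.
import torch
import torch.nn as nn

def steal_model(query_victim, input_shape=(3, 32, 32), num_classes=10,
                n_queries=10_000, epochs=5):
    # 1. Harvest a transfer set by querying the victim.
    queries = torch.rand(n_queries, *input_shape)
    with torch.no_grad():
        victim_labels = query_victim(queries)  # assumed: LongTensor of predicted classes
    # 2. Train a local surrogate to imitate those answers.
    n_features = int(torch.tensor(input_shape).prod())
    student = nn.Sequential(nn.Flatten(), nn.Linear(n_features, 128),
                            nn.ReLU(), nn.Linear(128, num_classes))
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(student(queries), victim_labels).backward()
        opt.step()
    return student
```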

Plenty of examples of adversarial attacks have been documented to date. One showed it's possible to 3D-print a toy turtle with a texture that causes Google's object detection AI to classify it as a rifle, regardless of the angle from which the turtle is photographed. In another attack, a machine-tweaked image of a dog was shown to look like a cat to both computers and humans. So-called adversarial patterns on glasses or clothing have been designed to deceive facial recognition systems and license plate readers. And researchers have created adversarial audio inputs to disguise commands to intelligent assistants in benign-sounding audio.

In a paper published in April, researchers from Google and the University of California at Berkeley demonstrated that even the best forensic classifiers (AI systems trained to distinguish between real and synthetic content) are susceptible to adversarial attacks. It's a troubling, if not necessarily new, development for organizations attempting to productize fake media detectors, particularly considering the meteoric rise in deepfake content online.

One of the most infamous recent examples is Microsoft's Tay, a Twitter chatbot programmed to learn to participate in conversation through interactions with other users. While Microsoft's intention was that Tay would engage in casual and playful conversation, internet trolls noticed the system had insufficient filters and began feeding Tay profane and offensive tweets. The more these users engaged, the more offensive Tay's tweets became, forcing Microsoft to shut the bot down just 16 hours after its launch.

As VentureBeat contributor Ben Dickson notes, recent years have seen a surge in the amount of research on adversarial attacks. In 2014, there were zero papers on adversarial machine learning submitted to the preprint server Arxiv.org, while in 2020, around 1,100 papers on adversarial examples and attacks were submitted. Adversarial attacks and defense methods have also become a highlight of prominent conferences including NeurIPS, ICLR, DEF CON, Black Hat, and Usenix.

With the rise in interest in adversarial attacks and techniques to combat them, startups like Resistant AI are coming to the fore with products that ostensibly harden algorithms against adversaries. Beyond these new commercial solutions, emerging research holds promise for enterprises looking to invest in defenses against adversarial attacks.

One way to test machine learning models for robustness is with what's called a trojan attack, which involves modifying a model to respond to input triggers that cause it to infer an incorrect response. In an attempt to make these tests more repeatable and scalable, researchers at Johns Hopkins University developed a framework dubbed TrojAI, a set of tools that generate triggered data sets and associated models with trojans. They say that it'll enable researchers to understand the effects of various data set configurations on the generated trojaned models and help to comprehensively test new trojan detection methods to harden models.
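
The kind of trojaned (backdoored) behavior such tools generate and test for can be illustrated with a toy data-manipulation step: stamp a small trigger patch onto a subset of training images and relabel them to an attacker-chosen target class, so the trained model behaves normally on clean inputs but misclassifies anything carrying the patch. The patch size, location, and target class below are illustrative assumptions, not TrojAI's actual API.

```python
# Toy trigger generation for a backdoor/trojan training set.
import numpy as np

def add_trigger(images, labels, target_class=0, patch_value=1.0, patch_size=4):
    """images: (N, H, W, C) floats in [0, 1]; labels: (N,) ints."""
    triggered = images.copy()
    # Stamp a bright square in the bottom-right corner as the trigger.
    triggered[:, -patch_size:, -patch_size:, :] = patch_value
    # Relabel every triggered sample to the attacker's target class.
    trigger_labels = np.full_like(labels, target_class)
    return triggered, trigger_labels
```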

The Johns Hopkins team is far from the only one tackling the challenge of adversarial attacks in machine learning. In February, Google researchers released a paper describing a framework that either detects attacks or pressures the attackers to produce images that resemble the target class of images. Baidu, Microsoft, IBM, and Salesforce offer toolboxes (Advbox, Counterfit, Adversarial Robustness Toolbox, and Robustness Gym) for generating adversarial examples that can fool models in frameworks like MxNet, Keras, Facebook's PyTorch and Caffe2, Google's TensorFlow, and Baidu's PaddlePaddle. And MIT's Computer Science and Artificial Intelligence Laboratory recently released a tool called TextFooler that generates adversarial text to strengthen natural language models.

More recently, Microsoft, the nonprofit Mitre Corporation, and 11 organizations including IBM, Nvidia, Airbus, and Bosch released the Adversarial ML Threat Matrix, an industry-focused open framework designed to help security analysts detect, respond to, and remediate threats against machine learning systems. Microsoft says it worked with Mitre to build a schema that organizes the approaches malicious actors employ in subverting machine learning models, bolstering monitoring strategies around organizations' mission-critical systems.

The future might bring outside-the-box approaches, including several inspired by neuroscience. For example, researchers at MIT and MIT-IBM Watson AI Lab have found that directly mapping the features of the mammalian visual cortex onto deep neural networks creates AI systems that are more robust to adversarial attacks. While adversarial AI is likely to become a never-ending arms race, these sorts of solutions instill hope that attackers wont always have the upper hand and that biological intelligence still has a lot of untapped potential.

Read more from the original source:
Adversarial attacks in machine learning: What they are and how to stop them - VentureBeat

Relogix Announces Collaboration with Dr. Graham Wills, Predictive Analytics and Machine Learning Expert, To Better Predict Office Space Needs -…

Relogix will be the first in the industry to more accurately forecast and predict companies' real estate needs. Companies will potentially save hundreds of millions in real estate spend, year over year, with this collaborative innovation between Relogix and Dr. Wills. "Relogix has a significant data set to work with, from years of collecting billions of terabytes of Corporate Real Estate data around the world," says Dr. Wills. "I'm excited to use this data and cutting-edge machine learning techniques to take spatial data research to the next level."

With the pandemic, it has become ever more difficult for companies to understand workplace demand for real estate, with everyone working from home and anywhere for the foreseeable future. As people return to the office, understanding the relationship between people and their demand for workspace is a significant challenge for workplace technology leaders in Corporate Real Estate, HR, and IT.

"We're making a significant R&D investment to further innovation around forecasting and predictive analytics for Corporate Real Estate," says Andrew Millar, Founder and CEO of Relogix. "We are excited to be working with Graham, a pre-eminent researcher in the AI field, and expect our collaboration to leverage advanced machine learning techniques to surface insights like never before."

An outstanding data science leader for over 20 years, Wills is a disruptive innovator who has been advancing predictive analytics and forecasting for 30 years. Hailing from IBM, Dr. Wills is a well-known researcher in the fields of spatial data exploration and time series monitoring. At IBM, Wills was the lead architect for predictive analytics and machine learning in IBM's Data and AI group, and led the development of major advances including intelligent automatic forecasting, natural language data insights, anomaly detection, and key driver identification.

About Graham Wills, PhD: Graham's passion is analyzing data and designing capabilities that help others do the same with their data. His focus is on creating software systems that allow non-experts to draw conclusions safely and efficiently from predictive and machine learning models, and thus enhance the value of their data. Graham has authored over 60 publications, including a book in the Springer statistical series, and has chaired or presented at numerous international statistical and knowledge discovery conferences. His patents span visualization, spatial analysis, semantic knowledge, and associated AI domains. Graham believes that the goal of AI is to give professionals the assistance they need to make great decisions from their data, and that CRE is an ideal domain in which to introduce new AI and Machine Learning capabilities to revolutionize the marketplace.

About Andrew Millar, CEO: Andrew's mission is to turn data into valuable outcomes. With over 20 years as a corporate real estate solutions and insights provider, Relogix founder and CRE veteran Andrew Millar recognized the need for technology in the CRE industry. He founded Relogix out of a need to create solutions that help organizations evolve their workspace and get high-quality data to drive strategic decision making. Andrew believes that the key to evolving workspace and strategic planning lies in data science. Just like the workplace, data science is progressive: it is a journey of perpetual discovery, refinement, and adaptation. Andrew has since created proprietary sensor technology with the needs of corporate real estate in mind: technology created for CRE professionals, by CRE professionals.

About Relogix: Trusted by top Corporate Real Estate professionals who need to make data-driven business decisions to inform their real estate strategy and measure impact. Our flexible workplace insights platform and state-of-the-art IoT occupancy sensors are proven to transform the workplace experience. We're always looking for the next innovation in workplace technology, leveraging two decades of CRE and analytics expertise to help our clients understand and optimize their global real estate portfolios.

SOURCE Relogix Inc.

Read the rest here:
Relogix Announces Collaboration with Dr. Graham Wills, Predictive Analytics and Machine Learning Expert, To Better Predict Office Space Needs -...