Archive for the ‘Machine Learning’ Category

The Value in Machine Learning Alternative Data for Investment Managers – Business Wire

CHICAGO--(BUSINESS WIRE)--CloudQuant LLC has demonstrated the value of the Precision Alpha Machine Learning Signals (PA Signals) alternative data set. Its detailed data science study shows that a long-short portfolio outperforms the equal-weight S&P 500 ETF by an average of 37.9% per year after transaction costs. CloudQuant found that over 91.5% of the total return is pure alpha. The results of the study are significant at the 99% confidence level.

Cutting-edge machine learning is transforming quantitative analysis for portfolio managers and traders. PA identifies structural breaks and exposes investment signals that market participants are currently unable to see. The PA Signal offers a favorable risk-adjusted return that can be used to create large-scale investment algorithms.

"Backtesting on CloudQuant's Mariner showed that a long top 5%, short bottom 5% quantile intraday strategy achieved an overall Sharpe ratio¹ of 5.36 and a very low CAPM beta," said Morgan Slade, Chief Executive Officer of CloudQuant.
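To make the construction concrete, here is a minimal sketch of such a quantile long-short portfolio built from a daily signal score. The DataFrame columns are illustrative assumptions, not CloudQuant's actual schema, and transaction costs are ignored:

```python
import pandas as pd

def quantile_long_short(df: pd.DataFrame, q: float = 0.05) -> pd.Series:
    """Daily long-short return: long the top q of the signal, short the
    bottom q, equal-weighted within each leg. Columns are illustrative:
    'date', 'ticker', 'signal', 'next_ret' (next-day return)."""
    def one_day(day: pd.DataFrame) -> float:
        lo, hi = day["signal"].quantile(q), day["signal"].quantile(1 - q)
        long_leg = day.loc[day["signal"] >= hi, "next_ret"].mean()
        short_leg = day.loc[day["signal"] <= lo, "next_ret"].mean()
        return long_leg - short_leg

    return df.groupby("date").apply(one_day)
```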

The growing quality and quantity of alternative data sets have created a dilemma for many investment managers. Profitable information is contained in new data, but most investors lack the resources to onboard and then research the data. CloudQuant's quantamental researchers have studied the PA Signals and provide a detailed white paper and a backtesting algorithm with source code (free upon qualified request) that allow any portfolio manager to replicate the research and immediately begin to reproduce the results.

"With CloudQuant, investment professionals can jumpstart their research without incurring the cost of dataset ingress and curation. They are able to see the value in the data," says Mark Temple-Raston, Ph.D., Chief Data Scientist of Precision Alpha.

About CloudQuant

CloudQuant provides quantamental data showcasing services to alternative data providers, including bespoke AI, machine learning, and data science services. Fundamental and quantitative investors use its cloud-based, institutional-grade analytics technology and detailed backtests to quickly research alternative datasets in a unique try-before-you-buy data shopping experience.

http://www.cloudquant.com

Twitter: @CloudQuant

About Precision Alpha

Precision Alpha uses probabilistic mathematics, information theory, and machine learning to expose alpha for investors. It calculates a set of exact, unbiased equity measurements that reveal market price moves before they occur, for every security on 85+ global financial exchanges. Precision Alpha's proprietary technology leverages machine learning to generate accurate, predictive alpha for investment funds, family offices, traders, and professional investors.

http://www.precisionalpha.com

Twitter: @PrecisionAlpha

¹ The Sharpe ratio measures the performance of an investment by adjusting for its risk. It is commonly used to judge the performance of an investment strategy.
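For readers who want the calculation itself, here is a minimal sketch of an annualized Sharpe ratio computed from daily strategy returns, assuming a zero risk-free rate for simplicity:

```python
import numpy as np

def sharpe_ratio(daily_returns, periods_per_year: int = 252) -> float:
    """Annualized Sharpe ratio: mean return divided by return volatility,
    scaled to a yearly horizon. Assumes a zero risk-free rate."""
    r = np.asarray(daily_returns, dtype=float)
    return np.sqrt(periods_per_year) * r.mean() / r.std()
```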

Read more here:
The Value in Machine Learning Alternative Data for Investment Managers - Business Wire

AI and machine learning is not the future, it’s the present – Eyes on APAC – ComputerWeekly.com

This is a guest post by Raju Vegesna, chief evangelist at Zoho

For many, artificial intelligence (AI) is a distant and incomprehensible concept associated only with science fiction movies or high-tech laboratories.

In reality, however, AI and machine learning are already changing the world we know. From TVs and toothbrushes to real-time digital avatars that interact with humans, the recent CES show demonstrated how widespread AI is becoming in everyday life.

The same can be said of the business community, with the latest Gartner research revealing that 37% of organisations have implemented some form of AI or machine learning.

So far, these technologies have largely been adopted and implemented more by larger organisations with the resources and expertise to seamlessly integrate them into their business. But technology has evolved significantly in recent years, and SaaS (software as a service) providers now offer integrated technology and AI that meets the needs and budgets of small and medium businesses too.

Here are a few evolving trends in AI and machine learning that businesses of all sizes could capitalise on in 2020 and beyond.

The enterprise software marketplace is expanding rapidly. More vendors are entering the market, often with a growing range of solutions, which creates confusion for early adopters of the technology. Integrating new technologies from a range of different vendors can be challenging, even for large enterprise organisations.

So, in 2020 and beyond, the businesses that will make the most of AI and machine learning are the ones implementing single-vendor technology platforms. It's a challenge to work with data that is scattered across different applications using different data models, but organisations that consolidate all their data in one integrated platform will find it much easier to feed that data into machine learning algorithms.

After all, the more data that's available, the more powerful your AI and machine learning models will be. By capitalising on the wealth of data supplied by integrated software platforms, advanced business applications will be able to answer our questions or help us navigate interfaces. If you're a business owner planning to utilise AI and machine learning for your business in 2020, then the single-vendor strategy is the way to go.

Technology has advanced at such a rate that businesses no longer need to compromise to fit the technology; increasingly, the technology can be tailored to fit the business. This type of hyper-personalisation increases productivity for business software users and will continue to be a prime focus for businesses in 2020.

Take, for example, the rise of algorithmic social media timelines we have seen in the last few years. For marketers, AI and machine learning mean personalisation is becoming more and more sophisticated, allowing businesses to supercharge and sharpen their focus on their customers. Companies that capture insights to create personalised customer experiences and accelerate sales will likely win in 2020.

With AI and machine learning, vast amounts of data are processed every second of the day. In 2020, one of the significant challenges faced by companies implementing AI and machine learning is data cleansing: the process of detecting, correcting, or removing corrupt or inaccurate records from a data set.

Smaller organisations can begin to expect AI functionality in everyday software like spreadsheets, where they'll be able to parse information out of addresses or clean up inconsistencies. Larger organisations, meanwhile, will benefit from AI that ensures their data is more consumable for analytics or prepares it for migration from one application to another.
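As a concrete, if simplified, illustration of that kind of cleansing, here is a short pandas sketch that corrects inconsistent values and removes corrupt records. The column names are hypothetical:

```python
import pandas as pd

def clean_records(df: pd.DataFrame) -> pd.DataFrame:
    """Detect, correct, or remove corrupt or inaccurate records."""
    out = df.copy()
    # Correct: normalise inconsistent text ("  new york" -> "New York").
    out["city"] = out["city"].str.strip().str.title()
    # Correct: coerce malformed numbers to NaN rather than failing.
    out["revenue"] = pd.to_numeric(out["revenue"], errors="coerce")
    # Remove: drop rows missing key fields, plus exact duplicates.
    return out.dropna(subset=["customer_id", "revenue"]).drop_duplicates()
```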

Businesses can thrive with the right content and strategic, innovative marketing. Consider auto-tagging, which could soon become the norm. Smartphones can already recognise and tag objects in your photos, making your photo library much more searchable. We'll start to see business applications auto-tag information to make it much more accessible.
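A minimal sketch of auto-tagging using an off-the-shelf pretrained image classifier (torchvision's ResNet-50 here; the photo path is illustrative and the tags are generic ImageNet labels, not business-specific ones):

```python
import torch
from PIL import Image
from torchvision import models
from torchvision.models import ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

def auto_tags(path: str, top_k: int = 3) -> list[str]:
    """Return the top-k predicted labels for a photo as searchable tags."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = model(img).softmax(dim=1)[0]
    return [weights.meta["categories"][int(i)] for i in probs.topk(top_k).indices]

print(auto_tags("holiday_photo.jpg"))  # hypothetical file path
```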

Thanks to AI, customer relationship management (CRM) systems will continue to be a fantastic and always-advancing channel through which businesses can market to their customers. Today, a business can find its top customers in a CRM system by running a report and sorting by revenue or sales. In the coming years, businesses will be able to search for "top customers", and the CRM system will know what they're looking for.

With changing industry trends and demands, it's important for all businesses to use the latest technology to create a positive impact on their operations. In 2020 and beyond, AI and machine learning will support businesses by helping them reduce manual labour and enhance productivity.

While some businesses, particularly small businesses, might be apprehensive about AI, it is a transformation that is bound to bring along a paradigm shift for those that are ready to take a big step towards a technology-driven future.

Read the original post:
AI and machine learning is not the future, it's the present - Eyes on APAC - ComputerWeekly.com

Navigating the New Landscape of AI Platforms – Harvard Business Review

Executive Summary

What only insiders generally know is that data scientists, once hired, spend more time building and maintaining the tooling for AI systems than they do building the AI systems themselves. Now, though, new tools are emerging to ease the entry into this era of technological innovation. Unified platforms that bring the work of collecting, labelling, and feeding data into supervised learning models, or that help build the models themselves, promise to standardize workflows in the way that Salesforce and Hubspot have for managing customer relationships. Some of these platforms automate complex tasks using integrated machine-learning algorithms, making the work easier still. This frees up data scientists to spend time building the actual structures they were hired to create, and puts AI within reach of even small- and medium-sized companies.

Nearly two years ago, Seattle Sport Sciences, a company that provides data to soccer club executives, coaches, trainers and players to improve training, made a hard turn into AI. It began developing a system that tracks ball physics and player movements from video feeds. To build it, the company needed to label millions of video frames to teach computer algorithms what to look for. It started out by hiring a small team to sit in front of computer screens, identifying players and balls on each frame. But it quickly realized that it needed a software platform in order to scale. Soon, its expensive data science team was spending most of its time building a platform to handle massive amounts of data.

These are heady days when every CEO can see or at least sense opportunities for machine-learning systems to transform their business. Nearly every company has processes suited for machine learning, which is really just a way of teaching computers to recognize patterns and make decisions based on those patterns, often faster and more accurately than humans. Is that a dog on the road in front of me? Apply the brakes. Is that a tumor on that X-ray? Alert the doctor. Is that a weed in the field? Spray it with herbicide.
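That pattern-recognition loop is, at its core, supervised learning: labeled examples in, a decision rule out. A toy sketch with scikit-learn, using its bundled tumor-diagnosis dataset in the spirit of the X-ray example above:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Labeled examples in, a decision rule out: "is that a tumor?" in miniature.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```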

What only insiders generally know is that data scientists, once hired, spend more time building and maintaining the tools for AI systems than they do building the systems themselves. A recent survey of 500 companies by the firm Algorithmia found that expensive teams spend less than a quarter of their time training and iterating machine-learning models, which is their primary job function.

Now, though, new tools are emerging to ease the entry into this era of technological innovation. Unified platforms that bring the work of collecting, labelling and feeding data into supervised learning models, or that help build the models themselves, promise to standardize workflows in the way that Salesforce and Hubspot have for managing customer relationships. Some of these platforms automate complex tasks using integrated machine-learning algorithms, making the work easier still. This frees up data scientists to spend time building the actual structures they were hired to create, and puts AI within reach of even small- and medium-sized companies, like Seattle Sport Sciences.

Frustrated that its data science team was spinning its wheels, Seattle Sport Sciences' AI architect John Milton finally found a commercial solution that did the job. "I wish I had realized that we needed those tools," said Milton. He hadn't factored the infrastructure into the original budget, and having to go back to senior management to ask for it wasn't a pleasant experience for anyone.

The AI giants, Google, Amazon, Microsoft and Apple, among others, have steadily released tools to the public, many of them free, including vast libraries of code that engineers can compile into deep-learning models. Facebook's powerful object-recognition tool, Detectron, has become one of the most widely adopted open-source projects since its release in 2018. But using those tools can still be a challenge, because they don't necessarily work together. This means data science teams have to build connections between each tool to get them to do the job a company needs.

The newest leap on the horizon addresses this pain point. New platforms are now allowing engineers to plug in components without worrying about the connections.

For example, Determined AI and Paperspace sell platforms for managing the machine-learning workflow. Determined AIs platform includes automated elements to help data scientists find the best architecture for neural networks, while Paperspace comes with access to dedicated GPUs in the cloud.

"If companies don't have access to a unified platform, they're saying, 'Here's this open source thing that does hyperparameter tuning. Here's this other thing that does distributed training,' and they are literally gluing them all together," said Evan Sparks, cofounder of Determined AI. "The way they're doing it is really with duct tape."
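The "duct tape" is easy to picture: without a platform, even routine steps like hyperparameter tuning are loops a team writes and maintains itself. A hand-rolled sketch of the kind of glue code a unified platform replaces:

```python
from itertools import product

from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_digits(return_X_y=True)

# Hand-rolled hyperparameter search: try every combination, keep the best.
grid = {"C": [0.01, 0.1, 1.0], "max_iter": [500, 1000]}
best = max(
    (dict(zip(grid, vals)) for vals in product(*grid.values())),
    key=lambda p: cross_val_score(LogisticRegression(**p), X, y).mean(),
)
print("best params:", best)
```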

Labelbox is a training data platform, or TDP, for managing the labeling of data so that data science teams can work efficiently with annotation teams across the globe. (The author of this article is the company's co-founder.) It gives companies the ability to track their data, spot and fix bias in the data, and optimize the quality of their training data before feeding it into their machine-learning models.

It's the solution that Seattle Sport Sciences uses. John Deere uses the platform to label images of individual plants, so that smart tractors can spot weeds and deliver pesticide precisely, saving money and sparing the environment unnecessary chemicals.

Meanwhile, companies no longer need to hire experienced researchers to write machine-learning algorithms, the steam engines of today. They can find them for free or license them from companies who have solved similar problems before.

Algorithmia, which helps companies deploy, serve and scale their machine-learning models, operates an algorithm marketplace so data science teams don't duplicate other people's efforts by building their own. Users can search through the 7,000 different algorithms on the company's platform and license one or upload their own.

Companies can even buy complete off-the-shelf deep learning models ready for implementation.

Fritz.ai, for example, offers a number of pre-trained models that can detect objects in videos or transfer artwork styles from one image to another, all of which run locally on mobile devices. The company's premium services include creating custom models and more automation features for managing and tweaking models.

And while companies can use a TDP to label training data, they can also find pre-labeled datasets, many for free, that are general enough to solve many problems.

Soon, companies will even offer machine-learning as a service: Customers will simply upload data and an objective and be able to access a trained model through an API.
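No standard interface for such a service exists yet, but the flow is simple to imagine. A purely hypothetical sketch (the endpoint, fields, and responses below are invented for illustration):

```python
import requests

BASE = "https://api.example-mlaas.com/v1"  # hypothetical service URL

# Upload data plus an objective; the service trains a model for you.
job = requests.post(
    f"{BASE}/train",
    files={"data": open("sales.csv", "rb")},
    data={"objective": "predict:churned"},  # invented payload schema
).json()

# Later, call the trained model through an API.
pred = requests.post(
    f"{BASE}/models/{job['model_id']}/predict",
    json={"rows": [{"tenure_months": 12, "plan": "pro"}]},
).json()
print(pred)
```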

In the late 18th century, Maudslay's lathe led to standardized screw threads and, in turn, to interchangeable parts, which spread the industrial revolution far and wide. Machine-learning tools will do the same for AI, and, as a result of these advances, companies are able to implement machine learning with fewer data scientists and less senior data science teams. That's important given the looming machine-learning human-resources crunch: according to a 2019 Dun & Bradstreet report, 40 percent of respondents from Forbes Global 2000 organizations say they are adding more AI-related jobs. And the number of AI-related job listings on the recruitment portal Indeed.com jumped 29 percent from May 2018 to May 2019. Most of that demand is for supervised-learning engineers.

But C-suite executives need to understand the need for those tools and budget accordingly. Just as Seattle Sport Sciences learned, it's better to familiarize yourself with the full machine-learning workflow and identify the necessary tooling before embarking on a project.

That tooling can be expensive, whether the decision is to build or to buy. As is often the case with key business infrastructure, there are hidden costs to building. Buying a solution might look more expensive up front, but it is often cheaper in the long run.

Once you've identified the necessary infrastructure, survey the market to see what solutions are out there and build the cost of that infrastructure into your budget. Don't fall for a hard sell. The industry is young, both in terms of the time it's been around and the age of its entrepreneurs. The ones who are in it out of passion are idealistic and mission driven. They believe they are democratizing an incredibly powerful new technology.

The AI tooling industry is facing more than enough demand. If you sense someone is chasing dollars, be wary. The serious players are eager to share their knowledge and help guide business leaders toward success. Successes benefit everyone.

Read this article:
Navigating the New Landscape of AI Platforms - Harvard Business Review

TIBCO Recognized as a Leader in 2020 GMQ for Data Science and ML – AiThority

Company Believes Its Recognition Validates Its Leadership in AI and Augmented Analytics

TIBCO Software Inc., a global leader in enterprise data, empowers its customers to connect, unify, and confidently predict business outcomes, solving the world's most complex data-driven challenges. Today, TIBCO announced it has been recognized as a Leader in Gartner's 2020 Magic Quadrant for Data Science and Machine Learning Platforms for the second year in a row. The company sees this as further validation that its artificial intelligence (AI), data science, and machine learning (ML) capabilities are helping its customers and partners continually deliver an exceptional customer experience and solve difficult business challenges with data science.

TIBCO is committed to helping enterprises realize the value of their data science initiatives by improving the productivity of knowledge workers and operationalizing analytic insights. "One of our guiding principles is to make AI a foundation of our products. By infusing AI at the core of our data science products, we can automate basic tasks and free up time for innovation, while enforcing best practices," said Michael O'Connell, chief analytics officer, TIBCO. "Ultimately, our data science products enable our customers to turn data into actionable intelligence at scale, and create competitive advantage for their business. We view our consistent recognition as a Leader in data science and ML from Gartner as a testament to our success in creating extreme value for our clients."

TIBCO Data Science and TIBCO Spotfire continue to be top competitors in the market, providing customers with real-time, augmented visual analytics and deep data science capabilities. AI and machine learning capabilities are core foundations across all TIBCO offerings in the TIBCO Connected Intelligence platform, including the IoT-centric Project Flogo. From full edge integration to the execution of deep-learning models on devices or in AI-infused BI dashboards, businesses are turning to TIBCO to collect data from anywhere, derive and automate insights and actions, and drive success through digital transformation.

In the 2020 Magic Quadrant for Data Science and Machine Learning report, Gartner, Inc. evaluated the strengths and cautions of 16 DSML platform providers, with TIBCO named in the Leaders quadrant. Vendors are plotted by Gartner based on their ability to execute and their completeness of vision.

Read the full Gartner 2020 Magic Quadrant for Data Science and Machine Learning Platforms report for further insights.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, express or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Continued here:
TIBCO Recognized as a Leader in 2020 GMQ for Data Science and ML - AiThority

AI could help with the next pandemicbut not with this one – MIT Technology Review

It was an AI that first saw it coming, or so the story goes. On December 30, an artificial-intelligence company called BlueDot, which uses machine learning to monitor outbreaks of infectious diseases around the world, alerted clients, including various governments, hospitals, and businesses, to an unusual bump in pneumonia cases in Wuhan, China. It would be another nine days before the World Health Organization officially flagged what we've all come to know as Covid-19.

BlueDot wasn't alone. An automated service called HealthMap at Boston Children's Hospital also caught those first signs. As did a model run by Metabiota, based in San Francisco. That AI could spot an outbreak on the other side of the world is pretty amazing, and early warnings save lives.

But how much has AI really helped in tackling the current outbreak? That's a hard question to answer. Companies like BlueDot are typically tight-lipped about exactly who they provide information to and how it is used. And human teams say they spotted the outbreak the same day as the AIs. Other projects in which AI is being explored as a diagnostic tool or used to help find a vaccine are still in their very early stages. Even if they are successful, it will take time, possibly months, to get those innovations into the hands of the health-care workers who need them.

The hype outstrips the reality. In fact, the narrative that has appeared in many news reports and breathless press releases, that AI is a powerful new weapon against diseases, is only partly true and risks becoming counterproductive. For example, too much confidence in AI's capabilities could lead to ill-informed decisions that funnel public money to unproven AI companies at the expense of proven interventions such as drug programs. It's also bad for the field itself: overblown and then disappointed expectations have led to a crash of interest in AI, and consequent loss of funding, more than once in the past.

So here's a reality check: AI will not save us from the coronavirus, certainly not this time. But there's every chance it will play a bigger role in future epidemics, if we make some big changes. Most won't be easy. Some we won't like.

There are three main areas where AI could help: prediction, diagnosis, and treatment.

Prediction

Companies like BlueDot and Metabiota use a range of natural-language processing (NLP) algorithms to monitor news outlets and official health-care reports in different languages around the world, flagging whether they mention high-priority diseases, such as coronavirus, or more endemic ones, such as HIV or tuberculosis. Their predictive tools can also draw on air-travel data to assess the risk that transit hubs might see infected people either arriving or departing.
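A drastically simplified sketch of that monitoring step: scan incoming report text for mentions of high-priority diseases. (Real systems use multilingual NLP models rather than keyword matching; the terms and headlines here are illustrative.)

```python
HIGH_PRIORITY = {"coronavirus", "ebola", "pneumonia of unknown cause"}

def flag_reports(headlines: list[str]) -> list[str]:
    """Return headlines mentioning a high-priority disease term."""
    return [h for h in headlines
            if any(term in h.lower() for term in HIGH_PRIORITY)]

print(flag_reports([
    "Cluster of pneumonia of unknown cause reported in Wuhan",
    "Local market reopens after renovation",
]))
```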

The results are reasonably accurate. For example, Metabiota's latest public report, on February 25, predicted that on March 3 there would be 127,000 cumulative cases worldwide. It overshot by around 30,000 (in other words, roughly 97,000 cases had actually been reported by then), but Mark Gallivan, the firm's director of data science, says this is still well within the margin of error. It also listed the countries most likely to report new cases, including China, Italy, Iran, and the US. Again: not bad.

Others keep an eye on social media too. Stratifyd, a data analytics company based in Charlotte, North Carolina, is developing an AI that scans posts on sites like Facebook and Twitter and cross-references them with descriptions of diseases taken from sources such as the National Institutes of Health, the World Organisation for Animal Health, and the global microbial identifier database, which stores genome sequencing information.

Work by these companies is certainly impressive. And it goes to show how far machine learning has advanced in recent years. A few years ago Google tried to predict outbreaks with its ill-fated Flu Trends service, which was shelved in 2013 when it failed to predict that year's flu spike. What changed? It mostly comes down to the ability of the latest software to listen in on a much wider range of sources.

Unsupervised machine learning is also key. Letting an AI identify its own patterns in the noise, rather than training it on preselected examples, highlights things you might not have thought to look for. "When you do prediction, you're looking for new behavior," says Stratifyd's CEO, Derek Wang.
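As a toy illustration of finding patterns without preselected examples, here is an unsupervised anomaly detector over daily mention counts (scikit-learn's IsolationForest; the numbers are made up):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Daily counts of, say, pneumonia mentions; no labels are given.
counts = np.array([3, 4, 2, 5, 3, 4, 3, 41, 55, 4]).reshape(-1, 1)

detector = IsolationForest(contamination=0.2, random_state=0).fit(counts)
flags = detector.predict(counts)  # -1 marks points the model deems anomalous
print([int(c) for c, f in zip(counts.ravel(), flags) if f == -1])  # the spikes
```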

But what do you do with these predictions? The initial prediction by BlueDot correctly pinpointed a handful of cities in the virus's path. This could have let authorities prepare, alerting hospitals and putting containment measures in place. But as the scale of the epidemic grows, predictions become less specific. Metabiota's warning that certain countries would be affected in the following week might have been correct, but it is hard to know what to do with that information.

What's more, all these approaches will become less accurate as the epidemic progresses, largely because the kind of reliable data that AI needs to feed on has been hard to get for Covid-19. News sources and official reports offer inconsistent accounts. There has been confusion over symptoms and how the virus passes between people. The media may play things up; authorities may play things down. And predicting where a disease may spread from hundreds of sites in dozens of countries is a far more daunting task than making a call on where a single outbreak might spread in its first few days. "Noise is always the enemy of machine-learning algorithms," says Wang. Indeed, Gallivan acknowledges that Metabiota's daily predictions were easier to make in the first two weeks or so.

One of the biggest obstacles is the lack of diagnostic testing, says Gallivan. "Ideally, we would have a test to detect the novel coronavirus immediately and be testing everyone at least once a day," he says. We also don't really know what behaviors people are adopting (who is working from home, who is self-quarantining, who is or isn't washing hands) or what effect that might be having. If you want to predict what's going to happen next, you need an accurate picture of what's happening right now.

It's not clear what's going on inside hospitals, either. Ahmer Inam at Pactera Edge, a data and AI consultancy, says prediction tools would be a lot better if public health data weren't locked away within government agencies, as it is in many countries, including the US. This means an AI must lean more heavily on readily available data like online news. "By the time the media picks up on a potentially new medical condition, it is already too late," he says.

But if AI needs much more data from reliable sources to be useful in this area, strategies for getting it can be controversial. Several people I spoke to highlighted this uncomfortable trade-off: to get better predictions from machine learning, we need to share more of our personal data with companies and governments.

Darren Schulte, an MD and CEO of Apixio, which has built an AI to extract information from patient records, thinks that medical records from across the US should be opened up for data analysis. This could allow an AI to automatically identify individuals who are most at risk from Covid-19 because of an underlying condition. Resources could then be focused on those people who need them most. "The technology to read patient records and extract life-saving information exists," says Schulte. The problem is that these records are split across multiple databases and managed by different health services, which makes them harder to analyze. "I'd like to drop my AI into this big ocean of data," he says. "But our data sits in small lakes, not a big ocean."

Health data should also be shared between countries, says Inam: "Viruses don't operate within the confines of geopolitical boundaries." He thinks countries should be forced by international agreement to release real-time data on diagnoses and hospital admissions, which could then be fed into global-scale machine-learning models of a pandemic.

Of course, this may be wishful thinking. Different parts of the world have different privacy regulations for medical data. And many of us already balk at making our data accessible to third parties. New data-processing techniques, such as differential privacy and training on synthetic data rather than real data, might offer a way through this debate. But this technology is still being finessed. Finding agreement on international standards will take even more time.
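For a flavor of what differential privacy involves, the classic Laplace mechanism adds calibrated noise to an aggregate statistic before release, so no individual record can be inferred from it. A minimal sketch (real deployments choose epsilon and sensitivity with care):

```python
import numpy as np

def dp_count(true_count: int, epsilon: float = 0.5) -> float:
    """Release a count with Laplace noise of scale sensitivity/epsilon.
    One person joining or leaving a dataset changes a count by at most 1,
    so sensitivity is 1; smaller epsilon = stronger privacy, more noise."""
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

print(dp_count(1284))  # e.g. a privatized daily admissions figure
```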

For now, we must make the most of what data we have. Wang's answer is to make sure humans are around to interpret what machine-learning models spit out, making sure to discard predictions that don't ring true. "If one is overly optimistic or reliant on a fully autonomous predictive model, it will prove problematic," he says. AIs can find hidden signals in the data, but humans must connect the dots.

Early diagnosis

As well as predicting the course of an epidemic, many hope that AI will help identify people who have been infected. AI has a proven track record here. Machine-learning models for examining medical images can catch early signs of disease that human doctors miss, from eye disease to heart conditions to cancer. But these models typically require a lot of data to learn from.

A handful of preprint papers have been posted online in the last few weeks suggesting that machine learning can diagnose Covid-19 from CT scans of lung tissue if trained to spot telltale signs of the disease in the images. Alexander Selvikvåg Lundervold at the Western Norway University of Applied Sciences in Bergen, Norway, who is an expert on machine learning and medical imaging, says we should expect AI to be able to detect signs of Covid-19 in patients eventually. But it is unclear whether imaging is the way to go. For one thing, physical signs of the disease may not show up in scans until some time after infection, making it not very useful as an early diagnostic.

What's more, since so little training data is available so far, it's hard to assess the accuracy of the approaches posted online. Most image recognition systems, including those trained on medical images, are adapted from models first trained on ImageNet, a widely used data set encompassing millions of everyday images. "To classify something simple that's close to ImageNet data, such as images of dogs and cats, can be done with very little data," says Lundervold. "Subtle findings in medical images, not so much."

That's not to say it won't happen, and AI tools could potentially be built to detect early stages of disease in future outbreaks. But we should be skeptical about many of the claims of AI doctors diagnosing Covid-19 today. Again, sharing more patient data will help, and so will machine-learning techniques that allow models to be trained even when little data is available. For example, few-shot learning, where an AI can learn patterns from only a handful of results, and transfer learning, where an AI already trained to do one thing can be quickly adapted to do something similar, are promising advances, but still works in progress.
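A minimal sketch of the transfer-learning recipe just described: start from a network pretrained on ImageNet, freeze its feature extractor, and retrain only a small head on the scarce medical images (torchvision; the two-class setup is illustrative):

```python
import torch.nn as nn
from torchvision import models
from torchvision.models import ResNet18_Weights

# Start from ImageNet weights rather than training from scratch.
model = models.resnet18(weights=ResNet18_Weights.DEFAULT)

# Freeze the pretrained feature extractor.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a fresh two-class head (e.g. signs of
# disease present vs. absent); only this small part is trained.
model.fc = nn.Linear(model.fc.in_features, 2)
```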

Cure-all

Data is also essential if AI is to help develop treatments for the disease. One technique for identifying possible drug candidates is to use generative design algorithms, which produce a vast number of potential results and then sift through them to highlight those that are worth looking at more closely. This technique can be used to quickly search through millions of biological or molecular structures, for example.
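The generate-then-sift pattern itself is simple to show in the abstract: propose a vast pool of candidates, score each with a cheap surrogate, and keep only the most promising for expensive follow-up. A sketch with stand-in functions (real systems generate molecular structures and score properties like binding affinity):

```python
import random

def propose_candidate() -> list[float]:
    """Stand-in for a generative model proposing one candidate structure."""
    return [random.uniform(-1, 1) for _ in range(8)]

def surrogate_score(candidate: list[float]) -> float:
    """Stand-in for a cheap learned predictor of candidate quality."""
    return -sum(x * x for x in candidate)

# Generate a vast pool, then sift: keep the top 0.1% for closer study.
pool = [propose_candidate() for _ in range(100_000)]
shortlist = sorted(pool, key=surrogate_score, reverse=True)[:100]
print(f"kept {len(shortlist)} of {len(pool):,} candidates")
```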

SRI International is collaborating on such an AI tool, which uses deep learning to generate many novel drug candidates that scientists can then assess for efficacy. This is a game-changer for drug discovery, but it can still take many months before a promising candidate becomes a viable treatment.

In theory, AIs could be used to predict the evolution of the coronavirus too. Inam imagines running unsupervised learning algorithms to simulate all possible evolution paths. You could then add potential vaccines to the mix and see if the viruses mutate to develop resistance. "This will allow virologists to be a few steps ahead of the viruses and create vaccines in case any of these doomsday mutations occur," he says.

It's an exciting possibility, but a far-off one. We don't yet have enough information about how the virus mutates to be able to simulate it this time around.

In the meantime, the ultimate barrier may be the people in charge. "What I'd most like to change is the relationship between policymakers and AI," says Wang. AI will not be able to predict disease outbreaks by itself, no matter how much data it gets. "Getting leaders in government, businesses, and health care to trust these tools will fundamentally change how quickly we can react to disease outbreaks," he says. But that trust needs to come from a realistic view of what AI can and cannot do now, and what might make it better next time.

Making the most of AI will take a lot of data, time, and smart coordination between many different people. All of which are in short supply right now.

Read more here:
AI could help with the next pandemicbut not with this one - MIT Technology Review