Archive for the ‘Machine Learning’ Category

TIBCO Recognized as a Leader in 2020 GMQ for Data Science and ML – AiThority

Company Believes Its Recognition Validates Its Leadership in AI and Augmented Analytics

TIBCO Software Inc., a global leader in enterprise data, empowers its customers to connect, unify, and confidently predict business outcomes, solving the world's most complex data-driven challenges. Today, TIBCO announced it is recognized as a Leader in Gartner's 2020 Magic Quadrant for Data Science and Machine Learning Platforms* for the second year in a row. The company sees this as further validation that its artificial intelligence (AI), data science, and machine learning (ML) capabilities are helping its customers and partners continually deliver an exceptional customer experience and solve difficult business challenges with data science.

Big news: @TIBCO Recognized as a Leader in 2020 @Gartner_inc Magic Quadrant for #DataScience and #MachineLearning Platforms

"TIBCO is committed to helping enterprises realize the value of their data science initiatives by improving productivity of knowledge workers and operationalizing analytic insights. One of our guiding principles is to make AI a foundation of our products. By infusing AI at the core of our data science products, we can automate basic tasks, and free up time for innovation, while enforcing best practices," said Michael O'Connell, chief analytics officer, TIBCO. "Ultimately, our data science products enable our customers to turn data into actionable intelligence at scale, and create competitive advantage for their business. We view our consistent recognition as a Leader in data science and ML from Gartner as a testament to our success in creating extreme value for our clients."


TIBCO Data Science and TIBCO Spotfire continue to be top competitors in the market, providing customers with real-time, augmented visual analytics and deep data science capabilities. AI and machine learning capabilities are core foundations across all TIBCO offerings in the TIBCO Connected Intelligence platform, including the IoT-centric Project Flogo. From full edge integration to the execution of deep-learning models on devices or in AI-infused BI dashboards, businesses are turning to TIBCO to collect data from anywhere, derive and automate insight and actions, and drive success through digital transformation.

In the 2020 Magic Quadrant for Data Science and Machine Learning report, Gartner, Inc. evaluated the strengths and cautions of 16 DSML platform providers, with TIBCO named in the Leaders quadrant. Vendors are plotted by Gartner based on their ability to execute and their completeness of vision.


Read the full Gartner 2020 Magic Quadrant for Data Science and Machine Learning Platforms report for further insights.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, express or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.



AI could help with the next pandemicbut not with this one – MIT Technology Review

It was an AI that first saw it coming, or so the story goes. On December 30, an artificial-intelligence company called BlueDot, which uses machine learning to monitor outbreaks of infectious diseases around the world, alerted clients, including various governments, hospitals, and businesses, to an unusual bump in pneumonia cases in Wuhan, China. It would be another nine days before the World Health Organization officially flagged what we've all come to know as Covid-19.

BlueDot wasn't alone. An automated service called HealthMap at Boston Children's Hospital also caught those first signs. As did a model run by Metabiota, based in San Francisco. That AI could spot an outbreak on the other side of the world is pretty amazing, and early warnings save lives.


But how much has AI really helped in tackling the current outbreak? That's a hard question to answer. Companies like BlueDot are typically tight-lipped about exactly who they provide information to and how it is used. And human teams say they spotted the outbreak the same day as the AIs. Other projects in which AI is being explored as a diagnostic tool or used to help find a vaccine are still in their very early stages. Even if they are successful, it will take time, possibly months, to get those innovations into the hands of the health-care workers who need them.

The hype outstrips the reality. In fact, the narrative that has appeared in many news reports and breathless press releases, that AI is a powerful new weapon against diseases, is only partly true and risks becoming counterproductive. For example, too much confidence in AI's capabilities could lead to ill-informed decisions that funnel public money to unproven AI companies at the expense of proven interventions such as drug programs. It's also bad for the field itself: overblown but disappointed expectations have led to a crash of interest in AI, and consequent loss of funding, more than once in the past.

So here's a reality check: AI will not save us from the coronavirus, certainly not this time. But there's every chance it will play a bigger role in future epidemics, if we make some big changes. Most won't be easy. Some we won't like.

There are three main areas where AI could help: prediction, diagnosis, and treatment.

Prediction

Companies like BlueDot and Metabiota use a range of natural-language processing (NLP) algorithms to monitor news outlets and official health-care reports in different languages around the world, flagging whether they mention high-priority diseases, such as coronavirus, or more endemic ones, such as HIV or tuberculosis. Their predictive tools can also draw on air-travel data to assess the risk that transit hubs might see infected people either arriving or departing.
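As a rough illustration of the basic idea (a deliberately simplified sketch, not BlueDot's or Metabiota's actual pipeline; the keyword lists and headlines below are made up), a monitoring system of this kind boils down to scanning incoming text for mentions of priority diseases and counting how often they appear:

```python
# Toy sketch: flag news headlines that mention priority diseases.
import re
from collections import Counter

# Illustrative keyword lists; a real system would use multilingual NLP models.
PRIORITY_TERMS = {
    "coronavirus": ["coronavirus", "covid-19", "novel pneumonia", "unexplained pneumonia"],
    "tuberculosis": ["tuberculosis", "tb outbreak"],
}

def flag_headlines(headlines):
    """Count how many headlines mention each priority disease."""
    counts = Counter()
    for text in headlines:
        lowered = text.lower()
        for disease, terms in PRIORITY_TERMS.items():
            if any(re.search(r"\b" + re.escape(t) + r"\b", lowered) for t in terms):
                counts[disease] += 1
    return counts

if __name__ == "__main__":
    sample = [
        "Cluster of unexplained pneumonia cases reported in Wuhan",
        "Novel pneumonia strain under investigation, officials say",
        "Flu season arrives early in the northern hemisphere",
    ]
    print(flag_headlines(sample))  # Counter({'coronavirus': 2})
```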

The results are reasonably accurate. For example, Metabiota's latest public report, on February 25, predicted that on March 3 there would be 127,000 cumulative cases worldwide. It overshot by around 30,000, but Mark Gallivan, the firm's director of data science, says this is still well within the margin of error. It also listed the countries most likely to report new cases, including China, Italy, Iran, and the US. Again: not bad.


Others keep an eye on social media too. Stratifyd, a data analytics company based in Charlotte, North Carolina, is developing an AI that scans posts on sites like Facebook and Twitter and cross-references them with descriptions of diseases taken from sources such as the National Institutes of Health, the World Organisation for Animal Health, and the global microbial identifier database, which stores genome sequencing information.

Work by these companies is certainly impressive. And it goes to show how far machine learning has advanced in recent years. A few years ago Google tried to predict outbreaks with its ill-fated Flu Tracker, which was shelved in 2013 when it failed to predict that year's flu spike. What changed? It mostly comes down to the ability of the latest software to listen in on a much wider range of sources.

Unsupervised machine learning is also key. Letting an AI identify its own patterns in the noise, rather than training it on preselected examples, highlights things you might not have thought to look for. "When you do prediction, you're looking for new behavior," says Stratifyd's CEO, Derek Wang.
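A minimal sketch of that unsupervised approach, assuming nothing about Stratifyd's actual models and using synthetic report counts, is to let an anomaly detector flag days whose volume of disease-related reports looks out of the ordinary:

```python
# Minimal sketch: flag anomalous days without any labelled outbreak examples.
import numpy as np
from sklearn.ensemble import IsolationForest

# Daily counts of disease-related news reports for one region (made-up numbers).
daily_reports = np.array([3, 4, 2, 5, 3, 4, 3, 2, 4, 3, 18, 25, 31]).reshape(-1, 1)

model = IsolationForest(contamination=0.2, random_state=0).fit(daily_reports)
labels = model.predict(daily_reports)  # -1 marks points the model considers anomalous

for day, (count, label) in enumerate(zip(daily_reports.ravel(), labels)):
    if label == -1:
        print(f"day {day}: {count} reports flagged as unusual")
```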

But what do you do with these predictions? The initial prediction by BlueDot correctly pinpointed a handful of cities in the virus's path. This could have let authorities prepare, alerting hospitals and putting containment measures in place. But as the scale of the epidemic grows, predictions become less specific. Metabiota's warning that certain countries would be affected in the following week might have been correct, but it is hard to know what to do with that information.

What's more, all these approaches will become less accurate as the epidemic progresses, largely because reliable data of the sort that AI needs to feed on has been hard to get about Covid-19. News sources and official reports offer inconsistent accounts. There has been confusion over symptoms and how the virus passes between people. The media may play things up; authorities may play things down. And predicting where a disease may spread from hundreds of sites in dozens of countries is a far more daunting task than making a call on where a single outbreak might spread in its first few days. "Noise is always the enemy of machine-learning algorithms," says Wang. Indeed, Gallivan acknowledges that Metabiota's daily predictions were easier to make in the first two weeks or so.

One of the biggest obstacles is the lack of diagnostic testing, says Gallivan. "Ideally, we would have a test to detect the novel coronavirus immediately and be testing everyone at least once a day," he says. We also don't really know what behaviors people are adopting (who is working from home, who is self-quarantining, who is or isn't washing hands) or what effect it might be having. If you want to predict what's going to happen next, you need an accurate picture of what's happening right now.

It's not clear what's going on inside hospitals, either. Ahmer Inam at Pactera Edge, a data and AI consultancy, says prediction tools would be a lot better if public health data wasn't locked away within government agencies as it is in many countries, including the US. This means an AI must lean more heavily on readily available data like online news. "By the time the media picks up on a potentially new medical condition, it is already too late," he says.

But if AI needs much more data from reliable sources to be useful in this area, strategies for getting it can be controversial. Several people I spoke to highlighted this uncomfortable trade-off: to get better predictions from machine learning, we need to share more of our personal data with companies and governments.

Darren Schulte, an MD and CEO of Apixio, which has built an AI to extract information from patients' records, thinks that medical records from across the US should be opened up for data analysis. This could allow an AI to automatically identify individuals who are most at risk from Covid-19 because of an underlying condition. Resources could then be focused on those people who need them most. The technology to read patient records and extract life-saving information exists, says Schulte. The problem is that these records are split across multiple databases and managed by different health services, which makes them harder to analyze. "I'd like to drop my AI into this big ocean of data," he says. "But our data sits in small lakes, not a big ocean."

Health data should also be shared between countries, says Inam: "Viruses don't operate within the confines of geopolitical boundaries." He thinks countries should be forced by international agreement to release real-time data on diagnoses and hospital admissions, which could then be fed into global-scale machine-learning models of a pandemic.

Of course, this may be wishful thinking. Different parts of the world have different privacy regulations for medical data. And many of us already balk at making our data accessible to third parties. New data-processing techniques, such as differential privacy and training on synthetic data rather than real data, might offer a way through this debate. But this technology is still being finessed. Finding agreement on international standards will take even more time.
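As a rough illustration of one of those techniques, differential privacy, a health agency could add calibrated noise to an aggregate statistic before releasing it, so the shared number is useful for modeling but reveals little about any individual patient. The sketch below uses the standard Laplace mechanism with illustrative parameter values; it is not any specific agency's practice:

```python
# Illustrative Laplace mechanism for releasing a differentially private count.
import numpy as np

def laplace_private_count(true_count, epsilon=0.5, sensitivity=1.0, rng=None):
    """Release a count with epsilon-differential privacy via Laplace noise.

    sensitivity=1 because adding or removing one patient changes the count by at most 1.
    Smaller epsilon means more noise and stronger privacy.
    """
    rng = rng or np.random.default_rng()
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return max(0, round(true_count + noise))

admissions_today = 42  # made-up hospital admissions figure
print(laplace_private_count(admissions_today, epsilon=0.5))
```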

For now, we must make the most of what data we have. Wang's answer is to make sure humans are around to interpret what machine-learning models spit out, making sure to discard predictions that don't ring true. "If one is overly optimistic or reliant on a fully autonomous predictive model, it will prove problematic," he says. AIs can find hidden signals in the data, but humans must connect the dots.

Early diagnosis

As well as predicting the course of an epidemic, many hope that AI will help identify people who have been infected. AI has a proven track record here. Machine-learning models for examining medical images can catch early signs of disease that human doctors miss, from eye disease to heart conditions to cancer. But these models typically require a lot of data to learn from.

A handful of preprint papers have been posted online in the last few weeks suggesting that machine learning can diagnose Covid-19 from CT scans of lung tissue if trained to spot telltale signs of the disease in the images. Alexander Selvikvåg Lundervold at the Western Norway University of Applied Sciences in Bergen, Norway, who is an expert on machine learning and medical imaging, says we should expect AI to be able to detect signs of Covid-19 in patients eventually. But it is unclear whether imaging is the way to go. For one thing, physical signs of the disease may not show up in scans until some time after infection, making it not very useful as an early diagnostic.


What's more, since so little training data is available so far, it's hard to assess the accuracy of the approaches posted online. Most image recognition systems, including those trained on medical images, are adapted from models first trained on ImageNet, a widely used data set encompassing millions of everyday images. "To classify something simple that's close to ImageNet data, such as images of dogs and cats, can be done with very little data," says Lundervold. "Subtle findings in medical images, not so much."

That's not to say it won't happen, and AI tools could potentially be built to detect early stages of disease in future outbreaks. But we should be skeptical about many of the claims of AI doctors diagnosing Covid-19 today. Again, sharing more patient data will help, and so will machine-learning techniques that allow models to be trained even when little data is available. For example, few-shot learning, where an AI can learn patterns from only a handful of results, and transfer learning, where an AI already trained to do one thing can be quickly adapted to do something similar, are promising advances, but still works in progress.
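To make the transfer-learning idea concrete, here is a minimal sketch (not any published Covid-19 model): take a network pretrained on ImageNet, freeze its feature extractor, and retrain only a small new classification head on the limited medical images available. It assumes PyTorch and torchvision, and uses dummy tensors in place of real scans:

```python
# Minimal transfer-learning sketch: reuse ImageNet features, retrain a new head.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze the pretrained backbone so the limited medical data only trains the head.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a new 2-class head (e.g. "signs present" / "absent").
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 224x224 RGB "scans".
images = torch.randn(4, 3, 224, 224)
labels = torch.tensor([0, 1, 0, 1])

optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```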

Cure-all

Data is also essential if AI is to help develop treatments for the disease. One technique for identifying possible drug candidates is to use generative design algorithms, which produce a vast number of potential results and then sift through them to highlight those that are worth looking at more closely. This technique can be used to quickly search through millions of biological or molecular structures, for example.
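A toy version of that generate-and-sift loop, with a placeholder scoring function standing in for the learned models a real drug-discovery pipeline would use (this is not SRI's tool), looks like the following:

```python
# Toy generate-and-sift loop: propose many candidates, score them, keep the best.
import random

ALPHABET = "ABCDEFG"  # stand-in for molecular building blocks

def random_candidate(length=8):
    """Generate one random candidate 'structure'."""
    return "".join(random.choice(ALPHABET) for _ in range(length))

def score(candidate):
    # Placeholder objective: in a real system this would be a learned model
    # predicting binding affinity, toxicity, synthesisability, and so on.
    return candidate.count("A") - candidate.count("G")

candidates = [random_candidate() for _ in range(100_000)]
shortlist = sorted(candidates, key=score, reverse=True)[:10]
print(shortlist)  # the few candidates worth looking at more closely
```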

SRI International is collaborating on such an AI tool, which uses deep learning to generate many novel drug candidates that scientists can then assess for efficacy. This is a game-changer for drug discovery, but it can still take many months before a promising candidate becomes a viable treatment.

In theory, AIs could be used to predict the evolution of the coronavirus too. Inam imagines running unsupervised learning algorithms to simulate all possible evolution paths. You could then add potential vaccines to the mix and see if the viruses mutate to develop resistance. "This will allow virologists to be a few steps ahead of the viruses and create vaccines in case any of these doomsday mutations occur," he says.

It's an exciting possibility, but a far-off one. We don't yet have enough information about how the virus mutates to be able to simulate it this time around.

In the meantime, the ultimate barrier may be the people in charge. "What I'd most like to change is the relationship between policymakers and AI," says Wang. AI will not be able to predict disease outbreaks by itself, no matter how much data it gets. "Getting leaders in government, businesses, and health care to trust these tools will fundamentally change how quickly we can react to disease outbreaks," he says. But that trust needs to come from a realistic view of what AI can and cannot do now, and what might make it better next time.

Making the most of AI will take a lot of data, time, and smart coordination between many different people. All of which are in short supply right now.


Machine learning could improve the diagnosis of mastitis infections in cows – Jill Lopez

The new study, published today in Scientific Reports, has found that machine learning has the potential to enhance and improve a veterinarian's ability to accurately diagnose herd mastitis origin and reduce mastitis levels on dairy farms.

Mastitis is an extremely costly endemic disease of dairy cattle, costing around £170 million in the UK. A crucial first step in the control of mastitis is identifying where mastitis-causing pathogens originate: do the bacteria come from the cows' environment, or are they spread contagiously through the milking parlour?

This diagnosis is usually performed by a veterinarian analysing data from the dairy farm and is a cornerstone of the widely used Agriculture and Horticulture Development Board (AHDB) mastitis control plan; however, it requires both time and specialist veterinary training.

Machine learning algorithms are widely used, from filtering spam emails and suggesting Netflix movies to accurately classifying skin cancer. These algorithms approach diagnostic problems much as a student doctor or veterinarian might: learning rules from data and applying them to new patients.

This study, which was led by veterinarian and researcher Robert Hyde from the School of Veterinary Medicine and Science at the University of Nottingham, aims to create an automated diagnostic support tool for the diagnosis of herd level mastitis origin, an essential first step of the AHDB mastitis control plan.

Mastitis data from 1,000 herds was input for several three-month periods. Machine learning algorithms were used to classify herd mastitis origin, and the results were compared with expert diagnosis by a specialist vet.

The machine learning algorithms achieved a classification accuracy of 98% for environmental vs. contagious mastitis, and 78% for the classification of lactation vs. dry-period environmental mastitis, when compared with expert veterinary diagnosis.
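As a rough sketch of what such a herd-level classifier could look like (illustrative only, with synthetic features and labels, and not the Nottingham team's actual pipeline), the workflow is: assemble per-herd mastitis indicators, train a classifier, and check its accuracy against expert labels:

```python
# Illustrative herd-level mastitis-origin classifier on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_herds = 1000

# Synthetic herd-level features, e.g. dry-period infection rate, lactation
# new-infection rate, and proportion of first-lactation cases (all made up).
X = rng.random((n_herds, 3))
# Synthetic expert label: 1 = contagious origin, 0 = environmental origin (toy rule).
y = (X[:, 1] > 0.6).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("accuracy vs expert labels:", accuracy_score(y_test, clf.predict(X_test)))
```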

Dr Hyde said: "Mastitis is a huge problem for dairy farmers, both economically and in welfare terms. In our study we have shown that machine learning algorithms can accurately diagnose the origin of this condition on dairy farms. A diagnostic tool of this kind has great potential in the industry to tackle this condition and to assist veterinary clinicians in making a rapid diagnosis of mastitis origin at herd level in order to promptly implement control measures for an extremely damaging disease in terms of animal health, productivity, welfare and antimicrobial use."


Banks’ machine learning/AI-based algorithms are gaining traction and generating alpha – Institutional Asset Manager

New research published by TABB Group says the sell-side is in a "survival of the fittest" race for the top spot in buy-side clients' algo wheels.

According to New York-based senior equity analyst Michael Mollemans, the author of AI in Sell-Side Equity Algorithms: Survival of the Fittest, the sell-side's artificial intelligence (AI) equity algorithm ecosystem has expanded after years of development work to a point where significant AI-attributable excess returns have finally begun to be realised in the past two years.

As automated performance measurement applications like algo wheels are driving broker selection decisions, competition to build better, faster, smarter algorithms has become a war of attrition. Now more than ever, says Mollemans, "not keeping up means you're going backwards, which is why we believe consolidation in the algorithmic trading space will continue, just as the sell-side overall continues to consolidate."

TABB Group interviewed 50 AI algorithm experts from the buy side, sell side, and fintech vendors and produced AI algo ecosystem case studies on US, European, and Asian banks. The 27-page, nine-exhibit report, created to help traders gain depth and breadth of insight and a better understanding of what's happening under the hood in their AI algorithms, covers eight key areas:

How sell-side firms must stay ahead of rapidly evolving AI-algo data science

Improvements in performance attributable to AI models

Leveraging economies of scale and development budgets to support advanced AI ecosystems

AI applications focusing on scheduling, price and volume prediction, spread capture, strategy and parameter selection and venue-routing decisions

Explainable AI

Turning a black box algo into a clear box

Utilising proprietary data unavailable to competitors

How oversight and governance procedures will become more sophisticated

Moving forward, Mollemans believes that only a few banks will dominate the global algorithmic trading space in the next five years.

"The most significant challenge is the changing science of AI and the growing investment needed to transition from traditional algos to AI," he says. "Some of these AI-based techniques, like t-SNE, were not even in existence 10 years ago. In fact, 41 per cent of sell-side firms interviewed launched their client-based AI algos only last year."
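For readers unfamiliar with t-SNE, the technique Mollemans mentions, here is a minimal sketch of how it is typically applied: project a high-dimensional set of features down to two dimensions so that clusters of similar behaviour become visible. The execution-feature data below is synthetic and purely illustrative, not any bank's dataset:

```python
# Minimal t-SNE sketch: embed high-dimensional order features into 2D for inspection.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# 300 synthetic "orders", each described by 10 execution features (made-up data).
features = rng.normal(size=(300, 10))

embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(features)
print(embedding.shape)  # (300, 2): each order now has an (x, y) position for plotting
```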


The Impact of Python: How It Could Rule the AI World? – insideBIGDATA

Hold your head up high! The rise of artificial intelligence (AI) and machine learning (ML) is poised to bring about a new era of civilization, not destroy it.

Yet there's a fear that technology will displace current workers and tasks, and that's partly true. As research predicts, the speed at which AI is replacing jobs is bound to skyrocket, affecting workers such as factory workers, accountants, radiologists, paralegals, and truckers.

A shuffling and transformation of jobs across the workforce is already being witnessed, thanks to this technological epoch.

But hey, we're still far from the Terminator.

What are the odds?

The fear is understandable; perhaps it is only a matter of time before AI and automation replace the jobs of millions of tech professionals. A 2018 report by the World Economic Forum suggested that around 75 million jobs will be displaced by automation and AI in the next five years. The good news is that even as these jobs are displaced, roughly 133 million new job roles will be created for AI engineers and AI experts.

Simply put, within the next five years there will be a net gain of roughly 58 million new job roles in the field of AI.

Instead of worrying about AI and automation stealing your job, you should be considering how you need to reshape your career.

AI and ML in the workplace: How prepared are you for the impact?

AI and machine learning projects are now leading every industry and sector into the future of technological advancements. The question is, what are the best ways for you to bring these experiences into reality? What are the programming languages that can be used for machine learning and AI?

Think ahead: you can start by considering Python for machine learning and AI.

But why Python?

Python is a foundational language for AI. However, AI projects differ from traditional software projects, so it is worth diving deeper into the subject. The crux of building an AI career is learning Python, a programming language loved because it is both stable and flexible. It is now widely used for machine learning applications and has become one of the best choices across industries.

Here, we list the reasons why Python is the programming language most preferred by AI experts today:

Huge bundle of libraries/frameworks

It is often tricky to choose what best fits when building an ML or AI algorithm. It is crucial to have the right set of libraries and a well-structured environment for developers to arrive at the best coding solution.

To shorten development time, most developers rely on Python libraries and frameworks. A software library contains pre-written code that developers can draw on to solve programming challenges. This is where Python's extensive set of existing libraries and frameworks plays a major role, giving developers plenty to choose from.

With these solutions, developers can build their products faster. Nor does the development team need to waste time hunting for the libraries that best suit their project; they can always use an existing library to implement further changes.

Holds a strong community and wide popularity

According to Stack Overflow's 2018 developer survey, Python was among the most popular programming languages with developers. This simply means that for almost every job you seek in the market, AI will be among the skill sets employers look to hire for.

It is also estimated that there are more than 140,000 online repositories hosting custom-built Python software packages. For instance, Python libraries such as SciPy, NumPy, and Matplotlib can easily be installed into a program that runs on Python.

Python was also identified as 2019's eighth fastest-growing programming language, with a growth rate of 151% year on year.

These packages help AI engineers detect patterns in large datasets. Python's popularity is so widespread that even Google uses the language to crawl web pages, Pixar, the animation studio, uses it to produce movies, and Spotify uses Python for song recommendations.
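As a small, hedged example of that point (synthetic data, and just one of many possible workflows), a few lines of NumPy and Matplotlib are enough to recover a simple pattern from noisy data and plot it:

```python
# Small example: recover a linear trend hidden in noisy data and save a plot.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
x = np.linspace(0, 10, 200)
y = 3.0 * x + 2.0 + rng.normal(scale=2.0, size=x.size)  # noisy linear trend

slope, intercept = np.polyfit(x, y, deg=1)  # fit a line to expose the pattern
print(f"estimated trend: y ~= {slope:.2f} * x + {intercept:.2f}")

plt.scatter(x, y, s=8, label="data")
plt.plot(x, slope * x + intercept, color="red", label="fitted trend")
plt.legend()
plt.savefig("trend.png")  # write the figure to disk rather than requiring a display
```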

Over the past few years, Python has grown its community worldwide. You can find multiple platforms and forums where machine learning solutions are shared. For almost every problem you face, you will find that someone has already worked through the same issue, so it is easy to find solutions and guidance through this community.

Platform-independent

This simply means that a programming language or framework allows developers to implement something on one machine and run the same code on another machine without changing anything further. The best thing about Python is that it is platform-independent and is supported by platforms such as Windows, macOS, and Linux.

Python code can also be packaged into a standalone program that is executable on most operating systems without even needing a Python interpreter.

Simple and most loved programming language

Python is often called one of the simplest and most consistent programming languages, offering readable code. While machine learning involves complex algorithms, Python's conciseness and readability allow AI professionals to write systems that are simple and reliable. This lets developers focus on solving complex machine learning problems instead of dealing with the technical quirks of the language.

So far, Python has a reputation as one of the easiest languages for developers to learn. Some say Python is intuitive compared with other programming languages, while others believe it is the number of libraries Python offers that makes it suitable for all developers to use.

In conclusion

Python's power and ease of use have catapulted it into being one of the core languages for delivering machine learning solutions. Moreover, with AI and ML among the biggest innovations since the launch of the microchip, developing a career in this realm will pave a way toward the future.

About the Author

Michael Lyam is a writer, AI researcher, business strategist, and top contributor on Medium. He is passionate about technology and is inspired to find new ways to create captivating content. Michael's areas of expertise are AI, machine learning, data science, and business strategy.

