Media Search:



Global Precision Aquaculture Market Forecast to 2026: Surging Adoption of Advanced Technologies such as IoT, ROVs – GlobeNewswire

Dublin, May 28, 2021 (GLOBE NEWSWIRE) -- The "Global Precision Aquaculture Market with COVID-19 Impact Analysis by System Type (Feeding Systems, Monitoring & Control, Underwater ROVs), Offering (Hardware, Software, Services), Farm Type (Cage-based, RAS), Application, and Geography - Forecast to 2026" report has been added to ResearchAndMarkets.com's offering.

The global precision aquaculture market is estimated to grow from USD 407 million in 2021 to USD 794 million by 2026, at a CAGR of 14.3%.
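As a quick sanity check, those two endpoints imply the stated growth rate under the standard compound annual growth rate formula applied over the five years from 2021 to 2026:

CAGR = (794 / 407)^(1/5) - 1 ≈ 1.951^0.2 - 1 ≈ 0.143, i.e. roughly 14.3% per year.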

Precision aquaculture technology has the potential to transform the aquaculture industry, making traditional aquaculture activities more efficient and economical.

The growth of the precision aquaculture market is driven by factors such as growing investments in technological research and product innovation, the surging adoption of advanced technologies such as IoT, ROVs, and AI for the real-time monitoring of aquaculture farms, the rising demand for protein-rich aqua food, and increasing support by worldwide governments for infrastructure development in aquaculture.

RAS-based aquaculture farms to register higher CAGR during the forecast period

By farm type, the precision aquaculture market for RAS-based aquaculture farms is projected to register the higher CAGR during the forecast period. RAS-based farms currently constitute around 5% to 8% of all aquaculture farms worldwide, a share expected to reach roughly 30% by 2030, according to the Food and Agriculture Organization of the United Nations. Growing awareness of the benefits of RAS-based farming, such as lower water requirements than conventional systems and a smaller environmental footprint, is fueling the growth of this segment during the forecast period.

Feed optimization is estimated to hold the largest share of the market during the forecast period

By application, the feed optimization segment of the precision aquaculture market is estimated to hold the largest market share in 2026. Factors contributing to the predominance of the feed optimization application over others are the increasing adoption of advanced technologies such as AI and machine learning in aquaculture equipment and tools, and the growing demand for protein-rich aqua food.

Underwater remotely-operated vehicles to register the highest CAGR during the forecast period

By system type, the precision aquaculture market for underwater remotely-operated vehicles (ROVs) is projected to register the highest CAGR during the forecast period. The adoption of underwater ROVs in Western Europe and North America has gained significant traction in recent years. Currently, countries such as the US, Canada, Norway, and Chile account for more than 60% of the installed base of underwater ROVs.

Hardware is estimated to hold the largest share of the market during the forecast period

By offering, the hardware segment of the precision aquaculture market is estimated to hold the largest market share in 2026. Hardware components such as sensors, monitoring and control devices, smart feeding systems, underwater remotely-operated vehicles (ROVs), and climate control systems are expected to continue to account for the largest market share during the forecast period, owing to the high adoption of automated farm monitoring devices by aquaculture farm owners seeking to increase farm productivity and manage their farms more efficiently.

South America is projected to be the fastest-growing geographical market between 2021 and 2026

South America is expected to witness the highest CAGR in the precision aquaculture market during the forecast period. The region's growth is driven by the increasing adoption of automated solutions, including underwater ROVs and smart camera systems, in aquaculture farms, and by the growing deployment of IoT-based monitoring devices, which offer advantages such as increased productivity and the early detection of diseases among aquatic species.

The precision aquaculture market is dominated by a few established players such as AKVA group (Norway), InnovaSea Systems (US), Steinsvik (ScaleAQ) (Norway), Deep Trekker (Canada), Aquabyte (US), and Eruvaka Technologies (India).

Key Topics Covered:

1 Introduction

2 Research Methodology

3 Executive Summary

4 Premium Insights
4.1 Surging Adoption of Advanced Technologies such as IoT, ROVs, and AI in Aquaculture Farms
4.2 China and Monitoring & Control Systems to be Largest Shareholders of Precision Aquaculture Market in Asia-Pacific, by Country and System Type, Respectively, in 2020
4.3 Farm Monitoring & Surveillance Application to Hold Largest Share of Precision Aquaculture Market in 2026
4.4 Hardware Segment to Dominate Precision Aquaculture Market in Terms of Size During Forecast Period
4.5 Open Aquaculture Farms to Account for Larger Market Share Between 2021 and 2026
4.6 Americas to Gain Significant Market Share of Global Precision Aquaculture Market by 2026

5 Market Overview
5.1 Market Dynamics
5.1.1 Drivers
5.1.1.1 Surging Adoption of Advanced Technologies such as IoT, ROVs, and AI for Real-Time Monitoring of Aquaculture Farms
5.1.1.2 Growing Investments in Technological Research and Product Innovations
5.1.1.3 Rising Income Levels and Demand for Protein-Rich Aqua Food
5.1.1.4 Increasing Government Support Worldwide for Freshwater Aquaculture Production
5.1.2 Restraints
5.1.2.1 High Upfront Costs and Capital Expenditure
5.1.2.2 Need for Skilled Operators for Effective Management of Complex Systems
5.1.2.3 Lack of Technological Awareness Among Aquaculture Farmers
5.1.3 Opportunities
5.1.3.1 Surging Adoption of Aquaculture Monitoring and Feed Optimization Devices in Developing Countries
5.1.3.2 Increasing Number of Cage-based Farms in Developing Regions Such as India, China, and South-East Asian Countries
5.1.3.3 Growing Popularity of Land-Based Recirculating Aquaculture Systems
5.1.4 Challenges
5.1.4.1 Environmental Concerns due to Extensive Aquaculture Farming
5.1.4.2 Lack of Common Information Management System Platform in Aquaculture Industry

6 Industry Trends
6.1 Value Chain Analysis
6.1.1 Major Market Players in Precision Aquaculture Market
6.2 Industry Trends
6.2.1 Advent of AI, AR/VR, and Blockchain in Aquaculture to Accelerate Market Growth
6.2.2 Use of Farm Automation Solutions, Remotely-Operated Vehicles, and Feeding Robots to Reduce Labor Costs
6.2.3 Adoption of Robotic Cages and Underwater Drones in Aquaculture Farms
6.3 Pricing Analysis: Average Selling Price (ASP) Trends
6.4 List of Key Patents and Innovations in Precision Aquaculture Market, 2015-2020
6.5 Trade Data
6.6 Case Studies: Precision Aquaculture Market
6.6.1 Introduction
6.6.2 Open Blue: Innovasea Helps Open Blue Become Largest Open Ocean Fish Farm in World
6.6.3 Earth Ocean Farms: Innovasea Enables Earth Ocean Farms to Expand Production with Rugged Evolution Pens
6.6.4 Erko Seafood AS: With AKVA Group's Expertise and Quick Service, Erko Seafood AS Found Real Deal
6.6.5 Vermont Hatchery: Vermont Hatchery Saves Millions in Energy Costs with Innovasea-Designed Recirculating Aquaculture System
6.6.6 The Kingfish Company: With Philips Lighting, the Kingfish Zeeland Fishery Aims to Obtain Sustainable Production of Premium Marine Seafood
6.6.7 Gifas: Philips LED Solutions Offer New Possibilities for Salmon Industry
6.6.8 Lingalaks: Philips LED Lighting Helps in Preventing Sea Lice in Salmon Production
6.7 Porter's Five Forces Analysis

Companies Mentioned

For more information about this report visit https://www.researchandmarkets.com/r/og9cf0

See the original post here:
Global Precision Aquaculture Market Forecast to 2026: Surging Adoption of Advanced Technologies such as IoT, ROVs - GlobeNewswire

On Thinking Machines, Machine Learning, And How AI Took Over Statistics – Forbes

Sixty-five years ago, Arthur Samuel went on TV to show the world how the IBM 701 plays checkers. He was interviewed on a live morning news program, sitting remotely at the 701, with Will Rogers Jr. at the TV studio, together with a checkers expert who played against the computer for about an hour. Three years later, in 1959, Samuel published "Some Studies in Machine Learning Using the Game of Checkers" in the IBM Journal of Research and Development, coining the term machine learning. He defined it as the programming of a digital computer to behave in a way which, if done by human beings or animals, would be described as involving the process of learning.

On February 24, 1956, Arthur Samuel's checkers program, developed for play on the IBM 701, was demonstrated to the public on television.

A few months after Samuel's TV appearance, ten computer scientists convened at Dartmouth College in Hanover, NH, for the first-ever workshop on artificial intelligence, defined a year earlier by John McCarthy in the proposal for the workshop as making a machine behave in ways that would be called intelligent if a human were so behaving.

In some circles of the emerging discipline of computer science, there was no doubt about the human-like nature of the machines they were creating. Already in 1949, computer pioneer Edmund Berkeley wrote in Giant Brains, or Machines That Think: "Recently there have been a good deal of news about strange giant machines that can handle information with vast speed and skill... These machines are similar to what a brain would be if it were made of hardware and wire instead of flesh and nerves... A machine can handle information; it can calculate, conclude, and choose; it can perform reasonable operations with information. A machine, therefore, can think."

Maurice Wilkes, a prominent developer of one of those giant brains, retorted in 1953: "Berkeley's definition of what is meant by a thinking machine appears to be so wide as to miss the essential point of interest in the question, Can machines think?" Wilkes attributed this not-very-good human thinking to a desire to believe that a machine can be something more than a machine. In the same issue of the Proceedings of the I.R.E. that included Wilkes's article, Samuel published "Computing Bit by Bit or Digital Computers Made Easy." Reacting to what he called the fuzzy sensationalism of the popular press regarding the ability of existing digital computers to think, he wrote: "The digital computer can and does relieve man of much of the burdensome detail of numerical calculations and of related logical operations, but perhaps it is more a matter of definition than fact as to whether this constitutes thinking."

Samuel's polite but clear position led Marvin Minsky in 1961 to single him out, according to Eric Weiss, as one of the few leaders in the field of artificial intelligence who believed computers could not think and probably never would. Indeed, Samuel pursued his lifelong hobby of developing checkers-playing computer programs and his professional interest in machine learning not out of a desire to play God but because of the specific trajectory and coincidences of his career. After working for 18 years at Bell Telephone Laboratories and becoming an internationally recognized authority on microwave tubes, he decided at age 45 to move on, as he was certain, says Weiss in his review of Samuel's life and work, that vacuum tubes would soon be replaced by something else.

The University of Illinois came calling, asking him to revitalize its EE graduate research program. In 1948, the project to build the University's first computer was running out of money. Samuel thought (as he recalled in an unpublished autobiography cited by Weiss) that it ought to be dead easy to program a computer to play checkers and that if their program could beat a checkers world champion, the attention it generated would also generate the required funds.

The next year, Samuel started his 17-year tenure with IBM, working as a senior engineer on the team developing the IBM 701, IBM's first mass-produced scientific computer. The chief architect of the entire IBM 700 series was Nathaniel Rochester, later one of the participants in the Dartmouth AI workshop. Rochester was trying to decide the word length and order structure of the IBM 701, and Samuel decided to rewrite his checkers-playing program using the order structure that Rochester was proposing. In his autobiography, Samuel recalled: "I was a bit fearful that everyone in IBM would consider checker-playing program too trivial a matter, so I decided that I would concentrate on the learning aspects of the program. Thus, more or less by accident, I became one of the first people to do any serious programing for the IBM 701 and certainly one of the very first to work in the general field later to become known as artificial intelligence. In fact, I became so intrigued with this general problem of writing a program that would appear to exhibit intelligence that it was to occupy my thoughts almost every free moment during the entire duration of my employment by IBM and indeed for some years beyond."

But in the early days of computing, IBM did not want to fan the popular fears that man was losing out to machines, so the company did not talk about artificial intelligence publicly, Samuel later observed. Salesmen were not supposed to scare customers with speculation about future computer accomplishments. So IBM, among other activities aimed at dispelling the notion that computers were smarter than humans, sponsored the movie Desk Set, featuring a methods engineer (Spencer Tracy) who installs the fictional and ominous-looking electronic brain EMERAC, and a corporate librarian (Katharine Hepburn) telling her anxious colleagues in the research department: "They can't build a machine to do our job; there are too many cross-references in this place." By the end of the movie, she wins both a match with the computer and the engineer's heart.

In his 1959 paper, Samuel described his approach to machine learning as particularly suited for very specific tasks, in distinction to the neural-net approach, which he thought could lead to the development of general-purpose learning machines. Samuel's program searched the computer's memory to find examples of checkerboard positions and selected the moves that had previously been successful. "The computer plays by looking ahead a few moves and by evaluating the resulting board positions much as a human player might do," wrote Samuel.
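That look-ahead-and-evaluate idea is essentially what is now called game-tree search. The Python sketch below is not Samuel's program (which also included rote learning and a tunable scoring polynomial); it only illustrates the generic fixed-depth search pattern, and legal_moves, apply_move and evaluate are hypothetical placeholders for a real checkers engine.

def lookahead_value(board, player, depth, legal_moves, apply_move, evaluate):
    """Score a position by searching `depth` plies ahead and evaluating the leaves.

    `legal_moves`, `apply_move` and `evaluate` are placeholders; `evaluate`
    plays the role Samuel gave to his scoring polynomial. Players are +1 and -1,
    and `evaluate` scores a board from the viewpoint of `player`.
    """
    moves = legal_moves(board, player)
    if depth == 0 or not moves:
        return evaluate(board, player)
    best = float("-inf")
    for move in moves:
        next_board = apply_move(board, move)
        # Negamax form of minimax: the opponent's best reply is our worst outcome.
        value = -lookahead_value(next_board, -player, depth - 1,
                                 legal_moves, apply_move, evaluate)
        best = max(best, value)
    return best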

His approach to machine learning "still would work pretty well as a description of what's known as reinforcement learning, one of the basket of machine-learning techniques that has revitalized the field of artificial intelligence in recent years," wrote Alexis Madrigal in a 2017 survey of checkers-playing computer programs. Rich Sutton, one of the authors of the book Reinforcement Learning, called Samuel's research the earliest work that's now viewed as directly relevant to the current AI enterprise.

The current AI enterprise is skewed more in favor of artificial neural networks (or deep learning) than reinforcement learning, although Google's DeepMind famously combined the two approaches in its Go-playing program, which beat Go master Lee Sedol in a five-game match in 2016.

Already popular among computer scientists in Samuel's time (in 1951, Marvin Minsky and Dean Edmunds built SNARC, the Stochastic Neural Analog Reinforcement Calculator and the first artificial neural network, using 3,000 vacuum tubes to simulate a network of 40 neurons), the neural networks approach was inspired by a 1943 paper by Warren S. McCulloch and Walter Pitts in which they described networks of idealized and simplified artificial neurons and how they might perform simple logical functions, leading to the popular (and very misleading) description of today's artificial neural network-based AI as mimicking the brain.

Over the years, the popularity of neural networks has gone up and down through a number of hype cycles, starting with the Perceptron, a two-layer artificial neural network that was considered by the U.S. Navy, according to a 1958 New York Times report, to be "the embryo of an electronic computer that... will be able to walk, talk, see, write, reproduce itself and be conscious of its existence." In addition to failing to meet these lofty expectations, neural networks suffered from fierce competition from a growing cohort of computer scientists (including Minsky) who preferred the manipulation of symbols over computational statistics as the better path to creating a human-like machine.
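For readers who have not met it, the Perceptron's learning rule is only a few lines long. The following is a generic textbook sketch in Python, not Rosenblatt's original formulation or hardware, shown only to make the idea concrete; the example data at the end is invented.

import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Textbook perceptron: nudge the weights whenever a prediction is wrong.

    X is an (n_samples, n_features) array; y holds labels of -1 or +1.
    """
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else -1
            if pred != target:          # misclassified: move the decision boundary
                w += lr * target * xi
                b += lr * target
    return w, b

# Example: learn a simple linearly separable (AND-like) function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1])
w, b = train_perceptron(X, y)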

Inflated expectations meeting the trough of disillusionment, no matter which approach was taken, resulted in at least two periods of gloomy AI winter. But with the invention and successful application of backpropagation as a way to overcome the limitations of simple neural networks, sophisticated statistical analysis was again on the ascendance, now cleverly labeled as deep learning. In 1988, R. Colin Johnson and Chappell Brown published Cognizers: Neural Networks and Machines That Think, proclaiming that neural networks "can actually learn to recognize objects and understand speech just like the human brain and, best of all, they won't need the rules, programming, or high-priced knowledge-engineering services that conventional artificial intelligence systems require... Cognizers could very well revolutionize our society and will inevitably lead to a new understanding of our own cognition."

Johnson and Brown predicted that within as little as two years, neural networks would be the tool of choice for analyzing the contents of a large database. This prediction, and no doubt similar ones in the popular press and professional journals, must have sounded the alarm among those who did this type of analysis for a living in academia and in large corporations, most of whom had no clue what the computer scientists were talking about.

In "Neural Networks and Statistical Models," Warren Sarle explained in 1994 to his worried and confused fellow statisticians that the ominous-sounding artificial neural networks "are nothing more than nonlinear regression and discriminant models that can be implemented with standard statistical software... Like many statistical methods, [artificial neural networks] are capable of processing vast amounts of data and making predictions that are sometimes surprisingly accurate; this does not make them intelligent in the usual sense of the word. Artificial neural networks learn in much the same way that many statistical algorithms do estimation, but usually much more slowly than statistical algorithms. If artificial neural networks are intelligent, then many statistical methods must also be considered intelligent."
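Sarle's point can be made concrete in a few lines: a one-hidden-layer network fitted by gradient descent (backpropagation) is, in statistical terms, an iteratively estimated nonlinear regression model. The sketch below is an illustration of that equivalence only; the data, network size and hyperparameters are invented.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(x) + 0.1 * rng.standard_normal(x.shape)    # noisy target to regress on

# Model: y_hat = tanh(x W1 + b1) W2 + b2, i.e. a parametric nonlinear regression.
W1 = 0.5 * rng.standard_normal((1, 10))
b1 = np.zeros(10)
W2 = 0.5 * rng.standard_normal((10, 1))
b2 = np.zeros(1)

lr = 0.01
for step in range(5000):
    h = np.tanh(x @ W1 + b1)           # hidden-layer activations
    y_hat = h @ W2 + b2
    err = y_hat - y                    # residuals, as in least squares
    # Backpropagation: the chain rule applied to the mean squared error.
    grad_W2 = h.T @ err / len(x)
    grad_b2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    grad_W1 = x.T @ dh / len(x)
    grad_b1 = dh.mean(axis=0)
    W2 -= lr * grad_W2
    b2 -= lr * grad_b2
    W1 -= lr * grad_W1
    b1 -= lr * grad_b1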

Sarle provided his colleagues with a handy dictionary translating the terms used by neural engineers into the language of statisticians (e.g., "features" are "variables"). Anticipating today's data science (a more recent assault led by computer programmers) and predictions of algorithms replacing statisticians (and even scientists), Sarle reassured his fellow statisticians that no black box can substitute for human intelligence: "Neural engineers want their networks to be black boxes requiring no human intervention: data in, predictions out. The marketing hype claims that neural networks can be used with no experience and automatically learn whatever is required; this, of course, is nonsense. Doing a simple linear regression requires a nontrivial amount of statistical expertise."

In a footnote to his mention of neural networks in his 1959 paper, Samuel cited Warren S. McCulloch, who had compared the digital computer to the nervous system of a flatworm, and declared: "To extend this comparison to the situation under discussion would be unfair to the worm since its nervous system is actually quite highly organized as compared to [the most advanced artificial neural networks of the day]." In 2019, Facebook's top AI researcher and Turing Award winner Yann LeCun declared that "our best AI systems have less common sense than a house cat." In the sixty years since Samuel first published his seminal machine learning work, artificial intelligence has advanced from being not as smart as a flatworm to having less common sense than a house cat.

Continued here:
On Thinking Machines, Machine Learning, And How AI Took Over Statistics - Forbes

How AI and machine learning help fight the COVID-19 battle – VentureBeat


This post was written by Vatsal Ghiya, co-founder and chief operating officer of Shaip.

It is hard to imagine fighting a global pandemic without technologies such as artificial intelligence (AI) and machine learning (ML). The exponential rise of Covid-19 cases around the world left many health infrastructures paralyzed. However, institutions, governments, and organizations were able to fight back with the help of advanced technologies. AI and machine learning, once seen as luxuries for lifestyle and productivity gains, have become life-saving agents in combating Covid thanks to their innumerable applications.

With allied technologies like Big Data, IoT, and data science, AI offered tools to frontline caregivers and resources to researchers and drug developers. In this post, we explore how AI and ML have helped battle Covid-19 and how they will continue to assist us in recovering from the chaos.

One of the most practical ways to curb the spread of the virus is contact tracing, which allows officials and healthcare providers to identify infected people and the possible carriers they have come in contact with. With this information, they can isolate Covid-positive patients and deliver healthcare solutions.

By coming up with models like SIR (Susceptible, Infectious, and Recovered), caregivers have been able to seamlessly trace contacts, identify vulnerable regions and clusters, announce containment zones, deploy additional healthcare facilities, and more.
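The SIR model is a classical compartmental model from epidemiology rather than a machine learning technique in itself; what AI systems typically add is parameter estimation and forecasting on top of it. A minimal discrete-time simulation of the three compartments, with purely illustrative parameter values, might look like this.

def simulate_sir(population, infected0, beta, gamma, days):
    """Discrete-time SIR: S -> I at rate beta*S*I/N, I -> R at rate gamma*I."""
    s, i, r = population - infected0, float(infected0), 0.0
    history = []
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

# Illustrative numbers only: one million people, a 10-day infectious period (gamma = 0.1)
# and beta = 0.25, i.e. a basic reproduction number of roughly 2.5.
curve = simulate_sir(population=1_000_000, infected0=10, beta=0.25, gamma=0.1, days=180)
peak_day = max(range(len(curve)), key=lambda d: curve[d][1])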

In addition to offering prescriptive solutions, AI has also been used to predict positivity and mortality rates, probable mutations of the virus and their effects on symptoms, and even the dates and times when the contagion will be at its peak. With data-driven statistics and credible AI modules, officials have been able to proactively take measures such as announcing lockdowns and shelter-in-place protocols and procuring vaccines, oxygen cylinders, PPE kits, testing apparatus, and more. This has been of immense help in developing nations with higher population density in stopping the spread of the virus, or at least curbing its intensity.

The circulation of fake news concerning the virus has been a significant challenge. With social media largely devoid of supervision or any form of moderation, many people (often anonymously) took to social media platforms and instant messengers to circulate false information and conspiracy theories.

From posts claiming to cure Covid through home remedies to theories about last June's "Great Reset" meeting of the World Economic Forum, thousands of unfounded messages and posts have gone viral. This has increased anxiety and paranoia among a world population already under a high level of stress. However, through moderation and screening, AI has been doing an incredible job of preventing conspiracy theories and fake information from making the rounds.

Healthcare centers and institutions have been overburdened like never before. For more than a year, many frontline workers, including doctors, nurses, and paramedics, have been overworked beyond their capacity. With every incoming patient requiring immediate attention, it becomes nearly impossible to maintain sufficient focus to treat everyone.

Thankfully, AI systems have come to the rescue with precise diagnostic chatbots. Using techniques such as Natural Language Processing (NLP), an organization called Paginemediche rolled out a chatbot that offered a highly accurate diagnosis of Covid-19 based on data fed to it by users.

Based on responses to its questions, the chatbot retrieved guidelines, diagnoses, and solutions from the most credible resources and suggested whether a patient needed to be isolated, should seek medical attention, or was likely dealing with a common flu rather than Covid-19. This has slowed the flow of patients to hospitals and healthcare centers to a significant extent.
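Paginemediche has not published its chatbot's internals here, but the general pattern of a symptom checker, mapping structured answers to triage advice, is straightforward to sketch. Every question, threshold and recommendation below is invented for illustration and is not medical guidance.

def triage(answers):
    """Toy symptom-checker logic; keys, thresholds and advice are illustrative only."""
    if answers.get("breathing_difficulty"):
        return "Seek medical attention immediately."
    risk = sum([
        answers.get("fever", False),
        answers.get("dry_cough", False),
        answers.get("loss_of_smell", False),
        answers.get("contact_with_case", False),
    ])
    if risk >= 2:
        return "Self-isolate and request a Covid-19 test."
    return "Symptoms look more like a common flu; monitor at home."

print(triage({"fever": True, "contact_with_case": True}))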

Vaccines are typically developed through extensive, time-consuming rounds of clinical trials. However, with AI and ML, Covid vaccine development moved forward at lightning speed compared to previous viral outbreaks. Through pattern recognition and simulation, researchers have been able to identify the most effective formulations to help the body develop antibodies and build immunity against the virus.

Before the AI models were able to provide accurate results for combating Covid, they went through extensive testing. Covid datasets from multiple sources have assisted solution providers and development companies in launching reliable Covid-related services. For a healthcare-based AI solution to be precise, the healthcare datasets fed to it must be airtight.

Also, despite powering such revolutionary apps and solutions, AI models for battling Covid are not universally applicable. Every region of the world is fighting its own mutated variant of the virus, with population behaviors and immune profiles specific to that geographic location. That's why there is an inherent need for more AI-driven healthcare solutions tailored to specific populations around the world.

Any AI or ML company looking to develop a solution and contribute to the fight against the virus should be working with highly accurate medical datasets to ensure optimized results. This is the only way you can offer meaningful services or solutions to society right now. The functionality of your solution is crucial. That's why we recommend you source your healthcare datasets from the most credible avenues in the market, so you have a fully functional solution to roll out and help those in need.

As co-founder and chief operating officer of Shaip, Vatsal Ghiya has 20-plus years of experience in healthcare software and services. Ghiya also co-founded ezDI, a cloud-based software solution company that provides a Natural Language Processing (NLP) engine and a medical knowledge base with products including ezCAC and ezCDI.

Read the rest here:
How AI and machine learning help fight the COVID-19 battle - VentureBeat

Hardening AI: Is machine learning the next infosec imperative? – ITProPortal

As enterprise deployments of machine learning continue at a strong pace, including in mission-critical environments such as contact centers, fraud detection, and regulated sectors like healthcare and finance, they are doing so against a backdrop of rising and ever more ferocious cyberattacks.

Take, for example, the SolarWinds hack in December 2020, arguably one of the largest on record, or the recent exploits that hit Exchange servers and affected tens of thousands of customers. Alongside such attacks, we've seen new impetus behind the regulation of artificial intelligence (AI), with the world's first regulatory framework for the technology arriving in April 2021. The EU's landmark proposals build on GDPR legislation, carrying heavy penalties for enterprises that fail to consider the risks and ensure that trust goes hand in hand with success in AI.

Altogether, a climate is emerging in which the significance of securing machine learning can no longer be ignored. Although this is a burgeoning field with much more innovation to come, the market is already starting to take the threat seriously.

Our research surveys reveal a steep change in deployments of machine learning during the pandemic, with more than 80 percent of enterprises saying they are trialing the technology or have put it into production, up from just over 50 percent a year earlier.

But the topic of securing those systems has received little fanfare by comparison, even though research into the security of machine learning models goes back to the early 2000s.

We've seen several high-profile incidents that highlight the risks stemming from greater use of the technology. In 2020, a misconfigured server at Clearview AI, the controversial facial recognition start-up, leaked the company's internal files, apps and source code. In 2019, hackers were able to trick the Autopilot system of a Tesla Model S by using adversarial approaches involving sticky notes. Both pale in comparison to more dangerous scenarios, including the autonomous car that killed a pedestrian in 2018 and a facial recognition system that caused the wrongful arrest of an innocent person in 2019.

The security community is becoming more alert to the dangers of real-world AI. The CERT Coordination Center, which tracks security vulnerabilities globally, published its first note on machine learning risks in late 2019, and in December 2020, The Partnership on AI introduced its AI Incident Database, the first to catalog events in which AI has caused "safety, fairness, or other real-world problems".

The challenges that organizations are facing with machine learning are also shifting in this direction.

Several years ago, problems with preparing data, gaining skills and applying AI to specific business problems were the dominant headaches, but new topics are now coming to the fore. Among them are governance, auditability, compliance and, above all, security.

According to CCS Insight's latest survey of senior IT leaders, security is now the biggest hurdle companies face with AI, cited by over 30 percent of respondents. Many companies struggle with the most rudimentary areas of security at the moment, but machine learning is a new frontier, particularly as business leaders start to think more about the risks that arise as the technology is embedded into more business operations.

Missing until recently are tools that help customers improve the security of their machine learning systems. A recent Microsoft survey, for example, found that 90 percent of businesses said they lack tools to secure their AI systems and that security pros were looking for specific guidance in the field.

Responding to this need, the market is now stepping up. In October 2020, non-profit organization MITRE, in collaboration with 12 firms including Microsoft, Airbus, Bosch, IBM and Nvidia, released an Adversarial ML Threat Matrix, an industry-focused open framework to help security analysts detect and respond to threats against machine learning systems.

Additionally, in April 2021, Algorithmia, a supplier of an enterprise machine learning operations (MLOps) platform that specializes in the governance and security of the machine learning life cycle, released a host of new security features focused on the integration of machine learning into the core IT security environment. They include support for proxies, encryption, hardened images, API security and auditing and logging. The release is an important step, highlighting my view that security will become intrinsic to the development, deployment and use of machine learning applications.

Finally, just last week, Microsoft released Counterfit, an open-source automation tool for security testing AI systems. Counterfit helps organizations conduct AI security risk assessments to ensure that algorithms used in businesses are robust, reliable and trustworthy. The tool enables pen testing of AI systems, vulnerability scanning and logging to record attacks against a target model.

These are early but important first steps that indicate the market is starting to take security threats to AI seriously. I encourage machine learning engineers and security professionals to get going: begin to familiarize yourselves with these tools and the kinds of threats your AI systems could face in the not-so-distant future.

As machine learning becomes part of standard software development and core IT and business operations in the future, vulnerabilities and new methods of attack are inevitable. The immature and open nature of machine learning makes it particularly susceptible to hacking and that's why I predicted last year that we would see security become the top priority for enterprises' investment in machine learning by 2022.

A new category of specialism will emerge devoted to AI security and posture management. It will include core security areas applied to machine learning, like vulnerability assessments, pen testing, auditing and compliance and ongoing threat monitoring. In future, it will track emerging security vectors such as data poisoning, model inversions and adversarial attacks. Innovations like homomorphic encryption, confidential machine learning and privacy protection solutions such as federated learning and differential privacy will all help enterprises navigate the critical intersection of innovation and trust.
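To make one of those attack classes concrete, the sketch below shows the core of an evasion (adversarial example) attack on a plain logistic-regression classifier using the fast gradient sign method. It is a generic NumPy illustration, not how Counterfit or any vendor tool implements attacks, and the weights and inputs are invented for the toy example.

import numpy as np

def fgsm_perturb(x, w, b, true_label, epsilon=0.1):
    """Fast gradient sign method against a logistic-regression model.

    Nudges the input a small step in the direction that increases the loss
    for the true label, the basic recipe behind many evasion attacks.
    """
    z = x @ w + b
    p = 1.0 / (1.0 + np.exp(-z))       # predicted probability of class 1
    grad_x = (p - true_label) * w      # gradient of the cross-entropy loss w.r.t. x
    return x + epsilon * np.sign(grad_x)

# Toy example: a point confidently classified as class 1 gets pushed toward the boundary.
w, b = np.array([1.5, -2.0]), 0.3
x = np.array([1.0, 0.2])
x_adv = fgsm_perturb(x, w, b, true_label=1, epsilon=0.3)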

Above all, it's great to see the industry beginning to tackle this imminent problem now. Matilda Rhode, Senior Cybersecurity Researcher at Airbus, perhaps captures this best when she states, "AI is increasingly used in industry; it is vital to look ahead to securing this technology, particularly to understand where feature space attacks can be realized in the problem space. The release of open-source tools for security practitioners to evaluate the security of AI systems is both welcome and a clear indication that the industry is taking this problem seriously".

I look forward to tracking how enterprises progress in this critical field in the months ahead.

Nick McQuire, Chief of Enterprise Research, CCS Insight

Read the original here:
Hardening AI: Is machine learning the next infosec imperative? - ITProPortal

AI is learning how to create itself – MIT Technology Review

But there's another crucial observation here. Intelligence was never an endpoint for evolution, something to aim for. Instead, it emerged in many different forms from countless tiny solutions to challenges that allowed living things to survive and take on future challenges. Intelligence is the current high point in an ongoing and open-ended process. In this sense, evolution is quite different from algorithms as people typically think of them: as means to an end.

It's this open-endedness, glimpsed in the apparently aimless sequence of challenges generated by POET, that Clune and others believe could lead to new kinds of AI. For decades AI researchers have tried to build algorithms to mimic human intelligence, but the real breakthrough may come from building algorithms that try to mimic the open-ended problem-solving of evolution, and sitting back to watch what emerges.

Researchers are already using machine learning on itself, training it to find solutions to some of the field's hardest problems, such as how to make machines that can learn more than one task at a time or cope with situations they have not encountered before. Some now think that taking this approach and running with it might be the best path to artificial general intelligence. "We could start an algorithm that initially does not have much intelligence inside it, and watch it bootstrap itself all the way up potentially to AGI," Clune says.

The truth is that for now, AGI remains a fantasy. But that's largely because nobody knows how to make it. Advances in AI are piecemeal and carried out by humans, with progress typically involving tweaks to existing techniques or algorithms, yielding incremental leaps in performance or accuracy. Clune characterizes these efforts as attempts to discover the building blocks for artificial intelligence without knowing what you're looking for or how many blocks you'll need. And that's just the start. "At some point, we have to take on the Herculean task of putting them all together," he says.

Asking AI to find and assemble those building blocks for us is a paradigm shift. It's saying: we want to create an intelligent machine, but we don't care what it might look like; just give us whatever works.

Even if AGI is never achieved, the self-teaching approach may still change what sorts of AI are created. The world needs more than a very good Go player, says Clune. For him, creating a supersmart machine means building a system that invents its own challenges, solves them, and then invents new ones. POET is a tiny glimpse of this in action. Clune imagines a machine that teaches a bot to walk, then to play hopscotch, then maybe to play Go. "Then maybe it learns math puzzles and starts inventing its own challenges," he says. The system continuously innovates, and the sky's the limit in terms of where it might go.
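POET itself maintains whole populations of paired environments and agents and transfers agents between them; the heavily simplified skeleton below only illustrates the "solve a challenge, then invent a harder one" loop Clune describes. The four callables passed in are hypothetical placeholders, not part of any published system.

def open_ended_loop(initial_task, initial_agent, generate_harder_task,
                    train_agent, solves, max_rounds=100):
    """Skeleton of an open-ended learning loop: each solved task seeds the next one."""
    tasks, agents = [initial_task], [initial_agent]
    for _ in range(max_rounds):
        task = tasks[-1]
        agents[-1] = train_agent(agents[-1], task)    # keep improving on the current challenge
        if solves(agents[-1], task):
            tasks.append(generate_harder_task(task))  # invent the next, harder challenge
            agents.append(agents[-1])                 # seed it with the current solver
    return tasks, agents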

See the rest here:
AI is learning how to create itself - MIT Technology Review