Archive for the ‘Alphago’ Category

Anandtech: "Using AI to Build Better Processors: Google Was Just the Start, Says Synopsys" – AnandTech

In an exclusive to AnandTech, we spoke with Synopsys CEO Aart de Geus ahead of a pair of keynote presentations at two upcoming technical semiconductor industry events this year. Synopsys reached out to give us an overview of the key topic of the day, and of the year: as part of these talks, Aart will discuss what was considered impossible only a few years ago, the path to a better and automated way of designing chips through the use of machine learning. Within the context of EDA tools, as Google has demonstrated recently, engineers can be assisted in building better processors using machine learning algorithms.

If you read mainstream columns about technology and growth today, there is a prominent focus on the concepts of big data, artificial intelligence, and the value of analyzing that data. With enough data analyzed effectively, companies have shown that they can be proactive with customers, predict their needs in advance, or identify trends and react before a human has even seen the data. The more data you have analyzed, the better your actions or reactions can be. This means that analyzing data has intrinsic value, as does the speed at which it is processed. This has caused an explosion in demand for better analysis tools, but also an explosion in data creation itself. Many senior figures in technology and business see the intersection of machine learning data analysis tools and the data they churn through as the mark of the next generation of economics.

Graph showing manufacturing growth of key silicon product lines since 2016 at TSMC, the world's largest contract manufacturer

The desire to have the best solution is accelerating the development of better utilities, but at the same time, the need to deploy it at scale is creating immense demand for resources. All the while, a number of critics are forecasting that Moore's Law, a 1960s observation about the exponential growth of computing complexity that has held true for 50 years, is reaching its end. Others are busy helping it stay on track. As driving performance requires innovation on multiple levels, including hardware and software, the need to optimize every abstraction layer to continue that exponential growth has become more complex and more expensive, and it requires a fundamental economic gain for those involved to continue investment.

One of the ways of driving performance on the hardware side is designing processors to work faster and more efficiently. Two processors with the same fundamental building blocks can have those blocks placed in many different orientations, with some arrangements beneficial for power, others for performance, or perhaps for die area, while some configurations make no sense whatsoever. Finding the best combination in light of the economics at the time is often crucial to the competitiveness of the product and the buoyancy of the company that relies on the success of that product. The semiconductor industry is rare in that most chip design companies effectively bet the entire company on the success of the next generation, which makes every generation's design more important than the last.

In light of the rate of innovation, chip design teams have spent tens of thousands of hours honing their skills over decades. But we are at a stage where a modern complex processor has billions of transistors and millions of building blocks to put together in something the size of a toenail. These teams use their expertise, intuition, and nous to place these units in the best configuration, and it gets simulated over the course of 72 hours. The results that come through are analyzed, the design goes back to be updated, and the process repeats. Getting the best human-designed processor in this fashion can take six months or more, because the number of possible arrangements is equivalent to the number of atoms in the known universe raised to the power of the number of atoms in the known universe. With numbers so large, using computers to brute-force the best configuration is impossible. At least, it was thought to be.

Work from Google was recently published in the scientific journal Nature about how the company is already using custom AI tools to develop better silicon, which in turn helps develop better custom AI tools. In the research paper, the company applied machine learning algorithms to find the best combination of power, performance, and die area for a number of test designs.

In order to reduce the complexity of the problem, Google limited its scope to certain layers within the design. Take, for example, an electrical circuit that is designed to add numbers together. In Google's work, rather than try to find the best way to build a circuit like this every time, they took a good adder design as a fundamental building block of the problem, mapped how it interacts with other fundamental blocks, and then the AI software found the best way to arrange these fundamental blocks. This cuts down the number of different configurations needed, but the problem is still a difficult one to crack, as these blocks interact with other blocks to varying degrees based on proximity, connections, and electrical/thermal interactions. The nature of the work always depends on what level of abstraction these different building blocks take, and how complex or basic you make them.
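Google's production system uses deep reinforcement learning, but the underlying search problem can be illustrated with a much simpler technique. The hedged Python sketch below uses simulated annealing over a handful of invented blocks and nets on a toy grid; the block names, netlist, and cost function are assumptions made purely for illustration, not anything taken from the Nature paper.

```python
# Illustrative sketch only: a toy placement search using simulated annealing.
# This is NOT Google's method (which uses deep reinforcement learning); it just
# shows why exploring block arrangements is a combinatorial search problem.
import math
import random

GRID = 8  # hypothetical 8x8 grid of legal block positions
# Hypothetical netlist: pairs of blocks that are wired together
NETS = [("adder", "regfile"), ("regfile", "cache"), ("cache", "adder"), ("adder", "io")]
BLOCKS = ["adder", "regfile", "cache", "io"]

def wirelength(placement):
    """Total Manhattan distance of all nets for a given block placement."""
    return sum(abs(placement[a][0] - placement[b][0]) +
               abs(placement[a][1] - placement[b][1]) for a, b in NETS)

def random_placement():
    cells = random.sample([(x, y) for x in range(GRID) for y in range(GRID)], len(BLOCKS))
    return dict(zip(BLOCKS, cells))

def anneal(steps=5000, t0=5.0):
    place = random_placement()
    cost = wirelength(place)
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-3              # simple cooling schedule
        cand = dict(place)                              # overlap checks omitted for brevity
        cand[random.choice(BLOCKS)] = (random.randrange(GRID), random.randrange(GRID))
        delta = wirelength(cand) - cost
        if delta < 0 or random.random() < math.exp(-delta / t):
            place, cost = cand, cost + delta            # accept better (and sometimes worse) moves
    return place, cost

best, best_cost = anneal()
print("toy placement:", best, "wirelength:", best_cost)
```

In a real flow the cost function would fold in power, timing, and congestion rather than wire length alone, which is exactly why a learned policy that generalizes across designs is attractive.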

Simple 8-stage example of how block placement and routing affect design choices

In Google's paper, the company states that its tools have already been put to use in helping design four parts of an upcoming Google TPU processor designed for machine learning acceleration. While the paper shows that AI tools weren't used across the whole processor, it is taking some of the work that used to be painstaking in engineering labor hours and accelerating the process through computation. The beauty of this application is that the way these building blocks can be put together can scale, and companies like Google can use their datacenters to test thousands of configurations in a single day, rather than having a group of engineers provide a handful of options after several months.

Google's approach also details the effect of using optimized machine learning (algorithms that have learned how to be better by examining previous designs) against fresh machine learning (algorithms with only a basic understanding that learn from their own trial and error). Both these areas are important, showing that in some circumstances the algorithms do not need to be pre-trained yet can still deliver a better-than-human result. That result still requires additional validation for effectiveness, and the results are fed back to the software team to create better algorithms.

But this is just the tip of the iceberg, according to Synopsys CEO Aart de Geus, whose company's software helps develop more silicon processing intellectual property than anyone else in the industry today. Synopsys has been involved in silicon design for over 35 years, with hundreds of customers, and its latest AI-accelerated product is already in use at a number of high-profile silicon design teams making processors today, helping to accelerate time to market with a better semiconductor placement than humans can achieve.

Synopsys is a company that makes EDA (Electronic Design Automation) tools, and every semiconductor company in the industry, both old and new, relies on some form of EDA to actually bring silicon to market. EDA tools allow semiconductor designers to effectively write code that describes what they are trying to make, and that code can be simulated to sufficient accuracy to tell the designer whether it fits within strict parameters, meets the requirements for final manufacturing, has thermal problems, or fails to meet the signal integrity specifications for a given standard.

EDA tools also rely on abstraction, decades of algorithm development, and as the industry is moving to multi-chip designs and complex packaging technologies, the software teams behind these tools have to be quick to adapt to an ever-changing landscape. Having relied on complex non-linear algorithm solutions to assist designers to date, the computational requirements of EDA tools are quite substantial, and often not scalable. Thus, ultimately any significant improvement to EDA tool design is a welcome beacon in this market.

For context, the EDA tools market has two main competitors, with a combined market cap of $80B and a combined annual revenue of $6.5B. All the major foundries work with these two EDA vendors, and it is actively encouraged to stay within these toolchains, rather than to spin your own, to maintain compatibility.

Synopsys CEO Aart de Geus is set to give the keynote presentations at two upcoming technical semiconductor industry events this year: ISSCC and Hot Chips. As part of these talks, Aart will discuss what was considered impossible only a few years ago: the path to finding a better and automated way into chip design through the use of machine learning solutions. Within the context of EDA tools, as Google has demonstrated publicly, engineers can be assisted in building better processors, or, put another way, fewer engineers are needed to build a good processor. To this point, Aart's talk at Hot Chips will be titled:

Does Artificial Intelligence Require Artificial Architects?

I spent about an hour speaking with Aart on this topic and what it means to the wider industry. The discussion would have made a great interview on the topic, although unfortunately this was just an informal discussion! But in our conversation, aside from the simple fact that machine learning can help silicon design teams optimize more variations with better performance in a fraction of the time, Aart was clear that the fundamental drive and idea of Moore's Law, regardless of exactly how you want to interpret what Gordon Moore actually said, is still driving the industry forward in very much the same way that it has for the past 50 years. The difference now is that machine learning, as a cultural and industrial revolution, is enabling emergent compute architectures and designs leading to a new wave of complexity, dubbed systemic complexity.

Aart also walked me through how the semiconductor industry has evolved. At each stage of fundamental improvement, whether that's manufacturing improvement through process node lithography such as EUV, transistor architectures like FinFET or Gate-All-Around, or topical architecture innovation for different silicon structures such as high-performance compute or radio frequency, we have been relying on architects and research to enable those step-function improvements. In a new era of machine learning-assisted design, such as the tip of the iceberg presented by Google, new levels of innovation can emerge, albeit with a new level of complexity on top.

Aart described that with every major leap, such as moving from 200mm to 300mm wafers, from planar to FinFET transistors, or from DUV to EUV, it all relies on economics: no one company can make the jump without the rest of the industry coming along and scaling costs. Aart sees the use of machine learning in chip design, at multiple abstraction layers, becoming a de facto benefit that companies will adopt as a result of the current economic situation: the need to have the most optimized silicon layout for the use case required. Being able to produce 100 different configurations overnight, rather than once every few days, is expected to revolutionize how computer chips are made in this decade.

The era of AI accelerated chip design is going to be exciting. Hard work, but very exciting.

From Synopsys' point of view, the goal of introducing Aart to me and having the ability to listen to his views and ask questions was to give me a flavor ahead of his Hot Chips talk in August. Synopsys has some very exciting graphs to show, one of which it has provided to me in advance below, on how its own DSO.ai software is tackling these emerging design complexities. The concepts apply to all areas of EDA tools, but this being a business, Synopsys clearly wants to show how much progress it has made in this area and what benefits it can bring to the wider industry.

In this graph, we are plotting power against wire delay. The best way to look at this graph is to start at the labeled point at the top, which says Start Point.

All of the small blue points indicate one full AI sweep of placing the blocks in the design. Over 24 hours, the resources in this test showcase over 100 different results, with the machine learning algorithm understanding what goes where with each iteration. The end result is something well beyond what the customer requires, giving them a better product.

There is a fifth point here that isn't labeled, and that is the purple dots that represent even better results. These come from the DSO algorithm running on a network pre-trained specifically for this purpose. The benefit here is that in the right circumstances, an even better result can be achieved. But even then, an untrained network can get almost to that point as well, indicated by the best untrained DSO result.

Synopsys has already made some disclosures with customers, such as Samsung. Across four design projects, time to design optimization was reduced by 86%, from a month to days, using up to 80% fewer resources and often beating human-led design targets.

I did come away with several more questions that I hope Aart will address when the time comes.

Firstly, I would like to address where the roadmaps lie in improving machine learning in chip design. It is one thing to make the algorithm that finds a potentially good result and then to scale it and produce hundreds or thousands of different configurations overnight, but is there an artificial maximum of what can be considered best, limited perhaps by the nature of the algorithm being used?

Second, Aart and I discussed Google's match against Go master and 18-time world champion Lee Sedol, in which Google beat the world's best Go player 4-1 in a board game that, only five years prior, was considered impossible for computers to play at the level of the best humans. In that competition, both the Google DeepMind AI and the human player made a 1-in-10,000 move, which is rare in an individual game, but one might argue is more likely to occur in human interactions. My question to Aart is whether machine learning for chip design will ever experience those 1-in-10,000 moments, or, in more technical terms, whether the software would still be able to find the best global minimum if it gets stuck in a local minimum over such a large search space (1 in 10^2500 combinations for chip design vs. 1 in 10^230 in Go).
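To make the local-versus-global-minimum question concrete, here is a toy Python illustration of my own, unrelated to DSO.ai or AlphaGo: greedy descent on an invented one-dimensional cost landscape gets trapped in a local minimum, while random restarts usually escape it.

```python
# Toy illustration of the local-minimum question (not DSO.ai or AlphaGo code).
import random

def cost(x):
    # An invented bumpy "design landscape": local minimum near x = 1.17,
    # global minimum near x = -1.34.
    return x ** 4 - 3 * x ** 2 + x

def greedy_descent(x, step=0.01, iters=10000):
    """Move downhill in tiny steps until no neighbouring point is better."""
    for _ in range(iters):
        best = min((x - step, x, x + step), key=cost)
        if best == x:
            break
        x = best
    return x

stuck = greedy_descent(3.0)  # a single start from the "wrong" side gets trapped
restarts = min((greedy_descent(random.uniform(-3, 3)) for _ in range(20)), key=cost)
print(f"single start ends at x={stuck:.2f}, cost={cost(stuck):.3f}")
print(f"best of 20 restarts ends at x={restarts:.2f}, cost={cost(restarts):.3f}")
```

Production tools presumably use far more sophisticated exploration strategies, but the same question of escaping local minima applies at chip-design scale.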

Third, and perhaps more importantly, is how applying machine learning at different levels of the design can violate those layers. Most modern processor design relies on specific standard cells and pre-defined blocks; there will be situations where modified versions of those blocks might be better in some design scenarios when coupled closely to different parts of the design. With all of these elements interacting with each other and having variable interaction effects, the complexity lies in managing these interactions within the machine learning algorithms in a time-efficient way, but how these tradeoffs are made is still a point to prove.

In my recent interview with Jim Keller, I asked him whether at some point we will see silicon design look unfathomable to even the best engineers. He said, "Yeah, and it's coming pretty fast." It is one thing to talk holistically about what AI can bring to the world, but it's another to have it working in action to improve semiconductor design and provide a fundamental benefit at the base level of all silicon. I'm looking forward to further disclosures on AI-accelerated silicon design from Synopsys, its competitors, and hopefully some insights from those that are using it to design their processors.

Go here to read the rest:
Anandtech: "Using AI to Build Better Processors: Google Was Just the Start, Says Synopsys" - AnandTech

Computer scientists are questioning whether Alphabet's DeepMind will ever make A.I. more human-like – CNBC

David Silver, leader of the reinforcement learning research group at DeepMind, being awarded an honorary "ninth dan" professional ranking for AlphaGo.

JUNG YEON-JE | AFP | Getty Images

Computer scientists are questioning whether DeepMind, the Alphabet-owned U.K. firm that's widely regarded as one of the world's premier AI labs, will ever be able to make machines with the kind of "general" intelligence seen in humans and animals.

In its quest for artificial general intelligence, which is sometimes called human-level AI, DeepMind is focusing a chunk of its efforts on an approach called "reinforcement learning."

This involves programming an AI to take certain actions in order to maximize its chance of earning a reward in a certain situation. In other words, the algorithm "learns" to complete a task by seeking out these preprogrammed rewards. The technique has been successfully used to train AI models how to play (and excel at) games like Go and chess. But they remain relatively dumb, or "narrow." DeepMind's famous AlphaGo AI can't draw a stickman or tell the difference between a cat and a rabbit, for example, while a seven-year-old can.
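For readers unfamiliar with the technique, the hedged sketch below shows reward-driven learning in its most basic tabular form on an invented toy environment. AlphaGo itself combines deep neural networks, self-play, and tree search, so this is only the underlying principle, not DeepMind's implementation.

```python
# Minimal tabular Q-learning sketch of "learning from a pre-programmed reward"
# (illustrative only; not DeepMind's AlphaGo, which adds deep networks,
# self-play, and tree search on top of this basic idea).
import random

N_STATES, GOAL = 6, 5          # a tiny corridor: states 0..5, reward only at state 5
ACTIONS = (-1, +1)             # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

def greedy(state):
    """Pick the action with the highest learned value, breaking ties at random."""
    best = max(q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(state, a)] == best])

for episode in range(200):
    s = 0
    while s != GOAL:
        a = random.choice(ACTIONS) if random.random() < epsilon else greedy(s)
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == GOAL else 0.0       # the hand-designed reward signal
        # Q-learning update: nudge the estimate toward reward + discounted future value
        q[(s, a)] += alpha * (reward + gamma * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
        s = s2

print("learned action per state (+1 = toward the reward):",
      [greedy(s) for s in range(N_STATES)])
```

The agent never receives instructions on how to reach the goal; it simply discovers that moving right is rewarded, which is the essence of the "reward is enough" argument in miniature.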

Despite this, DeepMind, which was acquired by Google in 2014 for around $600 million, believes that AI systems underpinned by reinforcement learning could theoretically grow and learn so much that they break the theoretical barrier to AGI without any new technological developments.

Researchers at the company, which has grown to around 1,000 people under Alphabet's ownership, argued in a paper submitted to the peer-reviewed Artificial Intelligence journal last month that "Reward is enough" to reach general AI. The paper was first reported by VentureBeat last week.

In the paper, the researchers claim that if you keep "rewarding" an algorithm each time it does something you want it to, which is the essence of reinforcement learning, then it will eventually start to show signs of general intelligence.

"Reward is enough to drive behavior that exhibits abilities studied in natural and artificial intelligence, including knowledge, learning, perception, social intelligence, language, generalization and imitation," the authors write.

"We suggest that agents that learn through trial and error experience to maximize reward could learn behavior that exhibits most if not all of these abilities, and therefore that powerful reinforcement learning agents could constitute a solution to artificial general intelligence."

Not everyone is convinced, however.

Samim Winiger, an AI researcher in Berlin, told CNBC that DeepMind's "reward is enough" view is a "somewhat fringe philosophical position, misleadingly presented as hard science."

He said the path to general AI is complex and that the scientific community is aware that there are countless challenges and known unknowns that "rightfully instill a sense of humility" in most researchers in the field and prevent them from making "grandiose, totalitarian statements" such as "RL is the final answer, all you need is reward."

DeepMind told CNBC that while reinforcement learning has been behind some of its most well-known research breakthroughs, the AI technique accounts for only a fraction of the overall research it carries out. The company said it thinks it's important to understand things at a more fundamental level, which is why it pursues other areas such as "symbolic AI" and "population-based training."

"In somewhat typical DeepMind fashion, they chose to make bold statements that grabs attention at all costs, over a more nuanced approach," said Winiger. "This is more akin to politics than science."

Stephen Merity, an independent AI researcher, told CNBC that there's "a difference between theory and practice." He also noted that "a stack of dynamite is likely enough to get one to the moon, but it's not really practical."

Ultimately, there's no proof either way to say whether reinforcement learning will ever lead to AGI.

Rodolfo Rosini, a tech investor and entrepreneur with a focus on AI, told CNBC: "The truth is nobody knows and that DeepMind's main product continues to be PR and not technical innovation or products."

Entrepreneur William Tunstall-Pedoe, who sold his Siri-like app Evi to Amazon, told CNBC that even if the researchers are correct "that doesn't mean we will get there soon, nor does it mean that there isn't a better, faster way to get there."

DeepMind's "Reward is enough" paper was co-authored by DeepMind heavyweights Richard Sutton and David Silver, who met DeepMind CEO Demis Hassabis at the University of Cambridge in the 1990s.

"The key problem with the thesis put forth by 'Reward is enough' is not that it is wrong, but rather that it cannot be wrong, and thus fails to satisfy Karl Popper's famous criterion that all scientific hypotheses be falsifiable," said a senior AI researcher at a large U.S. tech firm, who wished to remain anonymous due to the sensitive nature of the discussion.

"Because Silver et al. are speaking in generalities, and the notion of reward is suitably underspecified, you can always either cherry pick cases where the hypothesis is satisfied, or the notion of reward can be shifted such that it is satisfied," the source added.

"As such, the unfortunate verdict here is not that these prominent members of our research community have erred in any way, but rather that what is written is trivial. What is learned from this paper, in the end? In the absence of practical, actionable consequences from recognizing the unalienable truth of this hypothesis, was this paper enough?"

While AGI is often referred to as the holy grail of the AI community, there's no consensus on what AGI actually is. One definition is it's the ability of an intelligent agent to understand or learn any intellectual task that a human being can.

But not everyone agrees with that and some question whether AGI will ever exist. Others are terrified about its potential impacts and whether AGI would build its own, even more powerful, forms of AI, or so-called superintelligences.

Ian Hogarth, an entrepreneur turned angel investor, told CNBC that he hopes reinforcement learning isn't enough to reach AGI. "The more that existing techniques can scale up to reach AGI, the less time we have to prepare AI safety efforts and the lower the chance that things go well for our species," he said.

Winiger argues that we're no closer to AGI today than we were several decades ago. "The only thing that has fundamentally changed since the 1950/60s, is that science-fiction is now a valid tool for giant corporations to confuse and mislead the public, journalists and shareholders," he said.

Fueled with hundreds of millions of dollars from Alphabet every year, DeepMind is competing with the likes of Facebook and OpenAI to hire the brightest people in the field as it looks to develop AGI. "This invention could help society find answers to some of the world's most pressing and fundamental scientific challenges," DeepMind writes on its website.

DeepMind COO Lila Ibrahim said on Monday that trying to "figure out how to operationalize the vision" has been the biggest challenge since she joined the company in April 2018.

See the article here:
Computer scientists are questioning whether Alphabet's DeepMind will ever make A.I. more human-like - CNBC

Chinese AI Learns To Beat Top Fighter Pilot In Simulated Combat – Forbes

A Chinese AI system has defeated a top human pilot in a simulated dogfight, according to Chinese media. The AI was pitted against Fang Guoyu, a Group Leader in a PLA aviation brigade and a previous champion in such contests.

"At first, it was not difficult to win against the AI," said Fang in a report in Global Times, a Chinese state newspaper. But as the exercise continued the AI learned from each encounter and steadily improved. By the end it was able to defeat Fang using tactics it had learned from him, coupled with inhuman speed and precision.

"The AI has shown adept flight control skills and errorless tactical decisions, said brigade commander Du Jianfeng.

The Chinese exercise of setting human pilots against AI aims to improve both. The AI gives the pilots a new and challenging opponent which thinks outside the box and can come up with unexpected tactics, while each dogfight adds to the AI's experience and helps it improve.

The AI was developed by a number of unspecified research institutes working with the aviation brigade, according to the report.

In the culmination of DARPA's AlphaDogfight exercise, the Falco AI decisively beat a skilled human pilot in simulated combat between F-16s.

The event echoes DARPA's AlphaDogfight competition last year, which featured human and AI pilots fighting it out in simulated F-16s. In the initial rounds, different AIs competed to find the best. In the final round, the winning AI, Falco from Heron Systems, took on the human champion, an unnamed U.S. Air Force pilot. The AI triumphed, scoring a perfect 5-0 win in a series of encounters.

AIs have significant advantages in this situation. One is that they are fearless and highly aggressive compared to human pilots; another term might be reckless. They can react faster than any human, and can track multiple aircraft in all directions, identifying the greatest threats and the best targets in a rapidly changing situation. They also have faster and more precise control: Falco was notably skilled at taking aim and unleashing a stream of simulated cannon fire at opponents who were still lining up their shot. Whether these advantages would carry over into a messy real-world environment is open to question; further planned exercises by DARPA, the USAF, and others may help settle the matter.

DARPA's ACE program, of which AlphaDogfight was a part, plans to port dogfighting algorithms onto small drones and test various scenarios of one-on-one, one-versus-two, and two-versus-two encounters in the next year. At the same time, it is also preparing for combat autonomy on a full-scale aircraft. This may utilize existing dumb QF-16 target aircraft, the drone versions of F-16s used for air-to-air combat practice.

The QF-16, an unmanned version of the F-16 used as an aerial target, could be upgraded to a dogfighter with smart software.

The contest for AI supremacy between the U.S. and China is attracting increasing attention, with the National Security Commission on AI (NSCAI) concluding in March that, for the first time since World War II, America's technological predominance is under threat. China has created hundreds of new AI professorships and developed an efficient ecosystem for AI start-ups with tax breaks and lucrative government contracts on offer.

AI fighter pilots are just a tiny piece in the military balance, and not a meaningful indicator on their own. However, the fact that China chooses to publicize the latest development sends a message that they are hard on America's heels, if not drawing ahead, in direct military applications of AI. If their AI can really learn skills that rapidly from contests with human pilots, then, like DeepMind's AlphaGo, it may now be competing with versions of itself and developing tactics and levels of skill impossible for humans.

Meanwhile, in the larger evolutionary contest between humans and AIs, the machines have just taken another tiny step forward in chipping away at our superiority. The new Top Gun movie out later this year may be nostalgic in more ways than one.

Continued here:
Chinese AI Learns To Beat Top Fighter Pilot In Simulated Combat - Forbes

Different Types of Robot Programming Languages – Analytics Insight

Robots are by far one of the most effective applications of modern science. Robots not only reduce human labor but also execute error-free activities. Many businesses are expressing an interest in robotics, and automated machines have gained popularity in recent years. With that in mind, let's discuss robot programming languages.

In order for robots to do tasks, they must be programmed. Robot programming is the process through which robots acquire instructions from computers. A robotic programmer must be fluent in several programming languages. So let's get started.

There are roughly 1,500 robot programming languages in use worldwide, all of which play some role in robot training. In this section, we will go through the top programming languages available today.

The easiest way to get started with robotics is to learn C and C++. Both are general-purpose programming languages with largely overlapping features; C++ is essentially a superset of C that adds a number of features. It is easy to see why C++ is the most popular robot programming language: it enables a low-level hardware interface and delivers real-time performance.

C++ is the most mature programming language for getting the best results from a robot. C++ robot code is typically structured around three methods: the Constructor, Autonomous, and OperatorControl. The constructor holds the initializing code that builds the class; it executes once at the start of the program.

It initializes sensors and creates other WPILib objects. The Autonomous method then runs its code for a set amount of time, after which the robot moves on to the tele-operation phase, handled by the OperatorControl method. A simplified skeleton of this structure is sketched below.
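As a rough, language-agnostic illustration of that constructor / autonomous / operator-control structure, here is a hedged Python skeleton. The class and method names are invented for clarity; this is not the actual WPILib C++ API.

```python
# Illustrative skeleton only: mirrors the constructor / autonomous / operator-control
# structure described above. All names are invented; this is not the WPILib API.
import time

class CompetitionRobot:
    def __init__(self):
        # "Constructor" phase: runs once at program start to set up sensors,
        # motors, and other objects.
        self.left_motor_power = 0.0
        self.right_motor_power = 0.0
        print("robot initialized")

    def autonomous(self, duration_s=2.0):
        # Autonomous phase: pre-programmed behaviour that runs for a fixed time.
        end = time.time() + duration_s
        while time.time() < end:
            self.left_motor_power = self.right_motor_power = 0.5   # drive straight
            time.sleep(0.02)
        self.left_motor_power = self.right_motor_power = 0.0

    def operator_control(self, joystick_values):
        # Tele-operation phase: map operator input to motor commands each cycle.
        for forward, turn in joystick_values:
            self.left_motor_power = forward + turn
            self.right_motor_power = forward - turn

robot = CompetitionRobot()
robot.autonomous(duration_s=0.1)
robot.operator_control([(0.6, 0.1), (0.4, -0.2)])
print("final motor powers:", robot.left_motor_power, robot.right_motor_power)
```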

Python is a powerful programming language that can be used to create and test robots. In terms of automation and post-process robot programming, it outperforms other platforms. You can use it to build a script that computes, records, and activates a robot program.

It is not necessary to teach anything by hand. This enables rapid testing and visualization of simulations, programs, and logic solutions. Python uses fewer lines of code than other programming languages, and it includes a large number of libraries for fundamental functions. Python's primary goal is to make programming easier and faster.

Any item can be created, modified, or deleted, and we can code the robot's motions in the same script. All of this is accomplished with very little code, which is why Python ranks among the finest robot programming languages; a short example of such a script is shown below.
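As a hedged illustration of that kind of scripting, the short Python sketch below computes and records a joint-space move for a hypothetical two-joint planar arm. The link lengths and function names are invented, and a real robot would receive the recorded points through its vendor's own interface.

```python
# Hedged example: a stand-alone script that computes and records a joint-space move
# for a hypothetical 2-joint planar arm. Names and dimensions are illustrative only.
import math

LINK_1, LINK_2 = 0.30, 0.25   # hypothetical link lengths in metres

def forward_kinematics(theta1, theta2):
    """Tool position of a planar 2-link arm for the given joint angles (radians)."""
    x = LINK_1 * math.cos(theta1) + LINK_2 * math.cos(theta1 + theta2)
    y = LINK_1 * math.sin(theta1) + LINK_2 * math.sin(theta1 + theta2)
    return x, y

def plan_move(start, goal, steps=10):
    """Linearly interpolate joint angles and record the resulting tool path."""
    path = []
    for i in range(steps + 1):
        t = i / steps
        joints = tuple(s + t * (g - s) for s, g in zip(start, goal))
        path.append((joints, forward_kinematics(*joints)))
    return path

for joints, tool in plan_move((0.0, 0.0), (math.pi / 2, math.pi / 4)):
    print(f"joints=({joints[0]:.2f}, {joints[1]:.2f})  tool=({tool[0]:.3f}, {tool[1]:.3f})")
```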

Java is a programming language that enables robots to perform activities similar to those performed by humans. It also provides a variety of APIs to meet the demands of robots, and it offers a high degree of support for artificial intelligence features.

It enables you to construct high-level algorithms, including search and neural network algorithms. Java also allows you to run the same code on many machines.

Java is not compiled to machine code, since it is an interpreted language; rather, the Java virtual machine interprets the instructions at run time. Java has become quite popular in the field of robotics as a result of this, which is why many consider it preferable to alternative robot programming languages. Java is used by modern AIs such as IBM Watson and AlphaGo.

Microsoft's .NET is used to create apps with Visual Studio. It provides a good basis for anyone interested in pursuing a career in robotics. .NET is primarily used by programmers for port and socket development.

It supports various languages while allowing for horizontal scaling. It also offers a uniform environment and makes programming in C++ or Java easier. All of the tools and IDEs have been thoroughly tested and are accessible on the Microsoft Developer Network.

In addition, the merging of languages is smooth. As a result, we can confidently rank this among the best robotic programming languages.

In robotic engineering, MATLAB and its open-source cousins like Octave are extremely popular. In terms of data analysis, it is considerably ahead of many other robotic computer languages. MATLAB is not really a programming language in the traditional sense. Yet, engineering solutions based on complex mathematics can be found here.

Robot developers can learn to create sophisticated graphs using MATLAB data. It is quite helpful in the development of a complete robotic system and aids the development of deeply established robotic foundations in the robot business. It's a tool that lets you apply your methods and simulate the outcome; engineers can use this simulation to fine-tune the system design and eliminate mistakes.

There have been cases where MATLAB has been used to build a complete robot, so it must be included among the top ten languages. The KUKA KR6 is one of the best-known examples of a MATLAB application: its developers used MATLAB to design and simulate the robot.

One of the first robot programming languages was Lisp. It was introduced to allow computer applications to use mathematical notation. Lisp is an AI-domain language that is mostly used for creating Robot Operating System components.

Tree data structures, automatic storage management, syntax highlighting, and higher-order functions are among the features available. As a result, it is simple to use and helps eliminate implementation mistakes once an issue has been identified.

This problem-solving procedure takes place at the prototype stage, not the manufacturing stage. It also includes capabilities like the read-eval-print loop and self-hosting compilation.

One of the earliest programming languages to hit the market was Pascal. It is still quite useful, especially for newcomers. It is based on the ALGOL programming language and teaches good programming practices. Manufacturers have used Pascal as the basis for their own robot programming languages.

ABB's RAPID and KUKA's KRL are two examples. Nevertheless, most developers consider Pascal to be obsolete for everyday use, while also highlighting its value for newcomers.

It will help you learn other robot programming languages more quickly, so it is recommended mainly for complete novices. Once you've gained some experience in robotics programming, you can transition to another language.

And that's a wrap. We hope that you found this article on robot programming languages helpful. We've covered the pros and cons of the top robot programming languages, so you can choose the most appropriate language for your needs. Robotics has a promising future, so now is the ideal moment to get started.

Read more from the original source:
Different Types of Robot Programming Languages - Analytics Insight

AI in MedTech: Risks and Opportunities of Innovative Technologies in Medical Applications – MedTech Intelligence

An increasing number of medical devices incorporate artificial intelligence (AI) capabilities to support therapeutic and diagnostic applications. In spite of the risks connected with this innovative technology, the applicable regulatory framework does not specify any requirements for this class of medical devices. To make matters even more complicated for manufacturers, there are no standards, guidance documents or common specifications for medical devices on how to demonstrate conformity with the essential requirements.

The term artificial intelligence (AI) describes the capability of algorithms to take over tasks and decisions by mimicking human intelligence.1 Many experts believe that machine learning, a subset of artificial intelligence, will play a significant role in the medtech sector.2,3 Machine learning is the term used to describe algorithms capable of learning directly from a large volume of training data. The algorithm builds a model based on training data and applies the experience it has gained from the training to make predictions and decisions on new, unknown data. Artificial neural networks are a subset of machine learning methods, which have evolved from the idea of simulating the human brain.22 Neural networks are information-processing systems used for machine learning and comprise multiple layers of neurons. Between the input layer, which receives information, and the output layer, there are numerous hidden layers of neurons. In simple terms, neural networks comprise neurons, also known as nodes, which receive external information or information from other connected nodes, modify this information, and pass it on, either to the next neuron layer or to the output layer as the final result.5 Deep learning is a variation of artificial neural networks, which consist of multiple hidden neural network layers between the input and output layers. The inner layers are designed to extract higher-level features from the raw external data.

The role of artificial intelligence and machine learning in the health sector was already the topic of debate well before the coronavirus pandemic.6 As an excerpt from PubMed shows, several approaches for AI in medical devices had already been implemented in the past (see Figure 1). However, the number of publications on artificial intelligence and medical devices has grown exponentially since roughly 2005.

Artificial intelligence in the medtech sector is at the beginning of a growth phase. However, expectations for this technology are already high, and consequently prospects for the digital future of the medical sector are auspicious. In the future, artificial intelligence may be able to support health professionals in critical tasks, controlling and automating complex processes. This will enable diagnosis, therapy and care to be optimally aligned to patients' individual needs, thereby increasing treatment efficiency, which in turn will ensure an effective and affordable healthcare sector in the future.4

However, some AI advocates tend to overlook some of the obstacles and risks encountered when artificial intelligence is implemented in clinical practice. This is particularly true for the upcoming regulation of this innovative technology. The risks of incorporating artificial intelligence in medical devices include faulty or manipulated training data, attacks on AI such as adversarial attacks, violation of privacy and lack of trust in technology. In spite of these technology-related risks, the applicable standards and regulatory frameworks do not include any specific requirements for the use of artificial intelligence in medical devices. After years of negotiations in the European Parliament, Regulation (EU) 2017/745 on medical devices and Regulation (EU) 2017/746 on in-vitro diagnostic medical devices entered into force on May 25, 2017. In contrast to Directives, EU Regulations enter directly into force in the EU Member States and do not have to be transferred into national law. The new regulations impose strict demands on medical device manufacturers and the Notified Bodies, which manufacturers must involve in the certification process of medical devices and in-vitro diagnostic medical devices (excluding class I medical devices and nonsterile class A in-vitro diagnostic medical devices, for which the manufacturer's self-declaration will be sufficient).

Annex I to both the EU Regulation on medical devices (MDR) and the EU Regulation on in-vitro diagnostic medical devices (IVDR) defines general safety and performance requirements for medical devices and in-vitro diagnostics. However, these general requirements do not address the specific requirements related to artificial intelligence. To make matters even more complicated for manufacturers, there are no standards, guidance documents or common specifications on how to demonstrate conformity with the general requirements. To place a medical device on the European market, manufacturers must meet various criteria, including compliance with the essential requirements and completion of the appropriate conformity assessment procedure. By complying with the requirements, manufacturers ensure that their medical devices fulfill the high levels of safety and health protection required by the respective regulations.

To ensure the safety and performance of artificial intelligence in medical devices and in-vitro diagnostics, certain minimum requirements must be fulfilled. However, the above regulations define only general requirements for software. According to the general safety and performance requirements, software must be developed and manufactured in keeping with the state of the art. Factors to be taken into account include the software lifecycle process and risk management. Beyond the above, repeatability, reliability and performance in line with the intended use of the medical device must be ensured. This implicitly requires artificial intelligence to be repeatable, performant, reliable and predictable. However, this is only possible with a verified and validated model. Due to the absence of relevant regulatory requirements and standards, manufacturers and Notified Bodies are determining the state of the art for developing and testing artificial intelligence in medical devices, respectively. During the development, assessment and testing of AI, fundamental differences between artificial intelligence (particularly machine learning) and conventional software algorithms become apparent.

Towards the end of 2019, and thus just weeks before the World Health Organization's (WHO) warning of an epidemic in China, a Canadian company (BlueDot) specializing in AI-based monitoring of the spread of infectious diseases alerted its customers to the same risk. To achieve this, the company's AI combed through news reports and databases of animal and plant diseases. By accessing global flight ticketing data, the AI system correctly forecast the spread of the virus in the days after it emerged. This example shows the high level of performance that can already be achieved with artificial intelligence today.7 However, it also reveals one of the fundamental problems encountered with artificial intelligence: despite the distribution of information about the outbreak to various health organizations in different countries, international responses were few. One reason for this lack of response to the AI-based warning is the lack of trust in technology that we do not understand, which plays a particularly significant role in medical applications.

In clinical applications, artificial intelligence is predominantly used for diagnostic purposes. Analysis of medical images is the area where the development of AI models is most advanced. Artificial intelligence is successfully used in radiology, oncology, ophthalmology, dermatology and other medical disciplines.2 The advantages of using artificial intelligence in medical applications include the speed of data analysis and the capability of identifying patterns invisible to the human eye.

Take the diagnosis of osteoarthritis, for example. Although medical imaging enables healthcare professionals to identify osteoarthritis, this is generally at a late stage after the disease has already caused some cartilage breakdown. Using an artificial-intelligence system, a research team led by Dr. Shinjini Kundu analyzed magnetic resonance tomography (MRT) images. The team was able to predict osteoarthritis three years before the first symptoms manifested themselves.8 However, the team members were unable to explain how the AI system arrived at its diagnosis. In other words, the system was not explainable. The question now is whether patients will undergo treatment such as surgery, based on a diagnosis made by an AI system, which no doctor can either explain or confirm.

Further investigations revealed that the AI system identified diffusion of water into cartilage. It detected a symptom invisible to the human eye and, even more important, a pattern that had previously been unknown to science. This example again underlines the importance of trust in the decision of artificial intelligence, particularly in the medtech sector. Justification of decisions is one of the cornerstones of a doctor-patient (or AI-patient) relationship based on mutual trust. However, to do so the AI system must be explainable, understandable and transparent. Patients, doctors and other users will only trust in AI systems if their decisions can be explained and understood.

Many medical device manufacturers wonder why assessment and development of artificial intelligence must follow a different approach to that of conventional software. The reason is based on the principles of how artificial intelligence is developed and how it performs. Conventional software algorithms take an input variable X, process it using a defined algorithm and supply the result Y as the output variable (if X, then Y). The algorithm is programmed, and its correct function can be verified and validated. The requirements for software development, validation and verification are described in the two standards IEC 62304 and IEC 82304-1. However, there are fundamental differences between conventional software and artificial intelligence implementing a machine learning algorithm. Machine learning is based on using data to train a model without explicitly programming the data flow line by line. As described above, machine learning is trained using an automated appraisal of existing information (training data). Given this, both the development and conformity assessment of artificial intelligence necessitate different standards. The following sections provide a brief overview of the typical pitfalls.
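Before turning to those pitfalls, the distinction above can be made concrete with a minimal, hedged sketch: the first function below is explicitly programmed ("if X, then Y"), while the second fits its decision rule to labelled training data. The threshold, the data, and the use of scikit-learn's LogisticRegression are illustrative assumptions, not an example of a certified medical algorithm.

```python
# Minimal sketch of the difference described above (illustrative, not a medical device):
# the first function is explicitly programmed; the second learns its rule from data.
from sklearn.linear_model import LogisticRegression

def rule_based(temperature_c):
    # Conventional software: the decision logic is written line by line.
    return "fever" if temperature_c >= 38.0 else "normal"

# Machine learning: the rule is not programmed, it is fitted to labelled training data.
X_train = [[36.5], [37.0], [37.4], [38.1], [38.6], [39.2]]
y_train = ["normal", "normal", "normal", "fever", "fever", "fever"]
model = LogisticRegression().fit(X_train, y_train)

print(rule_based(38.3))            # explicit, inspectable rule
print(model.predict([[38.3]])[0])  # learned rule; its behaviour depends on the training data
```

The explicit rule can be reviewed line by line, whereas the learned model's behaviour can only be assessed through verification and validation against data, which is precisely why the standards question arises.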

A major disadvantage of artificial intelligence, in particular machine learning based on neural networks, is the complexity of the algorithms. This makes them highly non-transparent, hence their designation of black-box AI (see Figure 2). The complex nature of AI algorithms not only concerns their mathematical description but also, in the case of deep-learning algorithms, their high level of dimensionality and abstraction. For these classes of AI, the extent to which input information contributes to a specific decision is mostly impossible to determine. This is why AI is often referred to as black box AI. Can we trust the prediction of the AI system in such a case and, in a worst-case scenario, can we identify a failure of the system or a misdiagnosis?

A world-famous example of the result of a black-box AI was the match between AlphaGo, the artificial intelligence system made by DeepMind (Google), and the Go world champion, Lee Sedol. In the match, which was watched by an audience of 60 million including experts, move 37 showed the significance of these particular artificial intelligence characteristics. The experts described the move as a mistake, predicting that AlphaGo would lose the match, since in their opinion the move made no sense at all. In fact, they went even further and said, "It's not a human move. I've never seen a human play this move."9

None of them understood the level of creativity behind AlphaGo's move, which proved to be critical for winning the match. While understanding the decision made by the artificial intelligence system would certainly not change the outcome of the match, it still shows the significance of the explainability and transparency of artificial intelligence, particularly in the medical field. AlphaGo could also have been wrong!

One example of AI with an intended medical use was the application of artificial intelligence for determining a patient's risk of pneumonia. This example shows the risk of black-box AI in the medtech sector: the system in question surprisingly identified the high-risk patients as non-significant.10 Rich Caruana, one of the leading AI experts at Microsoft, who was also one of the developers of the system, advised against the use of the artificial intelligence he had developed: "I said no. I said we don't understand what it does inside. I said I was afraid."11

In this context, it is important to note that open or explainable artificial intelligence, also referred to as white-box AI, is by no means inferior to black-box AI. While there are no standard methods yet for opening the black box, there are promising approaches for ensuring the plausibility of the predictions made by AI models. Some approaches try to achieve explainability based on individual predictions on input data. Others, by contrast, try to limit the range of input pixels that impact the decisions of the artificial intelligence.12

Medical devices and their manufacturers must comply with further regulatory requirements in addition to the Medical Device Regulation (MDR) and the In-vitro Diagnostics Regulation (IVDR). The EU's General Data Protection Regulation (GDPR), for instance, is of particular relevance for the explainability of artificial intelligence. It describes the rules that apply to the processing of personal data and is aimed at ensuring their protection. Article 110 of the Medical Device Regulation (MDR) explicitly requires measures to be taken to protect personal data, referencing the predecessor of the General Data Protection Regulation.

AI systems which influence decisions that might concern an individual person must comply with the requirements of Articles 13, 22 and 35 of the GDPR.

"Where personal data are collected, the controller shall provide [...] the following information: [...] the existence of automated decision-making and, at least in those cases, meaningful information about the logic involved."13

In simple terms, this means that patients who are affected by automated decision-making must be able to understand this decision and have the possibility to take legal action against it. However, this is precisely the type of understanding which is not possible in the case of black-box AI. Is a medical product implemented as black-box AI eligible for certification as a medical device? The exact interpretation of the requirements specified in the General Data Protection Regulation is currently the subject of legal debate.14

The Medical Device Regulation places manufacturers under the obligation to ensure the safety of medical devices. Among other specifications, Annex I to the regulation includes requirements concerning the repeatability, reliability and performance of medical devices (both for stand-alone software and software embedded into a medical device):

Devices that incorporate electronic programmable systems, including software, shall be designed to ensure repeatability, reliability and performance in line with their intended use. (MDR Annex I, 17.1)15

Compliance with general safety and performance requirements can be demonstrated by utilizing harmonized standards. Adherence to a harmonized standard leads to the assumption of conformity, whereby the requirements of the regulation are deemed to be fulfilled. Manufacturers can thus validate artificial intelligence models in accordance with the ISO 13485:2016 standard, which, among other requirements, describes the processes for the validation of design and development in clause 7.3.7.

For machine learning, two independent sets of data must be considered. In the first step, one set of data is needed to train the AI model. Subsequently, another set of data is necessary to validate the model. Validation of the model should use independent data, and can also be performed by cross-validation, in the sense of a combined, rotating use of both data sets. However, it must be noted that AI models can only be validated using an independent data set. Now, which ratio is recommended for the two sets of data? This is not an easy question to answer without more detailed information about the characteristics of the AI model. A look at the published literature (state of the art) recommends a ratio of approximately 80% training data to approximately 20% validation data. However, the ratio being used depends on many factors and is not set in stone. The notified bodies will continue to monitor the state of the art in this area and, within the scope of conformity assessment, also request the reasons underlying the ratio used.
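As a minimal illustration of the split and cross-validation described above, the hedged sketch below uses scikit-learn with a synthetic placeholder dataset; the classifier and the exact 80/20 ratio are assumptions for demonstration, not a validated medical AI workflow.

```python
# Minimal sketch of the ~80/20 split and cross-validation discussed above.
# The synthetic dataset and classifier are placeholders, not a validated medical model.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Hold out an independent ~20% validation set; it must never be used for training.
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("hold-out accuracy:", model.score(X_val, y_val))

# Alternatively, 5-fold cross-validation reuses all data while keeping each fold's
# validation portion independent of the data used to fit that fold.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print("cross-validation accuracy per fold:", scores)
```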

Another important question concerns the number of data sets required. This is difficult to assess in general terms, as it depends on several factors:

Generally, the larger the amount of data, the better the model can be assumed to perform. In their publication on speech recognition, Banko and Brill from Microsoft state: "After throwing more than one billion words within context at the problem, any algorithm starts to perform incredibly well."16

At the other end of the scale, i.e. the minimum number of data sets required, computational learning theory offers approaches for estimating the lower threshold. However, general answers to this question are not yet known and these approaches are based on ideal assumptions and only valid for simple algorithms.

Manufacturers need to look not only at the amount of data, but also at the statistical distribution of both sets of data. To prevent bias, the data used for training and validation must represent the statistical distribution of the environment of application. Training with data that are not representative will result in bias. The U.S. healthcare system, for example, uses artificial intelligence algorithms to identify and help patients with complex health needs. However, it soon became evident that where patients had the same level of health risk, the model suggested African-American patients less often for enrolment in these special high-risk care management programs.17 Studies carried out by Obermeyer, et al. showed the cause for this to be racial bias in the training data. Bias in training data not only involves ethical and moral aspects that need to be considered by manufacturers: it can also affect the safety and performance of a medical device. Bias in training data could, for example, result in certain indications going undetected on fair skin.
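One simple, hedged check for the representativeness issue described above is to compare the distribution of a relevant attribute in the training data against the expected deployment population; the groups and proportions below are synthetic placeholders, not data from any real study.

```python
# Hedged sketch: a basic representativeness check comparing the distribution of an
# attribute in the training data against the intended target population.
# All numbers here are synthetic placeholders.
from collections import Counter

training_groups = ["A"] * 900 + ["B"] * 100          # what the model was trained on
target_population = {"A": 0.6, "B": 0.4}             # expected deployment mix

counts = Counter(training_groups)
total = sum(counts.values())
for group, expected in target_population.items():
    observed = counts[group] / total
    flag = "  <-- under-represented" if observed < 0.5 * expected else ""
    print(f"group {group}: {observed:.0%} of training data vs {expected:.0%} expected{flag}")
```

Checks like this do not remove bias by themselves, but they make an unrepresentative training set visible before it propagates into the model's behaviour.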

Many deep learning models rely on a supervised learning approach, in which AI models are trained using labelled data. In cases involving labelled data, the bottleneck is not the number of data, but the rate and accuracy at which data are labeled. This renders labeling a critical process in model development. At the same time, data labelling is error-prone and frequently subjective, as it is mostly done by humans. Humans also tend to make mistakes in repetitive tasks (such as labelling thousands of images).

Labeling of large data volumes and selection of suitable identifiers is a time- and cost-intensive process. In many cases, only a very small portion of the data is processed manually. These data are used to train an AI system, and the AI system is then instructed to label the remaining data itself, a process that is not always error-free, which in turn means that errors will be reproduced.7 Nevertheless, the performance of artificial intelligence combined with machine learning very much depends on data quality. This is where the accepted principle of garbage in, garbage out becomes evident. If a model is trained using data of inferior quality, the developer will also obtain a model of the same quality.

Other properties of artificial intelligence that manufacturers need to take into account are adversarial learning problems and instabilities of deep learning algorithms. Generally, the assumption in most machine learning algorithms is that training and test data are governed by identical distributions. However, this statistical assumption can be influenced by an adversary (i.e., an attacker that attempts to fool the model by providing deceptive input). Such attackers aim to destabilize the model and cause the AI to make false predictions. The introduction of certain adversarial patterns to the input data, invisible to the human eye, causes major errors of detection to be made by the AI system. In 2020, for example, the security company McAfee demonstrated its ability to trick Tesla's Mobileye EyeQ3 AI system into driving 80 km/h over the speed limit, simply by adding a 5 cm strip of black tape to a speed limit sign.24
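The following hedged sketch illustrates the general idea of an adversarial perturbation on a toy linear classifier using only NumPy; it is not the Mobileye attack or any medical-imaging attack, and the model and data are entirely synthetic.

```python
# Illustrative FGSM-style sketch (not the Mobileye or imaging attacks cited here):
# a small, structured perturbation flips a toy linear classifier's decision even
# though each input value changes only slightly. Model and data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=100)                         # weights of a toy linear "detector"
x = rng.normal(size=100)                         # a legitimate input
score = float(x @ w)
print("original score:", round(score, 3), "-> class", int(score > 0))

# Perturb every element slightly, in the worst-case direction for the model
epsilon = 1.1 * abs(score) / np.sum(np.abs(w))   # just enough to cross the decision boundary
x_adv = x - epsilon * np.sign(w) * np.sign(score)
adv_score = float(x_adv @ w)
print("perturbed score:", round(adv_score, 3), "-> class", int(adv_score > 0))
print("largest per-element change:", round(float(epsilon), 4))  # small relative to the input scale
```

The per-element change is tiny compared with the natural variation of the input, which is why such perturbations can remain invisible to a human observer while completely changing the model's output.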

AI methods used in the reconstruction of MRT and CT images have also proved unstable in practice time and again. A study investigating six of the most common AI methods used in the reconstruction of MRT and CT images proved these methods to be highly unstable. Even minor changes in the input images, invisible to the human eye, result in completely distorted reconstructed images.18 The distorted images included artifacts such as the removal of tissue structures, which might result in misdiagnosis. Such an attack may cause the artificial intelligence to reconstruct a tumor at a location where there is none in reality, or even to remove cancerous tissue from the real image. These artifacts are not present when manipulated images are reconstructed using conventional algorithms.18

Another vulnerability of artificial intelligence concerns image-scaling attacks, which have been known since 2019.19 Image-scaling attacks allow an attacker to manipulate the input data in such a way that machine learning models with image-scaling preprocessing come under the attacker's control. Xiao et al., for example, succeeded in manipulating the scaling routines of the well-known machine learning library TensorFlow in such a manner that attackers could even replace complete images.19 An example of such an image-scaling attack is shown in Figure 3: in the scaling operation, the image of a cat is replaced by an image of a dog. Image-scaling attacks are particularly critical because they can both distort the training of artificial intelligence and influence the decisions of artificial intelligence trained on manipulated images.
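The principle behind these attacks can be illustrated with a short sketch: if a resizing routine samples only a sparse grid of pixels, as simple nearest-neighbour scaling does, an attacker can overwrite exactly those pixels with a hidden target image while the full-resolution image remains visually almost unchanged. The arrays below merely stand in for the cat and dog images of the example; real scaling libraries use more elaborate sampling, so this is a simplified illustration of the idea, not a reproduction of the published attack.

```python
# Simplified illustration of an image-scaling attack with nearest-neighbour
# downscaling; the "images" are plain arrays standing in for cat and dog.
import numpy as np

def embed_target(source, target):
    """Overwrite the pixels of `source` that nearest-neighbour downscaling
    to `target.shape` will sample, so the scaled result shows `target`."""
    attack = source.copy()
    sh, sw = source.shape[:2]
    th, tw = target.shape[:2]
    rows = (np.arange(th) * sh) // th   # pixels the scaler will pick
    cols = (np.arange(tw) * sw) // tw
    attack[np.ix_(rows, cols)] = target
    return attack

def nearest_neighbour_downscale(img, th, tw):
    sh, sw = img.shape[:2]
    rows = (np.arange(th) * sh) // th
    cols = (np.arange(tw) * sw) // tw
    return img[np.ix_(rows, cols)]

source = np.zeros((512, 512), dtype=np.uint8)     # stands in for the "cat"
target = np.full((64, 64), 255, dtype=np.uint8)   # hidden "dog"
attack = embed_target(source, target)
# Only ~1.6% of the pixels change, yet the downscaled image is the target.
assert np.array_equal(nearest_neighbour_downscale(attack, 64, 64), target)
```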

Adversarial attacks and stability issues pose significant threats to the safety and performance of medical devices incorporating artificial intelligence. Especially concerning is the fact that when and where such attacks might occur is difficult to predict, and the response of the AI to an adversarial attack is equally difficult to specify. If a conventional surgical robot is attacked, it can still rely on other sensors; changing the policy of the AI in a surgical robot, however, might lead to unpredictable behavior and thus to responses of the system that are catastrophic from a human perspective. Methods to address these vulnerabilities and reduce susceptibility to errors do exist. The models can, for example, be hardened through dedicated training: defense techniques such as adversarial training and defensive distillation have already been applied successfully in image reconstruction algorithms.21 Further methods include human-in-the-loop approaches, as human performance is highly robust against adversarial attacks targeting AI systems. However, this approach is only applicable where a human can be directly involved.25
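As an illustration of the adversarial training mentioned above, the following sketch performs one training step on FGSM-perturbed inputs so that the model also sees worst-case examples during training; the model, optimizer and epsilon are assumed placeholders rather than a validated defense.

```python
# Minimal adversarial-training sketch: one update step on perturbed inputs.
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, epsilon=0.01):
    """Craft an FGSM-perturbed batch, then run a standard supervised update
    on it, so the model learns to classify the perturbed inputs correctly."""
    # Craft the perturbed batch.
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()
    # Standard supervised update on the perturbed batch.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice, such a step is usually mixed with training on clean data, and robustness against one attack does not guarantee robustness against others.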

Although many medical devices using artificial intelligence have already been approved, the regulatory pathways in the medtech sector are still taking shape. At present, no laws, common specifications or harmonized standards exist that specifically regulate the use of AI in medical devices. In contrast to the EU authorities, the FDA published a discussion paper on a proposed regulatory framework for artificial intelligence in medical devices in 2019. The document is based on the principles of risk management, software change management, guidance on the clinical evaluation of software and a best-practice approach to the software lifecycle.20 In 2021, the FDA published its action plan on furthering AI in medical devices. The action plan consists of five next steps, the foremost being to develop a regulatory framework explicitly for change control of AI, good machine learning practice, and new methods to evaluate algorithm bias and robustness.26

In 2020, the European Union also published a position paper on the regulation of artificial intelligence and medical devices. The EU is currently working on future regulation, with a first draft expected in 2021.

China's National Medical Products Administration (NMPA) has published the guidance document "Technical Guiding Principles of Real-World Data for Clinical Evaluation of Medical Devices". It specifies obligations concerning requirements analysis, data collection and processing, model definition, verification and validation, as well as post-market surveillance.

Japan's Ministry of Health, Labour and Welfare is working on a regional standard for artificial intelligence in medical devices; to date, however, this standard is available in Japanese only. Key points of assessment are plasticity, the predictability of models, the quality of data and the degree of autonomy.27

In Germany, the Notified Bodies have developed their own guidance for artificial intelligence. The guidance document was prepared by the Interest Group of the Notified Bodies for Medical Devices in Germany (IG-NB) and is aimed at providing guidance to Notified Bodies, manufacturers and interested third parties. The guidance follows the principle that the safety of AI-based medical devices can only be achieved by means of a process-focused approach that covers all relevant processes throughout the whole life cycle of a medical device. Consequently, the guidance does not define specific requirements for products, but for processes.

The World Health Organization, too, is currently working on a guideline addressing artificial intelligence in health care.

Artificial intelligence is already used in the medtech sector, albeit still somewhat sporadically; at the same time, the number of AI algorithms certified as medical devices has increased significantly in recent years.28 Artificial intelligence is expected to play a significant role in all stages of patient care. According to the requirements defined in the Medical Device Regulation, any medical device, including those incorporating AI, must be designed in such a way as to ensure repeatability, reliability and performance in line with its intended use. In the event of a single fault condition, the manufacturer must implement measures to minimize unacceptable risks and the reduction of the performance of the medical device (MDR Annex I, 17.1). This, however, requires verification and validation of the AI model.

Many of the AI models used are black-box models; in other words, there is no transparency in how they arrive at their decisions. This poses a problem for the interpretability and trustworthiness of such systems: without transparent and explainable AI predictions, the medical validity of a decision may be called into doubt, and current errors of AI in pre-clinical applications may fuel such doubts further. Explainable and comprehensible AI decisions are a prerequisite for the safe use of AI on actual patients. This is the only way to establish trust and maintain it in the long term.
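One common, if limited, way to make such decisions more inspectable is a gradient-based saliency map, sketched below for an image classifier. The model and input shape are assumptions, and saliency maps are only one of many explainability techniques; on their own they do not make a black-box model clinically trustworthy.

```python
# Minimal sketch of a gradient-based saliency map for an image classifier;
# assumes x has shape (1, channels, height, width) and model returns logits.
import torch

def saliency_map(model, x, target_class):
    """Return |d score / d pixel| for the chosen class, highlighting the
    image regions that most influence the model's decision."""
    model.eval()
    x = x.clone().detach().requires_grad_(True)
    score = model(x)[0, target_class]
    score.backward()
    return x.grad.abs().max(dim=1)[0]  # collapse colour channels
```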

The General Data Protection Regulation demands a high level of protection of personal data. Its strict legal requirements also apply to the processing of sensitive health data in the development or verification of artificial intelligence.

Adversarial attacks aim at influencing artificial intelligence, both during the training of the model and in the classification decision. These risks must be kept under control by taking suitable measures.

Impartiality and fairness are important, safety-relevant, moral and ethical aspects of artificial intelligence. To safeguard these aspects, experts must take steps to prevent bias when training the system.

Another important question concerns responsibility and accountability. Medical errors made by human doctors can generally be traced back to the individuals involved, who can be held accountable if necessary. If artificial intelligence makes a mistake, however, the lines of responsibility become blurred. For medical devices, on the other hand, the answer is straightforward: the legal manufacturer of a medical device incorporating artificial intelligence must ensure the safety and security of the device and assume liability for possible damage.

Regulation of artificial intelligence is likewise still at an early stage, with various approaches under development. All major regulators around the globe have defined, or are starting to define, requirements for artificial intelligence in medical devices. A high level of safety in medical devices will only be possible with suitable measures in place to regulate and control artificial intelligence, but such measures must not impair the development of technical innovation.

Excerpt from:
AI in MedTech: Risks and Opportunities of Innovative Technologies in Medical Applications - MedTech Intelligence