Archive for the ‘Machine Learning’ Category

Tesla wants to take machine learning silicon to the Dojo – The Register

To quench the thirst for ever-larger AI and machine learning models, Tesla revealed a wealth of details about its fully custom supercomputing architecture, called Dojo, at Hot Chips 34.

The system is essentially a massive composable supercomputer, although unlike what we see on the Top 500, it's built from an entirely custom architecture that spans the compute, networking, and input/output (I/O) silicon to instruction set architecture (ISA), power delivery, packaging, and cooling. All of it was done with the express purpose of running tailored, specific machine learning training algorithms at scale.

"Real world data processing is only feasible through machine learning techniques, be it natural-language processing, driving in streets that are made for human vision, or robotics interfacing with the everyday environment," Ganesh Venkataramanan, senior director of hardware engineering at Tesla, said during his keynote speech.

However, he argued that traditional methods for scaling distributed workloads have failed to accelerate at the rate necessary to keep up with machine learning's demands. In effect, Moore's Law is not cutting it, and neither are the systems available for AI/ML training at scale, namely some combination of CPUs and GPUs or, in rarer circumstances, specialty AI accelerators.

"Traditionally we build chips, we put them on packages, packages go on PCBs, which go into systems. Systems go into racks," said Venkataramanan. The problem is each time data moves from the chip to the package and off the package, it incurs a latency and bandwidth penalty.

So to get around the limitations, Venkataramanan and his team started over from scratch.

"Right from my interview with Elon, he asked me what can you do that is different from CPUs and GPUs for AI. I feel that the whole team is still answering that question."

Tesla's Dojo Training Tile

This led to the development of the Dojo training tile, a self-contained compute cluster occupying a half-cubic foot capable of 556 TFLOPS of FP32 performance in a 15kW liquid-cooled package.

Each tile is equipped with 11GB of SRAM and is connected over a 9TB/s fabric using a custom transport protocol throughout the entire stack.

"This training tile represents unparalleled amounts of integration from compute to memory to power delivery, to communication, without requiring any additional switches," Venkataramanan said.

At the heart of the training tile is Tesla's D1, a 50-billion-transistor die based on TSMC's 7nm process. Tesla says each D1 is capable of 22 TFLOPS of FP32 performance at a TDP of 400W, and notes that the chip can run a wide range of floating-point formats, including a few custom ones.

Tesla's Dojo D1 die

"If you compare transistors per square millimeter, this is probably the bleeding edge of anything which is out there," Venkataramanan said.

Tesla then took 25 D1s, binned them for known good dies, and then packaged them using TSMC's system-on-wafer technology to "achieve a huge amount of compute integration at very low latency and very-high bandwidth," he said.
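As a rough sanity check, the per-tile compute figure quoted earlier follows from these per-die numbers (the small gap to the quoted 556 TFLOPS suggests the 22 TFLOPS per-die figure is rounded):

```python
# Cross-check: 25 D1 dies per training tile at ~22 FP32 TFLOPS each.
dies_per_tile = 25
tflops_per_die = 22  # rounded per-die figure from the talk

tile_tflops = dies_per_tile * tflops_per_die
print(tile_tflops)  # 550, close to the 556 TFLOPS quoted per tile
```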

However, the system-on-wafer design and vertically stacked architecture introduced challenges when it came to power delivery.

According to Venkataramanan, most accelerators today place power delivery directly adjacent to the silicon. While proven, this approach means a large area of the accelerator has to be dedicated to those components, which made it impractical for Dojo, he explained. Instead, Tesla designed its chips to deliver power directly through the bottom of the die.

"We could build an entire datacenter or an entire building out of this training tile, but the training tile is just the compute portion. We also need to feed it," Venkataramanan said.

Tesla's Dojo Interface Processor

For this, Tesla also developed the Dojo Interface Processor (DIP), which functions as a bridge between the host CPU and training processors. The DIP also serves as a source of shared high-bandwidth memory (HBM) and as a high-speed 400Gbit/sec NIC.

Each DIP features 32GB of HBM, and up to five of these cards can be connected to a training tile at 900GB/s each, for an aggregate of 4.5TB/s to the host and a total of 160GB of HBM per tile.
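The aggregate figures follow directly from the per-card numbers quoted above:

```python
# Per-tile DIP totals, from the per-card figures in Tesla's talk.
hbm_per_dip_gb = 32     # GB of HBM per DIP card
bw_per_dip_gbps = 900   # GB/s per DIP link to the tile
dips_per_tile = 5       # maximum cards per training tile

total_hbm_gb = dips_per_tile * hbm_per_dip_gb          # 160 GB per tile
total_bw_tbps = dips_per_tile * bw_per_dip_gbps / 1000 # 4.5 TB/s aggregate
print(total_hbm_gb, total_bw_tbps)
```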

Tesla's V1 configuration pairs six of these tiles, or 150 D1 dies, in an array supported by four host CPUs, each equipped with five DIP cards, to achieve a claimed exaflop of BF16 or CFP8 performance.

Tesla's V1 Arrangement

Put together, Venkataramanan says the architecture detailed in depth here by The Next Platform enables Tesla to overcome the limitations associated with traditional accelerators from the likes of Nvidia and AMD.

"How traditional accelerators work, typically you try to fit an entire model into each accelerator. Replicate it, and then flow the data through each of them," he said. "What happens if we have bigger and bigger models? These accelerators can fall flat because they run out of memory."

This isn't a new problem, he noted. Nvidia's NVSwitch, for example, enables memory to be pooled across large banks of GPUs. However, Venkataramanan argues this not only adds complexity, but introduces latency and compromises on bandwidth.

"We thought about this right from the get go. Our compute tiles and each of the dies were made for fitting big models," Venkataramanan said.

Such a specialized compute architecture demands a specialized software stack. However, Venkataramanan and his team recognized that programmability would either make or break Dojo.

"Ease of programmability for software counterparts is paramount when we design these systems," he said. "Researchers won't wait for your software folks to write a handwritten kernel for adapting to a new algorithm that we want to run."

To do this, Tesla ditched the idea of using kernels, and designed Dojo's architecture around compilers.

"What we did was we used PyTorch. We created an intermediate layer, which helps us parallelize to scale out hardware beneath it. Underneath everything is compiled code," he said. "This is the only way to create software stacks that are adaptable to all those future workloads."
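Loosely, the compiler-centric idea, capturing a graph of operations and lowering it to one compiled routine rather than hand-writing a kernel per operation, can be illustrated with a deliberately tiny sketch (purely illustrative; this is not Tesla's stack):

```python
# Toy "graph compiler": fuse a list of elementwise ops into one callable,
# instead of dispatching a separate hand-written kernel per op.
def compile_graph(ops):
    """Return a single fused function equivalent to applying ops in order."""
    def fused(x):
        for op in ops:
            x = op(x)
        return x
    return fused

# A small captured graph: y = (x * 2 + 3) ** 2
graph = [lambda x: x * 2, lambda x: x + 3, lambda x: x ** 2]
kernel = compile_graph(graph)
print(kernel(4))  # (4*2 + 3)**2 = 121
```

A real ML compiler additionally reasons about parallelization and memory placement across the hardware beneath it, but the contract is the same: researchers describe the model, and the stack emits the code.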

Despite the emphasis on software flexibility, Venkataramanan notes that the platform, which is currently running in their labs, is limited to Tesla use for the time being.

"We are focused on our internal customers first," he said. "Elon has made it public that over time, we will make this available to researchers, but we don't have a time frame for that."

See the original post here:
Tesla wants to take machine learning silicon to the Dojo - The Register

‘Machine Learning’ Predicts The Future With More Reliable Diagnostics – Nation World News

Headquarters of the Spanish National Research Council (CSIC).

A mammogram every two years for all women aged 50 to 69: since 1990, that has been the biggest screening challenge for the national health system, and it aims to prevent one of the most common cancers in Spain, breast cancer. The method is X-rays that detect potentially cancerous areas; if something suspicious is found, that test is followed by further tests, often with a high probability of false positives, which are harmful and costly.

Those drawbacks are the main reason why screening is limited to the highest-risk groups. By adding predictive algorithms to mammograms, the risk areas of a patient's breasts could be narrowed down and the reliability of diagnosis increased to 90 percent. Screenings could therefore be done more often, and the age range of the women they target could be expanded.

It is a process that already exists, uses artificial intelligence, and is being developed by a team at the Spanish National Research Council (CSIC), specifically at the Institute of Corpuscular Physics (IFIC). It is part of the field of machine learning applied to precision medicine, and of a research network that seeks to increase the efficiency with which each patient is treated and to optimize healthcare resources.

To understand how, you must first understand the concepts that come into play. The first is artificial intelligence: the ability of a computer or robot to perform tasks normally associated with intelligent beings, as defined by Sara Degli-Esposti and Carlos Sierra, authors of the CSIC white paper on the subject. That is, these are processes used to replace human work with machines, with the aim of doing it with greater accuracy and efficiency.

And where can artificial intelligence work in medicine today? On many fronts, replies Dolores del Castillo, a researcher at CSIC's Centre for Automation and Robotics: from administration to the management of clinical documentation and, more specifically, in image analysis or in the monitoring and follow-up of patients. And where are the biggest limits still? Above all, in the legal and ethical aspects of handling sensitive healthcare matters. And there is still a long way to go, explains Del Castillo, who works, among other projects, on neurological movement disorders and on training for a large section of healthcare workers.

The second concept is a subfield of artificial intelligence, with its own advantages and disadvantages: machine learning. That is, artificial intelligence that works through computers that find patterns in population groups. With those patterns, predictions are made about what is most likely to happen: machine learning translates data into algorithms.
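The "find patterns in groups, then predict" loop can be made concrete with a deliberately tiny, hypothetical example (all numbers invented for illustration): a one-dimensional nearest-centroid classifier that learns each group's average and assigns a new case to the closest group:

```python
# Hypothetical training data: (age, risk label observed in that group).
patients = [
    (45, "low"), (50, "low"), (52, "low"),
    (68, "high"), (72, "high"), (75, "high"),
]

def centroid(label):
    """The learned 'pattern' for a group: its average age."""
    ages = [age for age, lab in patients if lab == label]
    return sum(ages) / len(ages)

def predict(age):
    """Assign the label whose group average is closest (nearest centroid)."""
    return min(("low", "high"), key=lambda lab: abs(age - centroid(lab)))

print(predict(48), predict(70))  # a 48-year-old vs. a 70-year-old
```

Real systems learn far richer patterns from far more data, but the shape is the same: fit a model to known cases, then score new ones.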

Precision medicine to predict disease

And after artificial intelligence and machine learning there is a third concept: precision medicine. Medicine tailored to the person: their genes, their background, their lifestyle, their social environment. A model that must, first, be able to predict disease. Second, continues Francisco Albiol of IFIC, it must assess each patient, apply the best treatment based on clinical evidence, identify the most complex cases, and assess their inclusion in management programmes.

It makes sense for high-impact diseases, and does not make sense for minor ones; for example, distinguishing the flu from a cold in primary care, as the benefits would not compensate for the effort required.

The key to the use of artificial intelligence in medicine is also cost optimization, which is very important for public health. Spain's population grew from 42 to 47 million people between 2003 and 2022, an increase of more than 10 percent, and from 2005 to 2022 the average age of the population rose from 40 to 44. We are getting older and older.

Therefore, says Dolores del Castillo, the best-valued projects, and therefore those most likely to be funded, are those that incorporate artificial intelligence techniques to address the prevention, diagnosis and treatment of cardiovascular diseases, neurodegenerative diseases, cancer and obesity. There is also a special focus on personalized and home medicine, elderly care, and new drug development. The need for healthcare has been heightened by our demographics, and the aim should be to reduce and simplify those challenges with technology; that is what we are trying to do with machine learning, summarizes Albiol.

Albiol is one of the scientists who led a programme to improve breast cancer detection through algorithms. He argues, like other researchers, that if we combine machine learning with precision medicine, we should be talking about "4P medicine", which brings together four features: predictive, personalized, preventive and participatory.

That is because most purists confine precision medicine to the field of patient genetics and would not include approaches that take more characteristics into account. Those who do say we are talking about something much broader: applied to precision medicine, machine learning makes it possible to analyze large amounts of very different types of data (genomic, biochemical, social, medical imaging) and model them to offer an individual diagnosis and more precise, and thus more effective, treatment, summarizes researcher Lara Lloret Iglesias of the Institute of Physics of Cantabria.

Lloret is part of a network of scientists who, like Albiol and Del Castillo, are dedicated to projects on machine learning and precision medicine. One of them, developed by the team she leads together with fellow physicist Miriam Cobo Cano, is called Branyas, in honour of Spain's oldest woman, Maria Branyas, who managed to overcome Covid-19 at the age of 113. In it, they bring together the case histories of more than 3,000 elderly people, going far beyond genetics alone: machine learning establishes risk profiles for falling ill or dying as a result of the coronavirus. The data feed the analysis of three risk profiles: a sociodemographic one, a biological one and an extended biological one, which adds information on issues such as the intestinal microbiota, vaccination and immunity.

Precision Medicine, Cancer and Alzheimer's

Josep Lluís Arcos, of the Artificial Intelligence Research Institute (IIIA-CSIC), also explains that the common diseases linked to precision medicine are cancer and Alzheimer's, but his team has stood out with the Ictus project. Launched in the middle of the pandemic (which made things difficult, he admits), it has treated patients at Barcelona's Bellvitge Hospital who suffered strokes and, after the severe acute phase, have become long-term patients.

In particular, those with movement difficulties in one or both hands. They have run more than 700 sessions in which patients were asked to play the keyboard of an electronic piano. They then analyzed the finger movements against the results to see what the patterns of difficulty and improvement are. And the feedback from users has been particularly positive, because it is not just an exercise: it touches a very emotional part. The goal now is to expand it to hospitals in the United Kingdom.

And the future? I believe that the challenge for artificial intelligence in medicine is to incorporate research results into daily practice in a generalized way, replies Dolores del Castillo, but always without forgetting that it is the experts who have the last word. For that, doctors need to be able to rely on these systems and interact with them in the most natural and simple way, even helping with their design.

Lara Lloret believes we have to be able to build generalizable prediction systems, that is, models whose performance does not depend on irrelevant factors such as which machine the data was acquired on or how it was calibrated. Francisco Albiol focuses on a problem that will need a solution in the long run: at present, these technologies favour large hospitals over those in smaller cities and towns. Convenience and cost reduction also mean reaching everyone.


Read the rest here:
'Machine Learning' Predicts The Future With More Reliable Diagnostics - Nation World News

Wind – Machine learning and AI specialist Cognitive Business collaborates with Weatherquest on weather forecasts for offshore wind platform -…

WAVES, a data-driven tool that predicts with 99.9 percent accuracy the safest and most successful windows for crew transfers to offshore wind platforms, is the first technology of its kind and is already being used by RWE across its Robin Rigg and Rampion windfarms.

The collaboration has now seen RWE integrate Weatherquest's API into the already operational WAVES platform on Robin Rigg, working alongside other forecast data to enable in-day and week-ahead O&M decision support for turbine-specific accessibility.

"The integration of WAVES with Weatherquest's API allows us to develop our unique technology yet further to make it an even more trusted tool for windfarm owners and operators to plan and schedule their O&M programmes," said Cognitive Business MD Ty Burridge Oakland, speaking about the upgrades to the WAVES technology. "WAVES is already a hugely accurate and relied-upon technology in the industry for effectively, efficiently and safely deploying crews onto windfarms to conduct repairs and maintenance, and by integrating weather forecast data, we can confidently say we have made an already highly valued technology an even more robust tool for managing and planning offshore wind repair and maintenance programmes."

Developed in 2020 by Nottingham- and London-based Cognitive Business, WAVES was funded in the same year by the Offshore Wind Growth Partnership to better predict safer and more successful windows for crew transfers to offshore wind platforms.

Steve Dorling, Chief Executive at Weatherquest, added that WAVES has developed a reputation within the offshore wind industry, over a number of years, for enabling owners and operators to deploy their crews with real accuracy and has been working to great effect on some of the UK's largest windfarms.

"It therefore made absolute sense for us both, as data analysis businesses focused on supporting safety and productivity, to combine our expertise in this innovative way," said Mr Dorling. "It's great that we can further enhance the WAVES technology together in a market where it is already a trusted technology for identifying optimal windows for offshore wind crew transfers."

Cognitive Business is an industry leader in machine learning and applied AI, developing a wide range of decision support, performance monitoring, and predictive maintenance solutions for offshore wind operations and maintenance applications.

Weatherquest is a privately owned weather forecasting and weather analysis company headquartered at the University of East Anglia providing weather forecasting support services across sectors in the UK and Northern Europe including onshore and offshore wind energy and ports.

For additional information:

Cognitive Business

Weatherquest

Read more:
Wind - Machine learning and AI specialist Cognitive Business collaborates with Weatherquest on weather forecasts for offshore wind platform -...

Best Machine Learning Books to Read This Year [2022 List] – CIO Insight


Machine learning (ML) books are a valuable resource for IT professionals looking to expand their ML skills or pursue a career in machine learning. In turn, this expertise helps organizations automate and optimize their processes and make data-driven decisions. Machine learning books can help ML engineers learn a new skill or brush up on old ones.

Beginners and seasoned experts alike can benefit from adding machine learning books to their reading lists, though the right book depends on the learner's goals. Some books serve as an entry point to the world of machine learning, while others build on existing knowledge.

The books in this list are roughly ranked in order of difficulty; beginners should avoid pursuing the books toward the end until they've mastered the concepts introduced in the books at the top of the list.

Machine Learning for Absolute Beginners is an excellent introduction to the machine learning field of study. It's a clear and concise overview of the high-level concepts that drive machine learning, so it's ideal for beginners. The e-book format has free downloadable resources, code exercises, and video tutorials to satisfy a variety of learning styles.

Readers will learn the basic ML libraries and other tools needed to build their first model. In addition, this book covers data scrubbing techniques, data preparation, regression analysis, clustering, and bias/variance. This book may be a bit too basic for readers who are interested in learning more about coding, deep learning, or other advanced skills.

As the name implies, The Hundred-Page Machine Learning Book provides a brief overview of machine learning and the mathematics involved. It's suitable for beginners, but some knowledge of probability, statistics, and applied mathematics will help readers get through the material faster.

The book covers a broad range of ML topics at a high level and focuses on the aspects of ML that are of significant practical value. These include:

Several reviewers said that the text explains complicated topics in a way that is easy for most readers to understand. It doesn't dive into any one topic too deeply, but it provides several practice exercises and links to other resources for further reading.

Introduction to Machine Learning with Python is a starting point for aspiring data scientists who want to learn about machine learning through Python frameworks. It doesn't require any prior knowledge of machine learning or Python, though familiarity with the NumPy and matplotlib libraries will enhance the learning experience.

In this book, readers will gain a foundational understanding of machine learning concepts and the benefits and drawbacks of using standard ML algorithms. It also explains how all of the algorithms behind various Python libraries fit together in a way that's easy to understand for even the most novice learners.

Python Machine Learning by Example builds on existing machine learning knowledge for engineers who want to dive deeper into Python programming. Each chapter demonstrates the practical application of common Python ML skills through concrete examples. These skills include:

This book walks through each problem with a step-by-step guide for implementing the right Python technique. Readers should have prior knowledge of both machine learning and Python, and some reviewers recommended supplementing this guide with more theoretical reference materials for advanced comprehension.

Hands-on Machine Learning with Scikit-Learn, Keras & TensorFlow provides a practical introduction to machine learning with a focus on three Python frameworks. Readers will gain an understanding of numerous machine learning concepts and techniques, including linear regression, neural networks, and deep learning. Then, readers can apply what they learn to practical exercises throughout the book.

Though this book is marketed toward beginners, some reviewers said it requires a basic understanding of machine learning principles. With this in mind, it may be better suited for readers who want to refresh their existing knowledge through concrete examples.

Machine Learning for Hackers is written for experienced programmers who want to maximize the impact of their data. The text builds on existing knowledge of the R programming language to create basic machine learning algorithms and analyze datasets.

Each chapter walks through a different machine learning challenge to illustrate various concepts. These include:

This book is best suited for intermediate learners who are fluent in R and want to learn more about the practical applications of machine learning code. Students looking to delve into machine learning theory should opt for a more advanced book like Deep Learning, Hands-on Machine Learning, or Mathematics for Machine Learning.

Pattern Recognition and Machine Learning is an excellent reference for understanding statistical methods in machine learning. It provides practical exercises to introduce the reader to comprehensive pattern recognition techniques.

The text is broken into chapters that cover the following concepts:

Readers should have a thorough understanding of linear algebra and multivariable calculus, so it may be too advanced for beginners. Familiarity with basic probability theory, decision theory, and information theory will make the material easier to understand as well.

Mathematics for Machine Learning teaches the fundamental mathematical concepts necessary for machine learning. These topics include:

Some reviewers said this book leans more into mathematical theorems than practical application, so it's not recommended for those without prior experience in applied mathematics. However, it's one of the few resources that bridge the gap between mathematics and machine learning, so it's a worthwhile investment for intermediate learners.

For advanced learners, Deep Learning covers the mathematics and concepts that power deep learning, a subset of machine learning that makes human-like decisions. This book walks through deep learning computations, techniques, and research including:

There are about 30 pages that cover practical applications of deep learning like computer vision and natural language processing, but the majority of the book deals with the theory behind deep learning. With this in mind, readers should have a working knowledge of machine learning concepts before delving into this text.

Read next: Ultimate Machine Learning Certification Guide

Go here to see the original:
Best Machine Learning Books to Read This Year [2022 List] - CIO Insight

Deep learning pioneer Geoffrey Hinton receives prestigious Royal Medal from the Royal Society – University of Toronto

The University of Toronto's Geoffrey Hinton has been honoured with the Royal Society's prestigious Royal Medal for his pioneering work in deep learning, a field of artificial intelligence that mimics the way humans acquire certain types of knowledge.

The U.K.'s national academy of sciences said it is recognizing Hinton, a University Professor Emeritus in the department of computer science in the Faculty of Arts & Science, for "pioneering work on algorithms that learn distributed representations in artificial neural networks and their application to speech and vision, leading to a transformation of the international information technology industry."

It's the latest in a long list of accolades for Hinton, who is also chief scientific adviser at the Vector Institute for Artificial Intelligence and a vice-president and engineering fellow at Google. Others include the Association for Computing Machinery's A.M. Turing Award, widely considered the Nobel Prize of computing.

"It is a great honour to receive the Royal Medal, a medal previously awarded to intellectual giants like Darwin, Faraday, Boole and G.I. Taylor," Hinton says.

"But unlike them, my success was the result of recruiting and nurturing an extraordinarily talented set of graduate students and post-docs who were responsible for many of the breakthroughs in deep learning that revolutionized artificial intelligence over the last 15 years."

Royal Medals have been awarded annually since 1826 for advancements in the physical and biological sciences. A third medal, for applied sciences, has been awarded since 1965.

Previous U of T winners of the Royal Medal include Anthony Pawson and Nobel Prize-winner John Polanyi.

Hinton, meanwhile, has been a Fellow of the Royal Society since 1998 and a Fellow of the Royal Society of Canada since 1996.

"The Royal Medal is one of the most significant acknowledgements of an individual's research and career," says Melanie Woodin, dean of the Faculty of Arts & Science. "And Professor Hinton is truly deserving of the distinction for his foundational research and for the exceptional contribution he's made toward shaping the modern world and the future. I am thrilled to congratulate him on this award."

"I want to congratulate Geoff on this spectacular achievement," adds Eyal de Lara, chair of the department of computer science. "We are very proud of the seminal contributions he has made to the field of computer science, which are fundamentally reshaping our discipline and impacting society at large."

Deep learning is a type of machine learning that relies on a neural network modelled on the network of neurons in the human brain. In 1986, Hinton and his collaborators developed a breakthrough approach based on the backpropagation algorithm, a central mechanism by which artificial neural networks learn, which would realize the promise of neural networks and form the current foundation of that technology.
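The core of backpropagation is applying the chain rule to push the output error backwards through the network and nudge every weight downhill. The toy sketch below (a hypothetical 2-2-1 sigmoid network trained on XOR in plain Python, not Hinton's original experiment) shows the full forward/backward loop:

```python
# Minimal backpropagation demo: a 2-input, 2-hidden, 1-output sigmoid net.
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Randomly initialized weights: input->hidden and hidden->output (plus biases).
w_h = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b_h = [0.0, 0.0]
w_o = [random.uniform(-1, 1) for _ in range(2)]
b_o = 0.0

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR

def forward(x):
    h = [sigmoid(w_h[j][0] * x[0] + w_h[j][1] * x[1] + b_h[j]) for j in range(2)]
    y = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + b_o)
    return h, y

def epoch(lr=2.0):
    """One pass over the data; returns the summed squared error."""
    global b_o
    total = 0.0
    for x, t in data:
        h, y = forward(x)
        total += 0.5 * (y - t) ** 2
        # Backward pass: chain rule from the output error to each weight.
        d_y = (y - t) * y * (1 - y)                      # output delta
        d_h = [d_y * w_o[j] * h[j] * (1 - h[j]) for j in range(2)]  # hidden deltas
        for j in range(2):                               # hidden->output updates
            w_o[j] -= lr * d_y * h[j]
        b_o -= lr * d_y
        for j in range(2):                               # input->hidden updates
            for i in range(2):
                w_h[j][i] -= lr * d_h[j] * x[i]
            b_h[j] -= lr * d_h[j]
    return total

first = epoch()
for _ in range(2000):
    last = epoch()
print(first, last)  # training loss should fall from its initial value
```

The same deltas-flowing-backwards pattern scales, with better optimizers and architectures, to the speech and vision systems the award citation describes.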

Hinton and his colleagues in Toronto built on that initial work with a number of critical developments that enhanced the potential of AI and helped usher in today's revolution in deep learning, with applications in speech and image recognition, self-driving vehicles, automated diagnosis of images and language, and more.

I believe that the spectacular recent progress in large language models, image generation and protein structure prediction is evidence that the deep learning revolution has only just started, Hinton says.

See the original post here:
Deep learning pioneer Geoffrey Hinton receives prestigious Royal Medal from the Royal Society - University of Toronto