How AI is reinventing what computers are – MIT Technology Review

Fall 2021: the season of pumpkins, pecan pies, and peachy new phones. Every year, right on cue, Apple, Samsung, Google, and others drop their latest releases. These fixtures in the consumer tech calendar no longer inspire the surprise and wonder of those heady early days. But behind all the marketing glitz, there's something remarkable going on.

Google's latest offering, the Pixel 6, is the first phone to have a separate chip dedicated to AI that sits alongside its standard processor. And the chip that runs the iPhone has for the last couple of years contained what Apple calls a neural engine, also dedicated to AI. Both chips are better suited to the types of computations involved in training and running machine-learning models on our devices, such as the AI that powers your camera. Almost without our noticing, AI has become part of our day-to-day lives. And it's changing how we think about computing.

What does that mean? Well, computers haven't changed much in 40 or 50 years. They're smaller and faster, but they're still boxes with processors that run instructions from humans. AI changes that on at least three fronts: how computers are made, how they're programmed, and how they're used. Ultimately, it will change what they are for.

"The core of computing is changing from number-crunching to decision-making," says Pradeep Dubey, director of the parallel computing lab at Intel. Or, as MIT CSAIL director Daniela Rus puts it, AI is freeing computers from their boxes.

The first change concerns how computers, and the chips that control them, are made. Traditional computing gains came as machines got faster at carrying out one calculation after another. For decades the world benefited from chip speed-ups that came with metronomic regularity as chipmakers kept up with Moore's Law.

But the deep-learning models that make current AI applications work require a different approach: they need vast numbers of less precise calculations to be carried out all at the same time. That means a new type of chip is required: one that can move data around as quickly as possible, making sure it's available when and where it's needed. When deep learning exploded onto the scene a decade or so ago, there were already specialty computer chips available that were pretty good at this: graphics processing units, or GPUs, which were designed to display an entire screenful of pixels dozens of times a second.
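To make that concrete, here is a minimal Python sketch (with made-up layer sizes, chosen only for illustration) of what a single neural-network layer actually computes: one large matrix multiplication, which is millions of independent multiply-adds that can all happen at the same time.

```python
# Why deep learning favors parallel hardware: a single neural-network layer
# is one big matrix multiplication, i.e. millions of independent multiply-adds.
# Sizes below are illustrative assumptions, not any particular model's.
import numpy as np

batch = 64          # inputs processed together
inputs = 1024       # features per input
outputs = 4096      # neurons in the layer

x = np.random.rand(batch, inputs).astype(np.float32)    # activations
w = np.random.rand(inputs, outputs).astype(np.float32)  # learned weights

y = x @ w  # one layer's forward pass

# Each of these multiply-adds is independent of the others, so an AI chip can
# spread them across thousands of simple arithmetic units at once, while a
# traditional CPU core works through far fewer at a time.
print(f"independent multiply-adds in this one layer: {batch * inputs * outputs:,}")
```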

Anything can become a computer. Indeed, most household objects, from toothbrushes to light switches to doorbells, already come in a smart version.

Now chipmakers like Intel and Arm, as well as Nvidia, which supplied many of the first GPUs, are pivoting to make hardware tailored specifically for AI. Google and Facebook are also forcing their way into this industry for the first time, in a race to find an AI edge through hardware.

For example, the chip inside the Pixel 6 is a new mobile version of Google's tensor processing unit, or TPU. Unlike traditional chips, which are geared toward ultrafast, precise calculations, TPUs are designed for the high-volume but low-precision calculations required by neural networks. Google has used these chips in-house since 2015: they process people's photos and natural-language search queries. Google's sister company DeepMind uses them to train its AIs.
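A rough sense of what "high-volume but low-precision" buys can be had in a few lines of Python. The sketch below redoes the same matrix multiplication at half precision (NumPy has no bfloat16, the format TPUs favor, so float16 stands in; the sizes are assumptions). The answer is slightly less exact, but neural networks, which already tolerate noisy data, barely notice, and every value takes half the memory and bandwidth.

```python
# Illustration of the low-precision trade-off that AI accelerators exploit.
# float16 stands in for bfloat16 here; matrix sizes are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((512, 512)).astype(np.float32)
b = rng.standard_normal((512, 512)).astype(np.float32)

full = a @ b                                          # 32-bit result
low = a.astype(np.float16) @ b.astype(np.float16)     # same work, half precision

# The low-precision answer differs only slightly from the full-precision one.
relative_error = np.abs(full - low.astype(np.float32)).mean() / np.abs(full).mean()
print(f"mean relative error at half precision: {relative_error:.4%}")
```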

In the last couple of years, Google has made TPUs available to other companies, and these chips, as well as similar ones being developed by others, are becoming the default inside the world's data centers.

AI is even helping to design its own computing infrastructure. In 2020, Google used a reinforcement-learning algorithm, a type of AI that learns how to solve a task through trial and error, to design the layout of a new TPU. The AI eventually came up with strange new designs that no human would think of, but they worked. This kind of AI could one day develop better, more efficient chips.
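The flavor of that trial-and-error loop can be shown with a toy sketch: place a handful of blocks on a grid, score each attempt by how much wiring it would need, and keep whatever the feedback favors. This is only an illustration of learning from a reward signal; Google's actual floorplanning work uses a far more sophisticated reinforcement-learning agent, and the block names and grid below are invented.

```python
# Toy trial-and-error layout search: not Google's method, just the general idea
# of proposing designs and letting a score decide which ones survive.
import itertools
import random

BLOCKS = ["cpu", "cache", "io", "dram"]                 # hypothetical blocks
GRID = [(x, y) for x in range(4) for y in range(4)]     # 4x4 placement grid

def wire_length(placement):
    # Score (lower is better): total Manhattan distance between every pair.
    coords = list(placement.values())
    return sum(abs(ax - bx) + abs(ay - by)
               for (ax, ay), (bx, by) in itertools.combinations(coords, 2))

best, best_cost = None, float("inf")
for _ in range(1000):                                    # trial ...
    placement = dict(zip(BLOCKS, random.sample(GRID, len(BLOCKS))))
    cost = wire_length(placement)                        # ... and error
    if cost < best_cost:                                 # keep what scores best
        best, best_cost = placement, cost

print(best_cost, best)
```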

The second change concerns how computers are told what to do. "For the past 40 years we have been programming computers; for the next 40 we will be training them," says Chris Bishop, head of Microsoft Research in the UK.

Traditionally, to get a computer to do something like recognize speech or identify objects in an image, programmers first had to come up with rules for the computer.

With machine learning, programmers no longer write rules. Instead, they create a neural network that learns those rules for itself. It's a fundamentally different way of thinking.
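The two approaches can be put side by side on a deliberately simple task (separating points above and below a line, a stand-in for speech or image recognition). The sketch below is an assumed toy example, not production code: first a hand-written rule, then a tiny model that learns the same rule from labeled examples.

```python
# Programming versus training, in miniature.
import numpy as np

# 1) The traditional approach: a programmer writes the rule explicitly.
def classify_by_rule(point):
    x, y = point
    return 1 if y > x else 0

# 2) The machine-learning approach: a tiny logistic-regression "network"
#    adjusts its weights until its outputs match labeled examples.
rng = np.random.default_rng(1)
points = rng.uniform(-1, 1, size=(200, 2))
labels = (points[:, 1] > points[:, 0]).astype(float)    # training examples

weights, bias = np.zeros(2), 0.0
for _ in range(500):                                     # training loop
    preds = 1 / (1 + np.exp(-(points @ weights + bias)))
    error = preds - labels
    weights -= 0.1 * points.T @ error / len(points)      # gradient descent step
    bias -= 0.1 * error.mean()

learned = (1 / (1 + np.exp(-(points @ weights + bias))) > 0.5).astype(float)
print("learned rule agrees with the hand-written one on",
      f"{(learned == labels).mean():.0%} of the examples")
```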
