Mary Shelley's 1818 novel Frankenstein, the urtext for science fiction, is all about creating artificial life. And Fritz Lang's seminal 1927 film Metropolis established an astonishing number of fantasy-horror tropes with its Maschinenmensch, the machine-human robot that wreaks murderous chaos.
Actually creating AI, however, remained firmly in the realm of science fiction until the advent of the first digital computers soon after the end of the Second World War. Central to this story is Alan Turing, the brilliant British mathematician best known for his work cracking Nazi ciphers at Bletchley Park. Though his code-breaking work was vital for the Allied war effort, Turing deserves to be at least as well known for his work on the development of computers and AI.
While studying for his PhD in the 1930s, he produced a design for a mathematical device now known as a Turing machine, providing a blueprint for computers that is still standard today. In 1948, Turing took a job at Manchester University to work on Britain's first computer, the so-called Manchester Baby. The advent of computers sparked a wave of curiosity about these 'electronic brains', which seemed to be capable of dazzling intellectual feats.
Turing apparently became frustrated by dogmatic arguments that intelligent machines were impossible and, in a 1950 article in the journal Mind, sought to settle the debate. He proposed a method for detecting a machine's ability to display intelligence, which he called the Imitation Game but which is now known as the Turing test. A human interrogator engages in conversations with another person and a machine, but the dialogue is conducted via teleprinter, so the interrogator doesn't know which is which. Turing argued that if a machine couldn't be reliably distinguished from a person through such a test, that machine should be considered intelligent.
At the same time, on the other side of the Atlantic, US academic John McCarthy had become interested in the possibility of intelligent machines. In 1955, while applying for funding for a scientific conference the following year, he coined the term 'artificial intelligence'.
McCarthy had grand expectations for his event: he thought that, having brought together researchers with relevant interests, AI would be developed within just a few weeks. In the event, they made little progress at the conference, but McCarthy's delegates gave birth to a new field, and an unbroken thread connects those scientists through their academic descendants down to today's AI.
At the end of the 1950s, only a handful of digital computers existed worldwide. Even so, McCarthy and his colleagues had by then constructed computer programs that could learn, solve problems, complete logic puzzles and play games. They assumed that progress would continue to be swift, particularly because computers were rapidly becoming faster and cheaper.
But momentum waned and, by the 1970s, research funding agencies had become frustrated by over-optimistic predictions of progress. Cuts followed, and AI acquired a poor reputation. A new wave of ideas prompted a decade of excitement in the 1980s but, once again, progress stalled and, once again, AI researchers were accused of overinflating expectations of breakthroughs.
Things really began to change this century with the development of a new class of deep-learning AI systems based on neural network technology, itself a very old idea. Animal brains and nervous systems comprise huge numbers of cells called neurons, connected to one another in vast networks: the human brain, for example, contains tens of billions of neurons, each of which has, on average, of the order of 7,000 connections. Each neuron recognises simple patterns in data received by its network connections, prompting it to communicate with its neighbours via electro-chemical signals.
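To make that picture a little more concrete, here is a minimal sketch in Python (not drawn from the article) of a single artificial neuron: it weighs up its incoming signals and 'fires' only when they add up to a strong enough pattern. The inputs and connection strengths below are invented purely for illustration.

```python
# A minimal sketch of a single artificial neuron: it sums its inputs,
# each scaled by a connection strength (weight), and fires if the total
# clears a threshold. All numbers here are made up for illustration.

def neuron(inputs, weights, bias):
    # Each incoming signal is multiplied by its connection strength and summed.
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    # A simple threshold decides whether the neuron signals its neighbours.
    return 1 if activation > 0 else 0

# Three incoming signals with hand-picked connection strengths.
print(neuron([0.9, 0.1, 0.4], weights=[0.7, -0.2, 0.5], bias=-0.5))  # fires: 1
print(neuron([0.1, 0.9, 0.1], weights=[0.7, -0.2, 0.5], bias=-0.5))  # silent: 0
```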
Human intelligence somehow arises from these interactions. In the 1940s, US researchers Warren McCulloch and Walter Pitts were struck by the idea that electrical circuits might simulate such systems, and the field of neural networks was born. Although they've been studied continuously since McCulloch and Pitts' proposal, it took further scientific advances to make neural networks a practical reality.
Notably, scientists had to work out how to train or configure networks. The required breakthroughs were delivered by British-born researcher Geoffrey Hinton and colleagues in the 1980s. This work prompted a short-lived flurry of interest in the field, but it died down when it became clear that computer technology of the time was not powerful enough to build useful neural networks.
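'Training', in essence, means adjusting the strengths of a network's connections until it gives the right answers on example data. The toy sketch below, again invented for illustration and far simpler than the techniques Hinton and colleagues developed for many-layered networks, nudges a single weight towards the value that best fits a handful of examples.

```python
# A toy illustration of what "training" means: nudge a connection weight,
# little by little, until the outputs match example data. (The data and
# single weight are invented; real systems adjust billions of weights.)

examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # input -> desired output
weight = 0.0          # start with no idea of the right connection strength
learning_rate = 0.05  # how big each nudge is

for _ in range(200):
    for x, target in examples:
        prediction = weight * x
        error = prediction - target
        # Nudge the weight in the direction that shrinks the error.
        weight -= learning_rate * error * x

print(round(weight, 3))  # settles close to 2.0, the rule behind the examples
```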
Come the new century, that situation changed: today we live in an age of abundant, cheap computer power and data, both of which are essential for building the deep-learning networks that underpin recent advances in AI.
Neural networks represent the core technology underpinning ChatGPT, the AI program released by OpenAI in November 2022. ChatGPT, the neural networks of which each comprise around a trillion components, immediately went viral, and is now used by hundreds of millions of people every day. Some of its success can be attributed to the fact that it feels exactly like the kind of AI we have seen in the movies. Using ChatGPT involves simply having a conversation with something that seems both knowledgeable and smart.
What its neural networks are doing, however, is quite basic. When you type something, ChatGPT simply tries to predict what text should appear next. To do this, it has been trained using vast amounts of data (including all of the text published on the world wide web). Somehow, those huge neural networks and data enable it to provide extraordinarily impressive responses, for all intents and purposes passing Turing's test.
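The idea of predicting what comes next can be illustrated with a deliberately tiny Python example, invented here rather than taken from the article: count which word tends to follow which in a scrap of text, then suggest the most common continuation. ChatGPT's networks learn far subtler statistical patterns from vastly more text, but the underlying task is the same.

```python
# A toy sketch of next-word prediction (nothing like ChatGPT's scale or its
# neural networks): count which word follows which in a tiny invented text,
# then predict the most likely continuation.

from collections import Counter, defaultdict

training_text = (
    "the cat sat on the mat the cat ate the fish "
    "the cat sat on the rug the dog sat on the rug"
)

follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    # Return the word most often seen after `word` in the training text.
    return follows[word].most_common(1)[0][0]

print(predict_next("cat"))  # 'sat' (seen twice, versus 'ate' once)
print(predict_next("the"))  # 'cat' (the most common continuation)
```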
The success of ChatGPT has brought to the fore a primal fear: that we might bring something to life and then lose control. This is the nightmare of Frankenstein, Metropolis and The Terminator. With the unnerving ability of ChatGPT, you might believe that such scenarios could be close at hand. However, though ChatGPT is remarkable, we shouldn't credit it with too much real intelligence. It is not actually a mind; it only tries to suggest text that might appear next.
It isn't wondering why you are asking it about curry recipes or the performance of Liverpool Football Club; in fact, it isn't wondering anything. It doesn't have any beliefs or desires, nor any purpose other than to predict words. ChatGPT is not going to crawl out of the computer and take over.
That doesn't mean, of course, that there are no potential dangers in AI. One of the most immediate is that ChatGPT or its like may be used to generate disinformation on an industrial scale to influence forthcoming US and UK elections. We also don't know the extent to which such systems acquire the countless human biases we all display, and which are likely evident in their training data. The program, after all, is doing its best to predict what we would write, so the large-scale adoption of this technology may essentially serve to hold up a mirror to our prejudices. We may not like what we see.
Michael Wooldridge is professor of computer science at the University of Oxford, and author of The Road to Conscious Machines: The Story of AI (Pelican, 2020)
This article was first published in the August 2023 issue of BBC History Magazine