The Discontents of Artificial Intelligence in 2022

Recent years have seen a boom in the use of Artificial Intelligence. This review essay is divided into two parts: Part I introduces contemporary AI, while Part II takes up its widespread and rapid adoption and the crises that have followed.

In recent years, Artificial Intelligence (AI) has flooded the world with applications far beyond the research laboratory. Ordinary users now encounter standard AI techniques in face recognition, keyboard suggestions, recommendations on Amazon, suggested accounts to follow on Twitter, image similarity search, and text translation. AI is also being applied in areas far removed from the ordinary user, such as radiological diagnostics, pharmaceutical drug development, and drone navigation. Unsurprisingly, artificial intelligence has become the buzzword of the day and is seen as a portal to the future.

The discipline of artificial intelligence is commonly traced to 1956, when John McCarthy and others conceived a summer research project aimed at simulating human intelligence. These pioneers worked under the premise that every aspect of learning or intelligence could be described so precisely that a machine could simulate it.

Although the objective was ambitious, pragmatic considerations meant that board games were often used to test artificial intelligence methods. Board games have precise rules that can be encoded in a computational framework, and playing them skilfully is regarded as a hallmark of intelligence.

A program called AlphaGo, developed by DeepMind, a Google-owned company, created a sensation by defeating the reigning world Go champion.

Garry Kasparov, then the world chess champion, was defeated by IBM's Deep Blue in a celebrated encounter between human and machine in 1997. Kasparov's defeat was unnerving because it breached a frontier: chess is traditionally thought of as a cerebral game. Even so, the notion that a machine could defeat the world champion at Go was considered an unlikely dream at the time, because the number of possible move sequences in Go is vastly larger than in chess, and Go is played on a much larger board.

Nevertheless, in 2016 AlphaGo made headlines by defeating the reigning world Go champion, Lee Sedol. Commentators celebrated this victory as the beginning of a new era in which machines would eventually surpass humans in intelligence.

The reality was completely different. By any measure, AlphaGo was a sophisticated tool, but it could not be considered intelligent. While it was able to pick the best move at any time, the software did not understand the reasoning behind its choices.

A key lesson of AI is that machines can be endowed with abilities previously possessed only by humans without being intelligent in the way sentient beings are. Arithmetic computation is a non-AI example: for most of history, multiplying two large numbers was a laborious task.

Logarithm tables had to be painstakingly produced, at great human effort, to make such calculations manageable. For decades now, even the simplest computer has performed them efficiently and reliably. Much the same can be said of virtually any routine human task that AI now automates.

With unprecedented advances in computing power and data availability, today's AI extends this pattern beyond simple, routine tasks to more sophisticated ones. Millions of people already use AI tools. AI is also starting to make headway in areas such as science and engineering, where domain knowledge is involved.

Healthcare is one area of universal relevance: AI tools can be used to assess a person's health, provide a diagnosis based on clinical data, or analyse large-scale study data. In more esoteric fields, artificial intelligence has recently been applied to highly complex problems such as protein folding and fluid dynamics. Such advances are expected to have a multitude of practical applications in the real world.

History

Much early AI work centred on symbolic reasoning: laying out a set of propositions and logically deducing their implications. This enterprise soon ran into trouble, however, since enumerating all the operative rules in a given problem context proved impossible.

A competing paradigm is connectionism, which aims to overcome the difficulty of describing rules explicitly by inferring them implicitly from data. An artificial neural network, loosely modelled on the properties of neurons and their connectivity in the brain, encodes what it has learned in the strengths (weights) of the connections between its units.
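
To make the connectionist picture concrete, here is a minimal sketch, in Python, of a single artificial neuron: it combines its inputs according to connection weights and squashes the result into a number between 0 and 1. All the numbers are invented; learning, in this picture, amounts to adjusting the weights from data.

```python
import numpy as np

def neuron(inputs, weights, bias):
    """A single artificial neuron: a weighted sum passed through a squashing function."""
    z = np.dot(weights, inputs) + bias      # strength-weighted combination of inputs
    return 1.0 / (1.0 + np.exp(-z))         # sigmoid "activation" between 0 and 1

# Hypothetical example: three input signals and hand-picked connection weights.
x = np.array([0.2, 0.9, -0.4])
w = np.array([1.5, -0.8, 0.3])
print(neuron(x, w, bias=0.1))
```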

Time and again, leading figures have claimed, on the strength of one paradigm or another, that a definitive solution to the problem of computational intelligence was imminent. The challenges proved far more complex, and the hype was typically followed by profound disillusionment and a significant reduction in funding for American academics, a period referred to as the AI winter.

DeepMind's recent successes are thus presented as an endorsement of its approach, one that could help society find answers to some of the world's most pressing and fundamental scientific problems. Readers interested in the critical concepts of AI, the background of the field, and its boom-and-bust cycles may turn to two recently published popular expositions written by long-term researchers.

These are Melanie Mitchell's Artificial Intelligence: A Guide for Thinking Humans (Pelican Books, 2019) and Michael Wooldridge's The Road to Conscious Machines: The Story of Artificial Intelligence (Pelican Books, 2020).

Artificial Intelligence has confronted two issues of profound significance since its inception. Defeating a world champion at their own game is impressive, but the real world is far messier than a game board, where ironclad rules govern everything.

For this reason, AI methods that succeed on narrowly defined problems cannot readily be generalized to situations involving other, more diverse aspects of intelligence.

Although AlphaGo worked out the winning moves, a human representative had to place the stones on the board, a seemingly mundane task. Intelligence is not defined by a single skill such as winning games; it is far more than the sum of such parts. It encompasses, among other things, the ability to interact with one's environment, an essential of embodied behaviour.

One of the most essential skills that a child develops effortlessly is that of using their hands to perform delicate tasks. Robotics has yet to develop this skill.

Beyond such technical limitations, the question of how to define intelligence itself looms even larger. Researchers often assume that approaches developed to tackle narrowly defined problems, such as winning at Go, can be extended to more general problems of intelligence. This rather brash belief has met with scepticism, both from within the AI community and from older disciplines such as philosophy and psychology.

Whether intelligence can be substantially or entirely captured in a computational paradigm, or whether it is irreducible and ineffable, has been heavily debated. Hubert Dreyfus's well-known 1965 report, Alchemy and Artificial Intelligence, conveys the disdain and hostility that AI's claims provoke in some quarters; Dreyfus's views were in turn dismissed as a budget of fallacies by a well-known AI researcher.

At the other extreme is unbridled optimism: the notion, known as the Singularity, that AI will transcend biological limitations and break all barriers. The futurist Ray Kurzweil claims that, as the capabilities of AI systems grow exponentially, machine intelligence will overwhelm human intelligence. Kurzweil has attracted a fervent following despite the flimsiness of his argument about exponential growth in technology. The Singularity is best regarded as a kind of technological rapture without serious intellectual foundations.

Stuart Russell, the first author of the most widely used textbook on artificial intelligence, is an AI researcher who does not shy away from defining intelligence: humans are intelligent to the extent that their actions can be expected to achieve their objectives (Russell, Human Compatible, 9), and machine intelligence can be defined in the same way. Such an approach does help pin down the elusive notion of intelligence, but, as anyone who has read about utility in economics can attest, it shifts the burden onto an accurate description of our goals.
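
Russell's definition can be illustrated with a toy sketch: an agent that simply picks the action with the highest expected value of a stated objective. The action names and payoffs below are invented; the point is that the agent's "intelligence" is entirely relative to how the objective is written down.

```python
# A toy "rational agent" in Russell's sense: choose the action whose expected
# outcome best serves a stated objective. All numbers are invented.
outcomes = {                       # action -> list of (probability, utility) pairs
    "treat": [(0.7, 10), (0.3, -5)],
    "wait":  [(0.9,  2), (0.1, -1)],
}

def expected_utility(lottery):
    return sum(p * u for p, u in lottery)

best_action = max(outcomes, key=lambda a: expected_utility(outcomes[a]))
print(best_action)   # "intelligent" only relative to this particular utility table
```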

Russell's style differs markedly from that of Mitchell and Wooldridge: he is terse, expects his readers to stay engaged, and gives no quarter. Human Compatible is a highly thought-provoking book, though its narrative jumps from flowing argument to abstruse hypothesis.

Human Compatible also differs significantly from other AI expositions in examining the dangers of a future AI that surpasses human capabilities. While Russell avoids dystopian Hollywood imagery, he argues that AI agents might combine to cause harm and accidents in the future. He points to the story of Leo Szilard, who worked out the physics of nuclear chain reactions shortly after Ernest Rutherford had dismissed the idea of atomic power as moonshine, and warns against the belief that such an eventuality is highly unlikely or impossible.

Nuclear warfare then unleashed its horrors. Human Compatible focuses on guarding against the possibility of AI taking over the world. Wooldridge, however, is not convinced by this argument: decades of AI research suggest that human-level AI is unlike a nuclear chain reaction, which can be described as a single, simple mechanism (Wooldridge, The Road to Conscious Machines, 244).

Debates about the nature of intelligence and the fate of humanity are philosophically enriching but ultimately undecidable. AI research has always run on two distinct tracks, cognitive science and engineering; most researchers follow the engineering track, focus on specific problems, and are indifferent to the larger debates. Unfortunately, the objectives and claims of the two approaches are often conflated in public discourse, leading to much confusion.

Notably, terms like neurons and learning have precise technical meanings within the discipline but are immediately associated with their commonsense connotations, leading to serious misunderstandings about the entire enterprise. A neural network is not a model of the human brain, and learning here refers to a broad set of statistical principles and methods that amount, in essence, to sophisticated curve fitting and decision-rule algorithms.
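
In that spirit, "learning" can be as plain as the following sketch of curve fitting on synthetic data; it illustrates the statistical sense of the word, not any particular AI system.

```python
import numpy as np

# Synthetic data: a noisy quadratic relationship.
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 50)
y = 2.0 * x**2 - x + rng.normal(scale=1.0, size=x.size)

# "Learning" here is just least-squares curve fitting.
coeffs = np.polyfit(x, y, deg=2)
model = np.poly1d(coeffs)

print(coeffs)        # recovered coefficients, close to [2, -1, 0]
print(model(1.5))    # a "prediction" for an unseen input
```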

A few decades ago, neural networks that learn from data were considered ineffective. With the development of deep learning, they garnered renewed interest around 2012, leading to significant improvements in image and speech recognition. Today's most successful AI methods, from AlphaGo and its successors to widely used tools such as Google Translate, employ deep learning, in which the adjective signifies not profundity but the multiple layering of the network.

Deep learning has been sweeping through many disciplines since it was introduced over a decade ago and has now almost wholly replaced other methods of machine learning. Three of its pioneers received the Turing Award, the highest honour in computer science, in 2018, anointing the paradigm's dominance.

Success in AI is accompanied by hype and hubris. In 2016, Geoff Hinton, one of the Turing trio, declared that we should stop training radiologists now, since within five years deep learning would plainly outperform them. The failure of that prediction, and other problems with the method, did not prevent Hinton from stating in 2020 that deep learning will be able to do everything. Meanwhile, a recent study concluded that none of the hundreds of AI tools developed for detecting Covid was effective.

Our understanding of contemporary learning-based AI tools is enhanced by looking at how they are developed. Consider, as an example, detecting chairs in images. A chair may have various components: legs, a backrest, armrests, cushions, and so on. There are countless combinations of these elements, and many of them are still recognizable as chairs.

Other objects, such as bean bags, defeat any rule we might formulate about what a chair must contain. Methods such as deep learning seek to overcome precisely these limitations of symbolic, rule-based deduction. Instead of trying to define rules that cover every variety of chair, we collect a large number of images of chairs and other objects and feed them into a neural network along with the correct output (chair versus non-chair).

In the training phase, a deep learning approach adjusts the weights of the connections in the network so as to mimic the desired input-output relationships as closely as possible. If this is done correctly, the network will be able to say whether previously unseen test images contain chairs.
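
A minimal sketch of this training procedure is given below, using scikit-learn and random arrays as stand-ins for labelled photographs (so the particular predictions are meaningless, but the shape of the workflow, train on labelled examples and then query unseen ones, is the point).

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Stand-ins for labelled photographs: each "image" is a flattened 32x32 grey-scale
# array, each label is 1 (chair) or 0 (not a chair). A real system would load
# thousands of actual pictures here.
rng = np.random.default_rng(0)
X = rng.random((400, 32 * 32))
y = rng.integers(0, 2, size=400)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Training adjusts the network's connection weights to reproduce the
# desired image -> label relationship as closely as possible.
net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300)
net.fit(X_train, y_train)

# The trained network is then asked about previously unseen images.
print(net.predict(X_test[:5]))
print(net.score(X_test, y_test))   # near 0.5 here, since the data are random
```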

A chair-recognizer of this kind needs many images of chairs of different shapes and sizes. Extending the idea to every category one can imagine, chairs, tables, trees, people, and so on, all of which appear in the world in glorious but maddening variety, makes it essential to acquire adequately representative images of each kind of object.

The significant advances of 2012 in automatic image recognition were made possible by the combination of relatively cheap, powerful hardware and the rapid expansion of the internet, which enabled researchers to build ImageNet, a large dataset containing millions of images labelled with thousands of categories.

Despite working extraordinarily well, deep learning methods behave unreliably and unpredictably. Tiny changes to an image, imperceptible to the human eye, can cause a picture of an American school bus to be classified as an ostrich. Incorrect results can also arise from spurious statistical correlations rather than from any deep understanding.
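
The fragility is easiest to demonstrate with a toy linear classifier rather than a deep network, but the mechanism is the same: nudge every pixel by an imperceptibly small amount in the direction the model is most sensitive to, and the decision flips. The sketch below uses invented numbers, in the spirit of the well-known "fast gradient sign" attack.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 32 * 32                          # a small "image" flattened into 1024 pixels
w = rng.normal(size=d)               # weights of a toy linear classifier
b = 0.0

x = rng.random(d)                    # the original "image", pixel values in [0, 1]
# Nudge x so it sits just on the positive side of the decision boundary.
x = x - (w @ x) / (w @ w) * w + 0.05 * w / np.linalg.norm(w)

print(w @ x + b > 0)                 # True: classified as, say, "school bus"

eps = 0.01                           # perturb each pixel by at most 0.01
x_adv = x - eps * np.sign(w)

print(np.max(np.abs(x_adv - x)))     # 0.01: invisible to the eye
print(w @ x_adv + b > 0)             # False: the same classifier now says "ostrich"
```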

A boat in an image may be recognized correctly only because it is surrounded by water; the method has no model or conception of what a ship actually is. In the past, such limitations and problems of AI were largely academic concerns. Now the situation is different: a number of AI tools have moved from the laboratory into real life, often with grave consequences.

Driven by a relentless push towards automation, data-driven methods were being developed and deployed in many places, including India, well before deep learning became fashionable. Among the tools that have achieved extraordinary notoriety is COMPAS, used by US courts to inform sentencing decisions based on the predicted risk of recidivism.

Such a tool uses statistics from existing criminal records to estimate a defendant's chances of committing a crime in the future. A well-known investigation found that, even though race is not an explicit input, the tool's predictions were racially biased. When judges rely on such predictions in sentencing, the effect is discrimination by race.

Fingerprints and face images are even more valuable for biometric identification and authentication. Many law enforcement and other state agencies have adopted face recognition tools for their utility in surveillance and forensics. Dubious techniques for detecting emotion, under the banner of affective computing, have also been used in contexts ranging from employment decisions to more intrusive forms of surveillance.

A number of careful studies have shown that many commercially available face recognition programs are deeply flawed and discriminatory. A recent audit of commercial tools found error rates for black women as high as 35 per cent, far higher than for white men, prompting growing calls for a halt to their use. In India and China especially, face and emotion recognition is becoming more widespread, with enormous implications for human rights and welfare that deserve a much more thorough discussion than can be given here.

Relying on real-world data for decision-making introduces errors from many sources, which can be grouped under the heading of bias. Face recognition suffers because people of colour are underrepresented in many of the datasets used to develop the tools. Another limitation is the limited relevance of the past for defining the contours of the society we want to build: an AI algorithm that relies on past records, as in US recidivism modelling, will disparately harm the poor, who have historically experienced higher incarceration rates.

Similarly, if one were to automate hiring for, say, a professional position in India, models based on past hirings would automatically lead to caste bias, even if caste were not explicitly considered. Cathy O'Neil's well-known book, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (Penguin Books, 2016), details a number of such incidents in the American context.
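
The mechanism is easy to reproduce in miniature. In the purely synthetic simulation below (all variable names and numbers are invented), the sensitive attribute is withheld from the model, yet training on skewed historical outcomes lets it rediscover the bias through a correlated proxy feature.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000

# A sensitive group membership that is NOT given to the model...
group = rng.integers(0, 2, size=n)

# ...but a proxy feature (think postcode or alma mater) correlated with it is.
proxy = group + rng.normal(scale=0.5, size=n)
skill = rng.normal(size=n)

# Historical decisions were skewed against group 1, regardless of skill.
hired_past = (skill + 1.0 * (1 - group) + rng.normal(scale=0.5, size=n)) > 0.5

# Train only on the apparently "neutral" features: skill and the proxy.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired_past)
pred = model.predict(X)

# The model never saw `group`, yet its recommendations reproduce the disparity.
print("selection rate, group 0:", pred[group == 0].mean())
print("selection rate, group 1:", pred[group == 1].mean())
```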

Artificial intelligence methods do not learn from the world directly but from a dataset that serves as its proxy. Poorly designed data collection and a lack of ethical oversight have long plagued AI research in academia. Scholars from a range of disciplines have worked hard to develop the discussion of bias in AI tools and datasets, and of its ramifications in society, particularly for the poor and the traditionally discriminated against.

Beyond bias, many modern AI tools are impossible to reason about or interpret. Since those affected by a decision often have a right to know the reasoning behind it, this problem of explainability has profound implications for transparency.

Within the broader computer science community, there has been growing interest in formalizing these problems, leading to academic conferences and an online textbook in preparation. An essential result of this exercise is a theoretical understanding of the impossibility of fairness: several reasonable notions of fairness cannot all be satisfied simultaneously.
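
A toy calculation makes the tension concrete. In the sketch below (all numbers invented), a risk score is perfectly calibrated within each of two groups, yet because the groups have different base rates, the same decision threshold produces very different false positive rates; equalizing those rates would, in turn, break calibration.

```python
# Two groups share a risk score that is perfectly calibrated within each group:
# among people given score s, a fraction s truly have the outcome in question.
# The groups differ only in how the scores are distributed (their base rates).
groups = {
    "A": {0.2: 0.8, 0.8: 0.2},   # score -> fraction of the group with that score
    "B": {0.2: 0.3, 0.8: 0.7},
}
threshold = 0.5                   # flag anyone whose score exceeds this

for name, dist in groups.items():
    negatives = sum(frac * (1 - s) for s, frac in dist.items())
    flagged_negatives = sum(frac * (1 - s) for s, frac in dist.items() if s > threshold)
    print(name, "false positive rate:", round(flagged_negatives / negatives, 3))

# Output: roughly 0.059 for group A and 0.368 for group B. Calibration holds in
# both groups, yet the false positive rates differ; forcing them to be equal
# would necessarily break calibration.
```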

Research and practice in AI should also consider the trade-offs involved in designing software and the societal implications of these choices. The second part of this essay will show, however, that these considerations are seldom adequate as the rapid expansion of contemporary AI technology from the research lab into daily life has unleashed a wide range of problems.
