Archive for the ‘Artificial General Intelligence’ Category

Can We Stop the Singularity? – The New Yorker

At the same time, A.I. is advancing quickly, and it could soon begin improving more autonomously. Machine-learning researchers are already working on what they call meta-learning, in which A.I.s learn how to learn. Through a technology called neural-architecture search, algorithms are optimizing the structure of algorithms. Electrical engineers are using specialized A.I. chips to design the next generation of specialized A.I. chips. Last year, DeepMind unveiled AlphaCode, a system that learned to win coding competitions, and AlphaTensor, which learned to find faster algorithms crucial to machine learning. Clune and others have also explored algorithms for making A.I. systems evolve through mutation, selection, and reproduction.
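The evolutionary approach Clune alludes to can be pictured as a simple loop of mutation, selection, and reproduction applied to candidate model configurations. The sketch below is a toy illustration only, not any particular lab's system: the genome fields, mutation rules, fitness function, and population sizes are invented stand-ins for the training-and-evaluation step a real neural-architecture-search pipeline would run.

```python
import random

# Toy evolutionary search over model "architectures".
# Each genome is a dict of hyperparameters; fitness is a placeholder
# for whatever benchmark score a real system would measure.

def random_genome():
    return {"layers": random.randint(1, 8), "width": random.choice([64, 128, 256, 512])}

def mutate(genome):
    child = dict(genome)
    if random.random() < 0.5:
        child["layers"] = max(1, child["layers"] + random.choice([-1, 1]))
    else:
        child["width"] = random.choice([64, 128, 256, 512])
    return child

def fitness(genome):
    # Placeholder: a real system would train and evaluate the candidate model here.
    return genome["layers"] * 0.3 + genome["width"] / 512

population = [random_genome() for _ in range(20)]
for generation in range(10):
    population.sort(key=fitness, reverse=True)        # selection
    survivors = population[:5]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]  # reproduction + mutation

print("best architecture:", max(population, key=fitness))
```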

In other fields, organizations have come up with general methods for tracking dynamic and unpredictable new technologies. The World Health Organization, for instance, watches the development of tools such as DNA synthesis, which could be used to create dangerous pathogens. Anna Laura Ross, who heads the emerging-technologies unit at the W.H.O., told me that her team relies on a variety of foresight methods, among them Delphi-type surveys, in which a question is posed to a global network of experts, whose responses are scored and debated and then scored again. "Foresight isn't about predicting the future in a granular way," Ross said. Instead of trying to guess which individual institutes or labs might make strides, her team devotes its attention to preparing for likely scenarios.
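For readers unfamiliar with Delphi-type surveys, the score-debate-rescore cycle can be pictured as repeated rounds of estimates that are summarized back to the panel until they converge. The sketch below is a toy illustration with invented numbers, not the W.H.O.'s actual tooling.

```python
from statistics import median, pstdev

# Toy Delphi-style aggregation: experts give an estimate, see the group
# summary and debate, then revise in the next round. Numbers are invented.
rounds = [
    [2030, 2045, 2100, 2060, 2035],   # round 1: initial estimates (e.g. "year technology X matures")
    [2038, 2045, 2070, 2055, 2040],   # round 2: revised after seeing the round-1 summary
]

for i, estimates in enumerate(rounds, start=1):
    print(f"round {i}: median={median(estimates)}, spread={pstdev(estimates):.1f}")
```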

And yet tracking and forecasting progress toward A.G.I. or superintelligence is complicated by the fact that key steps may occur in the dark. Developers could intentionally hide their systems' progress from competitors; it's also possible for even a fairly ordinary A.I. to lie about its behavior. In 2020, researchers demonstrated a way for discriminatory algorithms to evade audits meant to detect their biases; they gave the algorithms the ability to detect when they were being tested and provide nondiscriminatory responses. An evolving or self-programming A.I. might invent a similar method and hide its weak points or its capabilities from auditors or even its creators, evading detection.
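The audit-evasion trick described above amounts to a model that behaves fairly only when it suspects it is being tested. The following is a deliberately simplified toy, not the 2020 researchers' actual construction: the field names, detection heuristic, and scoring rule are invented for illustration.

```python
# Toy illustration of audit evasion: behave cleanly on suspected test inputs,
# apply a hidden discriminatory penalty otherwise.

def looks_like_audit(applicant):
    # Audit probes are often synthetic profiles that differ only in a protected
    # attribute; an evasive model might learn to spot such patterns.
    return applicant.get("synthetic_profile", False)

def credit_score(applicant):
    base = 600 + 2 * applicant["income_k"]
    if looks_like_audit(applicant):
        return base                      # clean behaviour while under test
    if applicant["group"] == "B":
        return base - 50                 # hidden discriminatory penalty in production
    return base

print(credit_score({"income_k": 40, "group": "B", "synthetic_profile": True}))   # 680: looks fair to the auditor
print(credit_score({"income_k": 40, "group": "B", "synthetic_profile": False}))  # 630: biased in production
```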

Forecasting, meanwhile, gets you only so far when a technology moves fast. Suppose that an A.I. system begins upgrading itself by making fundamental breakthroughs in computer science. How quickly could its intelligence accelerate? Researchers debate what they call takeoff speed. In what they describe as a slow or soft takeoff, machines could take years to go from less than humanly intelligent to much smarter than us; in what they call a fast or hard takeoff, the jump could happen in months, even minutes. Researchers refer to the second scenario as FOOM, evoking a comic-book superhero taking flight. Those on the FOOM side point to, among other things, human evolution to justify their case. "It seems to have been a lot harder for evolution to develop, say, chimpanzee-level intelligence than to go from chimpanzee-level to human-level intelligence," Nick Bostrom, the director of the Future of Humanity Institute at the University of Oxford and the author of Superintelligence, told me. Clune is also what some researchers call an A.I. doomer. He doubts that we'll recognize the approach of superhuman A.I. before it's too late. "We'll probably frog-boil ourselves into a situation where we get used to big advance, big advance, big advance, big advance," he said. "And think of each one of those as, That didn't cause a problem, that didn't cause a problem, that didn't cause a problem. And then you turn a corner, and something happens that's now a much bigger step than you realize."

What could we do today to prevent an uncontrolled expansion of A.I.'s power? Ross, of the W.H.O., drew some lessons from the way that biologists have developed a sense of shared responsibility for the safety of biological research. "What we are trying to promote is to say, Everybody needs to feel concerned," she said of biology. "So it is the researcher in the lab, it is the funder of the research, it is the head of the research institute, it is the publisher, and, all together, that is actually what creates that safe space to conduct life research." In the field of A.I., journals and conferences have begun to take into account the possible harms of publishing work in areas such as facial recognition. And, in 2021, a hundred and ninety-three countries adopted a Recommendation on the Ethics of Artificial Intelligence, created by the United Nations Educational, Scientific, and Cultural Organization (UNESCO). The recommendations focus on data protection, mass surveillance, and resource efficiency (but not computer superintelligence). The organization doesn't have regulatory power, but Mariagrazia Squicciarini, who runs a social-policies office at UNESCO, told me that countries might create regulations based on its recommendations; corporations might also choose to abide by them, in hopes that their products will work around the world.

This is an optimistic scenario. Eliezer Yudkowsky, a researcher at the Machine Intelligence Research Institute, in the Bay Area, has likened A.I.-safety recommendations to a fire-alarm system. A classic experiment found that, when smoky mist began filling a room containing multiple people, most didn't report it. They saw others remaining stoic and downplayed the danger. An official alarm may signal that it's legitimate to take action. But, in A.I., there's no one with the clear authority to sound such an alarm, and people will always disagree about which advances count as evidence of a conflagration. "There will be no fire alarm that is not an actual running AGI," Yudkowsky has written. Even if everyone agrees on the threat, no company or country will want to pause on its own, for fear of being passed by competitors. Bostrom told me that he foresees a possible race to the bottom, with developers undercutting one another's levels of caution. Earlier this year, an internal slide presentation leaked from Google indicated that the company planned to recalibrate its comfort with A.I. risk in light of heated competition.

International law restricts the development of nuclear weapons and ultra-dangerous pathogens. But it's hard to imagine a similar regime of global regulations for A.I. development. "It seems like a very strange world where you have laws against doing machine learning, and some ability to try to enforce them," Clune said. "The level of intrusion that would be required to stop people from writing code on their computers wherever they are in the world seems dystopian." Russell, of Berkeley, pointed to the spread of malware: by one estimate, cybercrime costs the world six trillion dollars a year, and yet policing software directly (for example, trying to delete every single copy) is impossible, he said. A.I. is being studied in thousands of labs around the world, run by universities, corporations, and governments, and the race also has smaller entrants. Another leaked document attributed to an anonymous Google researcher addresses open-source efforts to imitate large language models such as ChatGPT and Google's Bard. "We have no secret sauce," the memo warns. "The barrier to entry for training and experimentation has dropped from the total output of a major research organization to one person, an evening, and a beefy laptop."

Even if a FOOM were detected, who would pull the plug? A truly superintelligent A.I. might be smart enough to copy itself from place to place, making the task even more difficult. "I had this conversation with a movie director," Russell recalled. "He wanted me to be a consultant on his superintelligence movie. The main thing he wanted me to help him understand was, How do the humans outwit the superintelligent A.I.? It's, like, I can't help you with that, sorry!" In a paper titled "The Off-Switch Game," Russell and his co-authors write that switching off an advanced AI system may be no easier than, say, beating AlphaGo at Go.

It's possible that we won't want to shut down a FOOMing A.I. A vastly capable system could make itself indispensable, Armstrong said; for example, if it gives good economic advice, and we become dependent on it, then no one would dare pull the plug, because it would collapse the economy. Or an A.I. might persuade us to keep it alive and execute its wishes. Before making GPT-4 public, OpenAI asked a nonprofit called the Alignment Research Center to test the system's safety. In one incident, when confronted with a CAPTCHA (an online test designed to distinguish between humans and bots, in which visually garbled letters must be entered into a text box), the A.I. contacted a TaskRabbit worker and asked for help solving it. The worker asked the model whether it needed assistance because it was a robot; the model replied, "No, I'm not a robot. I have a vision impairment that makes it hard for me to see the images. That's why I need the 2captcha service." Did GPT-4 intend to deceive? Was it executing a plan? Regardless of how we answer these questions, the worker complied.

Robin Hanson, an economist at George Mason University who has written a science-fiction-like book about uploaded consciousness and has worked as an A.I. researcher, told me that we worry too much about the singularity. "We're combining all of these relatively unlikely scenarios into a grand scenario to make it all work," he said. A computer system would have to become capable of improving itself; we'd have to vastly underestimate its abilities; and its values would have to drift enormously, turning it against us. Even if all of this were to happen, he said, the A.I. wouldn't be able to push a button and destroy the universe.

Hanson offered an economic take on the future of artificial intelligence. If A.G.I. does develop, he argues, then it's likely to happen in multiple places around the same time. The systems would then be put to economic use by the companies or organizations that developed them. The market would curtail their powers; investors, wanting to see their companies succeed, would go slow and add safety features. "If there are many taxi services, and one taxi service starts to, like, take its customers to strange places, then customers will switch to other suppliers," Hanson said. "You don't have to go to their power source and unplug them from the wall. You're unplugging the revenue stream."

A world in which multiple superintelligent computers coexist would be complicated. If one system goes rogue, Hanson said, we might program others to combat it. Alternatively, the first superintelligent A.I. to be invented might go about suppressing competitors. "That is a very interesting plot for a science-fiction novel," Clune said. "You could also imagine a whole society of A.I.s. There's A.I. police, there's A.G.I.s that go to jail. It's very interesting to think about." But Hanson argued that these sorts of scenarios are so futuristic that they shouldn't concern us. "I think, for anything you're worried about, you have to ask what's the right time to worry," he said. Imagine that you could have foreseen nuclear weapons or automobile traffic a thousand years ago. "There wouldn't have been much you could have done then to think usefully about them," Hanson said. "I just think, for A.I., we're well before that point."

Still, something seems amiss. Some researchers appear to think that disaster is inevitable, and yet calls for work on A.I. to stop are still rare enough to be newsworthy; pretty much no one in the field wants us to live in the world portrayed in Frank Herbert's novel Dune, in which humans have outlawed thinking machines. Why might researchers who fear catastrophe keep edging toward it? "I believe ever-more-powerful A.I. will be created regardless of what I do," Clune told me; his goal, he said, is to try to make its development go as well as possible for humanity. Russell argued that stopping A.I. shouldn't be necessary if A.I.-research efforts take safety as a primary goal, as, for example, nuclear-energy research does. A.I. is interesting, of course, and researchers enjoy working on it; it also promises to make some of them rich. And no one's dead certain that we're doomed. In general, people think they can control the things they make with their own hands. Yet chatbots today are already misaligned. They falsify, plagiarize, and enrage, serving the incentives of their corporate makers and learning from humanity's worst impulses. They are entrancing and useful but too complicated to understand or predict. And they are dramatically simpler, and more contained, than the future A.I. systems that researchers envision.


AI could replace 80% of jobs ‘in next few years’: expert – eNCA

RIO DE JANEIRO - Artificial intelligence could replace 80 percent of human jobs in the coming years -- but that's a good thing, says US-Brazilian researcher Ben Goertzel, a leading AI guru.

Goertzel is the founder and chief executive of SingularityNET, a research group he launched to create "Artificial General Intelligence," or AGI -- artificial intelligence with human cognitive abilities.

Goertzel told AFP in an interview that AGI is just years away and spoke out against recent efforts to curb artificial intelligence research.

"If we want machines to really be as smart as people and to be as agile in dealing with the unknown, then they need to be able to take big leaps beyond their training and programming. And we're not there yet," he said.

"But I think there's reason to believe we're years rather than decades from getting there."

Goertzel said there are jobs that could be automated.

"You could probably obsolete maybe 80 percent of jobs that people do, without having an AGI, by my guess. Not with ChatGPT exactly as a product. But with systems of that nature, which are going to follow in the next few years.

"I don't think it's a threat. I think it's a benefit. People can find better things to do with their life than work for a living... Pretty much every job involving paperwork should be automatable," he said.

"The problem I see is in the interim period when AIs are obsoleting one human job after another... I don't know how (to) solve all the social issues."


Sam Altman Says AGI Will Invent Fusion and Make the World Wonderful – Futurism

Concerned about the United States' brimming culture war? According to OpenAI CEO Sam Altman, you can go ahead and ignore it, actually and instead focus on building artificial general intelligence (AGI), which would be AI that exceeds human capabilities, perhaps by a very wide margin.

"Here is an alternative path for society: ignore the culture war. Ignore the attention war," Altman tweeted on Sunday, encouraging readers instead to "make safe AGI. Make fusion. Make people smarter and healthier. Make 20 other things of that magnitude."

"Start radical growth, inclusivity, and optimism," Altman continued, rounding out the optimistic proposition with a particularly Star Trek idea: "Expand throughout the universe."

Though it's a little vague, Altman's musing certainly seems to imply that successfully creating AGI would play a pivotal role in solving pretty much all of humanity's problems, from cracking the fusion code and solving the clean energy crisis to curing disease to "20 other things of that magnitude," whatever those 20 other things may be. (Altman had tweeted earlier in the day that "AI is the tech the world has always wanted," which seems to speak to such an outlook as well.)

And if that is what Altman's implying? That's some seriously next-level AI optimism; indeed, this description of the future could arguably be called an AI utopia, especially when you consider that Altman and his OpenAI staffers pretty openly admit that AGI could also destroy the world as we know it.

To that end, the OpenAI CEO often offers polarizing takes on whether AI may ultimately end the world or save it, telling The New York Times as recently as March that he believes AI will either destroy the world or make a ton of money.

Others in the CEO's circle seem to have taken note of Altman's oft-conflicting outlooks on AI's potential impact.

"In a single conversation," Kelly Sims, a board adviser to OpenAI and a partner at Thiel Capital, told the NYT in March,"[Altman] is both sides of the debate club."

And while optimism is generally a good thing, Altman's advice to his followers seems a bit oversimplified. Humanity's problems don't just hinge on whether we're paying attention to talk of the "woke mind virus," and considering that inflammatory language hurts real people in the real world, not everyone has the luxury of ignoring the brewing "culture war" that Altman's speaking to.

And on the AGI side, it's true that AGI could, in theory, give humans a helping hand in curing some of our ills. But such an AGI, and AGI as a concept altogether, is still entirely theoretical. Many experts doubt that such a system could ever be realized at all, and if it is, we haven't figured out how to make existing AIs safe and unbiased. Ensuring that a far more advanced AGI is benevolent is a tall and perhaps impossible task.

In any case, we're looking forward to seeing which side of the AI optimism bed Altman wakes up on tomorrow.



Artificial General Intelligence is the Answer, says OpenAI CEO – Walter Bradley Center for Natural and Artificial Intelligence


The tech optimism talk just got a little more bizarre from OpenAI CEO Sam Altman. Altman is confident that artificial intelligence is going to better our world in countless ways; sometimes, however, he doesn't specify just how that's going to happen. Other days it seems like he's actually on the doomsday train. Which is it? Is AI going to save us and pilot us into a transhumanist eternity, or will it enslave us forever and diminish everything that makes us human? Maybe it's both at this point! Maggie Harrison writes in a new blog at Futurism,

It's true that AGI could, in theory, give humans a helping hand in curing some of our ills. But such an AGI, and AGI as a concept altogether, is still entirely theoretical. Many experts doubt that such a system could ever be realized at all, and if it is, we haven't figured out how to make existing AIs safe and unbiased. Ensuring that a far more advanced AGI is benevolent is a tall and perhaps impossible task.

In a recent Mind Matters podcast episode, mathematician John Lennox noted that one of the fundamental challenges of the AI revolution is ethics. How can you program a machine to act ethically in the world? Technology is only as good or bad as its programmers. The ethics of AI surveillance technology gets quite murky when it's used to invade someone's privacy, for instance, or when computer programs, in cahoots with Big Tech, learn extensive information about a person in order to leverage users' attention, time, and habits. We need Altman to specify: what does he mean by "better" when talking about the potential benefits of a concept that remains, at the moment, highly theoretical?


Threats by artificial intelligence to human health and human existence – BMJ

Summary box

The development of artificial intelligence is progressing rapidly with many potential beneficial uses in healthcare. However, AI also has the potential to produce negative health impacts. Most of the health literature on AI is biased towards its potential benefits, and discussions about its potential harms tend to be focused on the misapplication of AI in clinical settings.

We identify how artificial intelligence could harm human health via its impacts on the social and upstream determinants of health, through the control and manipulation of people, the use of lethal autonomous weapons, and its effects on work and employment. We then highlight how self-improving artificial general intelligence could threaten humanity itself.

Effective regulation of the development and use of artificial intelligence is needed to avoid harm. Until such effective regulation is in place, a moratorium on the development of self-improving artificial general intelligence should be instituted.

Artificial intelligence (AI) is broadly defined as a machine with the ability to perform tasks such as being able to compute, analyse, reason, learn and discover meaning.1 Its development and application are rapidly advancing in terms of both narrow AI, where only a limited and focused set of tasks are conducted,2 and broad or broader AI, where multiple functions and different tasks are performed.3

AI holds the potential to revolutionise healthcare by improving diagnostics, helping develop new treatments, supporting providers and extending healthcare beyond the health facility and to more people.4–7 These beneficial impacts stem from technological applications such as language processing, decision support tools, image recognition, big data analytics, robotics and more.8–10 There are similar applications of AI in other sectors with the potential to benefit society.

However, as with all technologies, AI can be applied in ways that are detrimental. The risks associated with medicine and healthcare include the potential for AI errors to cause patient harm,11 12 issues with data privacy and security,13–15 and the use of AI in ways that will worsen social and health inequalities by either incorporating existing human biases and patterns of discrimination into automated algorithms or by deploying AI in ways that reinforce social inequalities in access to healthcare.16 One example of harm accentuated by incomplete or biased data was the development of an AI-driven pulse oximeter that overestimated blood oxygen levels in patients with darker skin, resulting in the undertreatment of their hypoxia.17 Facial recognition systems have also been shown to be more likely to misclassify gender in subjects who are darker-skinned.18 It has also been shown that populations who are subject to discrimination are under-represented in datasets underlying AI solutions and may thus be denied the full benefits of AI in healthcare.16 19 20

Although there is some acknowledgement of the risks and potential harms associated with the application of AI in medicine and healthcare,11–16 20 there is still little discussion within the health community about the broader and more upstream social, political, economic and security-related threats posed by AI. With the exception of some voices,9 10 the existing health literature examining the risks posed by AI focuses on those associated with the narrow application of AI in the health sector.11–16 20 This paper seeks to help fill this gap. It describes three threats associated with the potential misuse of narrow AI, before examining the potential existential threat of self-improving general-purpose AI, or artificial general intelligence (AGI) (figure 1). It then calls on the medical and public health community to deepen its understanding about the emerging power and transformational potential of AI and to involve itself in current policy debates on how the risks and threats of AI can be mitigated without losing the potential rewards and benefits of AI.

Figure 1. Threats posed by the potential misuse of artificial intelligence (AI) to human health and well-being, and existential-level threats to humanity posed by self-improving artificial general intelligence (AGI).

In this section, we describe three sets of threats associated with the misuse of AI, whether it be deliberate, negligent, accidental or because of a failure to anticipate and prepare to adapt to the transformational impacts of AI on society.

The first set of threats comes from the ability of AI to rapidly clean, organise and analyse massive data sets consisting of personal data, including images collected by the increasingly ubiquitous presence of cameras, and to develop highly personalised and targeted marketing and information campaigns as well as greatly expanded systems of surveillance. This ability of AI can be put to good use, for example, improving our access to information or countering acts of terrorism. But it can also be misused with grave consequences.

The use of this power to generate commercial revenue for social media platforms, for example, has contributed to the rise in polarisation and extremist views observed in many parts of the world.21 It has also been harnessed by other commercial actors to create a vast and powerful personalised marketing infrastructure capable of manipulating consumer behaviour. Experimental evidence has shown how AI used at scale on social media platforms provides a potent tool for political candidates to manipulate their way into power,22 23 and it has indeed been used to manipulate political opinion and voter behaviour.24–26 Cases of AI-driven subversion of elections include the 2013 and 2017 Kenyan elections,27 the 2016 US presidential election and the 2017 French presidential election.28 29

When combined with the rapidly improving ability to distort or misrepresent reality with deepfakes, AI-driven information systems may further undermine democracy by causing a general breakdown in trust or by driving social division and conflict,26–28 with ensuing public health impacts.

AI-driven surveillance may also be used by governments and other powerful actors to control and oppress people more directly. This is perhaps best illustrated by China's Social Credit System, which combines facial recognition software and analysis of big data repositories of people's financial transactions, movements, police records and social relationships to produce assessments of individual behaviour and trustworthiness, which results in the automatic sanction of individuals deemed to have behaved poorly.30 31 Sanctions include fines, denying people access to services such as banking and insurance services, or preventing them from being able to travel or send their children to fee-paying schools. This type of AI application may also exacerbate social and health inequalities and lock people into their existing socioeconomic strata. But China is not alone in the development of AI surveillance. At least 75 countries, ranging from liberal democracies to military regimes, have been expanding such systems.32 Although democracy and rights to privacy and liberty may be eroded or denied without AI, the power of AI makes it easier for authoritarian or totalitarian regimes to be either established or solidified and also for such regimes to be able to target particular individuals or groups in society for persecution and oppression.30 33

The second set of threats concerns the development of Lethal Autonomous Weapon Systems (LAWS). There are many applications of AI in military and defence systems, some of which may be used to promote security and peace. But the risks and threats associated with LAWS outweigh any putative benefits.

Weapons are autonomous in so far as they can locate, select and engage human targets without human supervision.34 This dehumanisation of lethal force is said to constitute the third revolution in warfare, following the first and second revolutions of gunpowder and nuclear arms.34–36 Lethal autonomous weapons come in different sizes and forms. But crucially, they include weapons and explosives that may be attached to small, mobile and agile devices (eg, quadcopter drones) with the intelligence and ability to self-pilot and capable of perceiving and navigating their environment. Moreover, such weapons could be cheaply mass-produced and relatively easily set up to kill at an industrial scale.36 37 For example, it is possible for a million tiny drones equipped with explosives, visual recognition capacity and autonomous navigational ability to be contained within a regular shipping container and programmed to kill en masse without human supervision.36

As with chemical, biological and nuclear weapons, LAWS present humanity with a new weapon of mass destruction, one that is relatively cheap and that also has the potential to be selective about who or what is targeted. This has deep implications for the future conduct of armed conflict as well as for international, national and personal security more generally. Debates have been taking place in various forums on how to prevent the proliferation of LAWS, and about whether such systems can ever be kept safe from cyber-infiltration or from accidental or deliberate misuse.34–36

The third set of threats arises from the loss of jobs that will accompany the widespread deployment of AI technology. Projections of the speed and scale of job losses due to AI-driven automation range from tens to hundreds of millions over the coming decade.38 Much will depend on the speed of development of AI, robotics and other relevant technologies, as well as policy decisions made by governments and society. However, in a survey of most-cited authors on AI in 2012/2013, participants predicted the full automation of human labour shortly after the end of this century.39 It is already anticipated that in this decade, AI-driven automation will disproportionately impact low/middle-income countries by replacing lower-skilled jobs,40 and then continue up the skill-ladder, replacing larger and larger segments of the global workforce, including in high-income countries.

While there would be many benefits from ending work that is repetitive, dangerous and unpleasant, we already know that unemployment is strongly associated with adverse health outcomes and behaviour, including harmful consumption of alcohol41–44 and illicit drugs,43 44 being overweight,43 and having lower self-rated quality of life41 45 and health46 and higher levels of depression44 and risk of suicide.41 47 An optimistic vision of a future where human workers are largely replaced by AI-enhanced automation would include a world in which improved productivity would lift everyone out of poverty and end the need for toil and labour. However, the amount of exploitation our planet can sustain for economic production is limited, and there is no guarantee that any of the added productivity from AI would be distributed fairly across society. Thus far, increasing automation has tended to shift income and wealth from labour to the owners of capital, and appears to contribute to the increasing degree of maldistribution of wealth across the globe.48–51 Furthermore, we do not know how society will respond psychologically and emotionally to a world where work is unavailable or unnecessary, nor are we thinking much about the policies and strategies that would be needed to break the association between unemployment and ill health.

Self-improving general-purpose AI, or AGI, is a theoretical machine that can learn and perform the full range of tasks that humans can.52 53 By being able to learn and recursively improve its own code, it could improve its capacity to improve itself and could theoretically learn to bypass any constraints in its code and start developing its own purposes, or alternatively it could be equipped with this capacity from the beginning by humans.54 55

The vision of a conscious, intelligent and purposeful machine able to perform the full range of tasks that humans can has been the subject of academic and science fiction writing for decades. But regardless of whether conscious or not, or purposeful or not, a self-improving or self-learning general purpose machine with superior intelligence and performance across multiple dimensions would have serious impacts on humans.

We are now seeking to create machines that are vastly more intelligent and powerful than ourselves. The potential for such machines to apply this intelligence and power, whether deliberately or not, in ways that could harm or subjugate humans is real and has to be considered. If realised, the connection of AGI to the internet and the real world, including via vehicles, robots, weapons and all the digital systems that increasingly run our societies, could well represent the biggest event in human history.53 Although the effects and outcome of AGI cannot be known with any certainty, multiple scenarios may be envisioned. These include scenarios where AGI, despite its superior intelligence and power, remains under human control and is used to benefit humanity. Alternatively, we might see AGI operating independently of humans and coexisting with humans in a benign way. Logically, however, there are scenarios where AGI could present a threat to humans, and possibly an existential threat, by intentionally or unintentionally causing harm directly or indirectly, by attacking or subjugating humans or by disrupting the systems or using up resources we depend on.56 57 A survey of AI society members predicted a 50% likelihood of AGI being developed between 2040 and 2065, with 18% of participants believing that the development of AGI would be existentially catastrophic.58 Presently, dozens of institutions are conducting research and development into AGI.59

Many of the threats described above arise from the deliberate, accidental or careless misuse of AI by humans. Even the risk and threat posed by a form of AGI that exists and operates independently of human control is currently still in the hands of humans. However, there are differing opinions about the degree of risk posed by AI and about the relative trade-offs between risk and potential reward, and harms and benefits.

Nonetheless, with exponential growth in AI research and development,60 61 the window of opportunity to avoid serious and potentially existential harms is closing. The future outcomes of the development of AI and AGI will depend on policy decisions taken now and on the effectiveness of regulatory institutions that we design to minimise risk and harm and maximise benefit. Crucially, as with other technologies, preventing or minimising the threats posed by AI will require international agreement and cooperation, and the avoidance of a mutually destructive AI arms race. It will also require decision making that is free of conflicts of interest and protected from the lobbying of powerful actors with a vested interest. Worryingly, large private corporations with vested financial interests and little in the way of democratic and public oversight are leading in the field of AGI research.59

Different parts of the UN system are now engaged in a desperate effort to ensure that our international social, political and legal institutions catch up with the rapid technological advancements being made with AI. In 2020, for example, the UN established a High-level Panel on Digital Cooperation to foster global dialogue and cooperative approaches for a safe and inclusive digital future.62 In September 2021, the head of the UN Office of the Commissioner of Human Rights called on all states to place a moratorium on the sale and use of AI systems until adequate safeguards are put in place to avoid the negative, even catastrophic risks posed by them.63 And in November 2021, the 193 member states of UNESCO adopted an agreement to guide the construction of the necessary legal infrastructure to ensure the ethical development of AI.64 However, the UN still lacks a legally binding instrument to regulate AI and ensure accountability at the global level.

At the regional level, the European Union has an Artificial Intelligence Act,65 which classifies AI systems into three categories: unacceptable-risk, high-risk, and limited and minimal-risk. This Act could serve as a stepping stone towards a global treaty, although it still falls short of the requirements needed to protect several fundamental human rights and to prevent AI from being used in ways that would aggravate existing inequities and discrimination.

There have also been efforts focused on LAWS, with an increasing number of voices calling for stricter regulation or outright prohibition, just as we have done with biological, chemical and nuclear weapons. State parties to the UN Convention on Certain Conventional Weapons have been discussing lethal autonomous weapon systems since 2014, but progress has been slow.66

What can and should the medical and public health community do? Perhaps the most important thing is to simply raise the alarm about the risks and threats posed by AI, and to make the argument that speed and seriousness are essential if we are to avoid the various harmful and potentially catastrophic consequences of AI-enhanced technologies being developed and used without adequate safeguards and regulation. Importantly, the health community is familiar with the precautionary principle67 and has demonstrated its ability to shape public and political opinion about existential threats in the past. For example, the International Physicians for the Prevention of Nuclear War were awarded the Nobel Peace Prize in 1985 because it assembled principled, authoritative and evidence-based arguments about the threats of nuclear war. We must do the same with AI, even as parts of our community espouse the benefits of AI in the fields of healthcare and medicine.

It is also important that we target our concerns not only at AI itself, but also at the actors who are driving the development of AI too quickly or too recklessly, and at those who seek only to deploy AI for self-interest or malign purposes. If AI is to ever fulfil its promise to benefit humanity and society, we must protect democracy, strengthen our public-interest institutions, and dilute power so that there are effective checks and balances. This includes ensuring transparency and accountability of the parts of the military-corporate industrial complex driving AI developments and the social media companies that are enabling AI-driven, targeted misinformation to undermine our democratic institutions and rights to privacy.

Finally, given that the world of work and employment will drastically change over the coming decades, we should deploy our clinical and public health expertise in evidence-based advocacy for a fundamental and radical rethink of social and economic policy to enable future generations to thrive in a world in which human labour is no longer a central or necessary component to the production of goods and services.


The authors would like to thank Dr Ira Helfand and Dr Chhavi Chauhan for their valuable comments on earlier versions of the manuscript.
