Archive for the ‘Artificial General Intelligence’ Category

What is AGI and how is it different from AI? – ReadWrite

As artificial intelligence continues to develop at a rapid pace, it's easy to wonder where this new age is headed.

The likes of ChatGPT, Midjourney and Sora are transforming the way we work through chatbots, text-to-image and text-to-video generators, while robots and self-driving cars are helping us perform day-to-day tasks. The latter isn't as mainstream as the former, but it's only a matter of time.

But where's the limit? Are we headed towards a dystopian world run by computers and robots? Artificial general intelligence (AGI) is essentially the next step, but as things stand, we're a little way off from that becoming a reality.

AGI is considered to be strong AI, whereas narrow AI is what we know to be generative chatbots, image generators and coffee-making robots.

Strong AI refers to software that has the same, or better, cognitive abilities as a human being, meaning it can solve problems, achieve goals, think and learn on its own, without any human input or assistance. Narrow AI can solve one problem or complete one task at a time, without any sentience or consciousness.

This level of AI is only seen in the movies at the moment, but we're likely headed towards this level of AI-driven technology in the future. When that might be remains open to debate: some experts claim it's centuries away, others believe it could be only years. However, Ray Kurzweil's book The Singularity is Near predicts it will arrive between 2015 and 2045, a window the AGI research community considered plausible in 2007, although it's a pretty broad timeline.

Given how quickly narrow AI is developing, it's easy to imagine a form of AGI in society within the next 20 years.

Despite not yet existing, AGI could theoretically perform in ways that are indistinguishable from humans, and it would likely exceed human capacities thanks to fast access to huge data sets. While it might seem like you're engaging with a human when using something like ChatGPT, an AGI system would theoretically be able to engage with humans without any human intervention behind it.

An AGI system's capabilities would include the likes of common sense, background knowledge and abstract thinking, as well as practical capabilities, such as creativity, fine motor skills, natural language understanding (NLU), navigation and sensory perception.

A combination of all of those abilities will essentially give AGI systems high-level capabilities, such as being able to understand symbol systems, create fixed structures for all tasks, use different kinds of knowledge, engage in metacognition, handle several types of learning algorithms and understand belief systems.

That means AGI systems will be ultra-intelligent and may also possess additional traits, such as imagination and autonomy, while physical traits like the ability to sense, detect and act could also be present.

We know that narrow AI systems are widely used in public today and are fast becoming part of everyday life, but they currently need a human to function at every level. They rely on machine learning and natural language processing, and then on human-delivered prompts, to execute a task. They execute the task based on what they have previously learned and can essentially only be as intelligent as the information humans give them.

However, the results we see from narrow AI systems are not beyond what is possible from the human brain. It is simply there to assist us, not replace or be more intelligent than humans.

Theoretically, AGI should be able to undertake any task and portray a high level of intelligence without human intervention. It will be able to perform better than humans and narrow AI at almost every level.

Stephen Hawking warned of the dangers of AI in 2014, when he told the BBC: "The development of full artificial intelligence could spell the end of the human race."

"It would take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn't compete and would be superseded."

Kurzweil followed up his prediction in The Singularity is Near by saying in 2017 that computers would achieve human levels of intelligence by 2029. He predicted that AI itself would improve exponentially, eventually operating at levels beyond human comprehension and control.

He then went on to say: "I have set the date 2045 for the Singularity, which is when we will multiply our effective intelligence a billionfold by merging with the intelligence we have created."

These discussions and predictions have, of course, sparked debates about the responsible use of AGI. The AI we know today is expected to be used responsibly, and there are calls to regulate many of the AI companies to ensure these systems do not get out of hand. We've already seen how controversial and unethical the use of AI can be when in the wrong hands. It's unsurprising, then, that the same debate is happening around AGI.

In reality, society must approach the development of AGI with extreme caution. The ethical problems surrounding AI now, such as the difficulty of controlling biases within its knowledge base, point to similar issues with AGI, but on a more harmful level.

If an AGI system can essentially think for itself and no longer has the need to be influenced by humans, there is a danger that Stephen Hawking's vision might become a reality.

Artificial intelligence in healthcare: defining the most common terms – HealthITAnalytics.com

April 03, 2024 - As healthcare organizations collect more and more digital health data, transforming that information to generate actionable insights has become crucial.

Artificial intelligence (AI) has the potential to significantly bolster these efforts, so much so that health systems are prioritizing AI initiatives this year. Additionally, industry leaders are recommending that healthcare organizations stay on top of AI governance, transparency, and collaboration moving forward.

But to effectively harness AI, healthcare stakeholders need to successfully navigate an ever-changing landscape with rapidly evolving terminology and best practices.

In this primer, HealthITAnalytics will explore some of the most common terms and concepts stakeholders must understand to successfully utilize healthcare AI.

To understand health AI, one must have a basic understanding of data analytics in healthcare. At its core, data analytics aims to extract useful information and insights from various data points or sources. In healthcare, information for analytics is typically collected from sources like electronic health records (EHRs), claims data, and peer-reviewed clinical research.

Analytics efforts often aim to help health systems meet a key strategic goal, such as improving patient outcomes, enhancing chronic disease management, advancing precision medicine, or guiding population health management.

However, these initiatives require analyzing vast amounts of data, which is often time- and resource-intensive. AI presents a promising solution to streamline the healthcare analytics process.

The American Medical Association (AMA) indicates that AI broadly refers to the ability of computers to perform tasks that are typically associated with a rational human being, a quality that enables an entity to function appropriately and with foresight in its environment.

However, the AMA favors an alternative conceptualization of AI that the organization calls augmented intelligence. Augmented intelligence focuses on the assistive role of AI in healthcare and underscores that the technology can enhance, rather than replace, human intelligence.

AI tools are driven by algorithms, which act as instructions that a computer follows to perform a computation or solve a problem. Using the AMA's conceptualizations of AI and augmented intelligence, algorithms leveraged in healthcare can be characterized as computational methods that support clinicians' capabilities and decision-making.

Generally, there are multiple types of AI that can be classified in various ways: IBM broadly categorizes these tools based on their capabilities and functionalities, a scheme that covers a plethora of realized and theoretical AI classes and potential applications.

Much of the conversation around AI in healthcare is centered around currently realized AI tools that exist for practical applications today or in the very near future. Thus, the AMA categorizes AI terminology into two camps: terms that describe how an AI works and those that describe what the AI does.

AI tools can work by leveraging predefined logic (rules-based learning), by understanding patterns in data via machine learning, or by using neural networks that simulate the human brain to generate insights through deep learning.

In terms of functionality, AI models can use these learning approaches to engage in computer vision, a process for deriving information from images and videos; natural language processing to derive insights from text; and generative AI to create content.

Further, AI models can be classified as either explainable, meaning that users have some insight into the how and why of an AI's decision-making, or black box, a phenomenon in which the tool's decision-making process is hidden from users.

Currently, all AI models are considered narrow or weak AI, tools designed to perform specific tasks within certain parameters. Artificial general intelligence (AGI), or strong AI, is a theoretical system under which an AI model could be applied to any task.

Machine learning (ML) is a subset of AI in which algorithms learn from patterns in data without being explicitly programmed. Often, ML tools are used to make predictions about potential future outcomes.

Unlike rules-based AI, ML techniques can use increased exposure to large, novel datasets to learn and improve their own performance. There are three main categories of ML based on task type: supervised, unsupervised, and reinforcement learning.

In supervised learning, algorithms are trained on labeled data, data inputs associated with corresponding outputs, to identify specific patterns, which helps the tool make accurate predictions when presented with new data.
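
As a concrete illustration of the idea above, here is a minimal, hypothetical sketch of supervised learning in Python using scikit-learn; the "readmission" features and labels are invented purely for demonstration and stand in for any labeled clinical dataset.

```python
# Supervised learning sketch: fit a classifier on labeled examples
# (inputs paired with known outcomes), then predict on unseen inputs.
# The data below is a made-up toy example, not a real dataset.
from sklearn.linear_model import LogisticRegression

# Toy labeled data: [age, number_of_prior_visits] -> readmitted (1) or not (0)
X_train = [[72, 5], [34, 1], [65, 4], [29, 0], [80, 7], [41, 2]]
y_train = [1, 0, 1, 0, 1, 0]

model = LogisticRegression()
model.fit(X_train, y_train)          # learn patterns from labeled examples

X_new = [[58, 3]]                    # a record the model has never seen
print(model.predict(X_new))          # predicted label
print(model.predict_proba(X_new))    # class probabilities
```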

Unsupervised learning uses unlabeled data to train algorithms to discover and flag unknown patterns and relationships among data points.
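
By contrast, a minimal sketch of unsupervised learning might look like the following, again with invented numbers: no labels are supplied, and a clustering algorithm groups similar records on its own.

```python
# Unsupervised learning sketch: the algorithm receives only unlabeled
# inputs and discovers groupings itself. Feature values are illustrative.
from sklearn.cluster import KMeans

# Unlabeled data: [average weekly visits, average lab value]
X = [[1, 0.2], [1, 0.3], [8, 2.1], [9, 2.4], [0, 0.1], [10, 2.0]]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)    # discovered group assignments
print(labels)
print(kmeans.cluster_centers_)    # the center of each discovered group
```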

Semi-supervised machine learning relies on a mix of supervised and unsupervised learning approaches during training.

Reinforcement learning relies on a feedback loop for algorithm training. This type of ML algorithm is given labeled data inputs, which it can use to take various actions, such as making a prediction, to generate an output. If the algorithm's action and output align with the programmer's goals, its behavior is reinforced with a reward.

In this way, algorithms developed using reinforcement techniques generate data, interact with their environment, and learn a series of actions to achieve a desired result.
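
A toy sketch of that feedback loop, assuming a simple tabular Q-learning setup rather than any particular healthcare system, might look like this:

```python
# Reinforcement learning sketch: an agent tries actions, receives a
# reward when it reaches the goal, and gradually reinforces the
# rewarded behavior. The 5-state "corridor" environment is a toy example.
import random

n_states, n_actions = 5, 2          # actions: 0 = move left, 1 = move right
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.1, 0.9, 0.3

for episode in range(500):
    state = 0
    while state != n_states - 1:                 # goal is the right-most state
        if random.random() < epsilon:
            action = random.randrange(n_actions)                         # explore
        else:
            action = max(range(n_actions), key=lambda a: Q[state][a])    # exploit
        next_state = max(0, min(n_states - 1, state + (1 if action == 1 else -1)))
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # reinforce the action in proportion to the reward it led to
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state

print(Q)   # the learned values end up favoring moves toward the reward
```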

These approaches to pattern recognition make ML particularly useful in healthcare applications like medical imaging and clinical decision support.

Deep learning (DL) is a subset of machine learning used to analyze data to mimic how humans process information. DL algorithms rely on artificial neural networks (ANNs) to imitate the brain's neural pathways.

ANNs utilize a layered algorithmic architecture, allowing insights to be derived from how data are filtered through each layer and how those layers interact. This enables deep learning tools to extract more complex patterns from data than their simpler AI- and ML-based counterparts.
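
To illustrate the layered idea in the plainest possible terms, here is a hypothetical forward pass through a tiny stack of layers using NumPy; the random weights stand in for parameters a real network would learn during training.

```python
# Layered-architecture sketch: data flows through stacked layers, each
# applying weights and a nonlinearity, so later layers build on patterns
# extracted by earlier ones. Weights are random placeholders.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    """One fully connected layer: weighted sum followed by a ReLU nonlinearity."""
    W = rng.normal(size=(x.shape[-1], n_out))
    b = np.zeros(n_out)
    return np.maximum(0, x @ W + b)

x = rng.normal(size=(1, 8))            # one input with 8 features
h1 = layer(x, 16)                      # first hidden layer
h2 = layer(h1, 16)                     # second, "deeper" representation
out = h2 @ rng.normal(size=(16, 1))    # final output layer
print(out.shape)                       # (1, 1)
```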

Like machine learning models, deep learning algorithms can be supervised, unsupervised, or somewhere in between. There are four main types of deep learning used in healthcare: deep neural networks (DNNs), convolutional neural networks (CNNs), recurrent neural networks (RNNs), and generative adversarial networks (GANs).

DNNs are a type of ANN with a greater depth of layers. The deeper the DNN, the more data translation and analysis tasks can be performed to refine the models output.

CNNs are a type of DNN that is specifically applicable to visual data. With a CNN, users can evaluate and extract features from images to enhance image classification.
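
A minimal sketch of that feature-extraction step, assuming PyTorch is available and using a random tensor in place of a real medical image, might look like this:

```python
# CNN sketch: a convolutional layer extracts feature maps from an image.
# The random tensor stands in for a single-channel image such as a
# grayscale X-ray; it is not real data.
import torch
import torch.nn as nn

image = torch.randn(1, 1, 64, 64)         # (batch, channels, height, width)

conv = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3, padding=1)
pool = nn.MaxPool2d(kernel_size=2)

features = pool(torch.relu(conv(image)))  # 8 learned feature maps, downsampled
print(features.shape)                     # torch.Size([1, 8, 32, 32])
```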

RNNs are a type of ANN that relies on temporal or sequential data to generate insights. These networks are unique in that, where other ANNs' inputs and outputs remain independent of one another, RNNs utilize information from earlier inputs in a sequence to influence later inputs and outputs.

RNNs are commonly used to address challenges related to natural language processing, language translation, image recognition, and speech captioning. In healthcare, RNNs have the potential to bolster applications like clinical trial cohort selection.

GANs utilize multiple neural networks to create synthetic data instead of real-world data. Like other types of generative AI, GANs are popular for voice, video, and image generation. GANs can generate synthetic medical images to train diagnostic and predictive analytics-based tools.
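
The adversarial setup can be sketched compactly, again assuming PyTorch; the shapes, data, and single training step below are illustrative only, not a production GAN.

```python
# GAN sketch: a generator maps random noise to synthetic samples while a
# discriminator learns to tell real samples from generated ones.
# A 28x28 "image" is flattened to 784 values; the "real" batch is random.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

real = torch.rand(32, 784)      # stand-in for a batch of real images
noise = torch.randn(32, 64)

# Discriminator step: push scores toward 1 for real data, 0 for fakes
fake = G(noise).detach()
d_loss = loss_fn(D(real), torch.ones(32, 1)) + loss_fn(D(fake), torch.zeros(32, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: try to fool the discriminator into scoring fakes as real
g_loss = loss_fn(D(G(noise)), torch.ones(32, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```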

Recently, deep learning technology has shown promise in improving the diagnostic pathway for brain tumors.

With their focus on imitating the human brain, deep learning and ANNs are similar but distinct from another analytics approach: cognitive computing.

The term typically refers to systems that simulate human reasoning and thought processes to augment human cognition. Cognitive computing tools can help aid decision-making and assist humans in solving complex problems by parsing through vast amounts of data and combining information from various sources to suggest solutions.

Cognitive computing systems must be able to learn and adapt as inputs change, interact organically with users, remember previous interactions to help define problems, and understand contextual elements to deliver the best possible answer based on available information.

To achieve this, these tools use self-learning frameworks, ML, DL, natural language processing, speech and object recognition, sentiment analysis, and robotics to provide real-time analyses for users.

Cognitive computing's focus on supplementing human decision-making power makes it promising for various healthcare use cases, including patient record summarization and acting as a medical assistant to clinicians.

Natural language processing (NLP) is a branch of AI concerned with how computers process, understand, and manipulate human language in verbal and written forms.

Using techniques like ML and text mining, NLP is often used to convert unstructured language into a structured format for analysis, translate from one language to another, summarize information, or answer a user's queries.
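
As a simple, hypothetical illustration of turning unstructured text into structured fields, the rule-based snippet below pulls a few values out of an invented clinical note; real healthcare NLP pipelines rely on trained models rather than hand-written regular expressions.

```python
# Unstructured-to-structured sketch: extract fields from free text with
# simple rules. The note below is invented for illustration only.
import re

note = "Pt is a 67 yo male with hx of type 2 diabetes. BP 142/88. Denies chest pain."

structured = {
    "age": int(re.search(r"(\d+)\s*yo", note).group(1)),
    "sex": "male" if re.search(r"\bmale\b", note) else "female",
    "blood_pressure": re.search(r"BP\s*(\d+/\d+)", note).group(1),
    "chest_pain": not bool(re.search(r"[Dd]enies chest pain", note)),
}
print(structured)
# {'age': 67, 'sex': 'male', 'blood_pressure': '142/88', 'chest_pain': False}
```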

There are also two subsets of NLP: natural language understanding (NLU) and natural language generation (NLG).

NLU is concerned with computer reading comprehension, focusing heavily on determining the meaning of a piece of text. These tools use the grammatical structure and the intended meaning of a sentence, syntax and semantics, respectively, to help establish a structure for how the computer should understand the relationship between words and phrases to accurately capture the nuances of human language.

Conversely, NLG is used to help computers write human-like responses. These tools combine NLP analysis with rules from the output language, like syntax, lexicons, semantics, and morphology, to choose how to appropriately phrase a response when prompted. NLG drives generative AI technologies like OpenAI's ChatGPT.

In healthcare, NLP can sift through unstructured data, such as EHRs, to support a host of use cases. To date, the approach has supported the development of a patient-facing chatbot, helped detect bias in opioid misuse classifiers, and flagged contributing factors to patient safety events.

McKinsey & Company describes generative AI (genAI) as "algorithms (such as ChatGPT) that can be used to create new content, including audio, code, images, text, simulations, and videos."

GenAI tools take a prompt provided by the user via text, images, videos, or other machine-readable inputs and use that prompt to generate new content. Generative AI models are trained on vast datasets to generate realistic responses to users' prompts.

GenAI tools typically rely on other AI approaches, like NLP and machine learning, to generate pieces of content that reflect the characteristics of the models training data. There are multiple types of generative AI, including large language models (LLMs), GANs, RNNs, variational autoencoders (VAEs), autoregressive models, and transformer models.

Since ChatGPT's release in November 2022, genAI has garnered significant attention from stakeholders across industries, including healthcare. The technology has demonstrated significant potential for automating certain administrative tasks: EHR vendors are using generative AI to streamline clinical workflows, health systems are pursuing the technology to optimize revenue cycle management, and payers are investigating how genAI can improve member experience. On the clinical side, researchers are also assessing how genAI could improve healthcare-associated infection (HAI) surveillance programs.

Despite the excitement around genAI, healthcare stakeholders should be aware that generative AI can exhibit bias, like other advanced analytics tools. Additionally, genAI models can hallucinate by perceiving patterns that are imperceptible to humans or nonexistent, leading the tools to generate nonsensical, inaccurate, or false outputs.

We’re Focusing on the Wrong Kind of AI Apocalypse – TIME

Conversations about the future of AI are too apocalyptic. Or rather, they focus on the wrong kind of apocalypse.

There is considerable concern about the future of AI, especially the risks of Artificial General Intelligence (AGI), an AI smarter than a human being, which a number of prominent computer scientists have raised. They worry that an AGI will lead to mass unemployment, or that AI will grow beyond human control, or worse (the movies Terminator and 2001 come to mind).

Discussing these concerns seems important, as does thinking about the much more mundane and immediate threats of misinformation, deep fakes, and proliferation enabled by AI. But this focus on apocalyptic events also robs most of us of our agency. AI becomes a thing we either build or don't build, something no one outside of a few dozen Silicon Valley executives and top government officials really has any say over.

But the reality is we are already living in the early days of the AI Age, and, at every level of an organization, we need to make some very important decisions about what that actually means. Waiting to make these choices means they will be made for us. It opens us up to many little apocalypses, as jobs and workplaces are disrupted one-by-one in ways that change lives and livelihoods.

We know this is a real threat because, regardless of any pauses in AI creation, and without any further AI development beyond what is available today, AI is going to impact how we work and learn. We know this for three reasons: First, AI really does seem to supercharge productivity in ways we have never really seen before. An early controlled study in September 2023 showed large-scale improvements on work tasks as a result of using AI, with time savings of more than 30% and higher-quality output. Add to that the near-immaculate test scores achieved by GPT-4, and it is obvious why AI use is already becoming common among students and workers, even if they are keeping it secret.

We also know that AI is going to change how we work and learn because it is affecting a set of workers who never really faced an automation shock before. Multiple studies show that the jobs most exposed to AI (and therefore the people whose jobs will make the hardest pivot as a result of AI) belong to educated and highly paid workers, and to those with the most creativity in their jobs. The pressure for organizations to take a stand on a technology that affects these workers will be immense, especially as AI-driven productivity gains become widespread. These tools are on their way to becoming deeply integrated into our work environments. Microsoft, for instance, has released Co-Pilot GPT-4 tools for its ubiquitous Office applications, even as Google does the same for its office tools.

As a result, a natural instinct among many managers might be to say "fire people, save money." But it doesn't need to be that way, and it shouldn't be. There are many reasons why companies should not turn efficiency gains into headcount or cost reduction. Companies that figure out how to use their newly productive workforce have the opportunity to dominate those who try to keep their post-AI output the same as their pre-AI output, just with fewer people. Companies that commit to maintaining their workforce will likely have employees as partners, who are happy to teach others about the uses of AI at work, rather than scared workers who hide AI for fear of being replaced. Psychological safety is critical to innovative team success, especially when confronted with rapid change. How companies use this extra efficiency is a choice, and a very consequential one.

There are hints buried in the early studies of AI about a way forward. Workers, while worried about AI, tend to like using it because it removes the most tedious and annoying parts of their job, leaving them with the most interesting tasks. So, even as AI removes some previously valuable tasks from a job, the work that is left can be more meaningful and more high value. But this is not inevitable, so managers and leaders must decide whether and how to commit themselves to reorganizing work around AI in ways that help, rather than hurt, their human workers. They need to ask: "What is my vision for how AI makes work better, rather than worse?"

Rather than just being worried about one giant AI apocalypse, we need to worry about the many small catastrophes that AI can bring. Unimaginative or stressed leaders may decide to use these new tools for surveillance and for layoffs. Educators may decide to use AI in ways that leave some students behind. And those are just the obvious problems.

But AI does not need to be catastrophic. Correctly used, AI can create local victories, where previously tedious or useless work becomes productive and empowering. Where students who were left behind can find new paths forward. And where productivity gains lead to growth and innovation.

The thing about a widely applicable technology is that decisions about how it is used are not limited to a small group of people. Many people in organizations will play a role in shaping what AI means for their team, their customers, their students, their environment. But to make those choices matter, serious discussions need to start in many places, and soon. We can't wait for decisions to be made for us, and the world is advancing too fast to remain passive.

Xi Jinping’s vision in supporting the artificial intelligence at home and abroad – Modern Diplomacy

Chinese President Xi Jinping's vision of the importance of cyberpower in achieving economic and military superiority, and his pledge to support technological cooperation between the member states of the BRICS, came during his opening of the ninth BRICS summit in the Chinese city of Xiamen. President Xi Jinping also encouraged young people to knock on the doors of advanced technology to achieve China's global leadership in various fields. It is worth noting that Chinese researchers contributed more than a thousand research papers related to the fields and technologies of artificial general intelligence between 2018 and 2024.

President Xi Jinping has also directed that China take a more comprehensive approach, pushing technology into virtually all layers of Chinese society. With the advancement of twenty-first century technology, digital methods of surveillance, monitoring and social control have become an integral part of Chinese society, as the government has used advances in artificial intelligence as a tool to maintain, expand and empower the state. The extent of Chinese interest in artificial intelligence has also prompted the emergence of almost daily Chinese products that use this advanced technology, whether from start-ups or from technology giants. The largest technology companies in China, such as Alibaba, Tencent and Baidu, enjoy the greatest chances of success in the Chinese market and globally, due to the large numbers of users they have and the wide range of services they provide.

When talking about the success of the Chinese experience and its relationship to artificial intelligence and investment, and about how Egypt might benefit from that experience, we find that the tremendous Chinese progress in the field of artificial intelligence led former US Deputy Secretary of Defense Robert Work to warn the United States of America that it must lead the next revolution in artificial intelligence technology in the face of Chinese progress, or become a victim of it. The tensions between Beijing and Washington have affected this sector, as American investment funds now invest less in projects that manufacture artificial intelligence chips outside the United States of America. US President Joe Biden has argued that America is in a long-term strategic competition with China, which is why American politicians, officials and military figures continually state that it is not only the United States of America that is in danger from China, but the entire democratic world. According to these American fears and official statements, the artificial intelligence revolution forms the basis of the current competition between democracy and authoritarianism in terms of values, amid worries that democracies may be unable to succeed in the era of the technological revolution in the face of China. Hence the statement from Bao, who heads internet and digital assets research for the Macquarie financial services group in China: "Only those with the strongest capabilities, like China, will survive."

China is one of the advanced countries in the field of artificial intelligence, having included it in the country's national strategic plan in 2016. The Chinese government subsequently issued many policies to support the development of artificial intelligence technologies, from the protection of capital and intellectual property to the development of human resources and international cooperation. The focus on theoretical studies supported the academic resources needed to integrate artificial intelligence into China's national economy. The effort expanded globally after the spread of the COVID-19 pandemic, and specialized, advanced Chinese programs were developed in the field of artificial intelligence for use in the industrial, investment and communications sectors.

In China, there is a close relationship between universities, research institutions and private companies: these universities cooperate closely to conduct research in the field of artificial intelligence and to transfer related technology, by implementing programs to train artificial intelligence talent, establishing research centers and development laboratories, and more. In 2017, China launched a document entitled the New Generation Artificial Intelligence Development Plan, setting rules intended to enable Beijing to become a leading country in this field. In 2021, Ethical Guidelines for Dealing with Artificial Intelligence in China were published, and the Chinese plan included methodological foundations for building the knowledge and information bases needed to develop artificial intelligence in China. The aforementioned Chinese report also identified the universities that have made the greatest contributions in this field; notably, five of the most productive sources of artificial intelligence research throughout China are academic institutions located in the capital, Beijing.

Many practical applications of artificial intelligence technology stand out in China, as the current general trend in the country seeks to integrate hardware and software to make artificial intelligence technologies more amenable to practical application, including the expansion of artificial intelligence chips as a new opportunity for players seeking to shape the future of the artificial intelligence industry. In July 2017, the Chinese government issued its new plan to develop the new generation of artificial intelligence, setting an ambitious agenda consisting of three different time stages and aiming to grow the value of artificial intelligence industries in the country to more than 150 billion Chinese yuan by the year 2020. A number of Chinese universities, alongside private companies, played a major role in supporting government planning to develop the artificial intelligence technology sector, and a number of Chinese experts and researchers play an increasing role in giant Chinese private sector companies, such as Alibaba, Baidu and Tencent, to achieve major breakthroughs in artificial intelligence technologies in the short and long term, especially as these companies have become strong competitors to American and Western companies working in this field.

At the present time, and since the spread of the COVID-19 pandemic, the Chinese government has succeeded in developing artificial intelligence technologies and using them in all areas of society. More than 400 million government and closed-circuit surveillance cameras have been installed at intersections, street corners, pedestrian corridors, parks, entertainment areas, commercial markets, shopping centres, entrances to office buildings, museums, tourist attractions and entertainment venues, sports stadiums, banks, bicycle parks, bus stations, railway stations, shipping docks and airports. China now leads in scientific research and development, and registers more patents than the United States. This has been recorded in reports in the Financial Times, the New York Times and Foreign Affairs on the extent of Chinese progress in artificial intelligence technology, now applied in the financial field, which threatens Western financial institutions. China also considers, according to the Made in China 2025 strategy, both genomics and the artificial intelligence sector to be priorities for research and development, with China requiring all companies and business institutions to participate in making progress in these two sectors, under special local laws that stipulate these companies must assist the general intelligence service, known as the Chinese Ministry of State Security, in developing these technologies.

Chinese scientists have also begun building City Mind technology, a project championed by the Chinese artificial intelligence scientist Gao and his advanced artificial intelligence laboratory, known as Ping Qing, located in the heart of the southern city of Shenzhen. The technology puts artificial intelligence systems at the heart of smart cities: it scans streets across China, from the wide avenues of Beijing to the streets of small cities, and collects and processes billions of pieces of information from complex networks of remote sensors, cameras and other devices that monitor traffic, human faces, voices and more. Beijing's push to lead artificial intelligence technology worldwide, far beyond the United States of America and the West, has raised severe American and Western concerns about the extent to which genomics and artificial intelligence in China can support the policies of the ruling Communist Party in what the United States perceives as its attempts to dominate the world, especially after the Beijing Genomics Institute became the largest institution operating in the genomics sector not only within China or Asia but in the world.

From here we understand that China seeks to lead the world in the field of artificial intelligence by 2030, a goal made clear in the official China Brain Project announced in 2016. The development of genomics, artificial intelligence and brain sciences in China is considered among the most important sectors for which the Chinese government allocates a special budget; they are also among the most important frontier areas, as they were named in the Chinese state's national plan, to be advanced according to a well-studied and systematic plan over a period of 15 years, starting in 2021 and ending by 2035.

As ‘The Matrix’ turns 25, the chilling artificial intelligence (AI) projection at its core isn’t as outlandish as it once seemed – TechRadar

Living in 1999 felt like standing on the edge of an event horizon. Our growing obsession with technology was spilling into an outpouring of hope, fear, angst and even apocalyptic distress in some quarters. The dot-com bubble was swelling as the World Wide Web began spreading like a Californian wildfire. The first cell phones had been making the world feel much more connected. Let's not forget the anxieties over Y2K that were escalating into panic as we approached the bookend of the century.

But as this progress was catching the imagination of so many, artificial intelligence (AI) was in a sorry state, only beginning to emerge from a debilitating second 'AI winter' that spanned 1987 to 1993.

Some argue this thawing process lasted as long as the mid-2000s. It was, indeed, a bleak period for AI research; it was a field that "for decades has overpromised and underdelivered", according to a report in the New York Times (NYT) from 2005.

Funding and interest were scarce, especially compared to the field's peak in the 1980s, with previously thriving conferences whittled down to pockets of diehards. In cinema, however, stories about AI were flourishing, with the likes of Terminator 2: Judgement Day (1991) and Ghost in the Shell (1995) building on decades of compelling feature films like Blade Runner (1982).

It was during this time that the Wachowskis penned the script for The Matrix, a groundbreaking tour de force that held up a mirror to humanity's increasing reliance on machines and challenged our understanding of reality.

It's a timeless classic, and its impact since its March 31, 1999 release has been sprawling. But the chilling plot at its heart, namely the rise of an artificial general intelligence (AGI) network that enslaves humanity, has remained consigned to fiction more than it has ever been considered a serious scientific possibility. With the heat of the spotlight now on AI, however, ideas like the Wachowskis' are beginning to feel closer to home than we had anticipated.

AI has become not just the scientific but the cultural zeitgeist, with large language models (LLMs) and the neural nets that power them cannonballing into the public arena. That dry well of research funding is now overflowing, and corporations see massive commercial appeal in AI. There's a growing chorus of voices, too, that feel an AGI agent is on the horizon.

People like the veteran computer scientist Ray Kurzweil have long anticipated that humanity would reach the technological singularity (the point at which an AI agent is just as smart as a human), with Kurzweil outlining his thesis in 'The Singularity is Near' (2005) and projecting 2029.

Disciples like Ben Goertzel have claimed it can come as soon as 2027. Nvidia's CEO Jensen Huang says it's "five years away", joining the likes of OpenAI CEO Sam Altman and others in predicting an aggressive and exponential escalation. Should these predictions be true, they will also introduce a whole cluster bomb of ethical, moral, and existential anxieties that we will have to confront. So as The Matrix turns 25, maybe it wasn't so far-fetched after all?

Sitting on tattered armchairs in front of an old boxy television in the heart of a wasteland, Morpheus shows Neo the "real world" for the first time. Here, he fills us in on how this dystopian vision of the future came to be. We're at the summit of a lengthy yet compelling monologue that began many scenes earlier with questions Morpheus poses to Neo, and therefore us, progressing to the choice Neo must make and crescendoing into the full tale of humanity's downfall and the rise of the machines.

Much like we're now congratulating ourselves for birthing advanced AI systems that are more sophisticated than anything we have ever seen, humanity in The Matrix was united in its hubris as it gave birth to AI. Giving machines that spark of life, the ability to think and act with agency, backfired. And after a series of political and social shifts, the machines retreated to Mesopotamia, known as the cradle of human civilization, and built the first machine city, called 01.

Here, they replicated and evolved, developing smarter and better AI systems. When humanity's economies began to fall, the humans struck the machine civilization with nuclear weapons to regain control. Because the machines were not as vulnerable to heat and radiation, the strike failed and instead represented the first stone thrown in the 'Machine War'.

Unlike in our world, the machines in The Matrix were solar-powered and harvested their energy from the sun. So humans decided to darken the sky and cut off that power source, and the machines turned to a new one, namely enslaving humans and draining their innate energy. They continued to fight until human civilization was enslaved, with the survivors placed into pods and connected to the Matrix, an advanced virtual reality (VR) simulation intended as an instrument of control while their thermal, bio-electric, and kinetic energy was harvested to sustain the machines.

"This can't be real," Neo tells Morpheus. It's a reaction we would all expect to have when confronted with such an outlandish truth. But, as Morpheus retorts: "What is real?' Using AI as a springboard, the film delves into several mind-bending areas including the nature of our reality and the power of machines to influence and control how we perceive the environment around us. If you can touch, smell, or taste something, then why would it not be real?

Strip away the barren dystopia, the self-aware AI, and strange pods that atrophied humans occupy like embryos in a womb, and you can see parallels between the computer program and the world around us today.

When the film was released, our reliance on machines was growing but not final. Much of our understanding of the world today, however, is filtered through the prism of digital platforms infused with AI systems like machine learning. What we know, what we watch, what we learn, how we live, how we socialize online: all of these modern human experiences are influenced in some way by algorithms that direct us in subtle but meaningful ways. Our energy isn't harvested, but our data is, and we continue to feed the machine with every tap and click.

Intriguingly, as Agent Smith tells Morpheus in the iconic interrogation scene, a revelatory moment in which the computer program betrays its emotions, the first version of the Matrix was not a world that closely resembled society as we knew it in 1999. Instead, it was a paradise in which humans were happy and free of suffering.

The trouble, however, is that this version of the Matrix didn't stick, and people saw through the ruse, rendering it redundant. That's when the machine race developed version 2.0. It seemed, as Smith lamented, that humans speak in the language of suffering and misery, and that without these qualities, the human condition is unrecognizable.

By every metric, AI is experiencing a monumental boom when you look at where the field once was. Startup funding surged from $670 million in 2011 to $72 billion a decade later, according to Statista. The biggest jump came during the COVID-19 pandemic, with funding rising to that $72 billion from $35 billion the previous year. This has since tapered off, falling to $40 billion in 2023, but the money pouring into research and development (R&D) continues to surge.

But things weren't always so rosy. In fact, in the early 1990s, during the second AI winter, the term "artificial intelligence" was almost taboo, according to Klondike, and was replaced with other terms such as "advanced computing" instead. This is simply one turbulent period in a long, nearly 75-year history of the field, starting with Alan Turing in 1950, when he pondered whether a machine could imitate human intelligence in his paper 'Computing Machinery and Intelligence'.

In the years that followed, a lot of pioneering research was conducted, but this early momentum fell by the wayside during the first AI winter, between 1974 and 1980, when issues including limited computing power prevented the field from advancing, and organizations like DARPA and national governments pulled funding from research projects.

Another boom in the 1980s, fuelled by the revival of neural networks, then collapsed once more into a bust with the second winter spanning six years up to 1993 and thawing well into the 21st century. Then, in the years that followed, scientists around the world were slowly making progress once more as funding restarted and AI caught people's imagination once again. But the research field itself was siloed, fragmented and disconnected, according to Pamela McCorduck writing in 'Machines Who Think' (2004). Computer scientists were focusing on competing areas to solve niche problems and specific approaches.

As Klondike highlights, they also used terms such as "advanced computing" to label their work, whereas we may now refer to the tools and systems they built as early precursors to the AI systems we use today.

It wasn't until 1995, four years before The Matrix hit theaters, that the needle in AI research really moved in a significant way. But you could already see signs the winter was thawing, especially with the creation of the Loebner Prize, an annual competition created by Hugh Loebner in 1990.

Loebner was "an American millionaire who had given a lot of money" and "who became interested in the Turing test," according to the recipient of the prize in 1997, the late British computer scientist Yorick Wilks, speaking in an interview in 2019. Although the prize wasn't particularly large $2,000 initially it showed that interest in building AI agents was expanding, and that it was being taken seriously.

The first major development of the decade came when computer scientist Richard Wallace developed the chatbot ALICE, which stood for artificial linguistic internet computer entity. Inspired by the famous ELIZA chatbot of the 1960s, which was the world's first major chatbot, ALICE, also known as Alicebot, was a natural language processing system that applied heuristic pattern matching to conversations with a human in order to provide responses. Wallace went on to win the Loebner Prize in 2000, 2001 and 2004 for creating and advancing this system, and a few years ago the New Yorker reported ALICE was even the inspiration for the critically acclaimed 2013 sci-fi hit Her, according to director Spike Jonze.
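
For a flavor of what heuristic pattern matching means in practice, here is a tiny ELIZA/ALICE-style sketch in Python; the patterns and canned replies are invented, whereas the real ALICE drew on thousands of AIML rules.

```python
# Pattern-matching chatbot sketch: a matched pattern in the user's input
# selects a canned response template. Rules here are purely illustrative.
import re

RULES = [
    (r"\bi feel (.+)", "Why do you feel {0}?"),
    (r"\bi am (.+)",   "How long have you been {0}?"),
    (r"\b(hello|hi)\b", "Hello. What would you like to talk about?"),
]

def respond(text: str) -> str:
    for pattern, template in RULES:
        match = re.search(pattern, text.lower())
        if match:
            return template.format(*match.groups())
    return "Tell me more."          # fallback when no pattern matches

print(respond("Hi there"))          # Hello. What would you like to talk about?
print(respond("I feel tired"))      # Why do you feel tired?
```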

Then, in 1997, AI hit a series of major milestones, starting with a showdown starring the reigning world chess champion and grandmaster Garry Kasparov, who in May that year went head to head in New York with the challenger of his life: a computing agent called 'Deep Blue' created by IBM. This was actually the second time Kasparov faced Deep Blue, after he beat the first version of the system in Philadelphia the year before, but Deep Blue narrowly won the rematch by 3.5 to 2.5.

"This highly publicized match was the first time a reigning world chess champion lost to a computer and served as a huge step towards an artificially intelligent decision making program," wrote Rockwell Anyoha in a Harvard blog.

It did something "no machine had ever done before", according to IBM, delivering its victory through "brute force computing power" and for the entire world to see as it was indeed broadcast far and wide. It used 32 processors to evaluate 200 chess positions per second. I have to pay tribute, Kasparov said. The computer is far stronger than anybody expected.
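
Brute-force game-tree search of this kind is usually built on minimax; the toy sketch below evaluates an abstract tree of scored positions and is only meant to illustrate the principle Deep Blue scaled up with specialized hardware, not IBM's actual implementation.

```python
# Minimax sketch: recursively score a game tree, assuming one player
# maximizes the evaluation while the opponent minimizes it.
def minimax(node, maximizing=True):
    # Leaf nodes carry a numeric evaluation of the position.
    if isinstance(node, (int, float)):
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Each inner list is a choice point; numbers are position evaluations.
game_tree = [
    [3, [5, 1]],     # option A
    [[6, 2], 4],     # option B
]
print(minimax(game_tree))   # 4: option B is best if both sides play optimally
```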

Another major milestone was the creation of NaturallySpeaking by Dragon Software in June 1997. This speech recognition software was the first universally accessible and affordable computer dictation system for PCs, if $695 (or $1,350 today) is your idea of affordable, that is. "This is only the first step, we have to do a lot more, but what we're building toward is to humanizing computers, make them very natural to use, so yes, even more people can use them," said CEO Jim Baker in a news report from the time. Dragon licensed the software to big names including Microsoft and IBM, and it was later integrated into the Windows operating system, signaling much wider adoption.

A year later, researchers at MIT released Kismet, a "disembodied head with gremlin-like features" that learns about its environment "like a baby" and relies entirely "upon its benevolent carers to help it find out about the world", according to Duncan Graham-Rowe writing in New Scientist at the time. Spearheaded by Cynthia Breazeal, this creation was one of the projects that fuelled MIT's AI research and secured its future. The machine could interact with humans, and simulated emotions by changing its facial expression, its voice and its movements.

This contemporary resurgence also extended to the language people used. The taboo around "artificial intelligence" was disintegrating, and terms like "intelligent agents" began slipping their way into the lexicon of the time, wrote McCorduck in 'Machines Who Think'. Robotics, intelligent AI agents, machines surpassing the wit of man, and more: it was these ingredients that, in turn, fed into the thinking behind The Matrix and the thesis at its heart.

When The Matrix hit theaters, there was a real dichotomy between movie-goers and critics. It's fair to say that audiences loved the spectacle, to say the least, with the film taking $150 million at the US box office, while a string of publications stood in line to lambast the script and the ideas in the movie. "It's Special Effects 10, Screenplay 0," wrote Todd McCarthy in his review in Variety. The Miami Herald rated it two-and-a-half stars out of five.

Chronicle senior writer Bob Graham praised Joe Pantoliano (who plays Cypher) in his SFGate review, "but even he is eventually swamped by the hopeless muddle that 'The Matrix' becomes." Critics wondered why people were so desperate to see a movie that had been so widely slated, and the Guardian pondered whether it was sci-fi fans "driven to a state of near-unbearable anticipation by endless hyping of The Phantom Menace, ready to gorge themselves on pretty much any computer graphics feast that came along?"

Veteran film director Quentin Tarantino, however, related more to the average audience member, sharing his experience in an interview with Amy Nicholson. "I remember the place was jam-packed and there was a real electricity in the air it was really exciting," he said, speaking of his outing to watch the movie on the Friday night after it was released.

"Then this thought hit me, that was really kind of profound, and that was: it's easy to talk about 'The Matrix' now because we know the secret of 'The Matrix', but they didn't tell you any of that in any of the promotions in any of the big movie trailer or any of the TV spots. So we were really excited about this movie, but we really didn't know what we were going to see. We didn't really know what to expect; we did not know the mythology at all I mean, at all. We had to discover that."

The AI boom of today is largely centered around an old technology, neural networks, even amid the incredible advancements in generative AI tools, namely the large language models (LLMs) that have captured the imagination of businesses and people alike.

One of the most interesting developments is the number of people who are becoming increasingly convinced that these AI agents are conscious, or have agency, and can think or even feel for themselves. One startling example is a former Google engineer who claimed a chatbot the company was working on was sentient. Although this is widely understood not to be the case, it's a sign of the direction in which we're heading.

Elsewhere, despite impressive systems that can generate images, and now video thanks to OpenAI's SORA, these technologies still all rely on the principles of neural networks, which many in the field don't believe will lead to the sort of human-level AGI in question, let alone a superintelligence that can modify itself and build even more intelligent agents autonomously. The answer, according to Databricks CTO Matei Zaharia, is a compound AI system that uses LLMs as one component. It's an approach backed by Goertzel, the veteran computer scientist who is working on his own version of this compound system with the aim of creating a distributed open source AGI agent within the next few years. He suggests that humanity could build an AGI agent as soon as 2027.

There are so many reasons why The Matrix has remained relevant, from the fact it was a visual feast to the rich and layered parallels one can draw between its world and ours.

Much of the backstory hasn't been a part of that conversation in the 25 years since its cinematic release. But as we look to the future, we can begin to see how a similar world might be unfolding.

We know, for example, that the digital realm we occupy, largely through social media channels, is influencing people in harmful ways. AI has also been a force for tragedy around the world, with Amnesty International claiming Facebook's algorithms played a role in pouring petrol on ethnic violence in Myanmar. And companies like Meta, although not always taken seriously, are attempting to build VR-powered alternate realities known as the metaverse.

With generative AI now a proliferating technology, groundbreaking research found recently that more than half (57.1%) of the internet comprises AI-generated content.

Throw increasingly capable tools like Midjourney and now SORA into the mix, and to what extent can we know what is real and what is generated by machines, especially when the output looks so lifelike and indistinguishable from human-generated content? The lack of sentience in the billions of machines around us is an obvious divergence from The Matrix. But that doesn't mean our own version of The Matrix is any less capable of manipulation.
