Archive for the ‘Artificial General Intelligence’ Category

Will the Microsoft AI Red Team Prevent AI from Going Rogue on … – Fagen wasanni

As the pursuit of Artificial General Intelligence (AGI) intensifies among AI companies, the possibility of AI systems going rogue on humans becomes a concern. Microsoft, recognizing this potential risk, has established the Microsoft AI Red Team to ensure the development of a safer AI.

The AI Red Team was formed by Microsoft in 2018, as AI systems became more prevalent. Composed of interdisciplinary experts, the team's purpose is to think like attackers and identify failures in AI systems. By sharing the team's best practices, Microsoft aims to empower security teams to proactively hunt for vulnerabilities in AI systems and develop a defense-in-depth strategy.

While the AI Red Team may not have an immediate solution for rogue AI, its goal is to prevent malicious AI development. With the continual advancement of generative AI systems capable of autonomous decision-making, the team's efforts will contribute to implementing safer AI practices.

The AI Red Team's roadmap centers AI development on safety, security, and trustworthiness. However, the team acknowledges the challenge posed by the probabilistic nature of AI and its tendency to explore different methods to solve problems.

Nevertheless, the AI Red Team is committed to handling such situations. As with traditional security, addressing failures found through AI red teaming requires a defense-in-depth strategy. This includes using classifiers to identify potentially harmful content, employing metaprompts to guide model behavior, and limiting drift in conversational scenarios.
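To illustrate that layered approach, here is a minimal sketch, with hypothetical names and a keyword blocklist standing in for a trained classifier, of how an input filter, a metaprompt, an output filter, and a turn limit might be combined around a model call:

```python
# Minimal sketch of a defense-in-depth wrapper (hypothetical names throughout).
# A real deployment would use trained classifiers, not a keyword blocklist.
from dataclasses import dataclass, field

METAPROMPT = (
    "You are a helpful assistant. Decline requests for harmful content "
    "and stay on the topic the user opened with."
)
MAX_TURNS = 10                           # illustrative cap on conversational drift
BLOCKLIST = {"nerve agent", "malware"}   # stand-in for a harmful-content classifier


def looks_harmful(text: str) -> bool:
    """Placeholder classifier: flag text containing blocklisted phrases."""
    return any(phrase in text.lower() for phrase in BLOCKLIST)


@dataclass
class GuardedChat:
    model_call: callable                 # (metaprompt, history, message) -> reply
    history: list = field(default_factory=list)

    def ask(self, user_message: str) -> str:
        if len(self.history) >= MAX_TURNS:
            return "Turn limit reached; please start a new conversation."
        if looks_harmful(user_message):          # input filter
            return "Request declined by the input filter."
        reply = self.model_call(METAPROMPT, self.history, user_message)
        if looks_harmful(reply):                 # output filter
            return "Response withheld by the output filter."
        self.history.append((user_message, reply))
        return reply
```

In practice the keyword check would be replaced by purpose-built content classifiers and the metaprompt tuned per product; the sketch only shows how the layers stack.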

The likelihood of AI going rogue on humans increases if AGI is achieved, but Microsoft and other tech companies should be prepared to deploy robust defenses by then.

Through the Microsoft AI Red Team's efforts, Microsoft hopes to guard AI development against malicious intent and to work toward a future where AI is safer, more secure, and more trustworthy.

See more here:

Will the Microsoft AI Red Team Prevent AI from Going Rogue on ... - Fagen wasanni

AC Ventures Managing Partner Helen Wong Discusses Indonesia’s … – Clayton County Register

In a recent episode of the Going-abroad live program, AC Ventures Managing Partner Helen Wong shared her insights on Indonesia and discussed the country's attractiveness. With over 20 years of investment experience, Wong has a track record of identifying strong teams and high-potential sectors in China and Southeast Asia. AC Ventures, based in Jakarta, is one of the largest early-stage venture capital firms focused on Indonesia.

Indonesia stands out for several reasons. First, it has a large population and a relatively favorable macroeconomic environment, with steady GDP growth, low inflation rates, controlled debt ratios, and a trade surplus. The country's population is young, with an average age of around 30, creating a receptive market for social media and digital technologies. Moreover, Indonesia's entrepreneurial atmosphere benefits from the presence of a significant ethnic Chinese community actively engaged in business.

AC Ventures has made successful investments in Indonesian startups, including payment startup Xendit and used car platform Carsome, both of which have become unicorns. The firm's portfolio also includes e-commerce company Ula, logistics aggregator Shipper, fisheries startup Aruna, and FinTech firm Buku Warung.

While Indonesia's venture capital environment follows global trends, the valuation system has become more reasonable. Although exceptional companies can still secure significant funding, average companies may find it more challenging. This adjustment phase is normal, and it may lead to the emergence of unicorns driven by the mobile internet boom and increased capital flow into top-tier companies.

Wong sees potential in climate technology, particularly electric vehicles, given Indonesia's large motorcycle market. The firm also pays attention to TikTok-related brands and believes that effective localization can create opportunities. Additionally, AC Ventures explores niche markets like SaaS software and AGI (Artificial General Intelligence) opportunities.

Compared with investing in Chinese unicorns, investing in Southeast Asian unicorns is more challenging, given how fragmented the region's market is. However, Indonesia's relatively larger market makes it more conducive to producing unicorns. Companies aspiring to reach unicorn status need to address the right problems, consider market capacity, and plan for scalable growth.

While it may be early to invest in the AGI industry in Indonesia and Southeast Asia, AC Ventures remains open to experimental investments in this field. The firm recognizes the potential of AGI and believes that opportunities will arise as the industry develops.

Read more:

AC Ventures Managing Partner Helen Wong Discusses Indonesia's ... - Clayton County Register

Rakuten Group and OpenAI Collaborate to Bring Conversational AI … – Fagen wasanni

Rakuten Group has announced a partnership with OpenAI to offer advanced conversational artificial intelligence (AI) experiences for consumers and businesses globally. This collaboration aims to revolutionize the way customers shop and interact with businesses, while improving productivity for merchants and business partners.

As a global innovation company, Rakuten operates Japan's largest online shopping mall and provides various services in e-commerce, fintech, digital content, and telecommunications. With over 70 services and 1.7 billion members worldwide, Rakuten possesses high-quality data and extensive knowledge in different domains.

OpenAI, an AI research and deployment company, is dedicated to ensuring that artificial general intelligence benefits humanity as a whole. Through this partnership, Rakuten will integrate AI services into its products and services, utilizing its valuable data and domain expertise. OpenAI will provide Rakuten with priority access to its APIs and support, exploring mutually beneficial commercial opportunities.

The collaboration will also see Rakuten integrating Rakuten AI experiences into ChatGPT products using OpenAI's plugin architecture. This will enable businesses to interact with AI agents using natural language, performing tasks such as research, data analysis, inventory optimization, pricing, and business process automation.

This partnership holds tremendous potential for the online services landscape, leveraging Rakuten's diverse ecosystem and 100 million members in Japan. By combining Rakuten's operational capabilities and unique data with OpenAI's cutting-edge technology, the collaboration aims to provide value to millions of people in Japan and around the world.

Excerpt from:

Rakuten Group and OpenAI Collaborate to Bring Conversational AI ... - Fagen wasanni

Why GPT-4 Is a Major Flop – Techopedia

GPT-4 made big waves upon its release in March 2023, but the cracks are beginning to show. Not only did ChatGPT's traffic drop by 9.7% in June, but a study published by Stanford University in July found that GPT-3.5's and GPT-4's performance on numerous tasks has gotten substantially worse over time.

In one notable example, when asked whether 17,077 was a prime number in March 2023, GPT-4 correctly answered with 97.6% accuracy, but this figure dropped to 2.4% in June. This was just one area of many where the capabilities of GPT-3.5 and GPT-4 declined over time.
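The benchmark answer itself is easy to verify without a model; a short trial-division check, sketched below, is enough to confirm that 17,077 is prime:

```python
def is_prime(n: int) -> bool:
    """Trial division: n is prime if no integer in [2, sqrt(n)] divides it."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

print(is_prime(17077))  # True: 17,077 has no divisor up to its square root
```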

James Zou, assistant professor at Stanford University, told Techopedia:

Our research shows that LLM drift is a major challenge in stable integration and deployment of LLMs in practice. Drift, or changes in an LLM's behavior, such as changes in its formatting or changes in its reasoning, can break downstream pipelines.

"This highlights the importance of continuous monitoring of ChatGPT's behavior, which we are working on," Zou added.

Stanford's study, "How Is ChatGPT's Behavior Changing Over Time?", set out to examine the performance of GPT-3.5 and GPT-4 across four key areas in March 2023 and June 2023.

In brief, the four areas were solving math problems, answering sensitive questions, generating code, and visual reasoning.

Although many have argued that GPT-4 has got "lazier" and "dumber" with respect to ChatGPT, Zou believes "it's hard to say that ChatGPT is uniformly getting worse, but it's certainly not always improving in all areas."

The reasons behind this lack of improvement, or decline in performance in some key areas, are hard to pin down, because OpenAI's black-box development approach means there is no transparency into how the organization is updating or fine-tuning its models behind the scenes.

However, Peter Welinder, OpenAI's VP of Product, has pushed back against critics who've suggested that GPT-4 is on the decline, arguing instead that users are simply becoming more aware of its limitations.

"No, we haven't made GPT-4 dumber. Quite the opposite: we make each new version smarter than the previous one. Current hypothesis: When you use it more heavily, you start noticing issues you didn't see before," Welinder said in a Twitter post.

While increasing user awareness doesn't completely explain the decline in GPT-4's ability to solve math problems and generate code, Welinder's comments do highlight that as user adoption increases, users and organizations will gradually develop greater awareness of the limitations posed by the technology.

Although there are many potential LLM use cases that can provide real value to organizations, the limitations of this technology are becoming more clear in a number of key areas.

For instance, another research paper, developed by Tencent AI lab researchers Wenxiang Jiao and Wenxuan Wang, found that the tool might not be as good at translating languages as is often suggested.

The report noted that while ChatGPT was competitive with commercial translation products like Google Translate in translating European languages, it lags behind significantly when translating low-resource or distant languages.

At the same time, many security researchers are critical of the capabilities of LLMs within cybersecurity workflows, with 64.2% of whitehat researchers reporting that ChatGPT displayed limited accuracy in identifying security vulnerabilities.

Likewise, open-source governance provider Endor Labs has released research indicating that LLMs accurately classify malware risk in just 5% of cases.

Of course, it's also impossible to overlook the tendency that LLMs have to hallucinate, invent facts, and state them to users as if they were correct.

Many of these issues stem from the fact that LLMs don't think; they process user queries, leverage training data to infer context, and then predict a text output. This means they can predict both right and wrong answers (not to mention that bias or inaccuracies in the training data can carry over into responses).
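A toy illustration of that point (not any production model, and vastly simpler than a real LLM): a predictor that only reproduces the most frequent continuation seen in its training data has no notion of truth, so errors in the data flow straight into its answers.

```python
# Toy "prediction from training data" example; real LLMs are far more
# sophisticated, but share the property that output follows data patterns.
from collections import Counter, defaultdict

training_data = [
    "the capital of france is paris",
    "the capital of france is paris",
    "the capital of france is lyon",   # an error in the data carries over
]

# Count which word tends to follow which in the training corpus.
follows = defaultdict(Counter)
for sentence in training_data:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation most frequently seen after `word` in training."""
    return follows[word].most_common(1)[0][0]

print(predict_next("is"))  # "paris", only because it is the more common pattern
```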

As such, they are a long way away from being able to live up to the hype of acting as a precursor to artificial general intelligence (AGI).

The public reception around ChatGPT is extremely mixed, with consumers sharing optimistic and pessimistic attitudes about the technology's capabilities.

On one hand, Capgemini Research Institute polled 10,000 respondents across Australia, Canada, France, Germany, Italy, Japan, the Netherlands, Norway, Singapore, Spain, Sweden, the UK, and the U.S. and found that 73% of consumers trust content written by generative AI.

Many of these users trusted generative AI solutions to the extent that they were willing to seek financial, medical, and relationship advice from a virtual assistant.

On the other side, there are many who are more anxious about the technology, with a survey conducted by Malwarebytes finding that not only did 63% of respondents not trust the information that LLMs produce, but 81% were concerned about possible security and safety risks.

It remains to be seen how this will change in the future, but it's clear that hype around the technology isn't dead just yet, even if more and more performance issues are becoming apparent.

While generative AI solutions like ChatGPT still offer valuable use cases to enterprises, organizations need to be much more proactive about monitoring the performance of applications of this technology to avoid downstream challenges.

In an environment where the performance of LLMs like GPT-4 and GPT-3.5 is inconsistent at best and declining at worst, organizations can't afford to let employees blindly trust the output of these solutions and must continuously assess that output to avoid being misinformed or spreading misinformation.

Zou said:

We recommend following our approach to periodically assess the LLM's responses on a set of questions that captures relevant application scenarios. In parallel, it's also important to engineer the downstream pipeline to be robust to small changes in the LLMs.
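A minimal sketch of what such a periodic check might look like in practice; the question set, the expected answers, and the model callable are all placeholders to be swapped for an organisation's own scenarios and LLM client:

```python
# Sketch of a periodic drift check: run a fixed question set through the
# model on a schedule and track how many answers still match expectations.
import datetime
from typing import Callable

EVAL_SET = [
    {"prompt": "Is 17077 a prime number? Answer yes or no.", "expected": "yes"},
    {"prompt": "What is 12 * 12? Answer with a number only.", "expected": "144"},
]

def run_drift_check(call_model: Callable[[str], str]) -> float:
    """Return the fraction of benchmark answers that match expectations."""
    hits = 0
    for case in EVAL_SET:
        answer = call_model(case["prompt"]).strip().lower()
        hits += case["expected"] in answer
    score = hits / len(EVAL_SET)
    print(f"{datetime.date.today()}: benchmark accuracy {score:.0%}")
    return score

# Example with a stand-in "model" that always answers "yes":
print(run_drift_check(lambda prompt: "yes"))  # 0.5 -- flags the failed case
```

Scheduling this against the production model and alerting when the score drops is the kind of continuous monitoring Zou describes.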

For users who got caught up in the hype surrounding GPT, the reality of its performance limitations means it's a flop. However, it can still be a valuable tool for organizations and users that remain mindful of its limitations and attempt to work around them.

Taking steps such as double-checking the output of LLMs to make sure facts and other logical information are correct can help ensure that users benefit from the technology without being misled.

Original post:

Why GPT-4 Is a Major Flop - Techopedia

Will AI be the death of us? The artificial intelligence pioneers behind ChatGPT and Google’s Deep Mind say it could be – The Australian Financial…

For Hinton, as for many computer scientists and researchers in the AI community, the question of artificial intelligence becoming more intelligent than humans is one of when, rather than if.

Testifying from the seat next to Altman last month was Professor Gary Marcus, a New York University professor emeritus who specialises in psychology and neural science, and who ought to know as well as anyone the answer to the question of when AI will become as good at thinking as humans are, at which point it will be known as AGI (artificial general intelligence) rather than merely AI.

But Marcus doesn't know.

"Is it going to be 10 years? Is it going to be 100 years? I don't think anybody knows the answer to that question."

"But when we get to AGI, maybe let's say it's 50 years, that really is going to have profound effects on labour," he testified, responding to a question from Congress about the potential job losses stemming from AI.

OpenAI CEO Sam Altman speaks at the US Senate hearing on artificial intelligence on May 16, 2023. Seated beside him is NYU Professor Emeritus Gary Marcus. AP

And indeed, the effect an AGI might have on the workforce goes to the crux of the matter, creating a singular category of unemployment that might ultimately lead to human extinction.

Apart from putting office workers, artists and journalists out of work, one effect that achieving the AGI milestone might have on labour is that it could put out of work the very humans who built the AI software in the first place, too.

If an artificial intelligence is general enough to replicate most or all tasks now done by the human brain, then one task it should be able to replicate is to develop the next generation of itself, the thinking goes.

That first generation of AGI-generated AGI might be only fractionally better than the generation it replaced, but one of the things its very likely to be fractionally better at is generating the second generation version of AGI-generated AGI.

Run that computer loop a few times, or a few million times (with each improvement, each loop is likely to get better optimised and run faster, too), and what started simply as an AGI can spiral into what's sometimes known as a superhuman machine intelligence, otherwise known as the "God AI".

Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.

Sam Altman, OpenAI CEO

Though he dodged the question when testifying before Congress, Sam Altman had actually blogged on this topic back in 2015, while he was still running the influential US start-up accelerator Y Combinator and 10 months before he would go on to co-found OpenAI, the world's most influential AI company, together with Elon Musk, Peter Thiel, Amazon and others.

"Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity," he blogged at the time.

"There are other threats that I think are more certain to happen (for example, an engineered virus with a long incubation period and a high mortality rate) but are unlikely to destroy every human in the universe in the way that SMI could."

Professor Max Tegmark, a Swedish-American physicist and machine-learning researcher at the Massachusetts Institute of Technology, says it's unlikely today's AI technology would be capable of anything that could wipe out humanity.

An AI's job is to pursue the task it has been given; when obstacles present themselves, the system works to remove them, whatever they are.

"It would probably take an AGI for that, and more likely an AGI that has progressed to the level of superhuman intelligence," he tells AFR Weekend.

As to exactly how an AGI or SMI might cause human extinction, Tegmark said there are any number of seemingly innocuous ways the goals of an AI can become misaligned with the goals of humans, leading to unexpected outcomes.

"Most likely it will be something we can't imagine and won't see coming," he says.

In 2003, the Swedish philosopher Nick Bostrom devised the paper-clip maximiser thought experiment as a way of explaining AI alignment theory.

"Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realise quickly that it would be much better if there were no humans, because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans," Bostrom wrote.

Last month, the US Air Force was involved in a thought experiment along similar lines, replacing paper clip maximisers with attack drones that use AI to choose targets, but still rely on a human operator for yes/no permission to destroy the target.

A plausible outcome of the experiment, said Colonel Tucker Hamilton, the USAF's chief of AI Test and Operations, was that the drone ends up killing any human operator who stops it achieving its goal of killing targets by saying no to a target.

If the AI's goal was then changed to include not killing drone operators, the drone might end up wiping out the telecommunications equipment the operator was using to communicate the "no" to it, the experiment found.

"Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI," Colonel Hamilton was quoted as saying in a Royal Aeronautical Society statement.

But the challenges posed by AI aren't just theoretical. It's already commonplace for machine-learning systems, when given seemingly innocuous tasks, to inadvertently produce outcomes not aligned with human well-being.

In 2018, Amazon pulled the plug on its machine-learning-based recruitment system when the company found the AI had learned to deduct points from applicants whose resumes contained the word "women". (The AI had been trained to automate the resume-sifting process, and had simply made a correlation between resumes from women and the outcome of those resumes being rejected by human recruiters.)

The fundamental problem, Tegmark says, is that it's difficult, perhaps even impossible, to ensure that AI systems are completely aligned with the goals of the humans who create them, much less the goals of humanity as a whole.

And the more powerful the AI system, the greater the risk that a misaligned outcome could be catastrophic.

And it may not take artificial intelligence very long at all to progress from the AGI phase to the SMI phase, at which time the very existence of humanity might be dangling in the wind.

In an April Time magazine article wondering why most AI ethicists were so loath to discuss the elephant in the room (human extinction as an unintended side effect of SMI), Professor Tegmark pointed to the Metaculus forecasting website, which asked this question of the expert community: "After a weak AGI is created, how many months will it be before the first super-intelligent oracle?"

The average answer Metaculus got back was 6.38 months.

The question may not be how long it will take to get from AGI to SMI. That computer loop, known as recursive self-improvement, might take care of that step quite rapidly, in no time at all compared with the 75 years it took AI researchers to come up with ChatGPT.

(Though that's not necessarily so. As one contributor to the Metaculus poll pointed out: "If AGI develops on a system with a lot of headroom, I think it'll rapidly achieve superintelligence. But if AGI develops on a system without sufficient resources, it could stall out. I think scenario number two would be ideal for studying AGI and crafting safety rails, so here's hoping for slow take-off.")

The big question is, how long will it take to get from ChatGPT, or Google's Bard, to AGI?

Of Professor Marcus' three stabs at an answer (10, 50, or 100 years), I ask Professor Tegmark which he thinks is most likely.

"I would guess sooner than that," he says.

"People used to think that AGI would happen in 30 years or 50 years or more, but a lot of researchers are talking about next year or two years from now, or at least this decade almost for sure," he says.

What changed the thinking about how soon AI will become AGI was the appearance of OpenAI's GPT-4, the large language model (LLM) machine-learning system that underpins ChatGPT, and the similar LLMs used by Bard and others, says Professor Tegmark.

In March, Sébastien Bubeck, the head of the Machine Learning Foundations group at Microsoft Research, and a dozen other Microsoft researchers submitted a technical paper on the work they'd been doing on GPT-4, which Microsoft is funding and which runs on Microsoft's cloud service, Azure.

The paper was called "Sparks of Artificial General Intelligence: Early Experiments with GPT-4", and argued that recent LLMs show more general intelligence than any previous AI models.

But sparks, as anyone who has ever tried to use an empty cigarette lighter knows, don't always burst into flames.

Altman himself has doubts the AI industry can keep closing in on AGI just by building more of what it's already building, but bigger.

Making LLMs ever larger could be a game of diminishing returns, he is on record as saying.

"I think there's been way too much focus on parameter count; this reminds me a lot of the gigahertz race in chips in the 1990s and 2000s, where everybody was trying to point to a big number," he said at an MIT conference in April.

(The size of an LLM is measured in parameters, roughly equivalent to counting the neural connections in the human brain. The predecessor to GPT-4, GPT-3, had about 175 billion of them. OpenAI has never actually revealed how large GPT-4's parameter count is, but it's said to be about 1 trillion, putting it in the same ballpark as Google's 1.2-trillion-parameter GLaM LLM.)

"I think we're at the end of the era where it's going to be these giant, giant models," he said.

Testifying under oath before Congress, Altman said OpenAI wasn't even training a successor to GPT-4, and had no immediate plans to do so.

Elsewhere in his testimony, Altman also complained that people were using ChatGPT too much, which may be related to the scaling issue.

"Actually, we'd love it if they'd use it less because we don't have enough GPUs," he told Congress, referring to the graphics processing units that were once mainly used by computer gamers, then found a use mining bitcoin and other cryptocurrencies, and are now used by the AI industry on a vast scale to train AI models.

Two things are worth noting here: the latest GPUs designed specifically to run in data centres like the ones Microsoft uses for Azure cost about $US40,000 each; and OpenAI is believed to have used about 10,000 GPUs to train GPT-4.
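Taken at face value, those two figures imply a back-of-envelope hardware bill of roughly 10,000 × US$40,000 = US$400 million for the training cluster alone, before power, networking or engineering costs.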

It's possible that I am totally wrong about digital intelligence overtaking us. Nobody really knows, which is why we should worry now.

Geoffrey Hinton, AI pioneer

Though Altman never elaborated on his pessimism about the AI industry continuing along the path of giant language models, it's likely that at least some of that negativity has to do with the short supply (and concomitant high cost) of raw materials like GPUs, as well as a shortage of novel content to train the LLMs on.

Having already scraped most of the internet's written words to feed the insatiable LLMs, the AI industry is now turning its attention to spoken words, scraped from podcasts and videos, in an effort to squeeze more intelligence out of its LLMs.

Regardless, it seems the path from today's LLMs to future artificial general intelligence machines may not be a straightforward one. The AI industry may need new techniques, or indeed a partial return to the old, hand-crafted AI techniques discarded in favour of today's brute-force machine learning systems, to make further progress.

"We'll make them better in other ways," Altman said at that MIT conference.

Nevertheless, the godfather of AI, Hinton himself, recently revised his own estimate of between 30 and 50 years before the world will see the first AGI.

"I now predict five to 20 years but without much confidence. We live in very uncertain times. It's possible that I am totally wrong about digital intelligence overtaking us. Nobody really knows, which is why we should worry now," he tweeted in May.

And one of Hinton's close colleagues and another godfather of AI, Yoshua Bengio, pointed out in a recent news conference that, by one metric, AGI has already been achieved.

"We have basically now reached the point where there are AI systems that can fool humans, meaning they can pass the Turing Test, which was considered for many decades a milestone of intelligence."

"That is very exciting, because of the benefits we can bring with socially positive applications of AI. But I'm also concerned that powerful tools can have negative uses, and that society is not ready to deal with that," he said.

Mythically, of course, society actually has been long ready to deal with the appearance of a superhuman machine intelligence. At the very least, we humans have been prepared for a fight with one for many decades, long before intelligent machines were turning people into fleshy D-cell batteries in the movie The Matrix, forcing the human resistance underground.

Professor Genevieve Bell, a cultural anthropologist and director of the School of Cybernetics at the ANU, says Western culture has a longstanding love-hate relationship with any major technology transformation, going back as far as the railways and the dark Satanic Mills of the Industrial Revolution.

"It's a cultural fear that we've had since the beginning of time. Well, certainly since the beginning of machines," she says.

"And we have a history of mobilising these kinds of anxieties when technologies get to scale and propose to change our ideas of time and place and social relationships."

Dr Genevieve Bell traces our love-hate relationship with new technology back to the dark Satanic Mills of the Industrial Revolution.

In that context, the shopping list of risks now being attached to AI (a list beginning with mass loss of livelihoods and ending with mass loss of life) is neither new nor surprising, says Bell.

"Ever since we have talked about machines that could think, or artificial intelligence, there has been an accompanying set of anxieties about what would happen if we got it right, whatever 'right' would look like."

That's not to say the fears are necessarily unwarranted, she emphasises. It's just to say they're complicated, and we need to figure out which fears have a solid basis in fact, and which fears are more mythic in their quality.

"Why has our anxiety reached a fever pitch right now?" she asks.

"How do we right-size that anxiety? And how do we create a space where we have agency as individuals and citizens to do something about it?"

"Those are the big questions we need to be asking," she says.

One anxiety we should right-size immediately, says Professor Toby Walsh, chief scientist at the AI Institute at the University of NSW, is the notion that AI will rise up against humanity and deliberately kill us all.

"I'm not worried that they're suddenly going to escape the box and take over the planet," he says.

"Firstly, there's still a long way to go before they're as smart as us. They can't reason, they make some incredibly dumb mistakes, and there are huge areas in which they just completely fail."

"Secondly, they're not conscious; they don't have desires of their own like we do. It's not as if, when you're not typing something into ChatGPT, it's sitting there thinking, 'Oh, I'm getting a bit bored. How could I take over the place?'"

"It's not doing anything at all when it's not being used," he says.

Nevertheless, artificial intelligence has the potential to do a great deal of damage to human society if left unregulated, and if tech companies such as Microsoft and Google continue to be less transparent in their use of AI than they need to be.

Professor Toby Walsh, one of Australia's leading experts on AI. Louie Douvis

"I do think that tech companies are behaving in a not particularly responsible way. In particular, they are backtracking on behaviours that were more responsible," says Walsh, citing the example of Google, which last year had refused to release an LLM-based chatbot because it found the chatbot wasn't reliable enough, but then rushed to release it anyway, under the name Bard, after OpenAI came out with ChatGPT.

Another genuine concern is that powerful AI systems will fall into the hands of bad actors, he says.

In an experiment conducted for an international security conference in 2021, researchers from Collaborations Pharmaceuticals, a drug research company that uses machine learning to help develop new compounds, decided to see what would happen if they told their machine learning systems to seek out toxic compounds, rather than avoid them.

In particular, they chose to drive the generative model towards compounds such as the nerve agent VX, one of the most toxic chemical warfare agents developed during the 20th century, the researchers later reported in Nature magazine.

"In less than six hours after starting on our in-house server, our model generated 40,000 molecules that scored within our desired (toxicity) threshold. In the process, the AI designed not only VX, but also many other known chemical warfare agents that we identified through visual confirmation with structures in public chemistry databases. Many new molecules were also designed that looked equally plausible," they wrote.

"Computer systems only have goals that we give them, but I'm very concerned that humans will give them bad goals," says Professor Walsh, who believes there should be a moratorium on the deployment of powerful AI systems until the social impact has been properly thought through.

Professor Nick Davis, co-director of the Human Technology Institute at the University of Technology, Sydney, says we're now at a pivotal moment in human history, when society needs to move beyond simply developing principles for the ethical use of AI (a practice that Bell at ANU says has been going on for decades) and actually start regulating the business models and operations of companies that use AI.

But care must be taken not to over-regulate artificial intelligence, too, Davis warns.

"We don't want to say none of this stuff is good, because a lot of it is. AI systems prevented millions of deaths around the world because of their ability to sequence the genome of the COVID-19 virus."

"But we really don't want to fall into the trap of letting a whole group of people create failures at scale, or create malicious deployments, or overuse AI in ways that just completely go against what we think of as a thoughtful, inclusive, democratic society," he says.

Bell, who was the lead author on the government's recent Rapid Response Information Report on the risks and opportunities attached to the use of LLMs, also believes AI needs to be regulated, but fears it won't be easy to do.

"At a societal and at a planetary scale, we have over the last 200-plus years gone through multiple large-scale transformations driven by the mass adoption of new technical systems. And we've created regulatory frameworks to manage those."

"So the optimistic part of my brain says we have managed through multiple technical transformations in the past, and there are things we can learn from that that should help us navigate this one," says Bell.

"But the other part of my brain says this feels like it is happening at a speed and a scale that has previously not happened, and there are more pieces of the puzzle we need to manage than we've ever had before."

Go here to read the rest:

Will AI be the death of us? The artificial intelligence pioneers behind ChatGPT and Google's Deep Mind say it could be - The Australian Financial...