AI & robotics briefing: Why superintelligent AI won’t sneak up on us – Nature.com

Hello Nature readers, would you like to get this Briefing in your inbox free every week? Sign up here.

Some researchers think that AI could eventually achieve general intelligence, matching and even exceeding humans on most tasks. Credit: Charles Taylor/Alamy

Sudden jumps in large language models' apparent intelligence don't mean that they will soon match or even exceed humans on most tasks. Signs that had been interpreted as emerging artificial general intelligence disappear when the systems are tested in different ways, scientists reported at the NeurIPS machine-learning conference in December. "Scientific study to date strongly suggests most aspects of language models are indeed predictable," says computer scientist and study co-author Sanmi Koyejo.

Nature | 4 min read

Reference: NeurIPS 2023 Conference paper

A robotic chemist might be the ideal laboratory partner: it scours the literature for instructions, designs experiments and then carries them out to make compounds including paracetamol and aspirin. The system, called Coscientist, is powered by several large language models, including GPT-4 and Claude. "It can do most of the things that really well-trained chemists can do," says Coscientist co-developer Gabe Gomes. The team hasn't yet made Coscientist's full code freely available, because some applications are likely to be dangerous.

Nature | 4 min read

Reference: Nature paper

A large language model can predict people's health, earnings and likelihood of a premature death. The system was trained on the equivalent of sentences that were generated from the work and health records of around 6 million people in Denmark. For example, write the researchers, a sentence can capture information along the lines of: "In September 2012, Francisco received twenty thousand Danish kroner as a guard at a castle in Elsinore." When asked to predict whether a person in the database had died by 2020, it was accurate almost 80% of the time, outperforming other state-of-the-art models by a wide margin. Some scientists caution that the model might not work for other populations and that biases in the data could confound predictions.

Science | 4 min read

Reference: Nature Computational Science paper

Research into the boundaries between conscious and unconscious systems is urgently needed, say a trio of scientists. In comments to the United Nations, theoretical computer scientist Lenore Blum and mathematicians Jonathan Mason and Johannes Kleiner, all of the Association for Mathematical Consciousness Science, call for more funding for the effort. Some researchers predict that AI with human-like intelligence is 5–20 years away, yet there is no standard method to assess whether machines are conscious or whether they share human values. We should also consider the possible needs of conscious systems, the researchers say.

Nature | 6 min read

Image credit: Y. Yamauchi et al./Front. Robot. AI (CC-BY-4.0)

Reference: Frontiers in Robotics and AI paper

Whether machine-learning algorithms run on quantum computers can be faster or better than those run on classical computers remains an open question. Some scientists hope that quantum AI could spot patterns in data that classical varieties miss, even if it isn't faster. This could particularly be the case for data that are already quantum, for example those coming from particle colliders or superconductivity experiments. "Our world inherently is quantum-mechanical. If you want to have a quantum machine that can learn, it could be much more powerful," says physicist Hsin-Yuan Huang.

Nature | 9 min read

This year could see the decline of the term "large language model" as systems increasingly deal in images, audio, video, molecular structures or mathematics. There might even be entirely new types of AI that go beyond the transformer architecture used by almost all generative models so far. At the same time, proprietary AI models will probably continue to outperform open-source approaches. And generating synthetic content has become so easy that some experts are expecting more misinformation, deepfakes and other malicious material. "What I most hope for in 2024, though it seems slow in coming, is stronger AI regulation," says computer scientist Kentaro Toyama.

Forbes | 25 min read & The Conversation | 7 min read

"We've never before built machines where even the creators don't know how they will behave, or why," says Jessica Newman, director of the AI Security Initiative. That's particularly worrying when AI is involved in high-stakes decisions, such as in healthcare and policing. Researchers and policymakers agree that algorithms need to become more explainable, though it's still unclear what this means in practice. For AI to be fair, reliable and safe, we need to go beyond opening the black box, says Newman, to ensure there is accountability for any harm that's caused.

Nature Podcast | 38 min listen

Subscribe to the Nature Podcast on Apple Podcasts, Google Podcasts or Spotify, or use the RSS feed.

Psychologist Ada Kaluzna says that using AI in her scientific writing could disrupt her ability to learn and think creatively. (Nature | 5 min read)

Happy new year! Today, I'm mesmerized by this short documentary about AI art, made (in large part) by AI. "In truth, there is never going to be a first truly AI-generated documentary because it always will involve labour of some kind," says filmmaker Alan Warburton. "Labour is what makes it watchable."

Help this newsletter have a great start to 2024 by sending your feedback to ai-briefing@nature.com.

Thanks for reading,

Katrina Krämer, associate editor, Nature Briefing

With contributions by Flora Graham

Want more? Sign up to our other free Nature Briefing newsletters:

Nature Briefing – our flagship daily e-mail: the wider world of science, in the time it takes to drink a cup of coffee

Nature Briefing: Anthropocene – climate change, biodiversity, sustainability and geoengineering

Nature Briefing: Cancer – a weekly newsletter written with cancer researchers in mind

Nature Briefing: Translational Research – covers biotechnology, drug discovery and pharma
