Archive for the ‘Artificial General Intelligence’ Category

AI & robotics briefing: Why superintelligent AI won’t sneak up on us – Nature.com

Hello Nature readers, would you like to get this Briefing in your inbox free every week? Sign up here.

Some researchers think that AI could eventually achieve general intelligence, matching and even exceeding humans on most tasks. Credit: Charles Taylor/Alamy

Sudden jumps in large language models' apparent intelligence don't mean that they will soon match or even exceed humans on most tasks. Signs that had been interpreted as emerging artificial general intelligence disappear when the systems are tested in different ways, scientists reported at the NeurIPS machine-learning conference in December. "Scientific study to date strongly suggests most aspects of language models are indeed predictable," says computer scientist and study co-author Sanmi Koyejo.

Nature | 4 min read

Reference: NeurIPS 2023 Conference paper
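The idea that apparent "emergence" can be an artefact of how ability is measured can be illustrated with a toy calculation. This is my own sketch, not the study's code, and all numbers are hypothetical: if per-token accuracy improves smoothly with scale, an all-or-nothing exact-match score over many tokens still shows a sudden jump.

```python
# Toy illustration (not the NeurIPS study's code): a smoothly improving
# per-token accuracy, scored with an all-or-nothing exact-match metric,
# can look like sudden "emergence". All numbers are hypothetical.

def exact_match(per_token_acc: float, k: int) -> float:
    """Probability that all k tokens of an answer are correct at once."""
    return per_token_acc ** k

# Hypothetical model scales with smoothly improving per-token accuracy.
scales = [1, 2, 4, 8, 16, 32]
per_token = [0.50, 0.60, 0.70, 0.80, 0.90, 0.97]

for scale, p in zip(scales, per_token):
    # Continuous metric: gradual. Discontinuous metric: apparent jump.
    print(f"scale {scale:2d}x  per-token {p:.2f}  exact-match(k=10) {exact_match(p, 10):.3f}")
```

Under the continuous metric the model improves steadily, while the exact-match column stays near zero before shooting up at the largest scales, even though nothing discontinuous happened underneath. This is the flavour of measurement effect the researchers describe.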

A robotic chemist might be the ideal laboratory partner: it scours the literature for instructions, designs experiments and then carries them out to make compounds including paracetamol and aspirin. The system, called Coscientist, is powered by several large language models, including GPT-4 and Claude. "It can do most of the things that really well-trained chemists can do," says Coscientist co-developer Gabe Gomes. The team hasn't yet made Coscientist's full code freely available, because some applications are likely to be dangerous.

Nature | 4 min read

Reference: Nature paper

A large language model can predict people's health, earnings and likelihood of a premature death. The system was trained on the equivalent of sentences that were generated from the work and health records of around 6 million people in Denmark. For example, write the researchers, a sentence can capture information along the lines of "In September 2012, Francisco received twenty thousand Danish kroner as a guard at a castle in Elsinore". When asked to predict whether a person in the database had died by 2020, it was accurate almost 80% of the time, outperforming other state-of-the-art models by a wide margin. Some scientists caution that the model might not work for other populations and that biases in the data could confound predictions.

Science | 4 min read

Reference: Nature Computational Science paper
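The "records as sentences" idea can be sketched in a few lines. This is a minimal illustration, not the paper's code: the field names and the sentence template here are my own assumptions, chosen to reproduce the article's example.

```python
# Minimal sketch of serializing a structured life-event record into a
# sentence a language model could be trained on, as the article
# describes. Field names and the template are illustrative assumptions,
# not the paper's actual data format.

def event_to_sentence(event: dict) -> str:
    return (f"In {event['month']} {event['year']}, {event['name']} received "
            f"{event['amount']} Danish kroner as a {event['job']} in {event['city']}.")

record = {"month": "September", "year": 2012, "name": "Francisco",
          "amount": "twenty thousand", "job": "guard at a castle",
          "city": "Elsinore"}

print(event_to_sentence(record))
# In September 2012, Francisco received twenty thousand Danish kroner
# as a guard at a castle in Elsinore.
```

Serializing millions of such event sequences turns ordinary registry data into the kind of text corpus that standard language-model training pipelines already handle.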

Research into the boundaries between conscious and unconscious systems is urgently needed, a trio of scientists say. In comments to the United Nations, theoretical computer scientist Lenore Blum and mathematicians Jonathan Mason and Johannes Kleiner, all of the Association for Mathematical Consciousness Science, call for more funding for the effort. Some researchers predict that AI with human-like intelligence is 5 to 20 years away, yet there is no standard method to assess whether machines are conscious and whether they share human values. We should also consider the possible needs of conscious systems, the researchers say.

Nature | 6 min read

(Y. Yamauchi et al./Front. Robot. AI (CC-BY-4.0))

Reference: Frontiers in Robotics and AI paper

Whether machine-learning algorithms run on quantum computers can be faster or better than those run on classical computers remains an unanswered question. Some scientists hope that quantum AI could spot patterns in data that classical varieties miss, even if it isn't faster. This could particularly be the case for data that are already quantum, for example those coming from particle colliders or superconductivity experiments. "Our world inherently is quantum-mechanical. If you want to have a quantum machine that can learn, it could be much more powerful," says physicist Hsin-Yuan Huang.

Nature | 9 min read

This year could see the decline of the term "large language model" as systems increasingly deal in images, audio, video, molecular structures or mathematics. There might even be entirely new types of AI that go beyond the transformer architecture used by almost all generative models so far. At the same time, proprietary AI models will probably continue to outperform open-source approaches. And generating synthetic content has become so easy that some experts are expecting more misinformation, deepfakes and other malicious material. "What I most hope for 2024, though it seems slow in coming, is stronger AI regulation," says computer scientist Kentaro Toyama.

Forbes | 25 min read & The Conversation | 7 min read

"We've never before built machines where even the creators don't know how they will behave, or why," says Jessica Newman, director of the AI Security Initiative. That's particularly worrying when AI is involved in high-stakes decisions, such as in healthcare and policing. Researchers and policymakers agree that algorithms need to become more explainable, though it's still unclear what this means in practice. For AI to be fair, reliable and safe, we need to go beyond opening the black box, says Newman, to ensure there is accountability for any harm that's caused.

Nature Podcast | 38 min listen

Subscribe to the Nature Podcast on Apple Podcasts, Google Podcasts or Spotify, or use the RSS feed.

Psychologist Ada Kaluzna says that using AI in her scientific writing could disrupt her ability to learn and think creatively. (Nature | 5 min read)

Happy new year! Today, I'm mesmerized by this short documentary about AI art, made (in large part) by AI. "In truth, there is never going to be a first truly AI-generated documentary, because it always will involve labour of some kind," says filmmaker Alan Warburton. "Labour is what makes it watchable."

Help this newsletter get off to a great start in 2024 by sending your feedback to ai-briefing@nature.com.

Thanks for reading,

Katrina Krämer, associate editor, Nature Briefing

With contributions by Flora Graham

Want more? Sign up to our other free Nature Briefing newsletters:

Nature Briefing, our flagship daily e-mail: the wider world of science, in the time it takes to drink a cup of coffee

Nature Briefing: Anthropocene, covering climate change, biodiversity, sustainability and geoengineering

Nature Briefing: Cancer, a weekly newsletter written with cancer researchers in mind

Nature Briefing: Translational Research, covering biotechnology, drug discovery and pharma


Get Ready for the Great AI Disappointment – WIRED

In the decades to come, 2023 may be remembered as the year of generative-AI hype, when ChatGPT became arguably the fastest-spreading new technology in human history and expectations of AI-powered riches became commonplace. The year 2024 will be the time for recalibrating expectations.

Of course, generative AI is an impressive technology, and it provides tremendous opportunities for improving productivity in a number of tasks. But because the hype has gone so far ahead of reality, the setbacks of the technology in 2024 will be more memorable.

More and more evidence will emerge that generative AI and large language models provide false information and are prone to "hallucination", where an AI simply makes stuff up and gets it wrong. Hopes of a quick fix to the hallucination problem via supervised learning, where these models are taught to stay away from questionable sources or statements, will prove optimistic at best. Because the architecture of these models is based on predicting the next word or words in a sequence, it will prove exceedingly difficult to anchor their predictions to known truths.
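Why next-word prediction is not anchored to truth can be seen even in a toy model. This is my own sketch, not how a real LLM works internally, but the training objective is analogous: a predictor trained on word frequencies will fluently repeat whatever its corpus says most often, true or not.

```python
# Toy bigram "language model" (an illustrative sketch, not a real LLM):
# it emits the most frequent next word, with no notion of truth.
from collections import Counter, defaultdict

corpus = ("the moon is made of rock . "
          "the moon is made of cheese . "
          "the moon is made of cheese .").split()

next_word = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    next_word[a][b] += 1

def predict(word: str) -> str:
    """Return the most frequent continuation seen in training."""
    return next_word[word].most_common(1)[0][0]

# Generate a fluent continuation, driven purely by frequency, not fact.
words = ["the"]
for _ in range(5):
    words.append(predict(words[-1]))
print(" ".join(words))  # the moon is made of cheese
```

Because "cheese" follows "of" twice in the corpus and "rock" only once, the model confidently completes the sentence with the falsehood. Scaling up the corpus and the architecture changes the fluency, not the objective.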

Anticipation that there will be exponential improvements in productivity across the economy, or the much-vaunted first steps towards artificial general intelligence, or AGI, will fare no better. The tune on productivity improvements will shift to blaming failures on faulty implementation of generative AI by businesses. We may start moving towards the (much more meaningful) conclusion that one needs to know which human tasks can be augmented by these models, and what types of additional training workers need to make this a reality.

Some people will start recognizing that it was always a pipe dream to reach anything resembling complex human cognition on the basis of predicting words. Others will say that intelligence is just around the corner. Many more, I fear, will continue to talk of the existential risks of AI, missing what is going wrong, as well as the much more mundane (and consequential) risks that its uncontrolled rollout is posing for jobs, inequality, and democracy.

We will witness these costs more clearly in 2024. Generative AI will have been adopted by many companies, but it will prove to be just "so-so automation" of the type that displaces workers but fails to deliver huge productivity improvements.

The biggest use of ChatGPT and other large language models will be in social media and online search. Platforms will continue to monetize the information they collect via individualized digital ads, while competition for user attention will intensify. The amount of manipulation and misinformation online will grow. Generative AI will then increase the amount of time people spend using screens (and the inevitable mental health problems associated with it).

There will be more AI startups, and the open source model will gain some traction, but this will not be enough to halt the emergence of a duopoly in the industry, with Google and Microsoft/OpenAI dominating the field with their gargantuan models. Many more companies will be compelled to rely on these foundation models to develop their own apps. And because these models will continue to disappoint due to false information and hallucinations, many of these apps will also disappoint.

Calls for antitrust and regulation will intensify. Antitrust action will go nowhere, because neither the courts nor policymakers will have the courage to attempt to break up the largest tech companies. There will be more stirrings in the regulation space. Nevertheless, meaningful regulation will not arrive in 2024, for the simple reason that the US government has fallen so far behind the technology that it needs some time to catch up, a shortcoming that will become more apparent in 2024, intensifying discussions around new laws and regulations and even making them more bipartisan.

Link:

Get Ready for the Great AI Disappointment - WIRED

Tags:

Part 3: Capitalism in the Age of Artificial General Intelligence (AGI) – Medium

As we teeter on the brink of the AGI era, a profound contemplation of a compatible and thriving variant of capitalism becomes indispensable. This requires an exhaustive and intricate exploration into a reimagined economic framework, robust enough to accommodate the profound, multifaceted impacts of AGI.

This is part of a blog series.

AGI, with its unprecedented intellectual capabilities, heralds a tectonic shift not only in technology but also in the foundational pillars of our economic and social structures. Its potential to overhaul industries, inaugurate new markets, and redefine employment necessitates an extensive reevaluation of our current economic paradigms.

Here, I present ten dimensions that are core to a future architecture for rethinking capitalism.

On the threshold of the AGI era, reimagining our economic frameworks is a strategic and ethical imperative. Envisioning the future of capitalism as a flexible, ethical and inclusive system, capable of leveraging AGI's benefits while mitigating its risks, requires collective, interdisciplinary collaboration. This vision emphasizes the necessity of constructing an economic model that is resilient and responsive to the rapid technological and societal changes brought about by AGI.

This complex and comprehensive journey calls for visionary thought and a collaborative approach to navigate the intricate realms of the AGI age. Your perspectives are critical in contributing to a deep understanding and strategic direction for global capitalism in this transformative era. Let's engage in this rich and elaborate exploration together.


Artificial General Intelligence (AGI): what it is and why its discovery can change the world – Medium

Artificial General Intelligence could be a risk, an opportunity and a tool that could change everything.

Since OpenAI launched ChatGPT, and since its exponential leap in quality with GPT-4, one term keeps being repeated among the experts immersed in the development of AI: the arrival of Artificial General Intelligence. The concept attracts all kinds of arguments, from those who believe it will be a revolution that changes everything forever to those who are totally against it because they see many dangers in it. Our goal today, therefore, is to try to decipher what Artificial General Intelligence is and when it could reach all of us.

What is AGI

Artificial General Intelligence (AGI) is a conceptual idea describing a computer capable of thinking and acting like a human at the level of reasoning and intelligence. Within this framework, supercomputers are already being built that aim to imitate human brains, but there is still a long way to go before they are as intelligent as we are, or have the spark of creativity that current AI lacks.

Current AIs, no matter how good they are, cannot solve problems that fall outside their training base. That is, current AIs are usually excellent at fulfilling a very broad range of functions, but they must always operate within the bounds of the knowledge they have learned. In fact, AI sometimes makes mistakes on really basic issues, such as children's maths problems, or invents data that never existed.

An AGI, on the other hand, would not make these errors. It would be a perfect knowledge machine, capable of operating autonomously with deep knowledge of all fields and of reasoning with that knowledge as we humans would, or even better.

For now, though, this is a hypothetical term, which means that we may never see it realized.

Arrival date of Artificial General Intelligence

It is really difficult to give an exact date for when AGI will arrive. AI is clearly advancing by leaps and bounds, and it is believed that, thanks to this process of innovation and development, AGI will arrive sooner or later. However, no one within the industry dares to give a specific date. For the vast majority, in fact, it is not even a priority, since they first want to focus on the problems that need to be solved right now.

It is almost impossible to know when AGI will come into our lives.

On many occasions, talk of Artificial General Intelligence is aimed at attracting investors, so that they inject the large sums of money needed for the very expensive development of AI. This leads some to venture dates, like the CEO of SingularityNET, who claimed that it would arrive by the year 2031.

Unfortunately, at the moment it is impossible to know when it will arrive, but everything indicates that there is a very long journey ahead. What's more, it could be a chimerical event that never comes, since human beings could reach an insurmountable development ceiling with current technologies.


Exploring the Path to Artificial General Intelligence – Medriva

The concept of Artificial General Intelligence (AGI) has sparked a revolution in technological advancements. AGI, a form of artificial intelligence, possesses cognitive capabilities comparable to a human's ability to understand, learn, adapt and implement knowledge across a wide range of tasks. The potential for its rapid development is reshaping our expectations for the future of technology, particularly in fields like healthcare, business and automation.

Artificial Intelligence (AI) comes in various forms, each with its own degree of capability and scope. At one end of the spectrum, we have Artificial Narrow Intelligence (ANI), which is designed to execute specific tasks such as voice recognition or image analysis. At the other end, there's Artificial General Intelligence (AGI), which is expected to perform any intellectual task that a human can. The concept of the AI singularity, the point at which AGI surpasses human intelligence, is predicted by some experts to occur as early as 2030.

Significant strides in technology are propelling us towards AGI at an unprecedented rate. Large Language Models (LLMs) like GPT-3 have played a crucial role in this acceleration. These models are capable of generating human-like text, demonstrating an advanced level of understanding and responsiveness. This sophistication in AI technology is contributing to the steep decline in the expected number of years until AGI's emergence.

In the healthcare sector, the potential impact of AGI is enormous. By leveraging the comprehensive learning and problem-solving abilities of AGI, healthcare providers could significantly enhance diagnosis accuracy, treatment planning, and patient care. The use of AGI could lead to more personalized medicine, improved patient outcomes, and increased efficiency in healthcare delivery.

The concept of Baby AGI is an intriguing development in the AI field. It explores the creation of autonomous AI systems that possess advanced cognitive abilities. Although not true AGI, Baby AGI is designed to explore fundamental components of intelligence such as goal setting, planning, and decision-making. This approach is considered a stepping stone to AGI and holds promise for applications in automation, robotics, scientific research, education, and creative expression.

While AGI presents exciting possibilities, it also raises important ethical and moral considerations. Issues of bias, transparency, accountability, and potential misuse of AI technology are concerns that need to be addressed. Ensuring these challenges are met will be crucial for responsible AI development. Additionally, business leaders must understand and embrace AI to stay competitive in the rapidly evolving digital landscape.

The journey towards AGI is more than just an extension of deep learning; it's a revolution that could redefine our relationship with technology. New resources, such as the book Artificial General Intelligence: A Revolution Beyond Deep Learning and The Human Brain by Brent Oster and Gunnar Newquist, aim to stimulate discussion about the possibilities and limitations of AGI, offering cutting-edge insights and real-world applications.

In conclusion, the path to AGI is accelerating, with the potential to revolutionize numerous industries. As we navigate this journey, it's imperative to foster a comprehensive understanding of AGI, its implications and the ethical considerations it presents. As we stand on the brink of this technological revolution, the potential of AGI is not just a testament to human innovation but also a call for responsibility, ethics and thoughtful engagement with the future of AI.
