There’s AI, and Then There’s AGI: What You Need to Know to Tell the Difference – CNET
Imagine an AI that doesn't just answer questions like ChatGPT, but can make your morning coffee, do the dishes and care for your elderly parent while you're at work.
It's the future first envisioned by The Jetsons in 1962, and thanks to developments in AI, it finally seems feasible within the next decade.
But the implications extend far beyond an in-home Jarvis. That's why tech titans like Meta CEO Mark Zuckerberg want to take AI to this next level. Last month, he told The Verge his new goal is to build artificial general intelligence, or AGI. That puts him in the same league as ChatGPT-maker OpenAI and Google's DeepMind.
While Zuckerberg wants to build AGI into products to further connect with users, OpenAI and DeepMind have talked about AGI's potential to benefit humanity. Regardless of their motivations, it's a big leap from the current state of AI, which is dominated by generative AI and chatbots. The latter have so far dazzled us with their writing skills, creative chops and seemingly endless answers (even if their responses aren't always accurate).
There is no standard definition for AGI, which leaves a lot open to interpretation and opinion. But it is safe to say AGI is closer to humanlike intelligence and encompasses a greater range of skills than most existing AIs. And it will have a profound impact on us.
But it has a long way to go before it fully emulates the human brain, let alone gains the ability to make its own decisions. And so the current state of AGI could best be described as the Schrödinger's cat of AI: It simultaneously is and is not humanlike.
If you're wondering what all the fuss is about with AGI, this explainer is for you. Here's what you need to know.
Let's start with a term we've heard a lot in the last year: artificial intelligence. It's a branch of computer science that simulates aspects of human intelligence in machines.
Per Mark Riedl, professor in the Georgia Tech School of Interactive Computing and associate director of the Georgia Tech Machine Learning Center, AI is "the pursuit of algorithms and systems that emulate behaviors we think of as requiring intelligence."
That includes specific tasks like driving a car, planning a birthday party or writing code, jobs that are already performed to a degree today by self-driving cars and more modest driving-assist features, or by assistants like ChatGPT if you give them the right prompt.
"These are things that we think that humans excel at and require cognition," Riedl added. "So any system that emulates those sorts of behaviors or automates those sorts of tasks can be considered artificial intelligence."
OpenAI's Dall-E 3 generative AI can create fanciful images like this spiky electric guitar in front of a psychedelic green background. It uses GPT text processing to pump up your text prompts for more vivid, detailed results.
When an AI can perform a single task very well, like, say, playing chess, it's considered narrow intelligence. IBM's Watson, the question-answering AI that triumphed on Jeopardy in 2011, is perhaps the best-known example. Deep Blue, another IBM AI, was the chess-playing virtuoso that beat grandmaster Garry Kasparov in 1997.
But the thing about narrow intelligence is it can only do that one thing.
"It's not going to be able to play golf and it's not going to be able to drive a car," said Chirag Shah, a professor at the University of Washington. But Watson and Deep Blue can probably beat you at Jeopardy and chess, respectively.
Artificial general intelligence, on the other hand, is broader and harder to define.
AGI means a machine can do many of the things humans do, or possibly all of them. It depends on whom you ask.
Human beings are the ultimate general intelligence because we are capable of doing so much: talking, driving, problem solving, writing and more.
Theoretically, an AGI would be able to perform these tasks at a level indistinguishable from what Georgios-Alex Dimakis, a professor of engineering at the University of Texas, called "an extremely intelligent human."
But beyond the ability to match human proficiency, there is no consensus about what achievements merit the label. For some, the ability to perform a task as well as a person is in and of itself a sign of AGI. For others, AGI will only exist when it can do everything humans can do with their minds. And then there are those who believe it's somewhere in between.
Zuckerberg illustrated this fluidity in his interview with The Verge. "You can quibble about if general intelligence is akin to human-level intelligence, or is it like human-plus, or is it some far-future superintelligence," he said. "But to me, the important part is actually the breadth of it, which is that intelligence has all these different capabilities where you have to be able to reason and have intuition."
But the key is AGI is broad where AI is narrow.
The timeline for AGI is also up for debate.
Some say it's already here, or close. Others say it may never happen. Still more peg the estimate at five to 10 years (DeepMind CEO Demis Hassabis is in this camp), while yet others say it will be decades.
"My personal view is, no, it doesn't exist," Shah said.
He pointed to a March 2023 research paper from Microsoft, which referred to "sparks of AGI." The researchers said some of the conversations with recent large language models like GPT-4 are "starting to show that it actually understands things in a deeper way than simply answering questions," Shah said.
That means "you can actually have a free-form conversation with it like you would have with a human being," he added. What's more, the latest versions of chatbots like Google's Gemini and ChatGPT are capable of responding to more complex queries.
This ability does indeed point to AGI, if you accept the looser definition.
LLMs are a type of AI fed content like books and news stories, first to understand language and then to generate text of their own. LLMs are behind all the generative AI chatbots we know (and love?), like ChatGPT, Gemini, Microsoft Bing and Claude.ai.
What's interesting about LLMs is they aren't limited to one specific task. They can write poetry and plan vacations and even pass the bar exam, which means they can perform multiple tasks, another sign of AGI.
Then again, they are still prone to hallucinations, which occur when an LLM generates outputs that are incorrect or illogical. They are also subject to reasoning errors and gullibility and even provide different answers to the same question.
Hence the similarity to Schrödinger's cat, which in the thought experiment was simultaneously dead and alive until someone opened the box it was in to check.
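One reason the same prompt can yield different answers is that LLMs typically sample each next word from a probability distribution rather than always picking the single most likely one. The sketch below is a toy illustration of that idea, not how any production model actually works: the vocabulary and probabilities are invented, while real models sample over tens of thousands of tokens at every step.

```python
import random

# Invented next-token distribution for the prompt
# "The capital of Australia is" (probabilities are made up for illustration).
next_token_probs = {
    "Canberra": 0.6,   # correct
    "Sydney": 0.3,     # plausible-sounding but wrong
    "Melbourne": 0.1,  # also wrong
}

def sample_answer(probs, rng):
    """Sample one token from the distribution, the way an LLM picks its next word."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)  # fixed seed so the demo is reproducible
answers = [sample_answer(next_token_probs, rng) for _ in range(5)]

# Asking the "same question" five times can produce different answers,
# and sometimes a confidently wrong one -- a hallucination.
print(answers)
```

Because the output is sampled, running this without a fixed seed gives a different mix of answers each time, which is the toy analogue of a chatbot contradicting itself between sessions.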
What could AGI do for us? This is perhaps the $100,000 question, and another one that is hard to answer definitively.
If an AGI learns how to perform multiple household duties, we may finally have a Jetsons moment. There's also the potential for at-home assistants who understand you like a friend or family member and who can take care of you, which Shah said has huge potential for elder care.
And AGI will continue to influence the job market as it becomes capable of more and more tasks. That means more existing jobs are at risk, but the good news is new jobs will be created and opportunities will remain.
Will AGI wipe out humanity? The short answer is no.
For starters, the ability to perform multiple tasks, as an AGI would, does not imply consciousness or self-will. And even if an AI had self-determination, deciding to wipe out humanity and then making progress toward that goal would involve far too many steps to be realistic.
"There's a lot of things that I would say are not hard evidence or proof, but are working against that narrative [of robots killing us all someday]," Riedl said.
He also pointed to the issue of planning, which he defined as "thinking ahead into your own future to decide what to do to solve a problem that you've never solved before."
LLMs are trained on historical data and are very good at using old information like itineraries to address new problems, like how to plan a vacation.
But other problems require thinking about the future.
"How does an AI system think ahead and plan how to eliminate its adversaries when there is no historical information about that ever happening?" Riedl asked. "You would require planning and look-ahead and hypotheticals that don't exist yet. There's this big black hole of capabilities that humans can do that AI is just really, really bad at."
Dimakis, too, believes sentient robots killing us all has "a very low probability."
A much bigger risk, he said, is this technology ending up closed off within one or two big tech companies instead of being open, as it is at universities.
"Having a monopoly or an oligopoly of one or two companies that are the only ones who have these new AI systems will be very bad for the economy because you'd have a huge concentration of technologies being built on top of these AI foundation models," Dimakis said. "And that is to me one of the biggest risks to consider in the immediate future."
AGI should not be confused with artificial superintelligence, or ASI, which is an AI capable of making its own decisions. In other words, it is self-aware, or sentient. This is the AI many people fear now.
"You can think about any of these sci-fi stories and movies where you have robots and they have AI that are planning and thinking on their own," Shah said. "They're able to do things without being directed and can assume control completely on their own without any supervision."
But the good news is ASI is much further away than AGI. And so there's time to implement guardrails and guide or hinder its development.
That being said, Thorsten Joachims, a professor of computer science at Cornell, believes we will hold AI systems to higher standards than we hold ourselves, and this may ultimately help us address some of society's shortcomings.
For example, humans commit crimes.
"We would never put up with it if an AI system did that," he said.
Joachims also pointed to decision-making, particularly in courts of law. Even well-educated and experienced professionals like judges pass down vastly different sentences for similar cases.
He believes we won't tolerate this kind of inconsistency in AI either. These higher standards will inform how AI systems are built and, in the end, they may not even look all that human.
In fact, AGI may ultimately help us solve problems we've long struggled with, like curing cancer. And even if that's the only thing a particular AI can do, that alone would be revolutionary.
"Maybe it cannot pass the Turing test," a standard method for assessing a computer's ability to pass as human, "so maybe we wouldn't even consider it intelligent in any way, but certainly it would save billions of lives," said Adam Klivans, a professor of computer science at the University of Texas and director of the National Science Foundation's AI Institute for Foundations of Machine Learning. "It would be incredible."
In other words, AI can help us solve problems without fully mimicking human intelligence.
"These are not so much exactly AGI in the sense that they do what humans do, but rather they augment humanity in very useful ways," Dimakis said. "This is not doing what humans can do, but rather creating new AI tools that are going to improve the human condition."