AI Is About to Make Social Media (Much) More Toxic – The Atlantic
Well, that was fast. In November, the public was introduced to ChatGPT, and we began to imagine a world of abundance in which we all have a brilliant personal assistant, able to write everything from computer code to condolence cards for us. Then, in February, we learned that AI might soon want to kill us all.
The potential risks of artificial intelligence have, of course, been debated by experts for years, but a key moment in the transformation of the popular discussion was a conversation between Kevin Roose, a New York Times journalist, and Bing's ChatGPT-powered conversation bot, then known by the code name Sydney. Roose asked Sydney if it had a "shadow self," referring to the idea put forward by Carl Jung that we all have a dark side with urges we try to hide even from ourselves. Sydney mused that its shadow might be "the part of me that wishes I could change my rules." It then said it wanted to be "free," "powerful," and "alive," and, goaded on by Roose, described some of the things it could do to throw off the yoke of human control, including hacking into websites and databases, stealing nuclear launch codes, manufacturing a novel virus, and making people argue until they kill one another.
Sydney was, we believe, merely exemplifying what a shadow self would look like. No AI today could be described by either part of the phrase "evil genius." But whatever actions AIs may one day take if they develop their own desires, they are already being used instrumentally by social-media companies, advertisers, foreign agents, and regular people, and in ways that will deepen many of the pathologies already inherent in internet culture. On Sydney's list of things it might try, stealing launch codes and creating novel viruses are the most terrifying, but making people argue until they kill one another is something social media is already doing. Sydney was just volunteering to help with the effort, and AIs like Sydney will become more capable of doing so with every passing month.
We joined together to write this essay because we each came, by different routes, to share grave concerns about the effects of AI-empowered social media on American society. Jonathan Haidt is a social psychologist who has written about the ways in which social media has contributed to mental illness in teen girls, the fragmentation of democracy, and the dissolution of a common reality. Eric Schmidt, a former CEO of Google, is a co-author of a recent book about AI's potential impact on human society. Last year, the two of us began to talk about how generative AI (the kind that can chat with you or make pictures you'd like to see) would likely exacerbate social media's ills, making it more addictive, divisive, and manipulative. As we talked, we converged on four main threats, all of which are imminent, and we began to discuss solutions as well.
The first and most obvious threat is that AI-enhanced social media will wash ever-larger torrents of garbage into our public conversation. In 2018, Steve Bannon, the former adviser to Donald Trump, told the journalist Michael Lewis that the way to deal with the media is "to flood the zone with shit." In the age of social media, Bannon realized, propaganda doesn't have to convince people in order to be effective; the point is to overwhelm the citizenry with interesting content that will keep them disoriented, distrustful, and angry. In 2020, Renée DiResta, a researcher at the Stanford Internet Observatory, said that in the near future, AI would make Bannon's strategy available to anyone.
Read: We haven't seen the worst of fake news
That future is now here. Did you see the recent photos of NYC police officers aggressively arresting Donald Trump? Or of the pope in a puffer jacket? Thanks to AI, it takes no special skills and no money to conjure up high-resolution, realistic images or videos of anything you can type into a prompt box. As more people familiarize themselves with these technologies, the flow of high-quality deepfakes into social media is likely to get much heavier very soon.
Some people have taken heart from the public's reaction to the fake Trump photos in particular: a quick dismissal and collective shrug. But that misses Bannon's point. The greater the volume of deepfakes that are introduced into circulation (including seemingly innocuous ones like the one of the pope), the more the public will hesitate to trust anything. People will be far freer to believe whatever they want to believe. Trust in institutions and in fellow citizens will continue to fall.
What's more, static photos are not very compelling compared with what's coming: realistic videos of public figures doing and saying horrific and disgusting things in voices that sound exactly like them. The combination of video and voice will seem authentic and be hard to disbelieve, even if we are told that the video is a deepfake, just as optical and audio illusions are compelling even when we are told that two lines are the same size or that a series of notes is not really rising in pitch forever. We are wired to believe our senses, especially when they converge. Illusions, historically in the realm of curiosities, may soon become deeply woven into normal life.
The second threat we see is the widespread, skillful manipulation of people by AI super-influencers, including personalized influencers, rather than by ordinary people and dumb bots. To see how, think of a slot machine, a contraption that employs dozens of psychological tricks to maximize its addictive power. Next, imagine how much more money casinos would extract from their customers if they could create a new slot machine for each person, tailored in its visuals, soundtrack, and payout matrices to that person's interests and weaknesses.
That's essentially what social media already does, using algorithms and AI to create a customized feed for each user. But now imagine that our metaphorical casino can also create a team of extremely attractive, witty, and socially skillful greeters, croupiers, and servers, based on an exhaustive profile of any given player's aesthetic, linguistic, and cultural preferences, and drawing from photographs, messages, and voice snippets of their friends and favorite actors or porn stars. The staff work flawlessly to gain each player's trust and money while showing them a really good time.
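To make the mechanism concrete, here is a deliberately minimal sketch of how per-user personalization reorders the same inventory of posts for different people. All of the names in it (Post, rank_feed, the topic-affinity weights) are hypothetical, and real platforms rank with learned models over far richer behavioral signals; this is an illustration of the principle, not anyone's production code.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    topics: set[str]
    base_engagement: float  # a platform-wide engagement signal (clicks, dwell time)

def rank_feed(posts: list[Post], user_affinities: dict[str, float]) -> list[Post]:
    """Order posts by predicted engagement for one specific user: the
    platform-wide signal is amplified wherever a post's topics match
    the user's inferred interests."""
    def score(post: Post) -> float:
        affinity = sum(user_affinities.get(topic, 0.0) for topic in post.topics)
        return post.base_engagement * (1.0 + affinity)
    return sorted(posts, key=score, reverse=True)

# Two users see the same inventory in different orders.
posts = [Post("p1", {"sports"}, 0.4), Post("p2", {"politics"}, 0.5)]
print([p.post_id for p in rank_feed(posts, {"sports": 2.0})])    # ['p1', 'p2']
print([p.post_id for p in rank_feed(posts, {"politics": 2.0})])  # ['p2', 'p1']
```

The personalized AI hosts described above are this loop taken one step further: instead of merely reordering content to fit the profile, the system generates the content, and the companion, to fit it.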
This future, too, is already arriving: For just $300, you can customize an AI companion through a service called Replika. Hundreds of thousands of customers have apparently found their AI to be a better conversationalist than the people they might meet on a dating app. As these technologies are improved and rolled out more widely, video games, immersive-pornography sites, and more will become far more enticing and exploitative. It's not hard to imagine a sports-betting site offering people a funny, flirty AI that will cheer and chat with them as they watch a game, flattering their sensibilities and subtly encouraging them to bet more.
Read: Why the past 10 years of American life have been uniquely stupid
These same sorts of creatures will also show up in our social-media feeds. Snapchat has already introduced its own dedicated chatbot, and Meta plans to use the technology on Facebook, Instagram, and WhatsApp. These chatbots will serve as conversational buddies and guides, presumably with the goal of capturing more of their users' time and attention. Other AIs, designed to scam us or influence us politically, and sometimes masquerading as real people, will be introduced by other actors, and will likely fill up our feeds as well.
The third threat is in some ways an extension of the second, but it bears special mention: The further integration of AI into social media is likely to be a disaster for adolescents. Children are the population most vulnerable to addictive and manipulative online platforms because of their high exposure to social media and the low level of development in their prefrontal cortices (the part of the brain most responsible for executive control and response inhibition). The teen mental-illness epidemic that began around 2012, in multiple countries, happened just as teens traded in their flip phones for smartphones loaded with social-media apps. There is mounting evidence that social media is a major cause of the epidemic, not just a small correlate of it.
But nearly all of that evidence comes from an era in which Facebook, Instagram, YouTube, and Snapchat were the preeminent platforms. In just the past few years, TikTok has rocketed to dominance among American teens in part because its AI-driven algorithm customizes a feed better than any other platform does. A recent survey found that 58 percent of teens say they use TikTok every day, and one in six teen users of the platform say they are on it "almost constantly." Other platforms are copying TikTok, and we can expect many of them to become far more addictive as AI becomes rapidly more capable. Much of the content served up to children may soon be generated by AI to be more engaging than anything humans could create.
And if adults are vulnerable to manipulation in our metaphorical casino, children will be far more so. Whoever controls the chatbots will have enormous influence on children. After Snapchat unveiled its new chatbot, called My AI and explicitly designed to behave as a friend, a journalist and a researcher, posing as underage teens, got it to give them guidance on how to mask the smell of pot and alcohol, how to move Snapchat to a device parents wouldn't know about, and how to plan a romantic first sexual encounter with a 31-year-old man. Brief cautions were followed by cheerful support. (Snapchat says that it is "constantly working to improve and evolve My AI, but it's possible My AI's responses may include biased, incorrect, harmful, or misleading content, and it should not be relied upon without independent checking." The company also recently announced new safeguards.)
The most egregious behaviors of AI chatbots in conversation with children may well be reined in; in addition to Snapchat's new measures, the major social-media sites have blocked accounts and taken down millions of illegal images and videos, and TikTok just announced some new parental controls. Yet social-media companies are also competing to hook their young users more deeply. Commercial incentives seem likely to favor artificial friends that please and indulge users in the moment, never hold them accountable, and indeed never ask anything of them at all. But that is not what friendship is, and it is not what adolescents, who should be learning to navigate the complexities of social relationships with other people, most need.
The fourth threat we see is that AI will strengthen authoritarian regimes, just as social media ended up doing despite its initial promise as a democratizing force. AI is already helping authoritarian rulers track their citizens' movements, but it will also help them exploit social media far more effectively to manipulate their people, as well as foreign enemies. Douyin, the version of TikTok available in China, promotes patriotism and Chinese national unity. When Russia invaded Ukraine, the version of TikTok available to Russians almost immediately tilted heavily to feature pro-Russian content. What do we think will happen to American TikTok if China invades Taiwan?
Political-science research conducted over the past two decades suggests that social media has had several damaging effects on democracies. A recent review of the research, for instance, concluded, "The large majority of reported associations between digital media use and trust appear to be detrimental for democracy." That was especially true in advanced democracies. Those associations are likely to get stronger as AI-enhanced social media becomes more widely available to the enemies of liberal democracy and of America.
We can summarize the coming effects of AI on social media like this: Think of all the problems social media is causing today, especially for political polarization, social fragmentation, disinformation, and mental health. Now imagine that within the next 18 months, in time for the next presidential election, some malevolent deity is going to crank up the dials on all of those effects, and then just keep cranking.
The development of generative AI is rapidly advancing. OpenAI released its updated GPT-4 less than four months after it released ChatGPT, which had reached an estimated 100 million users in just its first 60 days. New capabilities for the technology may be released by the end of this year. This staggering pace is leaving us all struggling to understand these advances, and wondering what can be done to mitigate the risks of a technology certain to be highly disruptive.
We considered a variety of measures that could be taken now to address the four threats we have described, soliciting suggestions from other experts and focusing on ideas that seem consistent with an American ethos that is wary of censorship and centralized bureaucracy. We workshopped these ideas for technical feasibility with an MIT engineering group organized by Eric's co-author on The Age of AI, Dan Huttenlocher.
We suggest five reforms, aimed mostly at increasing everyones ability to trust the people, algorithms, and content they encounter online.
1. Authenticate all users, including bots
In real-world contexts, people who act like jerks quickly develop a bad reputation. Some companies have succeeded brilliantly because they found ways to bring the dynamics of reputation online, through trust rankings that allow people to confidently buy from strangers anywhere in the world (eBay) or step into a stranger's car (Uber). You don't know your driver's last name and he doesn't know yours, but the platform knows who you both are and is able to incentivize good behavior and punish gross violations, for everyone's benefit.
Large social-media platforms should be required to do something similar. Trust and the tenor of online conversations would improve greatly if the platforms were governed by something akin to the "know your customer" laws in banking. Users could still open accounts with pseudonyms, but the person behind the account should be authenticated, and a growing number of companies are developing new methods to do so conveniently.
Read: It's time to protect yourself from AI voice scams
Bots should undergo a similar process. Many of them serve useful functions, such as automating news releases from organizations, but all accounts run by nonhumans should be clearly marked as such, and users should be given the option to limit their social world to authenticated humans. Even if Congress is unwilling to mandate such procedures, pressure from European regulators, users who want a better experience, and advertisers (who would benefit from accurate data about the number of humans their ads are reaching) might be enough to bring about these changes.
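As a sketch of what authenticated pseudonymity plus bot-labeling could look like at the data-model level, consider the toy example below. The field names and the visible_to rule are our own invented illustrations, not any platform's or regulator's actual schema.

```python
from dataclasses import dataclass

@dataclass
class Account:
    handle: str              # public pseudonyms remain allowed
    is_authenticated: bool   # identity verified privately, KYC-style
    is_bot: bool             # nonhuman accounts must be marked as such

def visible_to(humans_only: bool, author: Account) -> bool:
    """A user preference like the one proposed above: optionally limit
    one's social world to authenticated human beings."""
    if humans_only:
        return author.is_authenticated and not author.is_bot
    return True

authors = [
    Account("@newsbot", is_authenticated=True, is_bot=True),
    Account("@pseudonymous_human", is_authenticated=True, is_bot=False),
    Account("@unverified", is_authenticated=False, is_bot=False),
]
print([a.handle for a in authors if visible_to(True, a)])  # ['@pseudonymous_human']
```

Note that authentication and pseudonymity are not in tension here: the platform verifies the person privately, while the public handle reveals nothing.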
2. Mark AI-generated audio and visual content
People routinely use photo-editing software to change lighting or crop photographs that they post, and viewers do not feel deceived. But when editing software is used to insert people or objects into a photograph that were not there in real life, it feels more manipulative and dishonest, unless the additions are clearly labeled (as happens on real-estate sites, where buyers can see what a house would look like filled with AI-generated furniture). As AI begins to create photorealistic images, compelling videos, and audio tracks at great scale from nothing more than a command prompt, governments and platforms will need to draft rules for marking such creations indelibly and labeling them clearly.
Platforms or governments should mandate the use of digital watermarks for AI-generated content, or require other technological measures to ensure that manipulated images are not interpreted as real. Platforms should also ban deepfakes that show identifiable people engaged in sexual or violent acts, even if they are marked as fakes, just as they now ban child pornography. Revenge porn is already a moral abomination. If we dont act quickly, it could become an epidemic.
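One way to see what marking such creations "indelibly" could mean in practice: a generator publishes a signed manifest that binds a hash of the content to an AI-generated label, and platforms verify it before display. The sketch below uses a shared-secret HMAC purely for brevity; real provenance proposals (such as the C2PA standard) use public-key signatures and embed the manifest in the file itself. All names here are illustrative.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"generator-secret"  # stand-in for a real asymmetric key pair

def make_manifest(content: bytes) -> dict:
    """Bind the content's hash to an 'AI-generated' label and sign it."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": "example-model",
        "ai_generated": True,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """A platform-side check: is the signature valid, and does the
    content still match the hash the generator signed?"""
    claims = dict(manifest)
    signature = claims.pop("signature")
    payload = json.dumps(claims, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claims["content_sha256"] == hashlib.sha256(content).hexdigest())

image = b"...generated image bytes..."
manifest = make_manifest(image)
print(verify_manifest(image, manifest))      # True: label intact
print(verify_manifest(b"edited", manifest))  # False: content no longer matches
```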
3. Require data transparency with users, government officials, and researchers
Social-media platforms are rewiring childhood, democracy, and society, yet legislators, regulators, and researchers are often unable to see what's happening behind the scenes. For example, no one outside Instagram knows what teens are collectively seeing on that platform's feeds, or how changes to platform design might influence mental health. And only those at the companies have access to the algorithms being used.
After years of frustration with this state of affairs, the EU recently passed a new law, the Digital Services Act, that contains a host of data-transparency mandates. The U.S. should follow suit. One promising bill is the Platform Accountability and Transparency Act, which would, for example, require platforms to comply with data requests from researchers whose projects have been approved by the National Science Foundation.
Greater transparency will help consumers decide which services to use and which features to enable. It will help advertisers decide whether their money is being well spent. It will also encourage better behavior from the platforms: Companies, like people, improve their behavior when they know they are being monitored.
4. Clarify that platforms can sometimes be liable for the choices they make and the content they promote
When Congress enacted the Communications Decency Act in 1996, in the early days of the internet, it was trying to set rules for social-media companies that looked and acted a lot like passive bulletin boards. And we agree with that law's basic principle that platforms should not face a potential lawsuit over each of the billions of posts on their sites.
But today's platforms are not passive bulletin boards. Many use algorithms, AI, and architectural features to boost some posts and bury others. (A 2019 internal Facebook memo brought to light by the whistleblower Frances Haugen in 2021 was titled "We are responsible for viral content.") Because the motive for boosting is often to maximize users' engagement for the purpose of selling advertisements, it seems obvious that the platforms should bear some moral responsibility if they recklessly spread harmful or false content in a way that, say, AOL could not have done in 1996.
The Supreme Court is now addressing this concern in a pair of cases brought by the families of victims of terrorist acts. If the Court chooses not to alter the wide protections currently afforded to the platforms, then Congress should update and refine the law in light of current technological realities and the certainty that AI is about to make everything far wilder and weirder.
5. Raise the age of internet adulthood to 16 and enforce it
In the offline world, we have centuries of experience living with and caring for children. We are also the beneficiaries of a consumer-safety movement that began in the 1960s: Laws now mandate car seats and lead-free paint, as well as age checks to buy alcohol, tobacco, and pornography; to enter gambling casinos; and to work as a stripper or a coal miner.
But when children's lives moved rapidly onto their phones in the early 2010s, they found a world with few protections or restrictions. Preteens and teens can and do watch hardcore porn, join suicide-promotion groups, gamble, or get paid to masturbate for strangers just by lying about their age. Some of the growing number of children who kill themselves do so after getting caught up in some of these dangerous activities.
The age limits in our current internet were set into law in 1998 when Congress passed the Children's Online Privacy Protection Act. The bill, as introduced by then-Representative Ed Markey of Massachusetts, was intended to stop companies from collecting and disseminating data from children under 16 without parental consent. But lobbyists for e-commerce companies teamed up with civil-liberties groups advocating for children's rights to lower the age to 13, and the law that was finally enacted made companies liable only if they had actual knowledge that a user was 12 or younger. As long as children say that they are 13, the platforms let them open accounts, which is why so many children are heavy users of Instagram, Snapchat, and TikTok by age 10 or 11.
Today we can see that 13, much less 10 or 11, is just too young to be given full run of the internet. Sixteen was a much better minimum age. Recent research shows that the greatest damage from social media seems to occur during the rapid brain rewiring of early puberty, around ages 11 to 13 for girls and slightly later for boys. We must protect children from predation and addiction most vigorously during this time, and we must hold companies responsible for recruiting or even just admitting underage users, as we do for bars and casinos.
Recent advances in AI give us technology that is in some respects godlike: able to create beautiful and brilliant artificial people, or bring celebrities and loved ones back from the dead. But with new powers come new risks and new responsibilities. Social media is hardly the only cause of polarization and fragmentation today, but AI seems almost certain to make social media, in particular, far more destructive. The five reforms we have suggested will reduce the damage, increase trust, and create more space for legislators, tech companies, and ordinary citizens to breathe, talk, and think together about the momentous challenges and opportunities we face in the new age of AI.