How AI and ChatGPT are full of promise and peril, explained by experts – Vox.com
At this point, you have tried ChatGPT. Even Joe Biden has tried ChatGPT, and this week, his administration made a big show of inviting AI leaders like Microsoft CEO Satya Nadella and OpenAI CEO Sam Altman to the White House to discuss ways they could make responsible AI.
But maybe, just maybe, you are still fuzzy on some very basics about AI — like, how does this stuff work, is it magic, and will it kill us all? — but don't want to admit to that.
No worries. We have you covered: We've spent much of the spring talking to people working in AI, investing in AI, and trying to build businesses in AI, as well as people who think the current AI boom is overblown or maybe dangerously misguided. We made a podcast series about the whole thing, which you can listen to over at Recode Media.
But we've also pulled out a sampling of insightful and oftentimes conflicting answers we got to some of these very basic questions. They're questions that the White House and everyone else needs to figure out soon, since AI isn't going away.
Read on, and don't worry, we won't tell anyone that you're confused. We're all confused.
Kevin Scott, chief technology officer, Microsoft: I was a 12-year-old when the PC revolution was happening. I was in grad school when the internet revolution happened. I was running a mobile startup right at the very beginning of the mobile revolution, which coincided with this massive shift to cloud computing. This feels to me very much like those three things.
Dror Berman, co-founder, Innovation Endeavors: Mobile was an interesting time because it provided a new form factor that allowed you to carry a computer with you. I think we are now standing in a completely different time: We've now been introduced to a foundational intelligence block that has become available to us, one that basically can lean on all the publicly available knowledge that humanity has extracted and documented. It allows us to retrieve all this information in a way that wasn't possible in the past.
Gary Marcus, entrepreneur; emeritus professor of psychology and neural science at NYU: I mean, it's absolutely interesting. I would not want to argue against that for a moment. I think of it as a dress rehearsal for artificial general intelligence, which we will get to someday.
But right now we have a trade-off. There are some positives about these systems. You can use them to write things for you. And there are some negatives. This technology can be used, for example, to spread misinformation, and to do that at a scale that we've never seen before, which may be dangerous and might undermine democracy.
And I would say that these systems aren't very controllable. They're powerful, they're reckless, but they don't necessarily do what we want. Ultimately, there's going to be a question: "Okay, we can build a demo here. Can we build a product that we can actually use? And what is that product?"
I think in some places people will adopt this stuff. And they'll be perfectly happy with the output. In other places, there's a real problem.
James Manyika, SVP of technology and society, Google: You're trying to make sure the outputs are not toxic. In our case, we do a lot of generative adversarial testing of these systems. In fact, when you use Bard, for example, the output that you get when you type in a prompt is not necessarily the first thing that Bard came up with.
We're running 15, 16 different versions of the same prompt to look at those outputs and pre-assess them for safety, for things like toxicity. And now we don't always catch every single one of them, but we're catching a lot of it already.
One of the bigger questions that we are going to have to face, by the way (and this is a question about us as a society, not about the technology), is how we think about what we value. How do we think about what counts as toxicity? That's why we try to involve and engage with communities to understand those questions, and involve ethicists and social scientists to research them, but those are really questions for us as a society.
Emily M. Bender, professor of linguistics, University of Washington: People talk about democratizing AI, and I always find that really frustrating, because what they're referring to is putting this technology in the hands of many, many people, which is not the same thing as giving everybody a say in how it's developed.
I think the best way forward is cooperation, basically. You have sensible regulation coming from the outside so that the companies are held accountable. And then you've got the tech ethics workers on the inside helping the companies actually meet the regulation and meet the spirit of the regulation.
And to make all that happen, we need broad literacy in the population so that people can ask for what's needed from their elected representatives, and so that the elected representatives are hopefully literate in all of this.
Scott: We've spent from 2017 until today rigorously building a responsible AI practice. You just can't release an AI to the public without a rigorous set of rules that define sensitive uses, and where you have a harms framework. You have to be transparent with the public about what your approach to responsible AI is.
Marcus: Dirigibles were really popular in the 1920s and 1930s, until we had the Hindenburg. Everybody thought that all these people doing heavier-than-air flight were wasting their time. They were like, "Look at our dirigibles. They scale a lot faster. We built a small one. Now we built a bigger one. Now we built a much bigger one. It's all working great."
So, you know, sometimes you scale the wrong thing. In my view, we're scaling the wrong thing right now. We're scaling a technology that is inherently unstable.
It's unreliable and untruthful. We're making it faster and giving it more coverage, but it's still unreliable, still not truthful. For many applications that's a problem; there are some for which it isn't.
ChatGPT's sweet spot has always been making surrealist prose. It is now better at making surrealist prose than it was before. If that's your use case, it's fine; I have no problem with it. But if your use case is something where there's a cost of error, where you do need to be truthful and trustworthy, then that is a problem.
Scott: It is absolutely useful to be thinking about these scenarios. It's more useful to think about them grounded in where the technology actually is, and what the next step is, and the step beyond that.
I think we're still many steps away from the things that people worry about. There are people who disagree with me on that assertion. They think there's gonna be some uncontrollable, emergent behavior that happens.
And we're careful enough about that that we have research teams thinking about the possibility of these emergent scenarios. But the thing that you would really have to have in order for some of the weird things to happen that people are concerned about is real autonomy: a system that could participate in its own development and have that feedback loop where you could get to some superhumanly fast rate of improvement. And that's not the way the systems work right now. Not the ones that we are building.
Bender: We already have WebMD. We already have databases where you can go from symptoms to possible diagnoses, so you know what to look for.
There are plenty of people who need medical advice, medical treatment, who can't afford it, and that is a societal failure. And similarly, there are plenty of people who need legal advice and legal services who can't afford it. Those are real problems, but throwing synthetic text into those situations is not a solution to those problems.
If anything, it's gonna exacerbate the inequalities that we see in our society. It says: people who can pay get the real thing; people who can't pay, well, here, good luck. You know: Shake the magic eight ball that will tell you something that seems relevant and give it a try.
Manyika: Yes, it does have a place. If I'm trying to explore a research question, these systems can help me come to understand those diseases. But if I'm trying to get medical help for myself, I wouldn't go to these generative systems. I'd go to a doctor, or I'd go to something where I know there's reliable factual information.
Scott: I think it just depends on the actual delivery mechanism. You absolutely don't want a world where all you have is some substandard piece of software and no access to a real doctor. But I have a concierge doctor, for instance. I interact with my concierge doctor mostly by email. And that's actually a great user experience. It's phenomenal. It saves me so much time, and I'm able to get access to a whole bunch of things that my busy schedule wouldn't let me have access to otherwise.
So for years I've thought, wouldn't it be fantastic for everyone to have the same thing? An expert medical guru that you can go to that can help you navigate a very complicated system of insurance companies and medical providers and whatnot. Having something that can help you deal with the complexity, I think, is a good thing.
Marcus: If it's medical misinformation, you might actually kill someone. That's actually the domain where I'm most worried about erroneous information from search engines.
Now, people do search for medical stuff all the time, and these systems are not going to understand drug interactions. They're probably not going to understand particular people's circumstances, and I suspect that there will actually be some pretty bad advice.
We understand from a technical perspective why these systems hallucinate. And I can tell you that they will hallucinate in the medical domain. Then the question is: What becomes of that? What's the cost of error? How widespread is that? How do users respond? We don't know all those answers yet.
Berman: I think society will need to adapt. A lot of those systems are very, very powerful and allow us to do things that we never thought would be possible. By the way, we don't yet understand what is fully possible. We also don't fully understand how some of those systems work.
I think some people will lose jobs. Some people will adjust and get new jobs. We have a company called Canvas that is developing a new type of robot for the construction industry and actually working with the union to train the workforce to use this kind of robot.
And a lot of those jobs that a lot of technologies replace are not necessarily the jobs that a lot of people want to do anyway. So I think that we are going to see a lot of new capabilities that will allow us to train people to do much more exciting jobs as well.
Manyika: If you look at most of the research on AI's impact on work, if I were to summarize it in a phrase, I'd say it's "jobs gained, jobs lost, and jobs changed."
All three things will happen, because there are some occupations where a number of the tasks involved will probably decline. But there are also new occupations that will grow. So there's going to be a whole set of jobs gained and created as a result of this incredible set of innovations. But I think the bigger effect, quite frankly, the one most people will feel, is the "jobs changed" aspect of this.