We’re Focusing on the Wrong Kind of AI Apocalypse – TIME
Conversations about the future of AI are too apocalyptic. Or rather, they focus on the wrong kind of apocalypse.
There is considerable concern about the future of AI, especially as a number of prominent computer scientists have raised the risks of Artificial General Intelligence (AGI), an AI smarter than a human being. They worry that an AGI will lead to mass unemployment, or that AI will grow beyond human control, or worse (the movies Terminator and 2001 come to mind).
Discussing these concerns seems important, as does thinking about the much more mundane and immediate threats of misinformation, deep fakes, and proliferation enabled by AI. But this focus on apocalyptic events also robs most of us of our agency. AI becomes a thing we either build or don't build, and something no one outside of a few dozen Silicon Valley executives and top government officials really has any say over.
But the reality is we are already living in the early days of the AI Age, and, at every level of an organization, we need to make some very important decisions about what that actually means. Waiting to make these choices means they will be made for us. It opens us up to many little apocalypses, as jobs and workplaces are disrupted one by one in ways that change lives and livelihoods.
We know this is a real threat because, regardless of any pauses in AI creation, and without any further AI development beyond what is available today, AI is going to impact how we work and learn. We know this for three reasons: First, AI really does seem to supercharge productivity in ways we have not seen before. An early controlled study in September 2023 showed large-scale improvements at work tasks as a result of using AI, with time savings of more than 30% and higher quality output for those using AI. Add to that the near-immaculate test scores achieved by GPT-4, and it is obvious why AI use is already becoming common among students and workers, even if they are keeping it secret.
We also know that AI is going to change how we work and learn because it is affecting a set of workers who never really faced an automation shock before. Multiple studies show that the jobs most exposed to AI (and therefore the people whose jobs will make the hardest pivot as a result of AI) are educated and highly paid workers, and the ones with the most creativity in their jobs. The pressure on organizations to take a stand on a technology that affects these workers will be immense, especially as AI-driven productivity gains become widespread. These tools are on their way to becoming deeply integrated into our work environments. Microsoft, for instance, has released Copilot GPT-4 tools for its ubiquitous Office applications, even as Google does the same for its office tools.
As a result, a natural instinct among many managers might be to say "fire people, save money." But it doesn't need to be that way, and it shouldn't be. There are many reasons why companies should not turn efficiency gains into headcount or cost reduction. Companies that figure out how to use their newly productive workforce have the opportunity to dominate those who try to keep their post-AI output the same as their pre-AI output, just with fewer people. Companies that commit to maintaining their workforce will likely have employees as partners, who are happy to teach others about the uses of AI at work, rather than scared workers who hide AI for fear of being replaced. Psychological safety is critical to innovative team success, especially when confronted with rapid change. How companies use this extra efficiency is a choice, and a very consequential one.
There are hints buried in the early studies of AI about a way forward. Workers, while worried about AI, tend to like using it because it removes the most tedious and annoying parts of their job, leaving them with the most interesting tasks. So, even as AI removes some previously valuable tasks from a job, the work that is left can be more meaningful and more high value. But this is not inevitable, so managers and leaders must decide whether and how to commit themselves to reorganizing work around AI in ways that help, rather than hurt, their human workers. They need to ask: "What is my vision for how AI makes work better, rather than worse?"
Rather than just being worried about one giant AI apocalypse, we need to worry about the many small catastrophes that AI can bring. Unimaginative or stressed leaders may decide to use these new tools for surveillance and for layoffs. Educators may decide to use AI in ways that leave some students behind. And those are just the obvious problems.
But AI does not need to be catastrophic. Correctly used, AI can create local victories, where previously tedious or useless work becomes productive and empowering. Where students who were left behind can find new paths forward. And where productivity gains lead to growth and innovation.
The thing about a widely applicable technology is that decisions about how it is used are not limited to a small group of people. Many people in organizations will play a role in shaping what AI means for their team, their customers, their students, their environment. But to make those choices matter, serious discussions need to start in many places, and soon. We can't wait for decisions to be made for us, and the world is advancing too fast to remain passive.