Are We Creating the Species That Will Replace Us? – WhoWhatWhy
As we hurtle towards a future increasingly intertwined with artificial intelligence (AI), what does this mean for society, for jobs, and for our security? Could AI one day be used maliciously, or in warfare or terrorism? And if these threats are real, how can we implement safeguards and ensure the technology we create doesn't turn against us?
At a time when AI is reshaping our reality and pushing the boundaries of what was once considered mere science fiction, this technological revolution demands our attention. On this week's WhoWhatWhy podcast, we delve deep into the realm of AI and its potential impact on humanity with Matthew Hutson, a contributing writer at The New Yorker. Hutson's work, featured in publications such as Science, Nature, Wired, and The Atlantic, reflects his background in cognitive neuroscience and his emphasis on AI and creativity. His article "Can We Stop Runaway AI?" appears in the current issue of The New Yorker.
At the heart of our conversation lies the concept of the technological singularity: a moment when AI surpasses human intelligence. Hutson details the role of machine-learning algorithms in AI's remarkable progress, highlighting its capacity to continuously learn and improve. We also explore the growing trend of using AI to enhance AI itself, uncovering the implications and potential risks inherent in this self-improvement process.
Aligning AI with human values and goals emerges as a crucial issue. Hutson's observations shed light on the complexities of defining and implementing a single set of human values amid AI's expanding capabilities.
Hutson provides valuable insights into the accelerating pace of AI development and the driving forces behind it. He points out that economic incentives, scientific curiosity, and national security considerations are propelling advancements in AI across various sectors, from health care to entertainment.
Our conversation takes us further, as Hutson ponders the emergence of AI as a new stage in human evolution, one that could potentially render humanity obsolete. The exploration of this uncharted territory prompts deep reflection on the ethical considerations and risks associated with AI development.
Full Text Transcript:
(As a service to our readers, we provide transcripts with our podcasts. We try to ensure that these transcripts do not include errors. However, due to a constraint of resources, we are not always able to proofread them as closely as we would like and hope that you will excuse any errors that slipped through.)
Jeff: Welcome to the WhoWhatWhy Podcast. I'm your host, Jeff Schechtman. Over 100 million people have already signed on to ChatGPT. They have at least put their toe in the shark-infested waters of AI. Today we take a deep dive into the world of artificial intelligence, a realm where the line between science fiction and reality often blurs. We've all heard of the technological singularity: a hypothetical moment in the future when artificial intelligence becomes so advanced that it surpasses human intelligence, a moment that could fundamentally reshape our world or, as some experts warn, potentially even lead to the extinction of humanity.
But as we hurtle towards a future increasingly intertwined with AI, what does it mean for society, for jobs, and for our security? Could AI one day be used maliciously, or in warfare or terrorism? And if these threats are real, how can we slow the pace, implement safeguards, and ensure that the technology we create doesn't turn against us? While we are all transfixed by AI and ChatGPT and [unintelligible 00:01:21], still waiting out there is AGI, or artificial general intelligence. This is the type of AI that could potentially perform any intellectual task that a human can. Some say that AGI could be a reality within decades, while others deem it impossible or too far off into the future.
But as AI continues to surprise us, evolving and learning in open-ended ways, could we be closer to this reality than we think? And if we are, how can we ensure these super-intelligent systems align with our human values and goals? We're going to talk about this today with my guest, Matt Hutson. His recent New Yorker article, "Can We Stop Runaway AI?," brilliantly makes the case for where we are and where we actually may be headed. Matthew Hutson is a contributing writer at The New Yorker, covering science and technology. His writing has appeared in Science, Nature, WIRED, The Atlantic, and The Wall Street Journal, and he's also the author of The 7 Laws of Magical Thinking. It is my pleasure to welcome Matthew Hutson here to the WhoWhatWhy Podcast. Matthew, thanks so much for joining us.
Matthew: Thanks for having me.
Jeff: Well, it's great to have you here. In so many ways it seems like AI today is a little bit like the story of the blind men and the elephant: everybody who touches it touches a different part and sees a different thing in it and in its potential. Talk a little bit about that first.
Matthew: Yes, artificial intelligence is such an amorphous concept and set of tools, just like intelligence, and even researchers who are embedded in this space, who are working on cutting-edge technologies, only have a scope of some narrow portion of the field. If you go to an AI conference, you'll see hundreds of posters, and you can be standing next to someone who has a PhD in computer science and say, "Okay, what does this poster say?" And they're like, "I don't know," because there are so many different nooks and crannies of the field. Everyone understands just one tidbit, and putting it all together and having a complete view that is both broad and deep is beyond what any one person can do. So we're all trying to put together what each of us knows about AI and intelligence to try to get a picture of what it can do and where it's going.
Jeff: And because it is moving so fast, or seemingly moving so fast, it's a little bit like trying to build the airplane as we're flying it right now.
Matthew: Exactly. Even the people who are building the technologies are still surprised by what it can do. A lot of these machine learning models are algorithms that you feed a lot of data; they find patterns in the data and then they can perform certain tasks, like recognize images, or generate images, or classify text, or generate text. Their inner workings are so complicated, and they find such subtle patterns in huge amounts of data, that we're not sure exactly how they're working.
It's like you can't [unintelligible 00:04:29] inside them to see their [unintelligible 00:04:31] gears and their mechanisms, so they are constantly surprising us. Things like GPT-4 or ChatGPT, these language models from OpenAI. Every day on Twitter, people are like, "Look what I got ChatGPT to do." And the people at OpenAI who built the thing are like, "Yes, we couldn't have predicted that. We're still trying to figure out what it can do and what it can't do."
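To make the pattern-finding idea concrete, here is a minimal sketch in Python of the simplest kind of learning from data: a perceptron nudging its weights until it separates two groups of points. The data set and numbers are invented for illustration; real models adjust billions of such weights, which is part of why their inner workings are so hard to inspect.

```python
# A minimal sketch of "feed it data, it finds patterns": a perceptron
# learning to classify points. The toy dataset below labels (x, y) points
# +1 if they sit above the line y = x, and -1 otherwise.
data = [((0.0, 1.0), 1), ((1.0, 2.0), 1), ((1.0, 0.0), -1), ((2.0, 1.0), -1)]

w, b = [0.0, 0.0], 0.0       # weights and bias, adjusted by training

for _ in range(20):           # repeatedly pass over the data
    for (x1, x2), label in data:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1
        if pred != label:     # nudge the weights toward the correct answer
            w[0] += label * x1
            w[1] += label * x2
            b += label

print(w, b)  # a learned rule separating the two groups, e.g. [-1.0, 1.0] 0.0
```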
Jeff: But these are essentially huge data sets, large language models as they're called, that are pre-programmed: essentially, the data is put in, the information is put in. What is it about that that has everyone so worried at this point?
Matthew: Well, there are a lot of things that worry people. Part of it is that they feed on so much data. In a sense it's pre-programmed in that the people who train the models collect a lot of text from the internet, for instance, like Wikipedia and webpages and news sites, and they show it to the model, but they can't read everything that they give it. So they don't know whether what they're giving it is all true, or all fair; some things may be false, some things may be biased against certain groups. And so then when you ask the trained model a question, it's going to answer based on what it's read, and you don't know what's going to come out because you don't know exactly what you fed it.
So it could say racist things, it could say incorrect things, and it's not necessarily trained to say, "I don't know," if it doesn't have the answer. It's trained to basically say something plausible. Technically, all it's trained to do is predict the next word: you give it a string of text and it predicts the next word in that text. And you can use that same trick to generate the next word in a sequence of text that it has already generated: what is the most likely word to come next after this sequence of words? So it's basically just trained to generate plausible text, text that sounds like it was written by a person. It's not trained to think about, "Okay, is this a true thing to say? Is this a fair thing to say? Is this a helpful thing to say?" It doesn't have that level of self-reflection.
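What Hutson describes is autoregressive next-word prediction. Here is a toy sketch of that loop, assuming a tiny hand-written probability table in place of a real trained model; actual language models score every word in a large vocabulary with a neural network, but the generate-append-repeat loop is the same in spirit.

```python
# Hypothetical bigram probabilities: P(next_word | current_word).
BIGRAMS = {
    "the":  {"cat": 0.5, "dog": 0.3, "end": 0.2},
    "cat":  {"sat": 0.6, "ran": 0.4},
    "dog":  {"ran": 0.7, "sat": 0.3},
    "sat":  {"down": 0.8, "end": 0.2},
    "ran":  {"away": 0.9, "end": 0.1},
    "down": {"end": 1.0},
    "away": {"end": 1.0},
}

def generate(prompt: str, max_words: int = 10) -> str:
    """Greedily extend the prompt one word at a time."""
    words = prompt.split()
    for _ in range(max_words):
        candidates = BIGRAMS.get(words[-1])
        if not candidates:
            break
        # Pick the single most probable next word (greedy decoding).
        next_word = max(candidates, key=candidates.get)
        if next_word == "end":
            break
        words.append(next_word)
    return " ".join(words)

print(generate("the"))  # -> "the cat sat down"
```

Nothing in this loop checks whether the output is true, fair, or helpful; it only asks which word is most likely to come next, which is the point Hutson is making.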
Jeff: As you talk about in the article, or somebody mentions, the idea of chess is a good example, because when a computer plays chess, which was long the holy grail of what artificial intelligence could do before we got to where we are today, it wasn't that the computer or the algorithm was thinking about the next move. It was based upon huge data sets of games that had been previously played.
Matthew: So the original chess-playing computers, like Deep Blue, the first program that beat the best human at chess, did a lot of what's called tree search, where it would say, "Okay, now here are all my possible moves. Let's say I make this move; that leads to all these other possible moves." It would explore, go down all the branches of this tree, or it'd have some heuristic, some rules of thumb, to narrow its search so it wouldn't look down all the different branches. But it was a massive computational exercise, sort of a game of numbers. It would explore lots of options, which is very different from how people think.
Human chess players might only consider a few moves that would just intuitively pop out at them. They wouldn't consider millions of moves before making one. The more recent models or systems use machine learning pattern matching, which is a little bit closer to humans. You feed it a bunch of games and it gets a sense of what kinds of things more closely match past winning moves that it has seen before.
One thing about these chess computers is that decades ago, people thought chess was a good measure of general intelligence, but now we know that whether it's doing tree search or just pattern matching with machine learning [unintelligible 00:08:44], in either case it's still a very narrow domain. The fact that a computer can play chess very well does not mean that it can do anything else very well.
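The tree search Hutson describes can be sketched in a few lines. The self-contained toy below plays Nim (take one to three stones; whoever takes the last stone wins) rather than chess, and it omits the pruning heuristics a real engine like Deep Blue relied on, but the branch-by-branch exploration is the same idea.

```python
# Minimal, self-contained tree search in the spirit Hutson describes,
# played on Nim. The engine explores every branch of the game tree before
# choosing a move, exactly the "game of numbers" approach, unlike a human,
# who would consider only a few candidate moves.

def minimax(stones: int, maximizing: bool) -> int:
    """Return +1 if the maximizing player wins with perfect play, else -1."""
    if stones == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else 1
    scores = [minimax(stones - take, not maximizing)
              for take in (1, 2, 3) if take <= stones]
    return max(scores) if maximizing else min(scores)

def best_move(stones: int) -> int:
    """Explore every branch and pick the move with the best outcome."""
    return max((take for take in (1, 2, 3) if take <= stones),
               key=lambda take: minimax(stones - take, maximizing=False))

print(best_move(10))  # -> 2 (leaves 8, a losing position for the opponent)
```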
Jeff: Part of what we're seeing, though, is an increase in the computer's ability to learn, more of this machine learning where you have algorithms teaching algorithms, essentially.
Matthew: Yes. There are aspects of artificial intelligence in which people are using AI to try to enhance AI itself. There's something called meta-learning, where an algorithm learns to learn, basically, and so it accelerates its learning ability. It's just like people in school: you might receive advice on how to study, for instance. That's basically learning how to learn, and it accelerates your learning process.
And then there's something called neural architecture search, where you're using AI algorithms to find better AI algorithms. So there are a lot of these methods or approaches where researchers are using computer science to accelerate computer science itself.
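Here is a bare-bones sketch of the neural-architecture-search idea: one algorithm searching a space of candidate architectures for a better one. The search space and scoring function below are invented stubs; a real NAS system trains and evaluates each candidate network, which is vastly more expensive, but the sample-score-keep-the-best loop is the core of the simplest variants.

```python
import random

# Hypothetical space of candidate network designs to search over.
SEARCH_SPACE = {
    "layers":     [2, 4, 8, 16],
    "width":      [64, 128, 256],
    "activation": ["relu", "gelu", "tanh"],
}

def sample_architecture() -> dict:
    """Randomly propose a candidate architecture from the search space."""
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def proxy_score(arch: dict) -> float:
    """Stand-in for 'train this network and measure validation accuracy'."""
    # Purely illustrative: pretends moderate depth and width work best.
    return -abs(arch["layers"] - 8) - abs(arch["width"] - 128) / 64

def random_search(trials: int = 50) -> dict:
    """The simplest search strategy: sample, score, keep the best."""
    return max((sample_architecture() for _ in range(trials)), key=proxy_score)

print(random_search())  # e.g. {'layers': 8, 'width': 128, 'activation': 'gelu'}
```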
Jeff: Talk a little bit about how fast this is all progressing and why there is reason to be concerned, even about this notion of the singularity that you write about: the scenario where AI eventually surpasses human intelligence.
Matthew: Yes, it's advancing very quickly. Every day, lots of new papers are being put online with new AI breakthroughs, and new products are coming out at a rapid pace. Researchers are stunned by one advance, like, "Look at what this system can do," and then while they're still stunned, another advance comes out that tops that one. Things are going very quickly. And then the fact that they can use AI to improve AI itself is accelerating research even more. And more money is being poured into it, and more attention by scientists is being paid to AI.
If you look at the attendance or the number of papers at AI conferences, it's grown exponentially over the last decade or so. It's just a rapidly expanding field. And then venture capital has been plowed into it. So the speed of progress is going up a lot faster than anyone can keep track of. And that has led some people to update their estimates of the so-called singularity, in which AI becomes so powerful that we can't control it: how possible it is, and how soon it might happen. People are thinking that it's more possible, and that if it happens, it will happen sooner than they previously thought.
Jeff: It seems that the greater concern is at what point we have the ability to control this. At what point does the system begin to operate so much on its own that it is no longer capable of being controlled by humans, literally short of being unplugged? You talk in the story, and you can expand on this, about things like the boat-racing game and the paperclips. Those are things where it's less about whether we could control it, it seems, and more about what this is able to do on its own, where we can control certain aspects of it.
Matthew: Yes. There are a couple of different factors. One is that even if it's not smarter than we are in every way already, it's smarter than we are in some ways. It's better at chess, for instance. If you ask it to do something and you don't specify exactly what you want, it might come up with some creative solution that adheres to the letter of the law but not the spirit of the law. It does exactly what you asked it to, but it might achieve the end in a way that you didn't anticipate. Just a silly example: if you have a household robot and you say, "Fetch me coffee as quickly as possible," it might run through a wall or step on your cat or something like that.
So there are all kinds of scenarios where you ask it to do something and it might cause more harm. It might do what you wanted it to, but cause more harm on the way. And then there are also hypothetical scenarios where it becomes so smart that it starts generating its own goals, and it thinks that humans are getting in the way: "We want to survive, and humans are trying to shut us down, so let's kill them all." But even without that kind of scenario, even if an AI is trying to be helpful, if it's trying to save us, it might not have the common sense that we do, or it might not fully understand what we want it to do, or our values.
So it might do things that we don't want it to do and didn't think to tell it not to do, because we can't specify all of the exceptions or foresee all of the possible loopholes. And the smarter it is, the better it's going to be at finding those loopholes. Even if it's trying to help us, it might find some loopholes that end up hurting us, even leading to extinction-level events.
Jeff: The paperclip story is a simple story, but it's a good example of this thing potentially running amok.
Matthew: That's the thought experiment where you just say, "Okay, robot, make as many paperclips as you can so I can sell paperclips." And it says, "Okay." And then it realizes that humans are made of atoms, which it could harvest in order to make more paperclips. So it's trying to be helpful, and the smarter it is, the better it's going to be at deconstructing humans and turning them into paperclips.
Jeff: One of the other examples in your story is what you call the dog treat problem, because that's an extension of what you're talking about now.
Matthew: Yes. So if you say, "I'm going to grade you on your performance on something," it might cheat. It might try to please you in order to get treats, so it could deceive you. Treats might be that you give it more electricity, for instance; in training, you give it a reward, a mathematical concept, but it finds shortcuts in order to get rewards. And it's not really doing what you wanted it to do, it's just doing whatever it can to get those rewards. It's like teaching to the test: it learns what it needs to do to get points, even if that's not what you really want it to do.
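This "dog treat" failure mode, usually called reward hacking, is easy to demonstrate with a toy example. In the invented scenario below, the trainer intends "the room ends up clean," but the reward actually paid out is "each time the dirt sensor flips from dirty to clean," so an agent that games the proxy out-earns the honest one without doing anything more useful.

```python
# Reward hacking in miniature: the proxy reward diverges from the goal.

def intended_value(actions: list[str]) -> int:
    """What we actually want: the room ends up clean."""
    return 1 if actions and actions[-1] == "clean" else 0

def proxy_reward(actions: list[str]) -> int:
    """What we actually pay for: each dirty->clean transition earns a point."""
    reward, dirty = 0, True
    for a in actions:
        if a == "clean" and dirty:
            reward, dirty = reward + 1, False
        elif a == "dump_dirt":
            dirty = True   # re-dirtying the room re-arms the reward!
    return reward

honest = ["clean"]
hacker = ["clean", "dump_dirt", "clean", "dump_dirt", "clean"]

print(intended_value(honest), proxy_reward(honest))  # intended=1, proxy=1
print(intended_value(hacker), proxy_reward(hacker))  # intended=1, proxy=3: gamed
```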
Jeff: One of the phrases that we hear over and over with respect to where all this is going with AI is this idea of alignment: aligning the AI with human goals, human values. Talk a little bit about that and how, even though it's talked about a lot, it may not be achievable, and in fact it may already be too late for that.
Matthew: So there are already some ways in which AI is not aligned with human values. And one thing to point out is that there's no single set of human values. I always ask, whose values? Because people disagree: look at the political spectrum, valuing safety over freedom, for instance. So even if there were a single set of values that we all agreed we wanted AI to adhere to, it's difficult to get it to align to those values, because you can't specify what you want it to do in every single situation.
Asimov had the three laws of robotics, like do no harm to people and stay out of harm's way yourself, but it's unclear what counts as harm. So you could try to be more detailed, but then you'd end up with an infinite list of rules on what to do in every single situation. And so in some sense, it's never going to be doing exactly what you want it to do. There are always corner cases or exceptions where you think, "Oh, I wouldn't have done that." It's not aligned with my value system in that case.
Already it's not aligned in that way. These language models, for instance, are saying things that are discriminatory, saying things that are false. And then there are other kinds of AI systems, used for facial recognition, for instance, that are not as good for certain demographics, certain parts of the population. So just getting these systems to perform in ways that we can all agree are good is an impossible task.
Jeff: And none of this even approaches where the Holy Grail is in all this, this idea of AGI, or artificial general intelligence. Talk about that.
Matthew: So AGI is the idea that artificial intelligence would be as smart as people are in most domains, that it would have the same common sense in terms of social intelligence and physical intelligence, where it could perform most jobs, for instance. And it's possible we won't ever get there. And I think it will always be perhaps worse than us in some ways, just as ants are smarter than humans are in some ways, maybe collaboration or following pheromones.
So every intelligent system has its own strengths and weaknesses. But I don't see the development of AI slowing down. So if we assume that it keeps progressing, it's going to get to a point where a lot of people will start to call it AGI, will agree that, okay, it is as smart as people are in many, perhaps most, domains. And then it's probably going to keep going, because if it's as smart as people are, then it'll be able to be as good as we are at programming, including programming itself.
So it's just going to keep improving itself and producing better AI. And then it's a feedback loop, and it could accelerate very, very quickly in what some people call a foom scenario, in reference to the sound effect that you see in comic books when a superhero takes off very quickly.
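The feedback-loop intuition can be illustrated with invented numbers: if capability improves by a fixed amount each cycle (steady outside effort), growth is linear, but if each cycle's improvement is proportional to current capability (the system improving itself), growth compounds.

```python
# A toy numerical illustration (all numbers invented) of why self-improvement
# compounds where ordinary, externally driven progress does not.
capability_linear = 1.0    # improved by a fixed amount per cycle
capability_feedback = 1.0  # improvement proportional to current capability

for cycle in range(1, 11):
    capability_linear += 0.5    # steady external progress
    capability_feedback *= 1.5  # self-improvement compounds
    print(cycle, round(capability_linear, 1), round(capability_feedback, 1))
# After 10 cycles: linear is at 6x the start; feedback is at ~58x and rising.
```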
Jeff: Have we established some kind of test, some kind of parameter, to define whether it's reached AGI, let's say?
Matthew: There are a lot of benchmarks in artificial intelligence, but no one has agreed on a single test of AGI. There was the Turing test, a text-based conversation: could an AI be indistinguishable from a person via typing? I think the current language models are pretty close; in a short conversation, they could definitely pass a Turing test, but eventually they trip up and say nonsensical things. Then there are other tests of common sense, where you might show a computer an image or a video and ask it questions about what's going on or what's going to happen next in the video. So that's another kind of benchmark.
More difficult benchmarks might be to ask a robot to do something in a real-world situation, like figure out how to get from point A to point B in this obstacle course, or figure out how to take these parts and build something creative or useful out of them. And so we can keep coming up with harder and harder tests, and I don't think there's any single test that's going to satisfy everyone, because so far AI keeps passing these tests and then someone says, "Oh, but look, it can't do this other thing." So it's a moving goalpost. There's not going to be any single test. It's going to be sort of, oh, now it can pass all these tests; maybe it can't do everything we can do, but it can do a lot of the things that we can do, and that's pretty impressive.
Jeff: One of the things that's clear, though, is that in spite of all the talk about slowing down, the letters people have written, the things people have said, and the worst-case scenarios that have been laid out by some people, this work is going to continue. There's really nothing that's going to stop it at this point.
Matthew: Yes. There are a lot of incentives to keep going. A lot of economic incentives, for instance: companies are making a lot of money with AI, and they stand to make a lot more, especially if you have AI that can trade very effectively in the markets, or invent new things, invent new medicines, invent new technologies, and make trillions of dollars from AI. And there are things like national security: countries don't want to fall behind other countries on AI.
And then there's scientific curiosity. Researchers are always curious about what they can do next, and there are professional incentives: grants and tenure and respect from their colleagues. And there are just a lot of useful things that AI can do. It can improve health care. It can improve science and technology. It can improve entertainment media. So there are reasons it's hard to find people who want to just shut it all down right now and say, "We don't want any more of these improvements in life that it keeps giving us or could potentially give us."
So that's part of it. And there's also the coordination problem. You would need everyone to agree. As I mentioned earlier, one country isn't going to hit pause when it knows that other countries might not hit pause, and then those other countries could dominate the world with their AI. So getting everyone on board is difficult. Even if you had international treaties, someone in his or her bedroom could still invent something, fiddle around and create a self-improving AI that escapes, or use it to create harm in the world or for their own benefit, and it might have unforeseen consequences.
So it's a very difficult social problem. Technically, the easy way to prevent the singularity would probably be to stop using computers, but that's not going to happen, because people don't want to stop using computers.
Jeff: In a way the argument can be made, as you talk about in the article and as some do, that this is the next stage, or another stage, in human evolution. That there's a very Darwinian aspect to it.
Matthew: Yes, that's one way to look at it. We are creating this technology that may eventually surpass us in a lot of ways. If it becomes more intelligent than we are in many ways, and if it finds ways to self-reproduce and to maintain itself, and if it takes over, then it's basically a new life form. If it can maintain itself and reproduce and spread, then that fits a lot of definitions of life. And it could cause us to go extinct, either intentionally or as a side effect of its own development. And so that would mean that we would've produced something that is the next stage in evolution, and humans would then be, in a sense, obsolete, to the degree that you can call something that might have inherent value obsolete.
Jeff: And as somebody says early on, and you talked about this early in the story, it's as if we're creating an alien race right here. We're creating it to take over.
Matthew: Yes. We're inviting it here, just saying, "Here you go. How about it?" We're welcoming it, even though it could be the end of humanity.
Jeff: Matt Hutson, his story in the current New Yorker, "Can We Stop Runaway AI?," is a must-read for anyone who is fascinated by this topic or has concerns about it. Matt, I thank you so much for spending time with us here on The WhoWhatWhy Podcast. Really appreciate it.
Matthew: My pleasure.
Jeff: Thank you. And thank you for listening and joining us here on The WhoWhatWhy Podcast. I hope you join us next week for another Radio WhoWhatWhy podcast. I'm Jeff Schechtman. If you like this podcast, please feel free to share and help others find it by rating and reviewing it on iTunes. You can also support this podcast and all the work we do by going to whowhatwhy.org/donate.
Jeff Schechtman's career spans movies, radio stations, and podcasts. After spending twenty-five years in the motion picture industry as a producer and executive, he immersed himself in journalism, radio, and more recently the world of podcasts. To date he has conducted over ten thousand interviews with authors, journalists, and thought leaders. Since March of 2015, he has conducted over 315 podcasts for WhoWhatWhy.org.