Top AI researcher launches new Alberta lab with Huawei funds after … – The Globe and Mail
Richard Sutton, a computer scientist and well-known AI researcher, at home in Edmonton, Alta., on Nov. 23. Prof. Sutton is launching a new AI research institute in Edmonton with funding from Huawei. Amber Bracken/The Globe and Mail
One of the country's most accomplished artificial intelligence researchers is launching a new non-profit lab with $4.8-million in funding from Huawei Canada, after the federal government restricted the Chinese company's ability to work with publicly funded universities.
Richard Sutton, a professor at the University of Alberta and a pioneer in the field of reinforcement learning, says the Openmind Research Institute will fund researchers following the Alberta Plan, a 12-step guide he co-authored last year that lays out a framework for pursuing the development of AI agents capable of human-level intelligence.
Openmind will be based in Edmonton and kicks off Friday with a weekend retreat in Banff.
Canada banned the use of equipment from Huawei in 5G networks last year, citing the company as a security risk because of its connections to the Chinese government, which could use the company for espionage. Huawei has long denied the accusation.
Jim Hinton, a Waterloo, Ont.-based patent lawyer and senior fellow at the Centre for International Governance Innovation, said Huawei's involvement with Openmind raises concerns. "Even if the money is coming with as little strings attached as possible, there is still soft power that is being wielded," he said. "The fact that they're holding the purse strings gives a significant amount of control."
In 2021, Ottawa started restricting funding for research collaborations between publicly funded universities and entities with links to countries considered national security risks, including China. Alberta has implemented similar restrictions for sensitive research at a provincial level. Artificial intelligence is particularly sensitive because the technology has military applications and can be used for nefarious purposes.
"I hope that it could counter that narrative and be an example of how things could be really good," Prof. Sutton said of Openmind and Huawei's funding. "This is a case where the interaction with China has been really productive, really valuable in contributing to open AI research in Canada."
All of the work done by Openmind, which is separate from Prof. Suttons role at the University of Alberta, will be open-source, and the institute will not pursue intellectual property rights.
Nor will Huawei. "I was a little bit surprised that they were willing to do something so open and with no attempt at control," said Prof. Sutton, who has a long-standing relationship with Huawei in Alberta.
Huawei did not respond to requests for comment.
Although the Chinese company has been shut out of 5G networks and restricted in working with universities in Canada, it can still work directly with individual researchers.
"Companies linked to China's military, like Huawei is, will try to find other ways around the federal rules, including directly funding researchers outside university institutions. It appears Huawei is doing exactly that," said Margaret McCuaig-Johnston, a senior fellow at the Institute for Science, Society and Policy at the University of Ottawa. "China pushes the envelope as far as they can."
Prof. Sutton literally wrote the textbook on reinforcement learning, an approach to developing AI agents capable of performing actions in an environment to achieve a goal. Reinforcement learning is everywhere in the world of AI, including in autonomous vehicles and in how chatbots such as ChatGPT are polished to sound more human.
Born in the United States, Prof. Sutton completed a PhD at the University of Massachusetts in 1984 and worked in industry before returning to academia. He joined the University of Alberta in 2003, where he founded the Reinforcement Learning and Artificial Intelligence Lab. He left the U.S. for Canada partly because of his opposition to the politics of former president George W. Bush and the country's military campaigns abroad.
Alphabet Inc. tapped him in 2017 to lead the company's AI research office in Edmonton through its DeepMind subsidiary, but shut it down in January as part of a company-wide restructuring.
The closing left Prof. Sutton with unfinished business, in a sense. His goal is to "understand intelligence," as he puts it, a necessary undertaking if we are to build truly intelligent agents. His work at the university is one avenue to pursue that goal, as is his recent post with Keen Technologies, a U.S. AI startup founded by former Meta Platforms Inc. consulting chief technology officer John Carmack. Keen raised US$20-million last year, including from Shopify founder Tobi Lütke.
Openmind is one more way to pursue that goal, Prof. Sutton said. Although large language models, which power chatbots like ChatGPT, have garnered a lot of attention, he isn't particularly interested in them. "It's a good, useful thing, but it's kind of a distraction," he said.
He is far more interested in building AI applications capable of complex decision-making and achieving goals, which many refer to as artificial general intelligence, or AGI. "I imagine machines doing all the different kinds of things that people do," he said. "They will interact and find, just like people do, that the best way to get ahead is to work with other people."
Prof. Sutton will sit on the Openmind governing board along with University of Alberta computer science professor Randy Goebel and Joseph Modayil, who previously worked at DeepMind. Mr. Modayil is also Openmind's research director.
"Understanding the mind is a grand scientific challenge that has driven my work for more than two decades," he said in an e-mail.
A committee that includes Alberta Plan co-authors and U of A professors Michael Bowling and Patrick Pilarski will select the research fellows. Openmind's research agenda will be set independently from its funding sources, according to a backgrounder on the institute provided by Prof. Sutton.
The briefing also notes that Openmind researchers will be natural candidates for founding startups and commercializing research outside the non-profit. "Although there may be no legal obligation for an Openmind researcher to work with Openmind donors, familiarity, trust, and consilient perspectives would make this a likely outcome," according to the backgrounder.
The backing from Huawei puts the company in a better position to work with Openmind talent, Mr. Hinton said. Even though the research will be open-source, foreign multinational companies such as Huawei are often more equipped to capitalize on it than Canadian firms, which have a poor track record of protecting intellectual property and capturing the economic benefits that come with innovation.
Canadian governments review transactions involving foreign companies and physical assets, such as mines, to ensure the domestic economy benefits. But they fall short with IP. "When it comes to intangible assets, we don't understand how that works," Mr. Hinton said.
Prof. Sutton is a big proponent of open-source software and has a dim view of IP, saying that the focus on ownership can slow down innovation. "You are interacting with lawyers and spending a lot of time and money on things that aren't advancing the research," he said. "It just doesn't seem like it's worked at all for computer science IP."
He is open to more funding for Openmind and said that if donors are uncomfortable with Huawei's involvement they can also support AI research through the reinforcement learning lab at the University of Alberta. Openmind is adamant that Huawei cannot influence the non-profit's research, he added, and said he would decline further funding if the company attempted to do so.
"I see this as a purely positive and mutually beneficial way for Huawei and academic researchers to interact," he said. "It may not last, but while it does, it is entirely a good thing."