Archive for the ‘Ai’ Category

OpenAI Quietly Deletes Ban on Using ChatGPT for Military and Warfare – The Intercept

OpenAI this week quietly deleted language expressly prohibiting the use of its technology for military purposes from its usage policy, which seeks to dictate how powerful and immensely popular tools like ChatGPT can be used.

Up until January 10, OpenAI's "usage policies" page included a ban on "activity that has high risk of physical harm, including," specifically, "weapons development" and "military and warfare." That plainly worded prohibition against military applications would seemingly rule out any official, and extremely lucrative, use by the Department of Defense or any other state military. The new policy retains an injunction not to "use our service to harm yourself or others" and gives "develop or use weapons" as an example, but the blanket ban on "military and warfare" use has vanished.

The unannounced redaction is part of a major rewrite of the policy page, which the company said was intended to make the document "clearer and more readable," and which includes many other substantial language and formatting changes.

"We aimed to create a set of universal principles that are both easy to remember and apply, especially as our tools are now globally used by everyday users who can now also build GPTs," OpenAI spokesperson Niko Felix said in an email to The Intercept. "A principle like 'Don't harm others' is broad yet easily grasped and relevant in numerous contexts. Additionally, we specifically cited weapons and injury to others as clear examples."

Felix declined to say whether the vaguer "harm" ban encompassed all military use, writing, "Any use of our technology, including by the military, to [develop] or [use] weapons, [injure] others or [destroy] property, or [engage] in unauthorized activities that violate the security of any service or system, is disallowed."

"OpenAI is well aware of the risk and harms that may arise due to the use of their technology and services in military applications," said Heidy Khlaaf, engineering director at the cybersecurity firm Trail of Bits and an expert on machine learning and autonomous systems safety, citing a 2022 paper she co-authored with OpenAI researchers that specifically flagged the risk of military use. Khlaaf added that the new policy seems to emphasize legality over safety. "There is a distinct difference between the two policies, as the former clearly outlines that weapons development, and military and warfare is disallowed, while the latter emphasizes flexibility and compliance with the law," she said. "Developing weapons, and carrying out activities related to military and warfare is lawful to various extents. The potential implications for AI safety are significant. Given the well-known instances of bias and hallucination present within Large Language Models (LLMs), and their overall lack of accuracy, their use within military warfare can only lead to imprecise and biased operations that are likely to exacerbate harm and civilian casualties."

The real-world consequences of the policy are unclear. Last year, The Intercept reported that OpenAI was unwilling to say whether it would enforce its own clear "military and warfare" ban in the face of increasing interest from the Pentagon and U.S. intelligence community.

"Given the use of AI systems in the targeting of civilians in Gaza, it's a notable moment to make the decision to remove the words 'military and warfare' from OpenAI's permissible use policy," said Sarah Myers West, managing director of the AI Now Institute and a former AI policy analyst at the Federal Trade Commission. "The language that is in the policy remains vague and raises questions about how OpenAI intends to approach enforcement."

While nothing OpenAI offers today could plausibly be used to directly kill someone, militarily or otherwise (ChatGPT can't maneuver a drone or fire a missile), any military is in the business of killing, or at least maintaining the capacity to kill. There are any number of killing-adjacent tasks that an LLM like ChatGPT could augment, like writing code or processing procurement orders. A review of custom ChatGPT-powered bots offered by OpenAI suggests U.S. military personnel are already using the technology to expedite paperwork. The National Geospatial-Intelligence Agency, which directly aids U.S. combat efforts, has openly speculated about using ChatGPT to aid its human analysts. Even if OpenAI tools were deployed by portions of a military force for purposes that aren't directly violent, they would still be aiding an institution whose main purpose is lethality.

Experts who reviewed the policy changes at The Intercept's request said OpenAI appears to be silently weakening its stance against doing business with militaries. "I could imagine that the shift away from 'military and warfare' to 'weapons' leaves open a space for OpenAI to support operational infrastructures as long as the application doesn't directly involve weapons development narrowly defined," said Lucy Suchman, professor emerita of anthropology of science and technology at Lancaster University. "Of course, I think the idea that you can contribute to warfighting platforms while claiming not to be involved in the development or use of weapons would be disingenuous, removing the weapon from the sociotechnical system, including command and control infrastructures, of which it's part." Suchman, a scholar of artificial intelligence since the 1970s and a member of the International Committee for Robot Arms Control, added, "It seems plausible that the new policy document evades the question of military contracting and warfighting operations by focusing specifically on weapons."

Suchman and Myers West both pointed to OpenAI's close partnership with Microsoft, a major defense contractor, which has invested $13 billion in the LLM maker to date and resells the company's software tools.

The changes come as militaries around the world are eager to incorporate machine learning techniques to gain an advantage; the Pentagon is still tentatively exploring how it might use ChatGPT or other large-language models, a type of software tool that can rapidly and dextrously generate sophisticated text outputs. LLMs are trained on giant volumes of books, articles, and other web data in order to approximate human responses to user prompts. Though the outputs of an LLM like ChatGPT are often extremely convincing, they are optimized for coherence rather than a firm grasp on reality and often suffer from so-called hallucinations that make accuracy and factuality a problem. Still, the ability of LLMs to quickly ingest text and rapidly output analysis (or at least the simulacrum of analysis) makes them a natural fit for the data-laden Defense Department.
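
To see the coherence-versus-accuracy tradeoff concretely, here is a minimal sketch using the small open GPT-2 model as a stand-in for far larger commercial systems: the model extends a prompt with statistically likely tokens, and nothing in the loop checks the output against reality. The prompt is invented for illustration.

```python
# Illustrative only: GPT-2 via the Hugging Face transformers library,
# standing in for larger commercial models. The prompt is invented.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator(
    "The procurement review process begins with",
    max_new_tokens=40,  # length of the continuation
    do_sample=True,     # sample from the predicted token distribution
    temperature=0.9,    # favors variety over the single likeliest token
)
print(result[0]["generated_text"])  # fluent and plausible, but unverified
```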

While some within U.S. military leadership have expressed concern about the tendency of LLMs to insert glaring factual errors or other distortions, as well as security risks that might come with using ChatGPT to analyze classified or otherwise sensitive data, the Pentagon remains generally eager to adopt artificial intelligence tools. In a November address, Deputy Secretary of Defense Kathleen Hicks stated that AI is "a key part of the comprehensive, warfighter-centric approach to innovation that Secretary [Lloyd] Austin and I have been driving from Day 1," though she cautioned that most current offerings "aren't yet technically mature enough to comply with our ethical AI principles."

Last year, Kimberly Sablon, the Pentagon's principal director for trusted AI and autonomy, told a conference in Hawaii that "[t]here's a lot of good there in terms of how we can utilize large-language models like [ChatGPT] to disrupt critical functions across the department."

Read more:

OpenAI Quietly Deletes Ban on Using ChatGPT for Military and Warfare - The Intercept

CES Briefing: Brands use CES stage to spotlight AI innovation – Digiday

As CES wraps up, it's easy to see that, as predicted, AI dominated conversations on stage and throughout the showroom this year.

On Thursday during CES, Mastercard debuted a pilot AI tool that provides personalized help with starting a small business: applying for grants, sourcing materials, naming the business and creating a marketing campaign.

Mastercard's tool, developed in collaboration with Create Labs, was trained using a range of Mastercard content, as well as content from several publishers including Blavity Media Group, Group Black, Newsweek and TelevisaUnivision, to help mitigate AI bias. Mastercard wouldn't disclose which large language models it used to create the platform.

"It's almost like being an AI mentor for small businesses," Mastercard CMO Raja Rajamannar told Digiday at CES. "It really guides you step-by-step, holds your hand and teaches you, gives you plans, gives you thought starters, helps you shortlist priorities and everything. I think this is going to be a very powerful tool."

Mastercard is just one of a number of marketers to use CES to showcase their AI efforts. Major marketers like L'Oréal, BMW, Amazon, Walmart, Samsung and more took to CES to tout their use of AI and more formally connect their brands with AI.

"This year is the year of AI," said Ben James, chief innovation officer at Gale Agency, of the focus at CES, adding that previous years have focused on voice assistants and other technologies. "It's really just a tool that speeds us up to move faster. The difference [with AI versus previous technologies that have dominated CES] is we've never seen a tool or a technology really hit so many, impact so many spaces that it basically saves time in many, many, many, many industries and many parts of your workflow all at once."

That marketers would use CES to put a foothold down as first movers and truly connect their brands with AI isn't surprising. Marketers are always drawn to the shiny, new thing and pushing their brands to be tied to said thing to keep them relevant. While that strategy doesn't always work out (the metaverse was a point of focus last year, but marketers' interest in it seems to have dwindled significantly since then), marketers seem bullish on the likelihood that AI is here to stay.

"For brands and marketers, they are feeling the early stages of this gen AI explosion," said Brian Yamada, chief innovation officer, VML. "There's all kinds of hype, but a lot of people are standing back on the sidelines waiting for it to be commercially viable. We're at the beginning of brand adoption."

That early movers like Mastercard and L'Oréal (the beauty behemoth debuted its AI-powered beauty advisor), among others, are using CES to showcase how their brands are adopting AI will have the marketers on the sidelines paying attention to how the first movers are using AI and what it can do for their brands. Even as some marketers remain on the sidelines, there's more interest in innovation overall because of the AI hype, according to agency execs.

"The AI hype has maybe opened the door a little wider for a client's appetite for innovation," said Yamada, adding that he has seen more curiosity from marketers in visiting the startups in the Eureka Park section of CES this year. That said, VML is having to spend more time articulating the intent behind clients' interest in AI because it can be ambiguous at the moment, explained Yamada.

For marketers who are watching early movers and trying to figure out how to use AI for their brands, VML is asking clients what they want to be able to do, what they want to create for their audience or customers, and what problem or use case AI may address, noted Yamada. Taking that approach makes the brand's application of AI less about simply keeping up with other brands and more about doing something that consumers will appreciate.

"I don't view it as like they're just jumping on the bandwagon kind of thing," said James of brands putting AI front and center at CES. "Given the rise of AI and the likelihood that this isn't just a quick phase, it's important that they try to engage with the subject and try to do something with it." Kristina Monllos and Marty Swant

Wrapping its blitz of moves with major social and commerce platforms to align the discovery, planning and measurement of marketing-driven influencer and creator content with other media channels, Omnicom is rolling out creator benchmarking insights for all Meta platforms, Digiday has learned.

The news follows research that aims to better understand the value influencers bring to the marketing equation, as well as partnerships and deals with TikTok, YouTube and Amazon, all aimed at putting influencer marketing alongside other channels and boosting its performance capabilities.

The co-development deal with Meta revolves around the ability to benchmark creators mainly within Omni, Omnicom's central operating system, which was intentionally designed to be open to data inputs from any source (i.e., Meta) to harmonize it with its own data. The benchmarking ability lets planners across Omnicom's global markets analyze the performance of creator content across Facebook and Instagram against inventory that currently includes more than 28,000 pieces of creator content curated by Omnicom Media Group. Insights within these data lakes can be broken out by industry and influencer to drill down to granular decisioning levels.

Megan Pagliuca, OMG's North American chief activation officer, said the benchmarking extends work Omnicom has been doing with Meta for more than a year. "It started as kind of a paid social intelligence suite where we had paid social benchmarking, and it's now extended to have creator benchmarking capabilities that help inform planning," said Pagliuca. "So we're looking at an array of attributes rather than just looking at something like the number of followers."

That resonates with other agents of the industry that have a stake in making influencers a bigger part of marketing. "A good influencer campaign should look beyond follower counts, emphasizing audience loyalty and engagement," said Matilda Donovan, digital talent agent at UTA. "Aligning the creator's brand with the promoted product ensures resonance with the audience's established preferences, driving the strongest results."

Ben Hovaness, global chief media officer for OMD, added that this is another step toward assessing creator marketing side by side with other established media channels. "This gives our clients insights far beyond what you can get out of using the platforms' built-in planning tools, because we have the advantage of a huge volume of client performance data to use," said Hovaness. "So we can drill in by different objectives, formats and so forth."

"As we are moving towards influencer [marketing] being a full-fledged media channel, we had to think about what are the ways you would optimize," said Clarissa Season, chief experience officer at Annalect, which manages Omni. "So that caused us to look at data a little differently for the influencer audience, and how we visualize that and bring that to life for our users, so that they can quickly and easily optimize and make those adjustments."

Bianca Bradford, Meta's head of agency for North America, said it's the context that's key to the co-developed benchmarking: "It can help provide additional context around the impact an individual creator is making, and we believe that providing these types of insights to advertisers can help push forward the broader creator ecosystem."

Hovaness pointed out the importance of understanding regional nuances of creator partnerships; the benchmarking effort will roll out in a variety of global markets starting in Q1 of this year. "Influencer marketing varies a great deal from one region to the next, from one market to the next," he said. "What a micro influencer is in the United States is very different from what it might be in China or another major market. Being able to cluster our data into tranches or tiers of influencers is enormously powerful, especially when we're looking at things through a market-specific lens, which we do for most of our media activations." Michael Bürgi

"Make sure that your inputs are really fit to the purpose of what you're trying to get out of it." Stacy Berek, consumer insights and sales effectiveness teams at GfK, North America

"Once the madness of CES is over, don't be that person who forgets to follow up," said Marisa Nelson, evp of marketing and communications at ad tech vendor Equativ. "Shoot a message to those you met while the memory is still fresh and cement those relationships quickly." As told to Seb Joseph; read the full veterans' guide to CES.


Go here to see the original:

CES Briefing: Brands use CES stage to spotlight AI innovation - Digiday

At CES, new AI tools for TVs offer new features for both content and commerce – Digiday

The current wave of AI innovation is making smart TVs smarter.

At CES 2024 this week in Las Vegas, giant and startup TV manufacturers alike are rolling out new AI integrations to power viewing, advertising and shopping.

Earlier this week, Telly, a startup that gives people a free 4K TV in exchange for more ads, debuted a new voice AI assistant called Hey Telly. Built with OpenAI's large language model, the chatbot helps operate the TV, chat with viewers and behave in ways similar to ChatGPT. With three possible characters to start, and more to come, the chatbot can also provide personalized recommendations based on who's watching. Telly is also exploring ways of using generative AI platforms like Midjourney and DALL-E to let people make generative AI images for TV screensavers.

Besides using OpenAI's LLMs, Telly also employs other models for various tasks. For example, it uses a separate voice engine that translates speech to text and then feeds that raw text to various AI models depending on the use case.
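
That pipeline, transcribe first and then route the raw text to a fitting model, is straightforward to sketch. The example below is an illustrative reconstruction, not Telly's implementation: the routing rule, the model choices and the run_tv_command stub are all assumptions.

```python
# Hedged sketch of a transcribe-then-route voice pipeline.
import whisper             # pip install openai-whisper
from openai import OpenAI  # pip install openai

stt = whisper.load_model("base")  # local speech-to-text engine
client = OpenAI()                 # reads OPENAI_API_KEY from the environment

def run_tv_command(text: str) -> str:
    """Hypothetical device-control stub; a real TV would drive its firmware here."""
    return f"(device) executing: {text}"

def handle_utterance(audio_path: str) -> str:
    text = stt.transcribe(audio_path)["text"]
    # Simple keyword routing: device commands stay local, chat goes to an LLM.
    if text.strip().lower().startswith(("turn", "switch", "volume", "open")):
        return run_tv_command(text)
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": text}],
    )
    return reply.choices[0].message.content
```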

"We have brought in the models, all the processing, that we see fit for the use case," said Telly chief product officer Sascha Prueter.

Beyond startups, giants like Samsung, LG and Hisense are all adding AI to new TVs to improve picture and sound quality. For example, some use visual recognition to optimize the screen based on the content someone is watching or playing. Others are using AI to optimize audio by analyzing a room's background noise and voices.

TV manufacturers also hope new AI features will make screens more useful beyond entertainment. Options include assisting people with workouts, providing telehealth tools and becoming a hub for controlling other smart home devices. One end goal is to potentially replace other smart screens from companies like Amazon and Google by making the TV a central hub for everything in the home.

Some marketers who attended CES said they think new transparent TVs from LG and Samsung, also powered by new AI processors, offer compelling uses beyond showing viewers what's on the wall behind their big screen. For example, the newly unveiled devices, which can operate like a normal TV or turn off to be clear as glass, might someday be used by retailers to give passersby new ways to window shop.

AI is also enabling more options for commerce. Telly has added new AI-powered image recognition tools that surface products to buy depending on what's on the screen. In a demo for Digiday during CES, a televised basketball game showcased products worn by players that could then be bought directly through Telly. Advertisers will also be able to sponsor product ads to show alongside other unpaid personalized recommendations.
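
A hedged sketch of how such screen-to-shopping recognition could work: run an off-the-shelf object detector over a video frame, then match the detected labels against a product catalog. The toy catalog and the matching rule are assumptions; Telly has not disclosed its stack.

```python
# Illustrative screen-to-shopping sketch with a stock torchvision detector.
import torch
from PIL import Image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights)
from torchvision.transforms.functional import to_tensor

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
categories = weights.meta["categories"]  # COCO class names

# Toy catalog mapping detector labels to purchasable items (an assumption).
TOY_CATALOG = {"sports ball": "Official game basketball", "tie": "Team necktie"}

def shoppable_items(frame_path: str, threshold: float = 0.8) -> list[str]:
    frame = to_tensor(Image.open(frame_path).convert("RGB"))
    with torch.no_grad():
        detections = model([frame])[0]
    seen = {categories[i] for i, score in
            zip(detections["labels"], detections["scores"]) if score > threshold}
    return [TOY_CATALOG[name] for name in seen if name in TOY_CATALOG]
```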

Telly isn't the only TV startup thinking about AI and e-commerce. Another on the showroom floor at CES was Displace, which initially debuted a wireless TV at CES 2023. This year, the startup added a way to use hand gestures, which the TV recognizes with AI, to tell the TV to pause the show and analyze the screen for various products, before Displace suggests similar products to buy.

Unlike Telly, Displace doesn't plan to run sponsored product ads, but one of its TVs does have an NFC reader so people can tap their preferred method of payment to buy something on Amazon or another platform. (Displace is also using OpenAI's technology along with various other sources.) Displace also hopes its new features will ultimately provide recipes and show where to buy ingredients based on a cooking show someone is watching in their kitchen.

"We are building a TV platform," said Displace founder and CEO Balaji Krishnan. "This is exactly what happened with smartphones. Before smartphones, everything was SMS-based messaging. And even payments: PayPal started with email, but now everything is integrated into the phone because companies like Apple and Google created a platform ... We are trying to create a contextual thing for the TV set."

Of course, the AI used in new cameras and voice tools also creates new privacy concerns: How do these companies make sure faces, conversations, credit card data and other information aren't leaked by one of the LLMs that process them? Prueter, for his part, said Telly's survey data is highly confidential and won't be fed back into ChatGPT or other public AI models.

To offer a sense of privacy, Telly's camera has a shutter in front and requires permission before taking a photo, but other sensors know when people are watching and how many are in the room. On Displace's devices, the camera pops up and can be retracted into the screen whenever people don't want to use it.

Although ads are part of Telly's pitch (people need to fill out a lengthy survey, with more than 100 questions, that helps personalize features and advertising), that's not the case for Displace. When asked whether Displace plans to offer retargeting for advertisers, Krishnan said the company will never sell data because "ads aren't our thing."

"What we are enabling is the advertisers now can actually play their ads in a way that is interactive and transactional," Krishnan said. "Because they're already playing ads. Like if you look at the Super Bowl ads, they're millions of dollars worth of ads, but they're not interactive."

More:

At CES, new AI tools for TVs offer new features for both content and commerce - Digiday

AI Discovers That Not Every Fingerprint Is Unique – Columbia University

Columbia engineers have built a new AI that shatters a long-held belief in forensics: that fingerprints from different fingers of the same person are unique. It turns out they are similar; we have just been comparing fingerprints the wrong way!

Jan 10, 2024 | By Holly Evarts | Photo Credit: Marco-Marcil Montoto, Columbia Engineering, generated with DALL-E

AI discovers a new way to compare fingerprints that seem different, but actually belong to different fingers of the same person. In contrast with traditional forensics, this AI relies mostly on the curvature of the swirls at the center of the fingerprint, as shown by the heatmap. Credit: Gabe Guo, Columbia Engineering; Midjourney generated silhouette.

From Law and Order to CSI, not to mention real life, investigators have used fingerprints as the gold standard for linking criminals to a crime. But if a perpetrator leaves prints from different fingers at two different crime scenes, the scenes are very difficult to link, and the trace can go cold.

It's a well-accepted fact in the forensics community that fingerprints of different fingers of the same person (intra-person fingerprints) are unique and therefore unmatchable.

A team led by Columbia Engineering undergraduate senior Gabe Guo challenged this widely held presumption. Guo, who had no prior knowledge of forensics, found a public U.S. government database of some 60,000 fingerprints and fed them in pairs into an artificial intelligence-based system known as a deep contrastive network. Sometimes the pairs belonged to the same person (but different fingers), and sometimes they belonged to different people.
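
For readers curious what a deep contrastive network involves, here is a minimal sketch in PyTorch. The architecture, sizes and loss below are generic textbook choices, not the team's published model: a shared encoder maps each fingerprint image to an embedding, and training pulls same-person pairs together while pushing different-person pairs apart.

```python
# Minimal sketch of a contrastive ("siamese") network for fingerprint pairs.
# All details here are illustrative assumptions, not the study's actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Maps a grayscale fingerprint image to a unit-length embedding vector."""
    def __init__(self, dim: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, dim)

    def forward(self, x):
        return F.normalize(self.fc(self.conv(x).flatten(1)), dim=1)

def contrastive_loss(z1, z2, same_person, margin: float = 1.0):
    """Pull embeddings of same-person pairs together; push others apart."""
    d = F.pairwise_distance(z1, z2)
    return torch.mean(same_person * d.pow(2) +
                      (1 - same_person) * F.relu(margin - d).pow(2))

# Toy usage: a batch of 8 fingerprint pairs; label 1 = same person, 0 = not.
enc = Encoder()
a, b = torch.randn(8, 1, 128, 128), torch.randn(8, 1, 128, 128)
labels = torch.randint(0, 2, (8,)).float()
loss = contrastive_loss(enc(a), enc(b), labels)
loss.backward()
```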

Credit: Gabe Guo and Aniv Ray/Columbia Engineering

Over time, the AI system, which the team designed by modifying a state-of-the-art framework, got better at telling when seemingly unique fingerprints belonged to the same person and when they didn't. The accuracy for a single pair reached 77%. When multiple pairs were presented, the accuracy shot significantly higher, potentially increasing current forensic efficiency by more than tenfold. The project, a collaboration between Hod Lipson's Creative Machines lab at Columbia Engineering and Wenyao Xu's Embedded Sensors and Computing lab at the University at Buffalo, SUNY, was published today in Science Advances.
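
A back-of-envelope calculation shows why pooling pairs helps. If each pair comparison were an independent vote that is right 77% of the time (an idealizing assumption; real pairs from the same hands are correlated, so the true gain will differ), majority voting over several pairs drives accuracy up quickly:

```python
# Idealized illustration of multi-pair accuracy via independent majority voting.
from math import comb

def majority_vote_accuracy(p: float, n: int) -> float:
    """Probability that more than half of n independent votes are correct."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (1, 3, 5, 9):
    print(n, f"{majority_vote_accuracy(0.77, n):.3f}")
# prints roughly: 1 0.770, 3 0.866, 5 0.916, 9 0.965
```

Under this idealization, five pairs already push accuracy above 90%, consistent with the article's claim that presenting multiple pairs raises accuracy significantly.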

Once the team verified their results, they quickly sent the findings to a well-established forensics journal, only to receive a rejection a few months later. The anonymous expert reviewer and editor concluded that "It is well known that every fingerprint is unique, and therefore it would not be possible to detect similarities even if the fingerprints came from the same person."

The team did not give up. They doubled down on the lead, fed their AI system even more data, and the system kept improving. Aware of the forensics community's skepticism, the team opted to submit their manuscript to a journal aimed at a more general audience. The paper was rejected again, but Lipson, who is the James and Sally Scapa Professor of Innovation in the Department of Mechanical Engineering and co-director of the Makerspace Facility, appealed. "I don't normally argue editorial decisions, but this finding was too important to ignore," he said. "If this information tips the balance, then I imagine that cold cases could be revived, and even that innocent people could be acquitted."

While the system's accuracy is not sufficient to officially decide a case, it can help prioritize leads in ambiguous situations. After more back and forth, the paper was finally accepted for publication by Science Advances.

One of the sticking points was the following question: What alternative information was the AI actually using that had evaded decades of forensic analysis? After careful visualizations of the AI system's decision process, the team concluded that the AI was using a new kind of forensic marker.

"The AI was not using minutiae, which are the branchings and endpoints in fingerprint ridges, the patterns used in traditional fingerprint comparison," said Guo, who began the study as a first-year student at Columbia Engineering in 2021. "Instead, it was using something else, related to the angles and curvatures of the swirls and loops in the center of the fingerprint."
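
For context, ridge-flow angles of the kind Guo describes can be estimated with a classical structure-tensor computation over image gradients. The sketch below shows that standard technique as an illustration of the kind of signal involved; it is not the Columbia team's code.

```python
# Classical block-wise ridge-orientation estimation from image gradients.
import numpy as np
from scipy import ndimage

def orientation_field(img: np.ndarray, block: int = 16) -> np.ndarray:
    """Return the dominant ridge angle (radians) for each block x block patch."""
    gx = ndimage.sobel(img.astype(float), axis=1)  # horizontal gradient
    gy = ndimage.sobel(img.astype(float), axis=0)  # vertical gradient
    h, w = img.shape
    angles = np.zeros((h // block, w // block))
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            x = gx[i:i + block, j:j + block]
            y = gy[i:i + block, j:j + block]
            # Double-angle averaging handles the 180-degree ambiguity of ridges.
            angles[i // block, j // block] = 0.5 * np.arctan2(
                2 * (x * y).sum(), (x * x - y * y).sum())
    return angles
```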

Columbia Engineering senior Aniv Ray and PhD student Judah Goldfeder, who helped analyze the data, noted that their results are just the beginning. "Just imagine how well this will perform once it's trained on millions, instead of thousands, of fingerprints," said Ray.

The team is aware of potential biases in the data. The authors present evidence indicating that the AI performs similarly across genders and races, where samples were available. However, they note, more careful validation needs to be done using datasets with broader coverage if this technique is to be used in practice.

This discovery is an example of more surprising things to come from AI, notes Lipson. "Many people think that AI cannot really make new discoveries, that it just regurgitates knowledge," he said. "But this research is an example of how even a fairly simple AI, given a fairly plain dataset that the research community has had lying around for years, can provide insights that have eluded experts for decades."

He added, "Even more exciting is the fact that an undergraduate student, with no background in forensics whatsoever, can use AI to successfully challenge a widely held belief of an entire field. We are about to experience an explosion of AI-led scientific discovery by non-experts, and the expert community, including academia, needs to get ready."

Continue reading here:

AI Discovers That Not Every Fingerprint Is Unique - Columbia University

AI can transform education for the better – The Economist

As pupils and students return to classrooms and lecture halls for the new year, it is striking to reflect on how little education has changed in recent decades. Laptops and interactive whiteboards hardly constitute disruption. Many parents bewildered by how their children shop or socialise would be unruffled by how they are taught. The sector remains a digital laggard: American schools and universities spend around 2% and 5% of their budgets, respectively, on technology, compared with 8% for the average American company. Techies have long coveted a bigger share of the $6trn the world spends each year on education.

When the pandemic forced schools and universities to shut down, the moment for a digital offensive seemed nigh. Students flocked to online learning platforms to plug gaps left by stilted Zoom classes. The market value of Chegg, a provider of online tutoring, jumped from $5bn at the start of 2020 to $12bn a year later. Byju's, an Indian peer, soared to a private valuation of $22bn in March 2022 as it snapped up other providers across the world. Global venture-capital investment in education-related startups jumped from $7bn in 2019 to $20bn in 2021, according to Crunchbase, a data provider.

Then, once covid was brought to heel, classes resumed much as before. By the end of 2022 Chegg's market value had slumped back to $3bn. Early last year investment firms including BlackRock and Prosus started marking down the value of their stakes in Byju's as its losses mounted. "In hindsight we grew a bit too big a bit too fast," admits Divya Gokulnath, the company's co-founder.

If the pandemic couldn't overcome the education sector's resistance to digital disruption, can artificial intelligence? ChatGPT-like generative AI, which can converse cleverly on a wide variety of subjects, certainly looks the part. So much so that educationalists began to panic that students would use it to cheat on essays and homework. In January 2023 New York City banned ChatGPT from public schools. Increasingly, however, it is generating excitement as a means to provide personalised tutoring to students and speed up tedious tasks such as marking. By May New York had let the bot back into classrooms.

Learners, for their part, are embracing the technology. Two-fifths of undergraduates surveyed last year by Chegg reported using an AI chatbot to help them with their studies, with half of those using it daily. Indeed, the technology's popularity has raised awkward questions for companies like Chegg, whose share price plunged last May after Dan Rosensweig, its chief executive, told investors it was losing customers to ChatGPT. Yet there are good reasons to believe that education specialists who harness AI will eventually prevail over generalists such as OpenAI, the maker of ChatGPT, and other tech firms eyeing the education business.

For one, AI chatbots have a bad habit of spouting nonsense, an unhelpful trait in an educational context. Students want content from trusted providers, argues Kate Edwards, chief pedagogist at Pearson, a textbook publisher. The company has not allowed ChatGPT and other AIs to ingest its material, but has instead used the content to train its own models, which it is embedding into its suite of learning apps. Rivals including McGraw Hill are taking a similar approach. Chegg has likewise developed its own AI bot that it has trained on its ample dataset of questions and answers.

What is more, as Chegg's Mr Rosensweig argues, teaching is not merely about giving students an answer, but about presenting it in a way that helps them learn. Understanding pedagogy thus gives education specialists an edge. Pearson has designed its AI tools to engage students by breaking complex topics down, testing their understanding and providing quick feedback, says Ms Edwards. Byju's is incorporating "forgetting curves" for students into the design of its AI tutoring tools, refreshing their memories at personalised intervals. Chatbots must also be tailored to different age groups, to avoid either bamboozling or infantilising students.
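
The forgetting-curve idea is easy to state mathematically. In the classic Ebbinghaus model, recall probability decays as R(t) = exp(-t/S), where S is the memory's stability, and a tutor schedules the next refresher just before R falls below a target. The sketch below illustrates that logic; the stability-update rule is an assumption for illustration, not Byju's algorithm.

```python
# Toy forgetting-curve scheduler based on the Ebbinghaus model R(t) = exp(-t/S).
import math

def recall_probability(hours_elapsed: float, stability: float) -> float:
    """Estimated probability the student still remembers the item."""
    return math.exp(-hours_elapsed / stability)

def next_review_in_hours(stability: float, target_recall: float = 0.9) -> float:
    """Schedule the next refresher just before recall drops below target."""
    return -stability * math.log(target_recall)

# Toy usage: each successful review strengthens memory, stretching the interval.
stability = 24.0  # hours; a fresh memory (assumed starting value)
for review in range(1, 4):
    print(f"review {review}: in {next_review_in_hours(stability):.1f} h")
    stability *= 2.5  # assumed boost per successful recall
```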

Specialists that have already forged relationships with risk-averse educational institutions will have the added advantage of being able to embed AI into otherwise familiar products. Anthology, a maker of education software, has incorporated generative-AI features into its Blackboard Learn program to help teachers speedily create course outlines, rubrics and tests. Established suppliers are also better placed to instruct teachers on how to make use of AIs capabilities.

Bringing AI to education will not be easy. Although teachers have endured a covid-induced crash course in education technology, many are still behind the learning curve. Fewer than a fifth of British educators surveyed by Pearson last year reported receiving training on digital learning tools. Tight budgets at many institutions will make selling new technology an uphill battle. AI sceptics will have to be won over, and new AI-powered tools may be needed to catch AI-powered cheating. Thorny questions will inevitably arise as to what all this means for the jobs of teachers: their attention may need to shift towards motivating students and instructing them on how best to work with AI tools. "We owe the industry answers on how to harness this technology," declares Bruce Dahlgren, boss of Anthology.

If those answers can be provided, it is not just companies like Mr Dahlgren's that stand to benefit. An influential paper from 1984 by Benjamin Bloom, an educational psychologist, found that one-to-one tutoring both improved the average academic performance of students and reduced the variance between them. AI could at last make individual tutors viable for the many. With the learning of students, especially those from poorer households, set back by the upheaval of the pandemic, such a development would certainly deserve top marks.


Read more here:

AI can transform education for the better - The Economist