The Department of State’s pilot project approach to AI adoption – FedScoop
With the release of ChatGPT and other large language models, generative AI has clearly caught the public's attention. This new awareness, particularly in the public sector, of the tremendous power of artificial intelligence is a net good. However, excessive focus on chatbot-style AI capabilities risks overshadowing applications that are both innovative and practical, and that serve the public through increased government transparency.
Within government, there are existing projects that are more mature than AI chatbots and are immediately ready to deliver more efficient government operations. Through a partnership between three offices, the Department of State is seeking to automate the cumbersome process of document declassification and prepare for the large volume of electronic records that will need to be reviewed in the next several years. The Bureau of Administration's Office of Global Information Services (A/GIS), the Office of Management Strategy and Solutions' Center for Analytics (M/SS CfA), and the Bureau of Information Resource Management's (IRM) Messaging Systems Office have piloted and are now moving toward production-scale deployment of AI to augment an intensive, manual review process that normally necessitates a page-by-page human review of 25-year-old classified electronic records. The pilot focused mainly on cable messages, which are communications between Washington and the department's overseas posts.
The 25-year declassification review process entails a manual review of electronic, classified records at the confidential and secret levels in the year that their protection period elapses; in many cases, 25 years after original classification. Manual review has historically been the only way to determine whether information can be declassified for eventual public release or must be exempted from declassification to protect information critical to our nation's security.
However, manual review is a time-intensive process. A team of about six reviewers works year-round to review classified cables and must use a triage method to prioritize the cables most likely to require exemption from automatic declassification. In most years, they are unable to review every one of the 112,000 to 133,000 electronic cables under review from 1995-1997. The risk of not being able to review every document for sensitive material is exacerbated by the increasing volume of documents.
This manual review strategy is quickly becoming unsustainable. Around 100,000 classified cables were created each year between 1995 and 2003. The number of cables created in 2006 that will require review grew to over 650,000, and volumes remain at that level for the following years. While emails are currently an insignificant portion of 25-year declassification reviews, the number of classified emails doubled every two years after 2001, rising to over 12 million emails in 2018. To get ahead of this challenge, we have turned to artificial intelligence.
Because AI is still a cutting-edge innovation that carries uncertainty and risk, our approach started with a pilot to test the process on a small scale. We trained a model, using human declassification decisions made in 2020 and 2021 on cables classified confidential and secret in 1995 and 1996, to recreate those decisions on cables classified in 1997. Over 300,000 classified cables were used for training and testing during the pilot. It took three months and five dedicated data scientists to develop and train a model that matches previous human declassification review decisions at a rate of over 97 percent and has the potential to reduce over 65 percent of the existing manual workload. The pilot approach allowed us to consider and plan for three AI risks: lack of human oversight of automated decision-making, the ethics of AI, and overinvestment of time and money in products that aren't usable.
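To make the general shape of this approach concrete, the sketch below shows a minimal supervised text-classification workflow of the kind described above: fit a classifier on cables that already carry human declassification decisions, then score cables from a later year. This is an illustrative assumption, not the Department's actual pipeline; the cable text, labels, column names, and choice of scikit-learn are all hypothetical.

```python
# A minimal, hypothetical sketch of training on prior human declassification
# decisions and applying the model to a later year's cables.
# Not the Department's actual model or data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical 1995-1996 cables with the human reviewers' recorded decisions.
train_cables = [
    "Routine administrative message on embassy staffing levels.",
    "Reporting that identifies a sensitive intelligence source.",
    "Summary of local press coverage of a trade delegation visit.",
    "Details of ongoing negotiations with a foreign government.",
]
train_decisions = ["declassify", "exempt", "declassify", "exempt"]

# Bag-of-words features feeding a simple linear classifier.
vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(train_cables)
model = LogisticRegression().fit(X_train, train_decisions)

# Hypothetical 1997 cables awaiting review.
new_cables = [
    "Administrative note on embassy staffing rotations.",
    "Cable discussing a sensitive source inside the ministry.",
]
probabilities = model.predict_proba(vectorizer.transform(new_cables))
for cable, probs in zip(new_cables, probabilities):
    label = model.classes_[probs.argmax()]
    print(f"{label:11s} (confidence {probs.max():.2f})  {cable}")
```

In a real deployment the features, model class, and evaluation would be far more elaborate, but the basic pattern (learn from past human decisions, predict with a confidence score) is what the reported 97 percent agreement figure refers to.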
The new declassification tool will not replace jobs. The AI-assisted declassification review process requires human reviewers to remain part of the decision-making process. During the pilot and the subsequent weeks of work to put the model into production, reviewers were consistently consulted and their feedback integrated into the automated decision process. This combination of technological review with human review and insight is critical to the success of the model. The model cannot make a confident decision on every cable, so human reviewers make a decision as they normally would on a portion of all cables. Reviewers also conduct quality control: a small yet significant percentage of cables with confident automated decisions are given to reviewers for confirmation. If enough of the AI-generated decisions are contradicted during the quality control check, the model can be retrained to consider the information it missed and to integrate reviewer feedback. This feedback is critical to sustaining the model in the long term and to accounting for evolving geopolitical contexts. During the pilot, we determined that additional input from the Department's Office of the Historian (FSI/OH) could help strengthen future declassification review models by providing context about world events during the years of the records being reviewed.
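The human-in-the-loop routing described above can be summarized in a short sketch: low-confidence predictions are sent to reviewers, a sample of confident predictions is pulled for quality control, and QC agreement is tracked as a retraining signal. The threshold, sampling rate, and function names below are assumptions for illustration only.

```python
# A minimal sketch, under assumed thresholds, of confidence-based routing
# with quality-control sampling. Illustrative only.
import random

CONFIDENCE_THRESHOLD = 0.90   # below this, a human makes the call (assumed value)
QC_SAMPLE_RATE = 0.05         # fraction of confident decisions double-checked (assumed)

def route_cable(prediction: str, confidence: float) -> str:
    """Decide how a single model prediction is handled."""
    if confidence < CONFIDENCE_THRESHOLD:
        return "human_review"       # model abstains; a reviewer decides
    if random.random() < QC_SAMPLE_RATE:
        return "quality_control"    # confident, but sampled for confirmation
    return "auto_decision"          # confident decision accepted

def qc_agreement(qc_results: list[tuple[str, str]]) -> float:
    """Share of sampled model decisions that reviewers confirmed.
    A low value would be a signal to retrain the model."""
    confirmed = sum(1 for model_label, human_label in qc_results
                    if model_label == human_label)
    return confirmed / len(qc_results) if qc_results else 1.0

# Example: route a batch of (prediction, confidence) pairs.
batch = [("declassify", 0.98), ("exempt", 0.72), ("exempt", 0.95)]
for label, conf in batch:
    print(label, conf, "->", route_cable(label, conf))
```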
There are ethical concerns that innovating with AI will lead to governing by algorithm. Although the descriptive AI used in our pilot does not construct narrative conversations like large language models (LLMs) such as ChatGPT, it is designed to make decisions by learning from previous human inputs. This approximation of human thought raises concerns about ethical government when it replaces what is considered sensitive and specialized experience. In our implementation, AI is a tool that works in concert with humans for validation, oversight, and process refinement. Incorporating AI tools into our workflows requires continually addressing the ethical dimensions of automated decision-making.
This project also saves money, potentially millions of dollars' worth of personnel hours. Innovation for the sake of being innovative can result in overinvestment in dedicated staff and technology that cannot sustain itself or produce long-term cost savings. Because we tested our short-term pilot within the confines of existing technology, when we forecast the workload reduction across the next ten years of reviews, we anticipate almost $8 million in savings on labor costs. Those savings can be applied to piloting AI solutions for other governmental programs managing increased volumes of data and records with finite resources, such as information access requests for electronic records and Freedom of Information Act requests.
Rarely in government do we prioritize the time to try, and potentially fail, in the interest of innovation and efficiency. The small-scale declassification pilot allowed for a proof of concept before committing to sweeping changes. In our next phase, the Department is bringing the pilot to scale so that the AI technology is integrated with existing Department technology as part of the routine declassification process.
Federal interest in AI use cases has exploded in only the last few months, with many big and bold ideas being debated. While positive, these debates should not detract from use cases like this one, which can rapidly improve government efficiency and transparency through the release of information to the public. Furthermore, the lessons learned from this use case (having clear metrics of success up front, investing in data quality and structure, and starting with a small-scale pilot) can also be applied to future generative AI use cases. AI's general-purpose capabilities mean that it will eventually be a part of almost all aspects of how the government operates, from budget and HR to strategy and policymaking. We have an opportunity to help shape how the government modernizes its programs and services within and across federal agencies to improve services for the public in ways previously unimagined or impossible.
Matthew Graviss is chief data and AI officer at the Department of State, and director of the agency's Center for Analytics. Eric Stein is the deputy assistant secretary for the Office of Global Information Services at State's Bureau of Administration. Samuel Stehle is a data scientist within the Center for Analytics.