Department of Commerce Announces New Guidance, Tools 270 Days Following President Biden's Executive Order on AI – NIST
The U.S. Department of Commerce announced today, on the 270-day mark since President Biden's Executive Order (EO) on the Safe, Secure and Trustworthy Development of AI, the release of new guidance and software to help improve the safety, security and trustworthiness of artificial intelligence (AI) systems.
The department's National Institute of Standards and Technology (NIST) released three final guidance documents that were first released in April for public comment, as well as a draft guidance document from the U.S. AI Safety Institute that is intended to help mitigate risks. NIST is also releasing a software package designed to measure how adversarial attacks can degrade the performance of an AI system. In addition, Commerce's U.S. Patent and Trademark Office (USPTO) issued a guidance update on patent subject matter eligibility to address innovation in critical and emerging technologies, including AI.
"For all its potentially transformational benefits, generative AI also brings risks that are significantly different from those we see with traditional software. These guidance documents and testing platform will inform software creators about these unique risks and help them develop ways to mitigate those risks while supporting innovation," said Laurie E. Locascio, Under Secretary of Commerce for Standards and Technology and NIST Director.
Read the full Department of Commerce news release.
Read the White House fact sheet on administration-wide actions on AI.
The NIST releases cover varied aspects of AI technology. Two of them appear today for the first time. One is the initial public draft of a guidance document from the U.S. AI Safety Institute, intended to help software developers mitigate the risks stemming from generative AI and dual-use foundation models (AI systems that can be used for either beneficial or harmful purposes). The other is a testing platform designed to help AI system users and developers measure how certain types of attacks can degrade the performance of an AI system.
Of the remaining three releases, two are guidance documents designed to help manage the risks of generative AI (the technology that enables many chatbots as well as text-based image and video creation tools) and serve as companion resources to NIST's AI Risk Management Framework (AI RMF) and Secure Software Development Framework (SSDF). The third proposes a plan for U.S. stakeholders to work with others around the globe on AI standards. These three publications previously appeared April 29 in draft form for public comment, and NIST is now releasing their final versions.
The two releases NIST is announcing today for the first time are:
AI foundation models are powerful tools that are useful across a broad range of tasks and are sometimes called dual-use because of their potential for both benefit and harm. NIST's AI Safety Institute has released the initial public draft of its guidelines on Managing Misuse Risk for Dual-Use Foundation Models (NIST AI 800-1), which outlines voluntary best practices for how foundation model developers can protect their systems from being misused to cause deliberate harm to individuals, public safety and national security.
The draft guidance offers seven key approaches for mitigating the risks that models will be misused, along with recommendations for how to implement them and how to be transparent about their implementation. Together, these practices can help prevent models from enabling harm through activities like developing biological weapons, carrying out offensive cyber operations, and generating child sexual abuse material and nonconsensual intimate imagery.
NIST is accepting comments from the public on the draft Managing Misuse Risk for Dual-Use Foundation Models until Sept. 9, 2024, at 11:59 p.m. Eastern Time. Comments can be submitted to NISTAI800-1 [at] nist.gov.
One of the vulnerabilities of an AI system is the model at its core. By exposing a model to large amounts of training data, it learns to make decisions. But if adversaries poison the training data with inaccuracies (for example, by introducing data that can cause the model to misidentify stop signs as speed limit signs), the model can make incorrect, potentially disastrous decisions. Testing the effects of adversarial attacks on machine learning models is one of the goals of Dioptra, a new software package aimed at helping AI developers and customers determine how well their AI software stands up to a variety of adversarial attacks.
The open-source software, available for free download, could help the community, including government agencies and small to medium-sized businesses, conduct evaluations to assess AI developers' claims about their systems' performance. This software responds to Executive Order section 4.1 (ii) (B), which requires NIST to help with model testing. Dioptra does this by allowing a user to determine what sorts of attacks would make the model perform less effectively and quantifying the performance reduction so that the user can learn how often and under what circumstances the system would fail.
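Dioptra itself is a full test platform, but the underlying measurement it supports can be illustrated in a few lines. The sketch below (a hypothetical toy, not Dioptra's actual API or any NIST code) trains a simple nearest-centroid classifier on synthetic two-class data, then injects poison points that sit in class 1's region while carrying class 0's label, and quantifies the resulting accuracy drop:

```python
import random

random.seed(0)

def make_data(n):
    """Two synthetic classes: class 0 clustered near (0, 0), class 1 near (4, 4)."""
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        center = 4.0 * label
        point = (random.gauss(center, 1.0), random.gauss(center, 1.0))
        data.append((point, label))
    return data

def centroids(train):
    """Mean point of each class in the training set."""
    sums = {0: [0.0, 0.0, 0], 1: [0.0, 0.0, 0]}
    for (x, y), label in train:
        sums[label][0] += x
        sums[label][1] += y
        sums[label][2] += 1
    return {lab: (s[0] / s[2], s[1] / s[2]) for lab, s in sums.items()}

def accuracy(cents, test):
    """Fraction of test points assigned to the class of the nearest centroid."""
    correct = 0
    for (x, y), label in test:
        pred = min(cents, key=lambda c: (x - cents[c][0]) ** 2 + (y - cents[c][1]) ** 2)
        correct += int(pred == label)
    return correct / len(test)

train, test = make_data(400), make_data(200)
clean_acc = accuracy(centroids(train), test)

# Poisoning attack: inject points located in class 1's region but labeled 0,
# dragging the class-0 centroid toward class 1 and corrupting the boundary.
poison = [((4.0, 4.0), 0)] * 300
poisoned_acc = accuracy(centroids(train + poison), test)

print(f"clean accuracy:    {clean_acc:.2f}")
print(f"poisoned accuracy: {poisoned_acc:.2f}")
```

Comparing the clean and poisoned accuracy figures is the same kind of before/after measurement the article describes, just without Dioptra's attack library, experiment tracking, or realistic models.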
Augmenting today's two initial releases are three finalized documents:
The AI RMF Generative AI Profile (NIST AI 600-1) can help organizations identify unique risks posed by generative AI and proposes actions for generative AI risk management that best align with their goals and priorities. The guidance is intended to be a companion resource for users of NIST's AI RMF. It centers on a list of 12 risks and just over 200 actions that developers can take to manage them.
The 12 risks include a lowered barrier to entry for cybersecurity attacks, the production of mis- and disinformation or hate speech and other harmful content, and generative AI systems confabulating or hallucinating output. After describing each risk, the document presents a matrix of actions that developers can take to mitigate it, mapped to the AI RMF.
The second finalized publication, Secure Software Development Practices for Generative AI and Dual-Use Foundation Models (NIST Special Publication (SP) 800-218A), is designed to be used alongside the Secure Software Development Framework (SP 800-218). While the SSDF is broadly concerned with software coding practices, the companion resource expands the SSDF in part to address a major concern with generative AI systems: They can be compromised with malicious training data that adversely affects the AI system's performance.
In addition to covering aspects of the training and use of AI systems, this guidance document identifies potential risk factors and strategies to address them. Among other recommendations, it suggests analyzing training data for signs of poisoning, bias, homogeneity and tampering.
AI systems are transforming society not only within the U.S., but around the world. A Plan for Global Engagement on AI Standards (NIST AI 100-5), today's third finalized publication, is designed to drive the worldwide development and implementation of AI-related consensus standards, cooperation and coordination, and information sharing.
The guidance is informed by priorities outlined in the NIST-developed Plan for Federal Engagement in AI Standards and Related Tools and is tied to the National Standards Strategy for Critical and Emerging Technology. This publication suggests that a broader range of multidisciplinary stakeholders from many countries participate in the standards development process.