Department of Commerce Announces New Guidance, Tools 270 Days Following President Biden's Executive Order on AI – NIST
The U.S. Department of Commerce announced today, on the 270-day mark since President Biden's Executive Order (EO) on the Safe, Secure and Trustworthy Development of AI, the release of new guidance and software to help improve the safety, security and trustworthiness of artificial intelligence (AI) systems.
The department's National Institute of Standards and Technology (NIST) released three final guidance documents that were first released in April for public comment, as well as a draft guidance document from the U.S. AI Safety Institute that is intended to help mitigate risks. NIST is also releasing a software package designed to measure how adversarial attacks can degrade the performance of an AI system. In addition, Commerce's U.S. Patent and Trademark Office (USPTO) issued a guidance update on patent subject matter eligibility to address innovation in critical and emerging technologies, including AI.
"For all its potentially transformational benefits, generative AI also brings risks that are significantly different from those we see with traditional software. These guidance documents and testing platform will inform software creators about these unique risks and help them develop ways to mitigate those risks while supporting innovation," said Laurie E. Locascio, Under Secretary of Commerce for Standards and Technology and NIST Director.
Read the full Department of Commerce news release.
Read the White House fact sheet on administration-wide actions on AI.
The NIST releases cover varied aspects of AI technology. Two of them appear today for the first time. One is the initial public draft of a guidance document from the U.S. AI Safety Institute, intended to help software developers mitigate the risks stemming from generative AI and dual-use foundation models: AI systems that can be used for either beneficial or harmful purposes. The other is a testing platform designed to help AI system users and developers measure how certain types of attacks can degrade the performance of an AI system.
Of the remaining three releases, two are guidance documents designed to help manage the risks of generative AI (the technology that enables many chatbots as well as text-based image and video creation tools) and serve as companion resources to NIST's AI Risk Management Framework (AI RMF) and Secure Software Development Framework (SSDF). The third proposes a plan for U.S. stakeholders to work with others around the globe on AI standards. These three publications previously appeared April 29 in draft form for public comment, and NIST is now releasing their final versions.
The two releases NIST is announcing today for the first time are:
AI foundation models are powerful tools that are useful across a broad range of tasks and are sometimes called dual-use because of their potential for both benefit and harm. NIST's AI Safety Institute has released the initial public draft of its guidelines on Managing Misuse Risk for Dual-Use Foundation Models (NIST AI 800-1), which outlines voluntary best practices for how foundation model developers can protect their systems from being misused to cause deliberate harm to individuals, public safety and national security.
The draft guidance offers seven key approaches for mitigating the risks that models will be misused, along with recommendations for how to implement them and how to be transparent about their implementation. Together, these practices can help prevent models from enabling harm through activities like developing biological weapons, carrying out offensive cyber operations, and generating child sexual abuse material and nonconsensual intimate imagery.
NIST is accepting comments from the public on the draft Managing Misuse Risk for Dual-Use Foundation Models guidelines until Sept. 9, 2024, at 11:59 p.m. Eastern Time. Comments can be submitted to NISTAI800-1 [at] nist.gov.
One of the vulnerabilities of an AI system is the model at its core. By exposing a model to large amounts of training data, it learns to make decisions. But if adversaries poison the training data with inaccuracies (for example, by introducing data that can cause the model to misidentify stop signs as speed limit signs), the model can make incorrect, potentially disastrous decisions. Testing the effects of adversarial attacks on machine learning models is one of the goals of Dioptra, a new software package aimed at helping AI developers and customers determine how well their AI software stands up to a variety of adversarial attacks.
The open-source software, available for free download, could help the community, including government agencies and small to medium-sized businesses, conduct evaluations to assess AI developers' claims about their systems' performance. This software responds to Executive Order section 4.1 (ii) (B), which requires NIST to help with model testing. Dioptra does this by allowing a user to determine what sorts of attacks would make the model perform less effectively and quantifying the performance reduction so that the user can learn how often and under what circumstances the system would fail.
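The core idea Dioptra operationalizes, quantifying how much a given attack degrades a model's accuracy, can be sketched with a toy experiment. The code below is an illustration only, not Dioptra's actual API: it simulates a label-flipping poisoning attack against a simple nearest-neighbor classifier on synthetic data and measures the resulting drop in test accuracy.

```python
# Toy illustration (not Dioptra itself): quantify how label-flipping
# "poisoning" of training data degrades a simple classifier's accuracy.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    """Synthetic two-class data: Gaussian blobs around (-2,-2) and (+2,+2)."""
    y = rng.integers(0, 2, size=n)
    X = rng.normal(size=(n, 2)) + np.where(y[:, None] == 0, -2.0, 2.0)
    return X, y

X_train, y_train = make_data(400)
X_test, y_test = make_data(400)

def accuracy_after_poisoning(flip_fraction):
    """Flip a fraction of training labels at random (the simulated attack),
    then evaluate a 1-nearest-neighbor classifier on clean test data."""
    y_poisoned = y_train.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]
    # Predict each test point's label from its nearest training point.
    dists = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    preds = y_poisoned[dists.argmin(axis=1)]
    return (preds == y_test).mean()

for frac in (0.0, 0.3, 0.6):
    print(f"{frac:.0%} of labels flipped -> test accuracy {accuracy_after_poisoning(frac):.3f}")
```

A real evaluation platform generalizes this loop: run the model under a family of parameterized attacks and report how accuracy falls as attack strength grows, which is the "how often and under what circumstances" measurement described above.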
Augmenting today's two initial releases are three finalized documents:
The AI RMF Generative AI Profile (NIST AI 600-1) can help organizations identify unique risks posed by generative AI and proposes actions for generative AI risk management that best align with their goals and priorities. The guidance is intended to be a companion resource for users of NIST's AI RMF. It centers on a list of 12 risks and just over 200 actions that developers can take to manage them.
The 12 risks include a lowered barrier to entry for cybersecurity attacks, the production of mis- and disinformation or hate speech and other harmful content, and generative AI systems confabulating or hallucinating output. After describing each risk, the document presents a matrix of actions that developers can take to mitigate it, mapped to the AI RMF.
The second finalized publication, Secure Software Development Practices for Generative AI and Dual-Use Foundation Models (NIST Special Publication (SP) 800-218A), is designed to be used alongside the Secure Software Development Framework (SP 800-218). While the SSDF is broadly concerned with software coding practices, the companion resource expands the SSDF in part to address a major concern with generative AI systems: They can be compromised with malicious training data that adversely affects the AI system's performance.
In addition to covering aspects of the training and use of AI systems, this guidance document identifies potential risk factors and strategies to address them. Among other recommendations, it suggests analyzing training data for signs of poisoning, bias, homogeneity and tampering.
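The data-screening recommendation above can be illustrated with a minimal sketch. Everything here is a hypothetical example, not anything specified by SP 800-218A: the function name, the thresholds, and the record format are choices made for illustration.

```python
# Illustrative sketch: cheap screening checks in the spirit of the
# advice to analyze training data for poisoning, bias, homogeneity
# and tampering. Thresholds and record format are hypothetical.
from collections import Counter
import hashlib

def screen_records(records):
    """records: list of (text, label) pairs. Returns warning flags."""
    flags = {}

    # Homogeneity / tampering: many byte-identical records can indicate
    # a bulk-injected (possibly poisoned) batch.
    digests = Counter(hashlib.sha256(text.encode()).hexdigest()
                      for text, _ in records)
    dup_share = 1 - len(digests) / len(records)
    flags["duplicate_share"] = round(dup_share, 3)
    flags["suspicious_duplication"] = dup_share > 0.2

    # Bias / label skew: one label dominating the dataset.
    labels = Counter(label for _, label in records)
    top_share = max(labels.values()) / len(records)
    flags["top_label_share"] = round(top_share, 3)
    flags["suspicious_label_skew"] = top_share > 0.9
    return flags

sample = [("good product", "pos"), ("bad product", "neg"),
          ("great!", "pos"), ("great!", "pos"), ("great!", "pos")]
print(screen_records(sample))
```

In practice such checks would be one early gate in a data pipeline, ahead of the statistical and provenance analyses the guidance also discusses.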
AI systems are transforming society not only within the U.S., but around the world. A Plan for Global Engagement on AI Standards (NIST AI 100-5), today's third finalized publication, is designed to drive the worldwide development and implementation of AI-related consensus standards, cooperation and coordination, and information sharing.
The guidance is informed by priorities outlined in the NIST-developed Plan for Federal Engagement in AI Standards and Related Tools and is tied to the National Standards Strategy for Critical and Emerging Technology. This publication suggests that a broader range of multidisciplinary stakeholders from many countries participate in the standards development process.