US standards and tech group seeks public input on AI safety, development guidelines
The U.S. National Institute of Standards and Technology’s request for information seeks input from AI companies and the public on generative AI risk management and reducing risks of AI-generated misinformation.
The National Institute of Standards and Technology (NIST) within the U.S. Department of Commerce has released a Request for Information (RFI) to support its duties outlined in the latest Executive Order concerning the secure and responsible development and use of artificial intelligence (AI).
The agency announced that it is inviting public input until Feb. 2 to gather the feedback it needs to conduct tests ensuring the safety of AI systems.
Secretary of Commerce Gina Raimondo said the initiative was prompted by President Joe Biden’s October executive order, which directs NIST to develop guidelines for evaluation and red-teaming, foster consensus-based standards and establish testing environments for assessing AI systems. The resulting framework is meant to help the AI community develop AI safely, reliably and responsibly.
NIST’s request for information seeks input from AI companies and the public on managing the risks of generative AI and reducing the risks of AI-generated misinformation.
Generative AI, which can produce text, photos and videos in response to open-ended prompts, has stirred both excitement and alarm. Concerns include job displacement, electoral disruption and the possibility that the technology could surpass human capabilities, with potentially catastrophic consequences.
The RFI also seeks details on where “red-teaming” would be most effective in AI risk assessment and on establishing best practices for it. Red-teaming, a practice that originated in Cold War simulations, is a technique in which a group known as the “red team” simulates adversarial scenarios or attacks to probe the vulnerabilities and weaknesses of a system, process or organization. It has long been employed in cybersecurity to uncover new risks.
In August, the first public red-teaming evaluation event in the United States took place at a cybersecurity conference, coordinated by AI Village, SeedAI and Humane Intelligence.
In November, NIST announced the formation of a new AI consortium, along with an official notice seeking applicants with the relevant credentials. The consortium aims to develop and implement specific policies and measurements to ensure U.S. lawmakers take a human-centered approach to AI safety and governance.