Finance Associated Press, December 20 (edited by Liu Rui). After signing the first executive order on AI regulation in October this year, the Biden administration said on Tuesday that it is taking the first step toward writing key standards and guidance for the safe deployment of generative AI and for how to test and secure such systems.
Biden takes the first step in developing AI standards
At the end of October, Biden signed a "landmark" executive order introducing the White House's first regulation of generative AI. Under the executive order, several U.S. agencies are required to develop standards to guard against the chemical, biological, radiological, nuclear, and cybersecurity risks posed by AI.
On Tuesday, the U.S. Department of Commerce's National Institute of Standards and Technology (NIST) said it would solicit input from AI companies and the public until Feb. 2 on key AI testing that is essential to ensuring the safety of AI systems.
NIST said it is developing guidelines for evaluating and testing AI, advancing the development of industry standards for AI, and providing test environments for evaluating AI systems.
U.S. Commerce Secretary Gina Raimondo said NIST's latest move, driven by Biden's October executive order, aims to develop "industry standards around AI safety, security, and trust, allowing the U.S. to continue to lead the world in the responsible development and use of this rapidly evolving technology."
NIST is developing guidelines for AI "red team" testing
Among the testing guidelines NIST is developing are those for so-called "red team" testing: NIST will consider where red-team testing is most beneficial for AI risk assessment and management, and will develop best practices for it.
The "red team" test is an important part of cybersecurity testing. It refers to the simulation of real-world adversaries by a team of experts to test and enhance the security of the system.
In August, the United States held its first public AI "red team" test at a major cybersecurity conference. The event was organized mainly by AI industry groups such as AI Village, SeedAI, and Humane Intelligence.
The White House said that at the August event, thousands of participants tested whether they could "make these systems produce undesirable outputs, or make these systems fail in other ways, with the goal of better understanding the risks posed by these systems." The event "showcased how external red teams can be an effective tool for identifying new AI risks."