U.S. media: OpenAI releases new safety guidelines, giving board of directors veto power

Mondo Finance Updated on 2024-01-30

On December 19, according to Bloomberg, the American artificial intelligence company OpenAI announced on the 18th local time a set of safety guidelines that strengthen its internal safety processes and give the board of directors veto power over high-risk artificial intelligence.

On the same day, OpenAI reportedly published a series of safety guidelines explaining how the company plans to handle the extreme risks that its most powerful artificial intelligence (AI) systems may pose.

Under the guidelines, OpenAI will begin deploying its latest technology only after determining that it is safe. The company will establish an advisory team to review safety reports before forwarding them to senior management and the board of directors. While company leadership is responsible for making decisions, the board of directors can overturn those decisions.

File photo: OpenAI logo.

OpenAI announced the establishment of its "Preparedness" team in October 2023. The team will continue to evaluate its AI systems in four areas: cybersecurity, chemical threats, nuclear threats, and biological threats, while working to mitigate any harm the technology may cause.

Specifically, the company defines "catastrophic" risks as those that could cause hundreds of millions of dollars in economic losses, or injury or death to many individuals.

"AI is not inherently good or bad; we are shaping it," said Aleksander Madry, who leads the Preparedness team. He said his team reports monthly to the newly formed internal safety advisory group, which then reviews and analyzes the team's recommendations.

Madry hopes that other companies will use OpenAI's guidelines to assess the potential risks of their own AI models.

In March 2023, more than 1,000 AI experts and industry executives, including Elon Musk and Apple co-founder Steve Wozniak, signed an open letter calling for a moratorium on advanced AI development until shared safety protocols for such systems are developed, implemented, and audited by independent experts.