OpenAI Sets Up Preparedness Early Warning Team: The board of directors has the power to block the release of risky AI models

Mondo Finance Updated on 2024-01-30

IT Home reported on December 19 that OpenAI, the developer of ChatGPT, recently announced the establishment of a new "Preparedness" team to monitor the potential threats its technology may pose, prevent it from falling into the wrong hands, and keep it from being used to make chemical and biological weapons.

The team, led by MIT AI professor Aleksander Madry, will recruit AI researchers, computer scientists, national security experts, and policy experts to continuously monitor and test the technology OpenAI develops and to warn the company of any signs of danger.

OpenAI released guidelines on Monday called the Preparedness Framework, emphasizing that the guidelines are still in beta.

It is reported that the Preparedness team will send a monthly report to a new internal safety advisory group, which will analyze it and submit recommendations to OpenAI CEO Sam Altman and the board of directors. Altman and the company's top management can decide whether to release a new AI system based on these reports, but the board has the authority to reverse that decision.

The Preparedness team will repeatedly evaluate OpenAI's most advanced, not-yet-released AI models and rate them across different categories of perceived risk on a four-level scale from low to high: "low", "medium", "high", and "critical." Under the new guidelines, OpenAI will only roll out models rated "low" or "medium."
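As a rough illustration of that gating rule (a hypothetical sketch, not OpenAI's actual tooling; the category names, the RiskLevel enum, and the may_deploy function are invented for the example), the deployment decision, including the board's power to reverse it, could be expressed like this:

```python
from enum import IntEnum

class RiskLevel(IntEnum):
    """Illustrative four-level scale, ordered from lowest to highest risk."""
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3

# Hypothetical risk categories; the article only says models are rated
# across different categories of perceived risk.
CATEGORIES = ("cybersecurity", "cbrn", "persuasion", "model_autonomy")

def may_deploy(ratings: dict[str, RiskLevel], board_blocks: bool = False) -> bool:
    """Allow release only if every category is rated MEDIUM or below
    and the board has not exercised its power to block the release."""
    if board_blocks:
        return False
    return all(level <= RiskLevel.MEDIUM for level in ratings.values())

# Example: a single category rated HIGH keeps the model from being rolled out.
ratings = {c: RiskLevel.LOW for c in CATEGORIES}
ratings["cybersecurity"] = RiskLevel.HIGH
print(may_deploy(ratings))  # False
```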

OpenAI's "Preparedness" team sits somewhere between two existing teams: the "Security Systems" team, which is responsible for eliminating existing issues such as racial bias in AI systems, and the "Superalignment" team, which is working on how to ensure that AI does not harm humans in future scenarios beyond human intelligence.

IT Home notes that the popularity of ChatGPT and the rapid development of generative AI technology have sparked heated debate in the tech community about the technology's potential dangers. Well-known AI experts from OpenAI, Google, and Microsoft warned this year that the technology could pose an existential threat to humanity comparable to a pandemic or nuclear war. Other AI researchers argue that focusing too much on these distant risks ignores the harm AI technologies are already causing today. There are also AI business leaders who believe that concerns about risk are overblown and that companies should continue to advance the technology for the benefit of society.

OpenAI has taken a relatively neutral stance in this debate. CEO Sam Altman acknowledged that the technology carries serious long-term risks, but also called for attention to existing problems. He believes that regulation should not make it harder for smaller companies to compete in the AI space. At the same time, he has pushed the commercialization of the company's technology and raised funds to accelerate its development.

Madry is a veteran AI researcher who previously led MIT's Center for Deployable Machine Learning and co-led the MIT AI Policy Forum. He joined OpenAI this year but resigned along with a handful of other OpenAI executives after the board fired Altman; he returned to the company five days later, when Altman was reinstated. OpenAI is governed by a non-profit board of directors whose mission is to advance AI and make it beneficial for all of humanity. Three of the board members who fired Altman resigned after his reinstatement, and the organization is currently in the process of selecting new board members.

Despite the "turmoil" in its leadership, Madry said he still believes the OpenAI board takes the risks of AI seriously.

In addition to AI talent, OpenAI's "Preparedness" team will also recruit experts from areas such as national security to help the company understand how to deal with significant risks. Madry said the team has already begun to engage with agencies such as the U.S. National Nuclear Security Administration to ensure that the company can properly study the risks of AI.

One of the team's focuses is to monitor when and how OpenAI's technology could teach people to break into computers or to build dangerous chemical, biological, and nuclear weapons, beyond what they could find online through ordinary research. Madry is looking for people who will ask themselves: "How do I break these rules? How do I become the most cunning villain?"

OpenAI said in a blog post on Monday that the company will also allow "qualified independent third parties" from outside of OpenAI to test its technology.

Madry said he disagrees both with the "doomsayers" who worry that AI has already surpassed human intelligence and with the "accelerationists" who want to remove all barriers to AI development.

"I really think it's a very simple way to separate development and inhibition," he said. AI has a lot of potential, but we also need to work to ensure that it is realized and that the negative impacts are minimized. ”
