Altman under the "tightening spell"? OpenAI publishes 27-page AI safety guide, with the board having the authority to block the release of new AI models

Mondo Finance | Updated 2024-01-30

Author | Li Dongmei, Nuclear Cola

The potential dangers of generative AI have attracted the attention of the public, politicians, and AI researchers. As governments around the world move to regulate the technology, OpenAI has expanded its internal safety processes to address the threat of harmful artificial intelligence (AI).

Recently, Sam Altman, CEO of OpenAI, appeared at the HOPE Global Forums in Atlanta, Georgia. More than 5,200 delegates from 40 countries attended the event, whose theme was reimagining the global economic system so that the benefits and opportunities of business reach everyone.

OpenAI has a plan in place to contain the worst-case scenarios that could arise from the powerful AI technologies it is developing today and in the future.

The creators of ChatGPT, the chatbot that has taken the world by storm, this week unveiled a 27-page "Preparedness Framework" document outlining how OpenAI tracks, evaluates, and protects against the "catastrophic risks" posed by frontier AI models.

Those risks range from AI models being used to carry out large-scale cybersecurity attacks to assisting in the creation of biological, chemical, or nuclear weapons.

As part of the checks and balances laid out in the Preparedness Framework, OpenAI said that company leadership will decide whether to release a new AI model, but the final say rests with the board of directors, which retains "veto" power over the conclusions of OpenAI's executive team.

Even if the board does not exercise that veto, potentially risky AI models must pass a series of safety checks before they can be deployed.

A dedicated "prepend" team will lead this multi-pronged governance effort to monitor and mitigate potential risks posed by OpenAI's advanced AI models.

OpenAI updated its Preparedness team page on December 18, 2023. The main purpose of the update appears to be to provide a clear path for identifying, analyzing, and deciding how to handle the "catastrophic" risks inherent in the models the company is developing. As OpenAI defines it:

By catastrophic risk, we mean any risk that could result in hundreds of billions of dollars in economic loss or result in serious injury or death for many people – including, but not limited to, existential risks.

The Preparedness team focuses on the risks of frontier models still under development, while the Safety Systems team addresses the risks of current models already deployed in real-world applications. A third team, called "Superalignment," will work on the risks posed by future superintelligent models such as artificial general intelligence. Together, these three teams are meant to cover the full range of OpenAI's safety work.

Aleksander Madry, a professor at the Massachusetts Institute of Technology currently on leave, will lead the startup's Preparedness team. He will supervise a team of researchers responsible for assessing and closely monitoring potential risks and codifying them into scorecards. Depending on their level of impact, the scorecards classify specific risks as "low", "medium", "high", or "critical". If a model's risk rises above "high", further development stops; if it rises above "medium", deployment stops.

The Preparedness Framework states that "only models with a post-mitigation risk score of 'medium' or below can be deployed" and that "only models with a post-mitigation risk score of 'high' or below can be developed further."
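To make the two thresholds concrete, here is a minimal illustrative sketch of the decision rule in Python. The enum, function names, and ordering are hypothetical constructs for illustration only and are not part of any published OpenAI tooling.

```python
from enum import IntEnum

class RiskLevel(IntEnum):
    # Ordered scorecard categories described in the Preparedness Framework.
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3

def can_deploy(post_mitigation_risk: RiskLevel) -> bool:
    # Only models scoring "medium" or below after mitigations may be deployed.
    return post_mitigation_risk <= RiskLevel.MEDIUM

def can_develop_further(post_mitigation_risk: RiskLevel) -> bool:
    # Only models scoring "high" or below after mitigations may be developed further.
    return post_mitigation_risk <= RiskLevel.HIGH

# Example: a model rated "high" after mitigations may continue development
# but may not be deployed under these rules.
assert can_develop_further(RiskLevel.HIGH)
assert not can_deploy(RiskLevel.HIGH)
```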

In addition, OpenAI announced the creation of a new body, the Safety Advisory Group, to oversee the technical work and the operational structure behind safety decisions.

The Safety Advisory Group sits above OpenAI's technical development work and regularly produces reports on its AI models, which are delivered to both leadership and the board of directors. Leadership can decide whether to release an AI model based on the group's report, but the board can override that decision. In other words, even if leadership disregards the Safety Advisory Group's report and decides to release an inherently high-risk model, the board can use the same report to overturn the decision.
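As a rough illustration of this chain of decision-making, the sketch below models a release decision with a board override. The SafetyReport class, its fields, and the function are invented for illustration and do not reflect any actual OpenAI system.

```python
from dataclasses import dataclass

@dataclass
class SafetyReport:
    # Hypothetical stand-in for a Safety Advisory Group report.
    model_name: str
    post_mitigation_risk: str  # e.g. "low", "medium", "high", "critical"
    recommendation: str        # e.g. "deploy" or "hold"

def release_decision(report: SafetyReport,
                     leadership_approves: bool,
                     board_vetoes: bool) -> bool:
    # Leadership decides on release based on the report, but a board veto always wins.
    if board_vetoes:
        return False
    return leadership_approves

# Example: leadership approves a release despite a "hold" recommendation,
# but the board uses the same report to veto it.
report = SafetyReport("frontier-model-x", "high", "hold")
print(release_decision(report, leadership_approves=True, board_vetoes=True))  # False
```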

OpenAI said that the document is currently in "beta" testing and is expected to be updated regularly based on feedback.

The framework has drawn renewed attention to the unusual governance structure of this powerful AI startup. In last month's boardroom coup at OpenAI, the board of directors abruptly ousted co-founder and CEO Sam Altman. But with strong support from employees inside the company and broad backing from outside investors, Altman made a lightning comeback in just five days.

The high-profile power struggle raised new questions at the time: how much power should Altman retain over the company he runs, and how should the board constrain Altman and his executive team?

Notably, since the CEO's return, those who opposed him have been removed from the board. If the Safety Advisory Group makes a recommendation and the CEO makes his decision based on it, will this board really stop him? The answer is unknown. Beyond a promise that OpenAI will be audited by independent third parties, there is little mention of transparency, and questions have also been raised about what role the Safety Advisory Group will actually play.

OpenAI stressed that the current board of directors is still an "initial" one and has not yet been finalized. All three of its members are wealthy white individuals, and they are responsible for ensuring that OpenAI's cutting-edge technology advances for the benefit of all humanity.

The lack of diversity on the interim board has been widely criticized. Some critics also worry that corporate self-regulation alone is not enough, and that legislatures need to do more to ensure the safe development and deployment of AI tools.

OpenAI's release of this new proactive safety framework comes after a year of heated debate, in the tech industry and beyond, over the potential for AI technology to cause catastrophe.

Earlier this year, hundreds of leading AI scientists and researchers, including OpenAI's Altman and Google DeepMind CEO Demis Hassabis, signed a brief open letter calling for mitigating the "risk of extinction from AI" to be treated as a global priority, on par with other societal-scale risks such as "pandemics and nuclear war."

The statement quickly set off widespread public alarm. But some industry observers later argued that it was in fact a smokescreen, meant to divert attention from the real, present-day harms of AI tools toward vague and distant doomsday scenarios.

In any case, the power struggle that broke out inside OpenAI has heightened concerns about super-powerful artificial intelligence. Time magazine named Altman one of the most influential people in the world for his work advancing AI systems, while warning that AI could wipe out all human civilization.

Original link: Altman under the "tightening spell"? OpenAI publishes 27-page safety guide, and the board has the power to block the release of new AI models. A featured article on generative AI by Li Dongmei on InfoQ.
