The EU AI Act is about to be implemented, and safety regulation has gradually become a global consensus

Mondo Technology Updated on 2024-01-28

Our reporter Qin Xiao reports from Beijing.

In addition to bringing surprises, AI has also spawned problems such as data leakage, telecom fraud, and risks to personal privacy. In response, countries have formulated policies and regulations to govern the use of AI. In China, seven departments, including the Cyberspace Administration of China and the National Development and Reform Commission (NDRC), jointly issued the Interim Measures for the Management of Generative AI Services. The U.S. has also issued a series of regulatory documents.

In Europe, where AI regulation was put on the agenda earlier, an AI regulatory bill is also about to take effect. Previously, the European Parliament had passed a draft mandate to negotiate the Artificial Intelligence Act (AI Act). However, the follow-up negotiations on the bill lasted six months, raising concerns that the EU might fall behind in the field of AI.

However, according to the latest reports from foreign media, negotiators from the European Parliament, EU member states, and the European Commission have agreed on a series of control measures for generative AI tools. The reporter of China Business Daily noted that these cover tools that have attracted worldwide attention, including OpenAI's ChatGPT and Google's Bard.

Hidden worries under the carnival

The AI wave triggered by ChatGPT, released by OpenAI at the end of last year, has exceeded everyone's expectations; its penetration into and disruption of various industries grow by the day. In particular, emerging technologies represented by pre-trained large models and AI for Science are driving a new round of AI innovation. With the breakthroughs in generative AI brought about by large models, technology, the economy, and society are all expected to undergo profound change.

This disruptiveness is mainly reflected in the rapid iteration of large-model technology, which has broken through the previous ceiling of AI development. Characterized by massive training data, strong model generalization, and centralized application modes, large models are reshaping enterprises' production engines with a capacity for near-unlimited output and driving dramatic gains in production efficiency. Around the world, the development and application of large models are being actively promoted.

But at the same time, there has been an ongoing discussion about AI regulation.

At present, the rapid development of generative AI has outpaced the establishment of relevant laws and regulations, standards, and ethical norms. Risks involving privacy and data protection, copyright infringement, deepfakes, job displacement, and discrimination and bias are coming to light.

As the problem becomes more prominent, countries have also put the regulation of AI on the agenda. According to the 2023 Artificial Intelligence Index Report released by Stanford University, the results of a survey of the legislative records of 127 countries show that the number of bills containing "artificial intelligence" passed into law has increased from just one in 2016 to 37 in 2022. The report's analysis of the records of AI laws and regulations in 81 countries since 2016 similarly shows that the number of references to AI in the global legislative process has increased by nearly 65 times.

Alongside China and the United States, the European Union has also been active in promoting AI legislation and regulation.

In June 2023, the European Parliament voted 499 in favor, 28 against, with 93 abstentions, overwhelmingly passing the draft mandate to negotiate the Artificial Intelligence Act. Under the EU legislative procedure, the European Parliament, EU member states, and the European Commission then conduct "tripartite negotiations" to determine the final provisions of the Act.

It is reported that the draft takes a tiered approach to regulating foundation models, defined as models with more than 45 million users. Chatbots such as ChatGPT would be classified as "very capable foundation models" (VCFMs) and subject to additional obligations, including regular reviews for potential vulnerabilities.

It is worth noting that negotiations on the bill lasted six months. The dispute centers on how to balance protecting Europe's own AI start-ups against potential societal risks. Some EU countries, including France and Germany, oppose the rules, saying they would unnecessarily handicap local businesses.

In response, Kent Walker, Google's global president and chief legal officer, said that the EU should aim for the best AI rules, rather than the first AI regulations. "Technology leadership requires a balance between innovation and regulation," he said. "Regulators should not impose restrictions on AI development, but hold it accountable when it violates public trust."

"We've long said that AI is too important to be left unattended or unregulated," Kent Walker said. Regulators should race to create the best AI regulation, not the first one. ”

Sam Altman, CEO of OpenAI, agrees: "We don't need strict regulation, and future generations probably won't need it. But at some point, when an AI model can provide the equivalent of an entire company, an entire country, or an entire world, maybe we do need some collective oversight."

However, Alexandra van Huffelen, Dutch Minister for Digitalisation, said: "The EU must reach an agreement by the end of this year, especially on artificial general intelligence. The world's citizens, stakeholders, non-governmental organizations, and the private sector are all looking to us for a meaningful piece of legislation on AI."

According to Nick Reiners, a technology policy analyst at Eurasia Group, a political risk consultancy, the EU AI Act is unlikely to become the world's leading standard for industry regulation, and it may not be agreed before next year's European Parliament elections. So many issues remain to be finalized in Wednesday's final round of negotiations that, even if negotiators work late into the night as expected, the talks may have to be postponed until next year.

Regulation has become a global consensus

Beyond Europe, the ethics and safety governance of generative AI has become a common concern across the global AI field.

In China, the Cyberspace Administration of China, together with the National Development and Reform Commission, the Ministry of Education, the Ministry of Science and Technology, the Ministry of Industry and Information Technology, the Ministry of Public Security, and other departments, jointly promulgated the Interim Measures for the Management of Generative AI Services on July 13, 2023. Adhering to the principle of giving equal weight to development and security, the measures encourage combining innovation with governance and implement inclusive, prudent, and category-based regulation, aiming to improve the efficiency, accuracy, and agility of supervision. They came into effect on August 15, 2023.

The United States has issued a series of voluntary standards, such as the AI Risk Management Framework and the Blueprint for an AI Bill of Rights, which emphasize AI innovation and development and favor soft governance: guidelines, frameworks, and standards that organizations adopt voluntarily.

In addition, in October 2023, the United Nations established a high-level AI advisory body to address the risks and opportunities posed by AI technologies and to support the international community in strengthening governance.

At the earlier AI Safety Summit, representatives and companies from 28 countries and regions, including the United States, China, and the European Union, signed the Bletchley Declaration on AI safety. The declaration reaffirmed a "human-centric, trustworthy and responsible" model of AI development, called for cooperation from the international community, and emphasized that AI should be deployed to safeguard human rights and advance the United Nations Sustainable Development Goals.

Wu Zhaohui, China's Vice Minister of Science and Technology, said at the summit that AI governance is an important issue facing all countries. He stressed that in developing artificial intelligence, countries should advocate a people-centered approach and AI for good, and strengthen management of technical risks. Wu also proposed enhancing the representation and voice of developing countries in global AI governance to bridge both the intelligence gap and the governance-capacity gap.

Zeng Yi, a researcher at the Institute of Automation of the Chinese Academy of Sciences, believes that while advanced AI has the potential to solve currently unsolved challenges in areas such as health, education, the environment, and science, the very nature of the AI systems that create these benefits also poses enormous risks. When larger, more unpredictable models are open to all, misuse and abuse will become a challenge not only for AI scientists but for the whole world. "We need to build a secure AI network that encompasses the whole world to address unpredictable progress and failure," he added.
