According to the Associated Press, EU negotiators reached agreement on December 8 on the world's first comprehensive set of AI rules, paving the way for legal oversight of popular generative AI services such as ChatGPT.
Negotiators from the European Parliament and the EU's 27 member states overcame major differences over contentious issues such as generative AI and the use of facial recognition for surveillance to sign a preliminary political agreement on an AI bill.
"Agreement reached!" European Commissioner Thierry Breton posted on social media at midnight: "The European Union has become the first region in the world to set clear rules for the use of artificial intelligence."
The result came after marathon closed-door meetings of negotiators this week: the first session lasted 22 hours, and a second began on the morning of the 8th.
EU officials did not elaborate on what the final law, which will not come into force until 2025 at the earliest, would encompass. They are expected to hold further negotiations to work out the detailed rules.
The European Union published the first draft of its AI rulebook in 2021, taking the lead in the global race to build guardrails for AI. With the recent boom in generative AI, Europeans have hurried to update the rules, intending to offer a blueprint for the rest of the world.
Brando Benifei, an Italian lawmaker co-leading the EU negotiations, told The Associated Press that the European Parliament still needs to vote on the bill early next year, but that this is just a formality because agreement has already been reached.
Generative AI systems like ChatGPT have burst into view, amazing users with their ability to produce human-like text, photos, and songs, but also raising concerns that the rapidly evolving technology poses risks to jobs, privacy and copyright protection, and even human life itself.
The United States, the United Kingdom, China, and international coalitions such as the G7 have now all put forward their own AI regulatory initiatives.
Anu Bradford, a professor at Columbia Law School and an expert on EU law and digital regulation, said the EU's strong and comprehensive regulatory measures "can set a good example for many governments considering regulation". Other countries "may not copy every provision, but many of its aspects will be copied".
AI companies, which must comply with EU rules, may also replicate some of these practices in markets outside the continent, she said. "After all, it's not efficient to retrain different models for different markets."
Others worry that the agreement was hastily developed.
Daniel Friedlaender, head of the European office of the Computer and Communications Industry Association, said: "Today's political agreement marks the beginning of important and necessary technical work on crucial details of the AI bill, which are still missing."
According to the report, the original purpose of the AI bill was to mitigate the dangers of specific AI functions according to their level of risk. But European lawmakers pushed to expand it to cover foundation models, the advanced systems that underpin general-purpose AI services like ChatGPT and Google's Bard chatbot.
Foundation models became one of the biggest sticking points in the talks. Negotiators nonetheless managed to reach a tentative compromise early in the negotiations, despite opposition led by France, which called for self-regulation instead in order to help homegrown European generative AI companies compete with powerful American rivals such as Microsoft.
Also known as large language models, these systems are trained on vast amounts of text and images scraped from the internet, giving generative AI the ability to create something new. Traditional AI, by contrast, processes data and completes tasks using predetermined rules.
Under the agreement, the most advanced foundation models, those posing the greatest "systemic risk", will be subject to additional scrutiny, including requirements to disclose more information.
Researchers warn that these powerful foundation models, built by a handful of big tech companies, could be used to amplify online disinformation and manipulation, carry out cyberattacks, or create biological weapons. (Compiled by Guo Jun)