In the past year, the rapid development of generative artificial intelligence, represented by ChatGPT, has posed major challenges to regulators worldwide. There is an urgent need for a global consensus on AI governance, and especially AI regulation, to guard against the major risks that may arise. At present, however, the global regulatory landscape is fragmented: major economies differ on the basic concepts, values, paths, and strategies of AI and its governance, and are competing fiercely for discourse power in AI governance. Against this backdrop, while iteratively improving its domestic AI governance system, China should continue to enhance its influence in international AI governance by promoting global co-governance and sharing, building rules for regulatory cooperation, accelerating the formulation of international standards, and innovating multilateral coordination mechanisms.
Since 2023, the multi-scenario application of generative artificial intelligence (AI), represented by ChatGPT, has unfolded rapidly, and the potential risks of alienation behind iterative technological upgrading have brought new regulatory challenges. Around AI governance, and especially AI regulation, the world's major economies are stepping up efforts to become rule-makers. At the same time, however, the global regulatory landscape remains fragmented and regionalized; major economies differ on the basic concepts, values, paths, and strategies of AI; and a variety of factors constrain international regulatory coordination. Preventing countries from regulating under divergent rules, terminology, and requirements is a major challenge for regulators worldwide. In this regard, China should focus on promoting global co-governance and sharing, building rules for regulatory cooperation, accelerating the formulation of international standards, and innovating multilateral coordination mechanisms, so as to continuously enhance its international influence and discourse power in AI governance.
1 The world's major economies accelerate the exploration of AI regulation

Since 2023, AI regulation in major economies has accelerated significantly: relevant legislation, guidelines, and norms have been issued or put on the agenda, and some economies have established new special governance institutions or committees. First, the EU relies on laws, regulations, and directives to implement comprehensive, strict supervision. In April 2021, the European Commission proposed a draft AI Act, which classifies AI systems into four categories by security risk level (minimal or no risk, limited risk, high risk, and unacceptable risk) and applies correspondingly graduated regulatory measures; in the most serious cases, an offending company may be fined up to 30 million euros or 6% of its global annual revenue. In June 2023, the European Parliament approved the draft AI Act, which then formally entered trilogue negotiations among the European Parliament, the European Commission, and EU member states; an agreement is expected by the end of the year, with obligations taking effect for the companies concerned in 2026.
Second, the United States tends toward "soft" regulation that standardizes and guides. It relies mainly on "soft" instruments such as local autonomy, industry rules, and voluntary advocacy, with the primary goal of promoting AI development. In January 2020, the White House Office of Science and Technology Policy issued the Guidance for Regulation of Artificial Intelligence Applications, which proposes a series of risk assessment and management options but focuses on ensuring that regulatory rules do not hinder AI development. In October 2022, the same office released the Blueprint for an AI Bill of Rights, which outlines five core protections for the American public: freedom from unsafe or ineffective systems; freedom from algorithmic and systemic discrimination; protection of personal privacy; notice that an AI system is in use and an explanation of how and why it produces its results; and the right to opt out of AI technology. In January 2023, the U.S. Department of Commerce, through the National Institute of Standards and Technology (NIST), released the AI Risk Management Framework, which incorporates trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems. Since 2023, the United States has paid closer attention to AI legislation: in June, Democratic Rep. Ted Lieu and others introduced a bill to create a National AI Commission; in July, Secretary of State Antony Blinken and Commerce Secretary Gina Raimondo published an article in the Financial Times calling for better AI regulation. Until substantive laws and regulations are enacted, the United States hopes to rely on "voluntary commitments" to achieve safe, reliable, and transparent AI development. On July 21, 2023, the White House announced that seven AI giants (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) had "voluntarily" made a series of commitments to manage AI risks.
Third, China combines "soft" guidance with "hard" constraints. On the "hard" side, the successive promulgation and implementation of the E-Commerce Law, the Cybersecurity Law, the Data Security Law, and the Personal Information Protection Law have gradually laid the foundational rules for the AI field. The Cyberspace Administration of China (CAC) and other departments have issued the Guiding Opinions on Strengthening the Comprehensive Governance of Internet Information Service Algorithms, the Provisions on the Administration of Internet Information Service Algorithm Recommendations, and the Provisions on the Administration of Deep Synthesis of Internet Information Services, establishing a basic framework for regulating application scenarios such as generative AI. In July 2023, seven government departments, led by the CAC, jointly issued the Interim Measures for the Administration of Generative AI Services, the world's first binding regulation dedicated to generative AI. On the "soft" side, in 2019 China issued the Governance Principles for the New Generation of Artificial Intelligence: Developing Responsible Artificial Intelligence, which set out a framework and action guidelines for AI governance. Subsequent documents such as the Ethical Norms for the New Generation of Artificial Intelligence and the Measures for the Review of Science and Technology Ethics (Trial) provide important guidance for addressing the ethical and social-norm problems raised by AI, focusing on science and technology ethics, algorithm governance, and the security of industry applications.
In March 2023, China established the National Data Bureau, which is responsible for coordinating and advancing the construction of data infrastructure systems and the integration, sharing, development, and utilization of data resources, further consolidating the organizational foundation for AI regulation. China's combination of "soft" and "hard" governance methods places it at the forefront internationally; if it can be dynamically optimized and iterated, this model could become an important reference for international AI governance.
Fourth, the United Kingdom, ASEAN, and others are actively vying for the commanding heights of AI regulation. In March 2023, the UK released its "Innovative AI Regulation" white paper, followed in April by "Guidelines for the Use of Generative AI". In June, the UK announced that it would host the first global AI Safety Summit in the second half of the year, bringing together "like-minded" countries to formulate AI norms, and is striving to base a global AI regulatory body in the UK. ASEAN is drafting AI regulatory guidelines: according to Reuters on June 16, 2023, Singapore is leading ASEAN countries in drafting AI governance and ethics guidelines, expected to be formally announced at the ASEAN Digital Ministers' Meeting in early 2024. India is also moving aggressively, seeking to strengthen AI regulation through the proposed Digital India Act, 2023.
2 The main dilemmas of global AI regulatory cooperation

At present, the dilemmas of global AI regulatory cooperation are mainly reflected in the imbalance among regulating countries, differences in basic regulatory concepts and approaches, and the limited role of relevant international organizations.

2.1 Regulatory fragmentation has intensified, and imbalances have become prominent

On the one hand, AI regulatory schemes vary greatly among major economies, leading to regulatory fragmentation. In March 2023, Canada's Centre for International Governance Innovation released a report noting that, as of August 2022, 62 countries had submitted more than 800 AI-related policy documents or initiatives to the OECD AI Policy Observatory (OECD.AI) to report their measures supporting AI development and governance, with the majority submitted by major developed economies. The analysis finds that the types of regulatory measures are poorly delineated, the content of policy documents overlaps, and there is little evaluation of the usefulness, credibility, and independence of policies; measures for international cooperation or assistance on AI are especially scarce, highlighting the fragmentation of AI regulation. On the other hand, AI innovation and application are concentrated mainly in developed countries, the problem of regulatory imbalance has intensified, and developing countries lack a voice. Developed countries hold advantages in data resources, model computing power, and talent, so they are better able to advance AI R&D and applications and more likely to benefit from them.
By contrast, developing countries' model of participating in the international division of labor through labor has been seriously challenged. Their adoption of AI technology is low, and their capacity to respond to potential risks such as data leakage, privacy violations, and algorithmic bias is insufficient, making it difficult to form effective regulatory measures. This will further widen the technological gap between developing and developed countries, and may even exacerbate inequality in the distribution of national power.
2.2 Divergent regulatory approaches and insufficient interoperability
First, there are differences in basic concepts. Regulators currently disagree on many concepts, such as the definition of AI, risk classification, and the meaning of "responsible" and "trustworthy". For example, the U.S. Blueprint for an AI Bill of Rights uses a broad definition of AI that covers not only machine learning systems but also most types of software. The UK's "Innovative AI Regulation" defines AI as products and services that are "adaptable" and "autonomous", without pointing to specific algorithmic features or product types. Divergence over basic definitions greatly affects the scope and depth of AI regulation and profoundly constrains the formulation of industry rules and legislation, producing different compliance requirements across countries. Second, there are differences in values. The multilateral coordination framework for AI risks being politicized: the United States and Europe have carried out a series of regulatory coordination activities under frameworks such as the G7 and the U.S.-EU Trade and Technology Council (TTC), but they frequently declare that such cooperation should proceed on the basis of "shared democratic values" and intend to write these so-called values into the future global AI regulatory system. At the same time, individual countries continue to use industry standards, international regulations, and other regulatory instruments as tools of great-power competition, intensifying strategic differences and regulatory conflicts in global AI development and governance. Meanwhile, developing countries in Asia, Africa, and Latin America are largely excluded from the AI coordination mechanisms and initiatives of developed countries, which clearly fails to meet the requirements of global sustainable development.
Third, there are differences in regulatory models. Good regulation maximizes public benefit and minimizes risk, but AI regulatory models, and their effects, vary from country to country. The EU seeks to regulate almost all AI applications through a single act, creating a more comprehensive regulatory scope, a more centralized coordination approach, and stricter requirements. The core of U.S. regulation is to "avoid unnecessary regulatory or non-regulatory actions that hinder AI innovation and growth"; there is no comprehensive legislative system, and the security issues behind AI applications are addressed mainly through local autonomy, industry rules, and voluntary advocacy. Similarly, the UK focuses on developing AI guidelines that empower existing regulators and takes statutory action only when necessary. These two regulatory models produce different effects. Because AI technology keeps emerging and iterating, it is difficult for policymakers to clearly anticipate its future risks and benefits. The EU's strong regulatory model is thus conducive to risk prevention and control but may hold back cutting-edge technology. The United States and the United Kingdom have recently called for stronger regulation, but their legislative processes remain at an early stage; their weak regulatory model encourages technological innovation but may aggravate the potential risks behind technological alienation.
2.3 International coordination is slow, and the role of international organizations is limited
First, the OECD's standing is recognized by many countries, but its instruments are not substantively binding, making it difficult for it to drive the formation of a global AI regulatory framework. Although 46 countries have signed up to the OECD AI Principles, and the principles have been endorsed by the G20 and by major economies such as China and Russia, their reach remains largely concentrated in developed countries. In the absence of a credible way to assess implementation, countries have performed poorly in putting the principles into practice. Moreover, because the OECD has no legislative power, it cannot formulate binding international provisions to regulate the development and use of AI; it can only coordinate global AI governance through cognitive authority, norm-setting, and agenda-setting, which has proved insufficient to advance international regulatory cooperation.
Second, the United Nations, as an important platform for AI governance, faces challenges such as the lack of authoritative standards, which constrain its role in international coordination. At the end of March 2023, UNESCO issued a statement calling on all countries to implement as soon as possible the Recommendation on the Ethics of Artificial Intelligence, the first global agreement on AI ethics, adopted by the organization; more than 40 countries have worked with UNESCO to develop macro-level AI normative measures based on the Recommendation. On July 18, the UN Security Council held a high-level open meeting on "Opportunities and Risks of Artificial Intelligence for International Peace and Security", the first time the Security Council had met on the issue of AI. Nevertheless, in advancing AI regulatory cooperation the United Nations still faces many challenges, including the lack of authoritative standards, the slow pace of international norm-setting, and coordination mechanisms that need strengthening, all of which limit its effectiveness in promoting international regulatory cooperation in this field.
Third, the Group of Seven (G7) actively promotes the formulation of international standards, but intends to draw lines and build "small circles" based on values. In recent years, the G7's discussion of AI has escalated from the ministerial to the leaders' level, and the Global Partnership on AI (GPAI) has rapidly expanded its influence beyond the G7 as a tool to limit China's influence on the global AI regulatory system. In April 2023, the G7 Digital and Technology Ministers' Meeting issued a joint statement agreeing to pursue "risk-based" AI regulation. In May, G7 leaders issued the G7 Hiroshima Leaders' Communiqué, emphasizing that rules for digital technologies such as AI should be "in line with shared values", announcing that a ministerial forum dedicated to discussing AI progress would be established by the end of the year, and urging the OECD and others to accelerate the development of international standards. In June, the United States and the United Kingdom issued the Atlantic Declaration, emphasizing the construction of a U.S.-UK data bridge and joint work with allies on AI safety measures. This series of actions shows that Western countries intend to comprehensively pursue the global regulation and governance of emerging technologies such as AI, regarding it as an important arena for seizing discourse power and enhancing global leadership.
3 Implications and Suggestions

At present, the rapid development of AI technology has pushed global AI regulation into a new stage. It is urgent to form a joint force at the global level to meet the challenges and, through innovation in laws, rules, and policies, to remove the obstacles to technological innovation and application caused by a fragmented, regionalized global regulatory imbalance, so as to maximize the potential benefits of AI. As the world's largest developing country, China should adhere to the basic concept of "people-oriented" development, make full use of multilateral mechanisms, give organized play to the role of enterprises and other innovative entities, and actively promote the construction of a diversified, efficient, fair, and just global AI regulatory cooperation framework by enhancing its international discourse power.
First, build "people-oriented" rules for regulatory cooperation. Adhere to ethics first, taking "people-oriented" development and "AI for good" as the basic criteria guiding the direction of AI development. On this basis, gradually establish and improve the system of AI ethics, laws, regulations, and policies. The focus should be on strengthening risk awareness: establishing effective risk early-warning and response mechanisms, clarifying risk definitions and risk-level classifications, and reasonably allocating the rights and obligations of different participants to ensure that risks beyond human control do not arise.
Second, make use of existing multilateral mechanisms to promote global co-governance and shared benefits. Resolutely resist the hegemonic logic of a few key countries in the AI field, and emphasize collaborative governance and cooperation with all countries. Strengthen research on major common international issues such as AI safety; strive to add relevant topics to the agendas of the G20, APEC, and BRICS meetings and of bilateral international seminars; enhance China's influence in global AI regulation and governance; win more voice for developing countries in technology development, deployment, and application; and work to narrow the digital divide and capability gap.

Third, explore creating, and actively participate in, new multilateral coordination mechanisms. UN Secretary-General António Guterres has publicly stated that he plans to appoint an AI advisory board in September 2023, and supports establishing a global regulator, modeled on the International Atomic Energy Agency, with expertise and regulatory powers. China should seize this opportunity, actively plan its participation programs and candidates, strive for more seats in newly established UN international regulatory bodies, and win the strategic initiative in leading the formulation of global AI regulatory rules.

Fourth, support and encourage enterprises to participate in the formulation of international standards. Among international AI standards, foundational standards articulate a common set of terminology and frameworks applicable to all use cases, helping to advance interoperability across different regulatory regimes. Encourage leading enterprises and relevant institutions to participate more deeply in the development of foundational AI standards by bodies such as ISO/IEC JTC 1/SC 42.
Beyond the China-led data quality process framework, China should actively participate in the work of other working groups; promote the development of inclusive international standards on data quality and governance, trustworthiness, and security; and foster a regulatory strategy characterized by agile governance, allowing countries to improve their own regulatory systems on the basis of shared foundational standards.