Review:
With the rapid development of artificial intelligence (AI) technology, AI governance on a global scale has attracted increasing attention. In 2023, countries made progress on AI regulation while also facing multiple challenges. This paper reviews the progress of global AI governance in 2023, compares the differences in regulatory policies across economies, and looks ahead to future global AI governance cooperation.
Progress on global AI governance in 2023
Over the past year, important progress has been made in AI governance globally. Governments and companies are beginning to recognize the potential risks of AI technology and are taking steps to strengthen regulation.
In terms of policy formulation, major economies such as the United States, the European Union, and China have all issued important AI governance documents. These documents highlight issues such as the transparency, traceability, and explainability of AI technologies; data protection and privacy security; and the prevention of discrimination and bias.
In addition, some international organizations are actively involved in discussions and cooperation on AI governance. For example, in 2023 the G20 published guiding principles on AI governance, emphasizing the importance of responsible AI development and application.
Regulatory policies vary from economy to economy
Comparing the progress and specific policies of AI regulation in major countries in 2023 reveals clear differences in governance approaches; even between the developed economies of the United States and Europe, there are more differences than similarities.
Common ground
The parties share five main principles of AI regulation: first, emphasizing transparency, traceability, and explainability; second, emphasizing data protection, privacy, and data security; third, enabling AI decisions to be challenged or corrected and identifying and managing the associated risks; fourth, prohibiting bias and discrimination and holding humans accountable; and fifth, prohibiting misuse of the technology and illegal activities while guaranteeing the human right to know.
For example, the US AI Executive Order and the EU draft AI Act are similar in some respects: both advocate risk-based regulatory principles, with a particular focus on high-risk AI systems, and both elaborate corresponding regulatory guidelines. In addition, for emerging generative AI technologies, both introduce disclosure or testing requirements for the underlying models to ensure the transparency and trustworthiness of the technology.
Differences
United States
Comparatively, the U.S. has adopted a decentralized, industry-specific, and non-mandatory approach to regulation. In terms of regulatory structure, the executive order uses the power of existing mechanisms to instruct government departments to assess AI security risks and develop standards in their respective fields, without establishing new regulations or new regulatory bodies. In terms of regulatory approach, the executive order does not stipulate specific implementation provisions; regulation at the federal level is promoted mainly through non-binding administrative directives and voluntary corporate commitments rather than legislation.
European Union
The EU has adopted a comprehensive, horizontal, risk-based regulatory approach. In terms of regulatory framework, the EU AI Act adopts a dual, top-down regulatory structure, with an AI Office at the EU level to oversee the standards and testing of state-of-the-art AI models and market surveillance authorities at the member-state level to enforce the law. In terms of regulatory scope, it does not distinguish between specific industries; instead, it classifies the risks of AI technology in specific application scenarios, takes corresponding measures, and imposes complex and detailed requirements and obligations on providers and users of AI systems. In terms of regulatory severity, the EU is more inclined than the United States to use "stick" regulation, and the act contains a number of punitive provisions.
The situation in other countries
United Kingdom: prefers to use existing institutions to regulate AI technology.
Japan: is drafting non-binding AI guidelines, avoiding overly restrictive approaches in order to promote technological innovation.
China: has adopted a more decentralized, vertical, and iterative regulatory approach, conducting targeted supervision of specific applications or manifestations of AI, with a focus on the use of algorithms and data.
Prospects for future global AI governance cooperation
Develop global AI governance standards and best practices: Countries should work together to develop a set of global AI governance standards and best practices to promote transparency, fairness, and sustainability of AI technologies. These standards should include requirements for data privacy, security, and ethics.
Strengthen the role of international organizations: International organizations should play a more active role in promoting cooperation among countries in AI governance.
Promote multi-stakeholder participation: The future of global AI governance requires the participation of various stakeholders, including enterprises, academia, and social organizations. All parties should work together to develop AI governance policies and standards to ensure their feasibility and effectiveness.
Focus on the needs of developing countries: In global AI governance cooperation, the needs and interests of developing countries should be addressed. Developing countries are lagging behind in the development and application of AI technology, so they need more support and help. The international community should work together to help developing countries strengthen the training, application and promotion of AI technologies.
Strengthen regulatory cooperation and coordination: Governments should strengthen regulatory cooperation and coordination to avoid regulatory fragmentation. Countries should jointly develop regulatory policies and standards and strengthen cross-border regulatory cooperation to ensure the sustainable development and widespread application of AI technologies.