On the afternoon of January 20, 2024, the 14th seminar in the artificial intelligence legislation series, "Open Source Development and Legal Regulation," held together with an academic exchange meeting of the Institute of Management and Innovation of Tsinghua University, took place at the School of Public Policy and Management of Tsinghua University.
Professor Zhu Xufeng, Dean of the School of Public Policy and Management of Tsinghua University, and Professor Yu An of the same school delivered welcome speeches. The keynote session was chaired by Associate Professor Chen Tianhao of the School of Public Policy and Management of Tsinghua University; keynote speeches were delivered by Associate Researcher Zhou Hui of the Chinese Academy of Social Sciences, Professor Su Yu of the People's Public Security University of China, Associate Professor Zhang Xin of the University of International Business and Economics, and Assistant Professor Zhu Yue of Tongji University. In the guest discussion session, Professor Liang Zheng of the School of Public Policy and Management of Tsinghua University, Professor Li Xueyao of Shanghai Jiao Tong University, Tang Shiliang, Deputy General Manager of Beijing Huayu Yuandian Information Service, and Liu Chu, Senior Manager of the Strategy Department of CICC, spoke. In the subsequent roundtable discussion, He Bo, Deputy Director of the Internet Law Research Center of the China Academy of Information and Communications Technology; Shi Chongde, Chief Artificial Intelligence Expert of Beijing Huayu Yuandian Information Service; Hu Naying, Senior Business Director of the Artificial Intelligence Research Center of the China Academy of Information and Communications Technology; Fang Liang, Senior Research Manager of Anyuan AI; Wang Xiang, Member of ISO TC 154; Wang Jun, Deputy Director of the Compliance News Department of the 21st Century Business Herald; Zhu Lingfeng, Executive Director of Data Compliance of Meizu Group; and Wang Juan, a lawyer at Zhongce Law Firm, spoke in turn. Finally, Zhou Hui and Chen Tianhao made concluding remarks.
Zhu Xufeng and Yu An delivered welcome speeches.
First, Chen Tianhao briefly introduced the origin and focus of the conference: the discussion of the Model Law on Artificial Intelligence (hereinafter, the Model Law) and the open source development and legal regulation of artificial intelligence. In his speech, Zhu Xufeng said that artificial intelligence has become a focus of global attention and an important means for China to respond effectively to geopolitical "decoupling and chain-breaking" on the international stage. At the same time, open source is an important vehicle for the development of artificial intelligence and the digital economy. He expressed the hope that the meeting would help form a consensus to advance the construction of open source and the legal regulation of artificial intelligence in China. Yu An said that the development of artificial intelligence is not only a technical issue but also a governance issue, and the core of the governance problem is value conflict. The clash of values over the use of digital technologies will be a complex and long-term issue. China's artificial intelligence development model is highly consistent with its national development model, so that model will continue to play a major role in the development of artificial intelligence. However, at this stage the overall layout of artificial intelligence in China does not fully match the overall operation of the economy and society, so it is urgent to carry out digital process reengineering of economic and social operation. In addition, as the foundation of artificial intelligence, the quality of data supply deserves close attention.
Keynote speeches
Su Yu made a keynote speech.
Su Yu delivered a keynote speech on "Value Balance and Realization Mechanisms of Artificial Intelligence Legislation." He first proposed that artificial intelligence legislation needs to strike a balance between four pairs of goals: development and security, freedom and fairness, planning and market, and the individual and society. The balance between development and security is, statically, a balance between safety redundancy and regulatory burden: regulatory measures usually strive to reserve sufficient safety redundancy, but there is a trade-off between the two, and it is not linear, because the benefits of safety redundancy tend to diminish at the margin while the corresponding regulatory burden may grow nonlinearly. Dynamically, it is a balance between resilience and stable expectations: mechanisms that change too suddenly or too frequently undermine stable expectations. The balance between freedom and fairness is, at the individual level, a balance between freedom and fair competition; at the commercial level, it is a balance between free competition and the protection of small and medium-sized enterprises, for which the regulatory burden of AI and the cost of computing power are quite heavy. The balance between planning and the market concerns how to balance market competition with participation in the construction of computing power and data infrastructure, and how to grasp the quasi-public-goods attributes of data and computing power and correct signal distortions. The balance between the individual and society involves, on the one hand, the impact of artificial intelligence on employment, which requires balancing labor rights against social productivity and is essentially a question of how the benefits of AI applications are distributed. On the other hand, in the era of artificial intelligence the individual becomes a point in a high-dimensional vector space that algorithms cluster according to individual characteristics for further computation, and individual autonomy may gradually be lost, so a balance must be struck between the independence of personality and changing social relations. Taking the balance between development and security as an example, Su Yu argued that the overall balance of the legal regulation of artificial intelligence should be achieved through a series of subdivided balance points, including classification-based management, legal relations, regulatory authority, and regulatory requirements.
Finally, Su Yu put forward six suggestions. First, the objects of AI legislation should not be limited to generative AI or large models. Second, the balance between development and security should be regulated as precisely as possible through agile governance, avoiding one-size-fits-all safety redundancy. Third, the free development of AI should be supported while maintaining the bottom line of systemic risk. Fourth, the market-failure portions of the public goods and quasi-public goods in computing power, data, and algorithms should be accurately identified, and a comprehensive supply mechanism established. Fifth, new types of algorithm-related rights and interests, broad benefit-sharing mechanisms, and AI-empowerment mechanisms should gradually be established to protect individual choice. Sixth, a variety of precisely adjustable "regulatory balancers" should be reserved, and algorithm governance tools should be expanded into an algorithm governance system, so that the institutionalized balance can be adjusted as precisely as possible at the balance points built into those tools. Correspondingly, the systems for individual empowerment, supply promotion, and governance tools of artificial intelligence still require further study.
Zhang Xin made a keynote speech.
Zhang Xin delivered a keynote speech on "Patterns, Characteristics, and Trends in Global Artificial Intelligence Governance." She first introduced the formation and development of the global AI governance landscape. Since 2020, AI risk incidents have been rising year by year, and correspondingly all countries attach great importance to and actively participate in shaping a new global AI order. Specifically, the participants in the existing AI governance matrix differ in influence: European and American countries, having started early, have seized the initiative in global AI governance and have to some extent pursued a "small yard, high fence" strategy. In the existing international governance pattern, China's hard power is at the stage of running alongside and even leading, but its soft power still needs to improve; the limited number and influence of Chinese technology companies and non-profit organizations constrain China's participation in global AI governance. At present, the rule of law is becoming increasingly prominent in global AI governance, and a broad international consensus has formed on using the rule of law to provide an "institutional fence" for AI governance. However, the existing international governance mechanisms for artificial intelligence remain fragmented, great-power competition is a prominent orientation, and there is a lack of effective coordination on issues such as data security, privacy protection, and algorithm abuse. Finally, an intelligence divide and a generational gap in AI governance persist between developing countries and advanced economies.
Zhu Yue made a keynote speech.
Zhu Yue gave a keynote speech on "Taking Open Source Exemptions Seriously." He first argued that the Model Law has made a forward-looking legislative exploration of open source exemptions. He then reviewed the legislative course and progress of the EU's open source exemptions, which are mainly found in two instruments: the Artificial Intelligence Act and the Product Liability Directive. During the legislative process of the Artificial Intelligence Act, France, Portugal, Estonia, and other countries proposed open source exemptions early on, out of both interest and principle, arguing that open source authors have already disclosed information and achieved self-regulation when uploading models, and that innovation and scientific freedom should be encouraged through such exemptions. In the middle and later stages, several parties in the European Parliament expressed support for the open source exemption in order to promote the development of small and medium-sized enterprises. Notably, the Artificial Intelligence Act makes clear that although the exemption applies to components provided under a free and open-source license, it is lost if commercial operations are carried out, such as exchange for monetary consideration, bundled services, or the processing of personal information for purposes other than security and compatibility. It can thus be seen that the EU has granted fairly complete exemptions to open source in these two forthcoming laws. Zhu Yue then noted that open source currently plays an important role in the main components of the AI industry value chain, such as frameworks, data, models, benchmarks, hardware, and ecosystems; in reality, open source is not a minority practice or an exception but the default state. Finally, Zhu Yue reiterated that AI legislation should take open source exemptions and their accompanying conditions seriously, clarify the boundaries of such exemptions, and strike a balance between security and development.
Guest discussion
Liang Zheng and Li Xueyao spoke.
In the guest discussion session, Liang Zheng argued that AI legislation first needs to judge the degree of development of AI technology, whether external intervention through legislation is needed, and whether potential risks can be resolved through technological development itself. Second, the focus of AI legislation should be determined: legislation should consider what changes new technologies bring to the existing systems of values and rights, and clarify the boundaries of rights and the allocation of responsibilities, rather than over-specifying technical details. Finally, Liang Zheng argued that AI legislation needs to avoid over-regulation and fully consider technical feasibility and implementation costs. On the open source exemption, he said that open source has become an ecological model of artificial intelligence, and that its significance lies not only in promoting innovation: open source can also expose and resolve problems and defects through competition, thereby promoting secure development.
Li Xueyao argued that, in drafting provisions, the proportion of ethics-related articles in China's future artificial intelligence law or related legal system should be increased; he recommended either a dedicated chapter on artificial intelligence ethics or a separate Artificial Intelligence Ethics Law enacted at the level of the National People's Congress, rather than leaving the matter at the level of administrative rules or other normative documents. Within the framework of science and technology ethics, he preliminarily distinguished AI ethics from biomedical ethics: first, AI ethics is embeddable; second, AI ethics is more scenario-specific; third, AI is more susceptible to the deep involvement of capital and has a stronger, deeper impact on other industries; fourth, AI has a greater, more revolutionary impact on human relationships and social structures. Li Xueyao also noted that within the AI governance framework, science and technology ethics review is not only a security measure but also serves as a guide to conduct and a basis for exemption from liability for AI developers and service providers. On this basis, he argued that the Model Law's provisions on the AI Ethics Expert Committee are still relatively simple, and that its regulatory body, personnel composition, functions, and the conditions for initiating review should be further refined.
Tang Shiliang and Liu Chu spoke.
Tang Shiliang first reviewed his company's development history and shared cases of cooperation with the Beijing News, the Shanghai Court, and Microsoft China. On open source governance, he emphasized three points: first and foremost, security is critical for open source and should be the top priority; second, open source also needs to fully protect developers so that they can participate in open source projects safely and stably; finally, open source should guard against illegal use or abuse by companies or individuals.
Liu Chu reviewed the historical evolution of global AI development stages and governance rules, emphasizing that the emergence of generative AI in recent years has heightened concern over relevant laws and governance. He argued that, by first principles, current AI technology has not yet reached the furthest foreseeable stage, namely intelligence comparable to humans at comparable power consumption, and called for allowing sufficient time for the formulation of AI regulations. On open source, Liu Chu emphasized that it deserves special consideration: open source has become the default state of technical solutions in the industry, so regulation should take into account the rights and claims that developers have already waived and balance them against the obligations the rules would impose. Finally, he put forward three suggestions for the governance of open source AI: consider reducing developers' burden of providing security proofs by improving developer transparency; consider the possible governance responsibilities of the open source software committee and the ** committee; and emphasize the need to consider relevant laws in combination with open source licenses in order to better govern open source AI.
Roundtable discussion
He Bo and Shi Chongde spoke.
In the roundtable discussion, He Bo first analyzed whether new rules need to be formulated for AI governance, arguing that when technological development is not controllable, formulating rules clarifies the bottom lines and red lines of development and can help new technologies avoid the "Collingridge dilemma." Second, on AI legislation, he argued that a complete AI legal system should be established, spanning laws, administrative regulations, and departmental rules and covering personal information protection, data security, network security, intellectual property, and other aspects. Third, on data processing, he discussed the personal information protection obligations and responsibilities of large models and analyzed whether a large model should be regarded as a data controller during the research and development stage. Finally, at the level of international rules, He Bo noted that China, as a developing country, needs to balance its position in international governance, taking full account of both its interests as an AI leader and those of other developing members. He said that there are many issues worth discussing in AI law, policy, and governance, and expressed the hope that the academic community would offer more guidance and explore them together.
Shi Chongde first expressed agreement with the recent draft and legislative trends, emphasizing the importance of humanity and transparency in the field of artificial intelligence. Second, he emphasized the need to use domestic data for large model training to avoid the potential influence of Western values on ways of thinking. He then discussed the impact of AI algorithms on individual thinking and called on the legal and academic communities to pay more attention to the diversity of content generated by algorithmic models. Finally, Shi Chongde discussed compliance control in detail and suggested that supervision mechanisms be reasonably scaled to enterprise size, so as to reduce the security-investment burden on small and medium-sized enterprises.
Hu Naying and Fang Liang spoke.
Hu Naying shared her observations in four respects. First, she introduced the open source work of her artificial intelligence research center, emphasizing the role of open source in stimulating technological innovation, improving the efficiency of the division of labor in the industrial system, promoting economic development, and advancing sustainable development. Second, she reviewed the evolution of the global technology open source ecosystem, emphasizing the open source characteristics of different fields, especially the importance of the open source ecosystem in operating systems, cloud computing, big data, and artificial intelligence. Third, she noted the security risks associated with open source, including security vulnerabilities, open source license issues, and AI governance issues: while she endorsed the positive role of open source in technological innovation and economic development, she also warned of its risks, especially the controllability risk brought about by dependence on open source components. Finally, she emphasized the role of open source at several levels of AI risk governance, analyzed where open source sits within that governance, and suggested comprehensive risk management covering the risks of the technical system itself as well as the impact of technology applications on individuals, organizations, the nation and society, and the human ecology.
Fang Liang offered a series of insights into the open source governance of frontier AI. He first pointed out that "open source versus closed source" is a false dichotomy: there is a spectrum of model release options from fully closed to fully open, for example managed access, API access, API fine-tuning, releasing weights, and releasing weights plus ** data with restrictions. The actual pros and cons of open sourcing depend on exactly which model components are released, and to whom, when, and how. Second, he discussed in detail how, in the face of the potential extreme risks of frontier AI, alternatives to open sourcing should be explored that achieve the same benefits with less risk. He focused on possible alternatives that facilitate external security evaluation of models, such as staged releases, establishing red-team testing networks, and granting trusted third parties access to specific models, and suggested optimizing research APIs to facilitate security research. Finally, Fang Liang emphasized that closed-source frontier AI also faces security risks: given that current protection measures are insufficient against adversarial attacks, closed-source models are also prone to abuse and need strong external supervision and evaluation.
Wang Xiang and Wang Jun spoke.
Drawing on the United Nations Transparency Protocol (UNTP) currently under development, Wang Xiang took international cooperation as his starting point and introduced the latest progress in cross-border collaborative governance of artificial intelligence technology. Facing increasingly detailed public-management requirements on sustainable development issues such as environment, climate, food, and health, he argued that limited administrative resources should be combined with open source artificial intelligence to respond coherently to the potential risks arising from the cross-border flow of goods, commodities, capital, people, and data. Taking into account the differences in data governance approaches among major economies and the different conditions for interconnecting national segments of the Internet, standards such as the United Nations ** Data Source Catalog can be referenced to achieve interoperability of AI training corpora. In addition, based on China's advantages in industrial data in manufacturing and commercial data in circulation, he offered suggestions for promoting the development of the open source artificial intelligence industry: establishing a transparency-based data management framework, studying new business formats such as "data processing," and designing digital passports for the cross-border flow of data and of digital products and services. Finally, through comparison, he argued that the UNTP, as an implementation-oriented governance toolbox, could align well with the draft Model Law once AI ethics is incorporated.
Wang Jun pointed out that close attention must be paid to the technological development of open source, and that the design of legal obligations and responsibilities should match the open source ecosystem and its technical practices. The contest between open source and closed source continues to run through the development of artificial intelligence: previously, the objection to closed source was that application scenarios were not rich enough and the cost-benefit ratio was insufficient, while the objection to open source was that computing power was insufficient. It is worth watching how the open versus closed source dynamic will shape the business ecology of artificial intelligence, and whether it will replay the old scripts of Apple and Android or produce a new narrative.
Zhu Lingfeng and Wang Juan spoke.
Zhu Lingfeng first emphasized the importance of open source in the field of artificial intelligence, using real-world examples to show that development engineers often rely on the open source community to solve problems. Second, she mentioned the Open Source Software Association's discussion and definition of open source AI, and also drew attention to the concept of trustworthy AI, emphasizing the urgency of security and community governance. Finally, on the legal regulation of open source AI, she called for caution and proposed tiered governance: except in special circumstances, the role of open source AI in industry development and the free market should be respected and protected, while obligations such as transparency should be imposed on foundation models and others with major downstream impact, balancing innovation and security.
Wang Juan reflected on open source artificial intelligence from the perspective of legal implementation. Drawing on law-firm experience, she pointed out that the newly promulgated data protection ** faces obstacles in practice, especially in interpretation and enforcement. She proposed that the collective wisdom of experts from both the technical and legal fields be brought into the legislative phase to ensure the intelligibility of the law, and suggested clearly separating legal standards from technical standards so that legal personnel and business technicians can better understand and enforce the rules. She also discussed the positive value open source may bring, the challenges it poses for market competition, and suggestions for optimizing governance design, emphasizing the need for a more scientific and refined approach to the risks of open source AI.
Zhou Hui and Chen Tianhao made concluding remarks.
Zhou Hui summarized the core issues discussed at the conference: the definition and practical complexity of open source, how open source affects market competition, the potential risks of open source, and governance schemes for open source. First, he raised questions about the definition of open source, pointing out that in practice there are differences in the degree of openness, the degree of information marketization, and user screening. Second, he examined the problems open source may cause in market competition, taking Google's Android ecosystem as an example and asking whether open source platforms may lead to monopoly. Third, on risk, he emphasized the need to discuss whether open source amplifies risk and what types of risk it creates. Finally, Zhou Hui put forward suggestions for optimizing governance, including clarifying open source liability provisions and establishing a compliance governance system that meets national standards. These questions and reflections provide useful guidance for future open source research and governance schemes.
Finally, Chen Tianhao summarized the meeting. He first thanked Zhou Hui for his excellent summary. Second, he pointed out that the technology community has a unique significance in the contest between the digital Leviathan and the political Leviathan; he sees the open source community as an opening onto a future utopia and emphasized the need to support the development of the technical community through legislation. Finally, Chen Tianhao again thanked all the participating scholars for their attention to the conference and said he would continue to promote the building of this academic community.
About the Legal Research Center, School of Public Policy and Management, Tsinghua University
The report of the 20th National Congress of the Communist Party of China pointed out that "comprehensive rule of law is a profound revolution in national governance" and that "the construction of the rule of law is a key task and main project of comprehensive rule of law." The *Legal Research Center is an interdisciplinary research platform established by the School of Public Policy and Management of Tsinghua University. Centered on how to advance the construction of the rule of law, it carries out research in the fields of digital governance, governance of emerging industries, judicial policy, and public-private cooperation, so as to further promote the construction of the rule of law in the new era, adapt to cutting-edge scientific and technological innovation, and support the country's high-quality development.
Contributed by丨**Legal Research Center
Editor丨Wang Ruiqi
Review丨Zhu Xufeng