The ethical challenges of AI are reflected not only in the "technology divide" but also in broader fields. These challenges are concentrated in four dimensions.
Countries leading in AI technology and rules are in a cycle of rapid accumulation of technological advantages, which they are likely to turn into "chokepoint" tools, as in the semiconductor field, hindering the AI progress of late-developing countries.
2023 was the first year of global AI ethics governance. Governments and international organizations began intensively discussing the ethics of AI, issuing a series of statements, visions, and policies in an attempt to standardize the development path of the technology.
Rather than prioritizing risk management, the United States is unwilling to strictly limit the technology's development before it has secured an absolute lead. As a result, U.S. AI governance often lags behind its technology development.
Text | Li Zheng, Zhang Lanshu
Robot exhibits at the Shanghai Science and Technology Innovation Achievement Exhibition (photo taken on November 29, 2023). Photo by Fang Zhe / This magazine
In 2023, with the emergence of ChatGPT, a new generation of generative AI applications, the international community increasingly discussed the ethical challenges of AI. A growing number of observers found that the rapid development of AI may be outpacing the preparedness of human society, and argued that the risks it poses cannot be ignored. Ethical challenges have become the most prominent topic in the widespread controversy surrounding artificial intelligence, and they will profoundly shape the future interaction between AI and human society.
Four Dimensions of AI's Ethical Challenges
Like the birth of the Internet, AI will bring significant changes to the world. This impact is a double-edged sword: new technologies both transform and disrupt the world, and not everyone benefits equally. The ethical challenges of AI are reflected not only in the "technology divide" but also in broader fields. These challenges are concentrated in four dimensions.
The first dimension stems from the "autonomy" of AI, which makes the technology more likely than other cutting-edge technologies to escape human control. The relevant ethical challenges center on whether AI will deceive and manipulate human consciousness, and whether it will reduce human development opportunities.
Compared with the Internet and social media, AI can understand individual users' needs more comprehensively and "perceive" them more deeply. This capability, combined with "deepfake" technology, will further intensify "manipulation" of and "deception" against different groups. Through targeted information feeding, artificial intelligence may create a tighter "information cocoon" and deeper "consciousness control." Such risks have already surfaced: in 2023, a UK court sentenced a man whom an AI chatbot had encouraged to attempt to assassinate the Queen.
The continuous iteration of generative artificial intelligence, exemplified by ChatGPT, has also shown the business community ever broader scenarios for "replacing humans." According to McKinsey & Company, by 2030, with advances in technologies such as artificial intelligence, as many as 375 million workers may face the problem of re-employment. The research firm Oxford Economics has reached a similar conclusion: by 2030, some 20 million manufacturing jobs worldwide will disappear, transferred to automated systems, and displaced manufacturing workers may be replaced by machines again even if they move into service jobs. The job types at highest risk of replacement by AI include technical roles such as programmers, software engineers, and data analysts; media roles such as advertising, content creation, technical writing, and journalism; as well as lawyers, market analysts, teachers, financial analysts, financial consultants, traders, graphic designers, accountants, and customer service staff. These jobs generally require high levels of education, so unemployment here means a huge loss of human capital and will exacerbate structural unemployment in some countries.
The second dimension stems from the "non-transparency" of AI, which makes the technology's hidden risks harder to detect, so that problems cannot be disclosed in a timely manner and brought to society's attention.
Artificial intelligence applications depend on computing power and algorithms, but neither of these resources is transparent. Generative AI models invoke hundreds of millions of parameters and vast data for each piece of content they generate, making their decision-making process almost impossible to explain. This opacity of process and content makes AI more prone to hidden dangers. Flaws or aggressive choices in the design of large models can lead to privacy leaks, excessive data collection and abuse, and uncontrollable algorithms. The output of generative AI can be misleading, containing untrue or inaccurate information that distorts human decision-making. Criminals may also mislead AI systems through "data poisoning" and "algorithm poisoning," causing broader systemic failures.
The militarized deployment of AI has been among the most worrying developments in recent years. Countries are accelerating the deployment of artificial intelligence in offensive systems, raising the risk that "intelligent warfare" systems make erroneous decisions, "misfire," or even ignite and escalate wars.
The third dimension stems from the "extensibility" of AI, which means the technology can be used by all kinds of people and organizations, including some with ulterior motives.
Artificial intelligence is easy to port, modify, and integrate; its technological breakthroughs spread easily, and the same algorithm may serve completely contradictory purposes. Criminals can circumvent model safety policies to extract "dangerous knowledge" from AI, or turn AI itself into a crime tool. According to Forbes, artificial intelligence has become the most powerful tool in the field of telecom fraud, a worldwide scourge that hardly any country has escaped. Telecom fraud empowered by artificial intelligence has the potential to become the most harmful organized crime in the world.
The fourth dimension stems from the "monopolistic" character of artificial intelligence: the technology depends heavily on capital investment, advanced algorithms have a high threshold for use, and the algorithmic preferences of designers and the composition of training data can easily widen class divides.
First, AI could exacerbate monopolistic behavior. Artificial intelligence has become a "weapon of mass destruction" in the field of marketing, changing corporate marketing strategy in every respect. But this more precise marketing may also encourage practices such as "a thousand prices for a thousand people" — personalized price discrimination.
Second, AI may exacerbate real-world discrimination. The algorithms of AI applications are driven by data, and these data carry specific labels such as race, gender, creed, disability, and infectious disease, reflecting the complex values and ideologies of human society. Once bias enters the training of a model, the algorithm's output may be prejudiced for or against particular individuals, groups, or countries, raising fairness concerns.
Finally, AI can lead to unequal development. The key expertise and cutting-edge technologies of artificial intelligence are concentrated in a small number of enterprises and countries with first-mover advantages, which will inevitably produce uneven development of the global AI industry and further deepen the global "digital divide." At the same time, countries leading in AI technology and rules are in a cycle of rapid accumulation of technological advantages, which they are likely to turn into "chokepoint" tools, as in the semiconductor field, hindering the AI progress of late-developing countries.
A staff member places a discus on the robot dog used to carry discuses before the women's discus throw final at the Hangzhou Asian Games (photo taken on October 1, 2023). Photo by Jiang Han / This magazine
The First Year of Global AI Ethics Governance
These ethical challenges are attracting widespread attention from the international community. In 2023, governments and international organizations began intensively discussing the ethics of AI, issuing a series of statements, visions, and policies in an attempt to standardize the development path of the technology.
The United Nations has played an increasingly important role in the ethical governance of AI. In 2023, it made progress in promoting consensus-building among countries, addressing security risks, and fostering governance cooperation. In March, UNESCO called on countries to immediately implement its Recommendation on the Ethics of Artificial Intelligence, published in November 2021. In July, the United Nations held the first press conference attended jointly by humanoid robots and humans, at which nine humanoid robots took questions from experts and leaders from all walks of life; hosted the "AI for Good" Global Summit to discuss the future development and governance framework of AI; and the Security Council held its first open debate on the potential threat of artificial intelligence to international peace and security. In October, UN Secretary-General António Guterres announced the establishment of a high-level advisory body on AI, with 39 experts from around the world joining to discuss the risks and opportunities of the technology and support stronger governance by the international community. The United Nations has thus placed AI ethics on the global governance agenda and will promote the formation of more formal and binding organizational and governance norms in the future.
The European Union has pursued dedicated legislation to comprehensively regulate artificial intelligence. The Artificial Intelligence Act proposed by the European Commission in 2021 strictly prohibits "AI systems that pose an unacceptable risk to human safety" and requires AI companies to maintain control over their algorithms, provide technical documentation, and establish risk management systems. After marathon negotiations, the European Parliament, EU member states, and the European Commission reached agreement on the Artificial Intelligence Act on December 8, 2023, making it the world's first comprehensive regulation in the field of artificial intelligence.
The U.S. has introduced regulatory policies, but its legislative process has been slow. Compared with the European Union, the United States imposes fewer regulatory requirements, mainly emphasizing safety principles and encouraging corporate self-regulation. In January 2023, the U.S. National Institute of Standards and Technology (NIST) released its AI Risk Management Framework, which aims to guide organizations in mitigating security risks when developing and deploying AI systems, but the document is non-mandatory guidance. In October, Biden signed the most comprehensive U.S. AI regulatory measure to date, the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which goes beyond the voluntary commitments made earlier in the year by companies such as OpenAI, Google, and Meta, but still lacks enforcement power. After issuing the executive order, Biden urged Congress to pass relevant legislation as soon as possible. Senate Majority Leader Chuck Schumer hosted AI Insight Forums in September and October to collect industry advice and said AI legislation could be ready within months, but whether such bills will pass smoothly remains to be seen.
The UK is investing more resources in AI governance diplomacy. In November 2023, the first global AI Safety Summit was held at Bletchley Park in the UK, attended by representatives of the United States, the United Kingdom, the European Union, China, India, and other parties. The conference culminated in the adoption of the Bletchley Declaration, which emphasizes that many of the risks of AI are international in nature and therefore "best addressed through international cooperation." Participants agreed to build an "internationally inclusive" network of cutting-edge AI safety research to deepen understanding of AI risks and capabilities that are not yet fully understood. The UK made extensive preparations and diplomatic outreach to host the summit, aiming to establish itself as the "convener" of global AI governance. In the future, more countries will invest resources in AI governance diplomacy to compete for a voice in this emerging field.
China attaches great importance to AI governance, with a governance philosophy focused on balancing development and security. In April 2023, the Cyberspace Administration of China (CAC) drafted the Measures for the Administration of Generative AI Services (Draft for Comment). The Interim Measures for the Administration of Generative AI Services were officially released in July, setting out specific provisions on generative AI covering technology development and governance, service specifications, supervision and inspection, and legal liability; they came into force on August 15, making them the world's first dedicated legislation for generative AI. China also introduced a series of regulations on deep synthesis and algorithms during the year, such as the Provisions on the Administration of Deep Synthesis of Internet Information Services, which came into effect in January, and the Provisions on the Administration of Algorithmic Recommendations for Internet Information Services. In October, China put forward the Global AI Governance Initiative, which proposes specific principles, guidelines, and recommendations on personal privacy and data protection, data acquisition, algorithm design, technology development, risk-level testing and assessment, and ethical guidelines.
Why the United States Is Slow to Act
Compared with the pace of AI technology application in the United States, the country has been slow to act on AI regulatory policy and legislation — the result of a combination of factors.
First, the United States is reluctant to give up its AI advantage.
The U.S. government and strategic circles generally believe that artificial intelligence is one of the strategic technologies that will determine whether the United States can win the next round of global technology competition. Since the Obama era, the U.S. government has put forward a number of relevant national plans and visions. Both Trump's and Biden's AI executive orders emphasize preserving "U.S. AI leadership" as a foundational goal of U.S. AI governance. Rather than prioritizing risk management, the United States is unwilling to strictly limit the technology's development before it has secured an absolute lead. After ChatGPT's rise, the regulatory policies the United States introduced aimed not only at managing governance risks but also at preventing the technology's rapid diffusion from eroding the U.S. lead.
Second, the issue of AI ethics has been politicized in the United States, and the two parties find it difficult to reconcile their differences and reach a governance consensus.
In recent years, political polarization has intensified in the United States, with the two parties at odds on almost all social issues, especially AI ethics as it relates to people's ways of life. On the issues, the Democratic Party pays more attention to diversity-related concerns such as personal privacy, algorithmic discrimination, and algorithmic justice; the Republican Party is more concerned about security issues such as AI-enabled crime. On risk prevention, the Democratic Party sees AI-enabled fraud and rumor-mongering as the most prominent risk and wants to strengthen the responsibility of intermediary channels such as social media; Republicans argue that such governance measures are politically motivated and detrimental to Republican candidates. With the 2024 election approaching, the contradictions and disputes between the two parties have sharpened, so that legislative progress has clearly lagged behind events. The Biden administration's rollout of a series of AI governance policy documents at the end of 2023 suggests that the Democratic Party intends to take the lead in breaking the deadlock, making AI governance a potential campaign issue and accelerating legislation on it.
Finally, the U.S. also faces some institutional barriers to regulating AI.
The "freedom first" and "individual first" strands of the American political tradition are ill-suited to controlling technologies and applications that are decentralized, carry diversified risks, and proliferate rapidly. This tradition tends to create regulatory gaps between states and makes it difficult to marshal administrative resources to eradicate chains of illegal interests. Problems such as gun proliferation and drug crime are related to these institutional features, and dangerous artificial intelligence applications may become the next such social risk in the United States.
This hesitation could increase the risk of a global AI "arms race." As the world's leading country in AI research and development, the United States has an obligation to be among the earliest promoters of AI regulatory measures worldwide. But it has yet to produce regulatory legislation, and its agenda for promoting AI governance at the global level has slowed. This will lead more countries to ignore controls and blindly pursue technological leadership, joining an algorithmic "arms race." Such competition may steer artificial intelligence away from healthy, orderly development, creating more obstacles for subsequent global legislation and governance and raising the risk of vicious competition among countries over AI.
Li Zheng is Assistant Director of the Institute of American Studies, China Institutes of Contemporary International Relations; Zhang Lanshu is an assistant researcher at the same institute.
(Outlook, No. 52, 2023)