Research Report on Ethical Governance of Artificial Intelligence (2023)
Report Producer: China Academy of Information and Communications Technology
Report total: 30 pages.
Featured Report: The School of Artificial Intelligence
(1) The concept and characteristics of AI ethics
AI ethics comprises the values and behavioral norms that must be observed in scientific and technological activities such as AI research, design, development, service, and use. AI ethics concerns both the "truth" and the "goodness" of technology, and opens a broader space for discussing the development of AI. It covers two aspects: value goals and behavioral requirements. In terms of value goals, AI ethics requires that every stage of AI activity aim to improve human well-being, respect the right to life, uphold fairness and justice, and respect privacy. In terms of behavioral requirements, it requires that AI technology be safe, controllable, transparent, and explainable; that human responsibility be strengthened at every stage of AI R&D and application; and that multi-party participation and cooperation be advocated and encouraged.
AI ethics exhibits three characteristics: it is philosophical, technical, and global.

First, AI ethics expands the boundaries of human moral and philosophical reflection. It encompasses ethical thinking about the relationship between humans and machines, and extends human inquiry into questions of goodness, rationality, and emotion. The discussion of AI ethics includes not only ontological questions concerning AI subjecthood, personality, and emotion, but also whether applications of AI meet the requirements of social morality. It embodies contemporary value ideals for social life and extends the ethical norms of human interaction to reflection on the interaction between humans and technology.

Second, AI ethics is closely tied to the development and application of AI technology. From Asimov's "Three Laws of Robotics", proposed during the first wave of artificial intelligence in the 1940s, to the 2004 symposium on robot ethics during the third wave, which formally put forward "roboethics", AI ethics has become a common concern of governments, industry, and academia worldwide. With the evolution of deep-learning algorithms and the expansion of AI applications, AI ethics now concerns itself with preventing algorithmic risks and crises in technology application. Before the advent of the era of "strong artificial intelligence", AI ethics focused mainly on discriminatory bias, algorithmic "black boxes", technology abuse, and improper data collection; with the development of large-model technology, the topics of AI ethics have deepened accordingly.

Third, improving human well-being is the global consensus on AI ethics.
Unlike traditional ethical concepts, which differ across regions and are shaped by local history and culture, the development and application of AI technology bring ethical challenges that are global. At present, social prejudice, the technological divide, and crises of diversity have become common challenges facing the international community. A people-centered approach, intelligence for good, and the promotion of sustainable development have become the global ethical consensus on artificial intelligence.
(2) The necessity of ethical governance of artificial intelligence
In the face of the risks posed by artificial intelligence, promoting its healthy development through multiple governance mechanisms has become a general consensus. Ethical governance of AI is an important part of AI governance; its core principles include a people-centered approach, fairness and non-discrimination, transparency and explainability, human control, traceable responsibility, and sustainable development. Grounded in the development and application of AI technology, ethical governance can propose timely ways to adjust the relationship between humans and AI and to address AI risks. Its focus is not on the minimum obligations of innovators, but on advancing the value goal of "intelligence for good".
(1) Ethical challenges to artificial intelligence
In the technology research and development stage, shortcomings in technical capabilities and management practices around data acquisition and use, algorithm design, and model tuning can give rise to ethical risks such as bias and discrimination, privacy leakage, misinformation, and unexplainability. The risk of discriminatory bias arises from data-set quality problems, such as biased content and a lack of diversity, and from algorithm designs that treat different groups unfairly, producing discriminatory decisions or outputs. Privacy risk refers to the use of personal data for model training without consent, which in turn creates the risk that model outputs may violate privacy. The risk of misinformation arises mainly in AI foundation models: because a large model generates each next word autoregressively from the preceding text, its output is heavily conditioned on that context, and the model may "hallucinate", producing incorrect and unreliable content. The risk of unexplainability refers to the inability to explain the reasons and processes behind AI decisions, owing to the complexity and "black box" nature of AI algorithms.

In the product development and application stage, the specific fields in which AI products are used and the scope of AI system deployment affect the degree of ethical risk, and may lead to risks such as misuse, abuse, over-reliance, and impacts on education and employment. The risk of misuse and abuse refers to AI being readily turned to inappropriate tasks as the technology becomes more convenient and more capable, including the rapid generation of large volumes of false content, the generation of malicious code, and the inducement of models to output harmful information.
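The autoregressive mechanism behind the misinformation risk described above can be sketched with a toy next-token sampler. The vocabulary, bigram table, and greedy decoding rule below are illustrative assumptions for exposition only, not a real large-model implementation:

```python
# Toy illustration of autoregressive generation: each next token is chosen
# conditioned only on the preceding text, so early context strongly shapes
# (and can mislead) everything generated afterwards.
# The bigram table is a made-up example, not real model weights.
BIGRAM = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.9, "ran": 0.1},
    "sat": {"down": 1.0},
    "dog": {"barked": 1.0},
}

def generate(prompt: str, max_new_tokens: int = 3) -> str:
    tokens = prompt.split()
    for _ in range(max_new_tokens):
        last = tokens[-1]
        dist = BIGRAM.get(last)
        if not dist:  # no known continuation: stop generating
            break
        # Greedy decoding: pick the highest-probability next token.
        tokens.append(max(dist, key=dist.get))
    return " ".join(tokens)

print(generate("the"))  # -> "the cat sat down"
```

Because each step depends only on the tokens already emitted, a model of this kind has no built-in notion of factual truth; it continues whatever the preceding text makes statistically likely, which is the mechanism underlying "hallucination".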
The risk of over-reliance refers to users' excessive dependence on and trust in AI as its technical capabilities improve, including accepting content generated by large models without fact-checking and even developing emotional dependence through long-term interaction. The risk of impacts on education and employment refers to the growing convenience of AI allowing students to complete homework with machine assistance, changing the basic methods of education and learning; widespread direct interaction between adolescents and AI may also pose mental-health risks. At the same time, the employment impact of AI is not limited to the substitution of digital labor: it may also affect professionals in the arts, consulting, education, and other fields, accelerating the depreciation of investment in education and triggering further waves of job substitution.
(2) Ethical risks of AI in typical application scenarios
The principal ethical risks of artificial intelligence, the objects and scope of their impact, and the targets of ethical governance differ greatly across specific application scenarios. At present, application fields such as content generation, autonomous driving, and smart healthcare face distinct typical ethical risks, which need to be analyzed and discussed scenario by scenario.