Sha Leimei s new trend in AI governance has given us a wake up call

Mondo Finance Updated on 2024-03-08

Recently, the United States established the Alliance of Artificial Intelligence Security Research Institutes (AISIC). It is part of the Institute for Artificial Intelligence Security at the Institute of Technology and Technology in the United States, and will work to develop collaborative research and safety guidelines for advanced AI models. This new trend in AI governance in the United States, which directly links artificial intelligence (AI) with security, deserves our close attention.

People pose with the 2024 Consumer Electronics Show (CES) logo. (*From China News Service).

This first marks that the United States has elevated the issue of artificial intelligence security to a national strategic level. In the past few years, the United States has continued to increase investment in artificial intelligence research and development, and has made great progress in algorithms, chips, and application scenarios. At the same time, Washington is becoming more aware of the security risks that AI may pose, such as misleading and fraudulent information generated by speech synthesis and image generation technologies using deep learning. discrimination and harm caused by algorithmic bias in areas such as autonomous driving and medical-assisted decision-making; security incidents caused by privacy and permission control vulnerabilities in data and models; and the ripple effects of information exchange and decision-making errors between different automated systems. In particular, the recent large language model technology has raised the effectiveness of artificial intelligence to a new level, and at the same time, it has also brought greater security challenges. It is based on the recognition of various potential risks that the United States** has raised the issue of artificial intelligence security to a national strategic level through the establishment of AISIC, so as to promote cross-departmental and cross-domain system governance.

One of the main tasks of the Consortium of Artificial Intelligence Security Institutes is to develop guidelines for red team attacks, security detection assessments, and more. The so-called "red team attack" refers to a set of test methods that simulate hackers infiltrating the system and discovering system vulnerabilities. Once such guidelines and standards are established, they will bring standardized and institutionalized guidance to the development of AI security research. For example, benchmarking requirements drive the safety metrics that must be considered and achieved during the design and development of various systems. The risk assessment framework provides a clearer and more systematic approach to security research throughout the AI technology lifecycle. Although some institutions in China (such as Zhongguancun Laboratory) have also carried out certain technical research on the safety of large models and carried out preliminary evaluation system construction, the relevant standardization work is still relatively weak on the whole. This reminds us that we need to pay close attention to and accelerate the deployment of testing and standard research in areas such as data evaluation, model auditing, and algorithm robustness.

The alliance adopts the organizational model of leading the way and involving multiple parties of industry, academia and research. In addition to the relevant departments, the heads of security departments of well-known enterprises, as well as experts and scholars from top universities and scientific research institutions also participated. On the one hand, it highlights the policy orientation of the United States to promote joint research and in-depth collaboration between industry, university and research in major strategic science and technology fields; On the other hand, it also reflects the high complexity of AI security issues, which require cross-border and cross-field cooperation and integration and innovation. We should also further strengthen information communication and cooperation between departments and enterprises, universities and research institutes in the formulation of artificial intelligence safety standards, the construction of regulatory systems, and technology research and development, so as to form a synergistic effect of policy guidance, standardized construction and scientific and technological supply.

In the future, it is very likely that the United States will continue to launch more important measures in the field of AI security to further improve the top-level design and governance structure. In view of the fact that the development of artificial intelligence is still on the rise and showing signs of acceleration, various application innovations are emerging one after another, and security risks and risks will gradually be revealed. Therefore, the continued efforts of the United States in terms of security policies, regulations, and technological solutions require our active attention. At the same time, this can also provide important experience for the construction of China's artificial intelligence security governance system.

China can also adopt some effective measures to improve the security of AI systems. For example, by timely grasping the latest situation in the field of artificial intelligence security, possible risk points can be discovered earlier: a dynamic artificial intelligence security incident monitoring system can be built, and through real-time monitoring of the output of the best artificial intelligence system, real-time tracking of data leakage, algorithm discrimination and other security incidents in the field of artificial intelligence at home and abroad. On this basis, the detection tool output of the AI system can be further developed to conduct security risk review of AI-generated content in different modes such as text, audio, and **. In addition, it is necessary to build a rapid AI system security risk feedback and remediation mechanism. In other words, when inappropriate and harmful content generated by algorithms or models is detected, problems in the system can be quickly located and fed back to relevant companies and research institutions, requiring them to modify the model to mitigate risks. Such a closed-loop system of "monitoring-detection-feedback-correction" can discover potential safety hazards of artificial intelligence in a more timely manner, and promote industry entities to actively fulfill their responsibilities for safe production. In general, we need to have clearer ideas and action guidelines for building a systematic and standardized AI security policy and technical system. (END) (The author is a professor at the School of Artificial Intelligence, Beihang University).

Written by Sha Lei.

*: Global Times.

Related Pages