Dawan News — A reporter learned from the University of Science and Technology of China (USTC) that Professor Yu Nenghai, executive dean of the university's School of Cyberspace Security, was recently invited as a guest on "Innovation in Progress," a live broadcast program on the Science and Education Channel of the radio and television station, where he discussed the application prospects and potential risks of generative AI with the host and the audience.
The episode opened with the impact of generative AI on film and television creation, beginning with "Digital Human Qian Xuesen," jointly produced by the Information Processing Center of USTC, Beijing Lingjing Cyber Technology, Hefei High-dimensional Data Technology, and Beijing Aerospace Vision Technology, with the support of Professor Qian Yonggang, a descendant of Qian Xuesen. The project "resurrected" Qian Xuesen's image with synthetic-reality technology and reproduced his motivational words. It resonated widely and sparked heated discussion among netizens, reaching an audience of 6 million, who spoke highly of "Digital Human Qian Xuesen."
Notably, the program also touched on the security of generative AI, a research direction that USTC's School of Cyberspace Security emphasizes under the "Demonstration Project for the Construction of First-class Cybersecurity Colleges" of the Cyberspace Administration of China and the Ministry of Education.
As an interdisciplinary expert in artificial intelligence and security, Professor Yu Nenghai, who also serves as vice president of the China Society of Image and Graphics (CSIG) and director of the CSIG Digital Forensics and Security Committee, discussed the potential risks through two typical scenarios of malicious use of generative AI.
The first is the risk of malicious use of AI-for-Science large models: malicious users could exploit large scientific models to synthesize dangerous chemicals or toxic substances.
The second is the potential harm of face deepfakes, which can significantly affect public safety, personal privacy, and property security. To address this, his research team has proposed a variety of deepfake defense countermeasures; the team outperformed NVIDIA, MIT, and other well-known institutions and companies in the world's most influential deepfake detection challenge, finishing second worldwide and first in China, and its work was selected among the eight representative achievements in China's AI security field since 2014.
Dawan News reporter Chen Mu; intern Yang Chuncao.
Edited by Xu Dapeng.