Alibaba releases AIGC white paper; experts discuss how to govern AI development and risk

Mondo Technology Updated on 2024-01-31

On December 27, Alibaba Group and the China Electronics Standardization Institute jointly released the white paper "AIGC (Artificial Intelligence Generated Content) Governance and Practice". The head of the Alibaba Science and Technology Ethics Governance Committee said that Alibaba is building a firewall for the development of AI (artificial intelligence) even as it breaks through the ceiling of AI applications, and is working with all sectors of society to use AI to solve more social problems and bring its benefits to more people.

In 2022, Alibaba established the Science and Technology Ethics Governance Committee and released "AI Governance and Sustainable Development Practices", the first white paper of its kind in China. In 2023, AIGC technology made major breakthroughs and became the main track of global AI development.

The latest AIGC white paper introduces recent progress in global AIGC technology and applications, sorts out the doubts and concerns that various sectors have about this new technology, and analyzes the different AIGC governance models adopted by countries around the world.

Xue Hui, a member of Alibaba's Science and Technology Ethics Governance Committee and a researcher in Alibaba's Security Department, told the Beijing News Shell Finance reporter: "What we don't yet know about AI exceeds what we know, and what we can't yet imagine exceeds what we can. AIGC presents unprecedented challenges that require us to respond proactively. The development and governance of AI cannot be accomplished by a single enterprise, university, or institution alone; it must be 'diverse, collaborative, open and co-governed'. Alibaba is building a strong firewall while breaking through the ceiling, and working with all sectors of society to use AI to solve more social problems and bring its benefits to more people."

Pan Enrong, a professor at Zhejiang University, also said that generative AI has had a huge conceptual impact on human economic and social development, and that "it should be channeled rather than blocked". On the one hand, society needs to overcome various fears and conjectures and restrain the impulse to "block"; on the other hand, it needs to keep iterating on ways of "channeling" in practice.

Zhang Mi, a professor at Fudan University, believes that the technology industry should develop AI responsibly, balancing AI development with risk management. Zhang Mi said: "There is an atmosphere of AI competition around the world, and some worry that focusing on safety may cause them to fall behind technologically, so safety gets put on hold in the race to take the lead. All parties should take a long-term view, work together to create orderly competition, keep risks within the upper limit of their protection capabilities, and ensure that AI develops within a safe zone."

Zhang Mi is optimistic about the future security prospects of large AI models, citing the cutting-edge view that, as evaluation and governance technologies mature and governance systems improve, humans can provide a set of security rules and AI can "supervise models with models" according to those rules. In the longer term, it may even be possible for AI models to autonomously align with human values and actively develop for good. "As long as we treat AI responsibly, we will be able to build AI that 'loves humanity'," Zhang Mi said.

Edited by Yue Caizhou.

Proofread by Lucy.
