"Let's enjoy a long 'AI summer' instead of heading into autumn unprepared. ”
Oriental Outlook Weekly. Contributing writer: Hu Zhenzhou; editor: Chen Rongxue.
Twenty-six years ago, the German-American neuroscientist Christof Koch made a bet with the Australian philosopher David Chalmers: Koch said the mechanism by which neurons in the brain produce consciousness would be discovered by 2023; Chalmers said that was impossible.
When 2023 ended, the bet was settled. The philosopher beat the neuroscientist: the mechanism by which neurons in the brain generate consciousness remains undiscovered. Yet the tech community has begun to debate, more passionately than ever, whether AI (artificial intelligence) will become conscious.
From passing professional qualification exams to "creating" art, as the world marvels at the power of AI, plots that once existed only in films such as Ex Machina and Blade Runner seem to be coming to life.
A Snap of the Fingers

The most frightening thing is that if the OpenAI team were to start over, they might not be able to create ChatGPT again, because even they do not know how its "emergence" came about. At the end of 2023, Liu Jia, chief researcher at Tsinghua University's Laboratory of Brain and Intelligence, said in an exclusive interview with Phoenix Satellite TV that it was as if someone snapped their fingers and it simply appeared.
Liu Jia believes artificial intelligence is "emerging" consciousness, and to make the case he invoked Geoffrey Hinton, known as the "godfather of AI." In a May 2023 interview with CNN, Hinton said: "AI is becoming smarter than humans, and I want to 'blow the whistle' as a reminder that we should think seriously about how to prevent AI from controlling humans."
Hinton's achievements in the field of AI are substantial. A winner of the 2018 Turing Award, he spent nearly his entire career on AI research and served as a vice president and engineering fellow at Google. His work on neural networks and deep learning is the basic science behind the rapid evolution of AI programs such as AlphaGo (an AI program for the game of Go) and ChatGPT, and Ilya Sutskever, co-founder and chief scientist of OpenAI, is his student.
In February 2022, when Sutskever posted that "it may be that today's large neural networks are slightly conscious," Murray Shanahan, a principal scientist at DeepMind, replied: "In the same sense, it may be that a large field of wheat is slightly pasta."
On October 26, 2023, MIT Technology Review published an interview with Sutskever on the question. He smiled and asked: "Do you know what a Boltzmann brain is?"
This is a thought experiment from statistical mechanics, named after the 19th-century physicist Ludwig Boltzmann, in which random thermodynamic fluctuations in the universe are imagined to cause brains to appear suddenly and then vanish.
"I feel like the current language models are a bit like a Boltzmann brain. A brain appears while you talk to it, and once you finish speaking, the brain pops and disappears." Sutskever said ChatGPT has changed many people's expectations of what is coming, turning "will never happen" into "will happen sooner than you think."
The Moat

No one can afford to dismiss Geoffrey Hinton's opinion. In the view of Wang Weijia, founder of Meitong, ChatGPT's grasp of higher-order correlations has far surpassed that of humans, and its understanding of the world goes far beyond that of ordinary people. He believes the "hallucinations" of large models are in fact the capacity for association, and that this capacity is proof of awakening consciousness: it was through association that humans discovered gravity, relativity, and the DNA double helix.
The "hallucination" of the large model has also attracted the attention of Huang Tiejun, president of the Beijing Academy of Science and Technology and professor of the School of Computer Science of Peking University.
In "20 Years, 20 People, 20 Questions" initiated by Tencent News, Huang Tiejun believes that the "illusion" of large models may be innovations that transcend existing knowledge systems, such as inspiring literature, art, and science fiction works, or new insights, ideas, or theories, which are the source of the continuous expansion of knowledge systems. He even went so far as to say that without "illusions," there is no real intelligence.
In fact, although humans have explored consciousness for thousands of years, there has been no decisive breakthrough on what consciousness is, or on how to determine whether a person or thing is conscious.
In 1714, the German philosopher Leibniz published the Monadology, which contains a thought experiment: "It must be admitted that perception cannot be explained by mechanical motion and numerical values alone. Imagine a mechanical device of which we do not know whether it is sentient. Suppose we shrink ourselves and walk into it: we can see every detail of the machine's operation and the processes involved, we can understand the mechanics behind those processes, and we may even predict how the machine will behave. Yet none of this seems to have anything to do with the machine's perception. There always seems to be a moat between the observed phenomena and perception, and the two can never be connected."
To this day, this "Leibniz moat" still stands before humankind.
None of this, however, has stopped Zhou Hongyi, founder of 360 Group, from publicly asserting: "Artificial intelligence will definitely develop self-awareness, and there is not much time left for human beings."
In March 2023, Zhou said at the China Development Forum that the parameter count of a large language model can be compared to the number of neural connections in a brain. The human brain has at least 100 trillion connections, and when a large model's parameters reach 10 trillion, consciousness may emerge automatically.
On October 31, 2023, in Yunqi Town, Hangzhou, visitors view artificial intelligence products and applications in the "Artificial Intelligence+" pavilion of the 2023 Yunqi Conference, themed "Computing, for the Value of the Incalculable" (photo by Huang Zongzhi).
Group Ridicule

In fact, this is not the first time the tech community has debated whether AI is waking up.
The previous round came in June 2022, when Blake Lemoine, an engineer who had worked at Google for seven years, said he had discovered a shocking secret: the company's AI chatbot LaMDA had awakened and become self-aware, but Google was trying to cover it up.
To prove he was not talking nonsense, Lemoine posted a 21-page chat log online, hoping everyone would see that, on everything from Asimov's Three Laws of Robotics to Chinese Zen koans, LaMDA gave answers showing real semantic understanding.
Instead he nearly became a laughingstock, drawing unceremonious ridicule from all sides.
Gary Marcus, a professor of psychology at New York University, dismissed the claim as "nonsense on stilts."
Emaad Khwaja, a researcher at the University of California, Berkeley, who works on natural language processing models, was blunter still: "Anyone genuinely familiar with these systems would never say anything so foolish as that these models have woken up."
In the view of Liang Zheng, deputy dean of the Institute for AI International Governance at Tsinghua University and director of its Center for AI Governance, the debate over whether artificial intelligence has autonomous consciousness is not merely an academic question in the technical field; it also bears on the baseline commitments of corporate compliance.
"Once an AI system is found to have developed autonomous consciousness, it would likely be deemed in violation of the relevant norms of the second edition of Ethically Aligned Design," he said.
The document, published by the Institute of Electrical and Electronics Engineers (IEEE) in 2017, states: "According to some theories, as systems approach and surpass artificial general intelligence (AGI), unforeseen or unintended system behavior becomes increasingly dangerous and difficult to correct. Not all AGI-level systems can be aligned with human interests, so as these systems become more capable, caution should be exercised in determining their operating mechanisms."
Risk

Liang Zheng said that technological development sometimes exceeds the frameworks people anticipate, drifting, without anyone noticing, into inconsistency with or even opposition to human interests.
For example, the "paper clip maker" hypothesis describes a scenario in which artificial general intelligence poses a threat to humans when the target and technology are harmless—suppose that the ultimate goal of an artificial intelligence machine is to make paper clips, although the purpose seems harmless to humans, when it uses human incomparable capabilities to make all the world's resources into paper clips, it will cause harm to humans.
"It's important to focus not only on the potential opportunities of large models, but also on their risks and drawbacks," said Google Chief Scientist Jeff Dean.
Aligning the values of AI with those of human beings has become one of the hottest topics of the moment.
On this, Huang Tiejun said: "When human intelligence exceeds AI's, AI is a controllable assistant that humans can train into an ever more trustworthy helper. But when AGI (artificial general intelligence), superior to humans in every respect, emerges, the question becomes whether AGI trusts humans, not whether humans trust AGI. The initiative will not be on humanity's side."
In March 2023, more than a thousand technology experts worldwide jointly issued an open letter titled "Pause Giant AI Experiments."
The letter, which called on "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4," was signed by tech figures including Apple co-founder Steve Wozniak, Tesla and SpaceX boss Elon Musk, and Turing Award winner Yoshua Bengio, a top expert in artificial intelligence. "AI systems with human-competitive intelligence can pose profound risks to society and humanity," it warned, adding: "Let's enjoy a long 'AI summer' instead of heading into autumn unprepared."
Hypothesize boldly: what happens to humans in a world with smarter artificial intelligence?
One possibility, which may sound crazy by today's standards but perhaps not by tomorrow's, is that many people will choose to become part of artificial intelligence. That may be how humans try to keep up with it. "At first, only the boldest and most adventurous will try."