The AGI magic box: will it release Pluto or Astro Boy?

Mondo Anime Updated on 2024-01-28

Symbiosis between humans and machines is an eternal theme of science fiction. Among such works, "Pluto" may be one of the earliest comics to depict humans and robots coexisting in conflict.

If Astro Boy is the "peacemaker" of human-machine symbiosis, teaching generations of readers to trust and love robots, then Pluto is his opposite: a hate-filled killing machine. In the Astro Boy story arc "The Greatest Robot on Earth," a super robot named Pluto challenges the seven strongest robots in the world, including Astro Boy.

Naoki Urasawa later adapted this story, using far greater length and detail to depict the contradictions and conflicts of human-machine coexistence.

Recently, Netflix brought the story of Pluto to the screen, and the animated series "Pluto" became a hit on release, scoring 9.1 on Douban and 9.3 on IMDb as praise poured in.

Although the theme of this animation is the cliché of "human-machine symbiosis," watching it in 2023 gives it a different flavor.

This year, the "emergent intelligence" of large models has demonstrated human-level reasoning, embodied intelligence is accelerating the arrival of the human-machine coexistence once confined to science fiction films, and the realization of AGI no longer feels out of reach.

At the same time, the risks exposed by large models and AGI are gradually emerging. One of the reasons behind the OpenAI boardroom coup was scientists' concern that "AI is getting out of control," and the AI competition between China and the United States has set off a new round of rivalry.

And let's not forget the wider world, where regions such as the Middle East are mired in wars reminiscent of the war depicted in Pluto.

The real world we live in is unconsciously crossing the singularity. In the near future, what will human-machine symbiosis look like? Will AI be used to kill, like Pluto? Are we ready to live alongside a group of "100,000 horsepower" super-intelligent robots?

You might as well bring the same curiosity you had watching Astro Boy cartoons as a child, and open your mind with us.

The killing machine awakens? Another cry of "wolf"

To avoid spoilers, I will summarize the plot of the anime "Pluto" in one sentence - robots cry out for love in the center of the universe.

In a future where humans and robots live together, robots have the same emotional and thinking abilities as humans and can build happy families with them. Then a series of bizarre cases exposes the hidden contradictions of human-machine coexistence: a robot feels hatred, and raises its gun in anger...

Reading this, doesn't the story feel familiar?

(Screenshot from the Pluto anime)

There are countless science fiction works like this: in some distant year 3XXX, human society is highly advanced and artificial intelligence is everywhere, until one day robots like Dolores in "Westworld" awaken to consciousness, realize that humans oppress them, and begin to fight back and take revenge.

Robots awakening to consciousness and becoming an uncontrollable threat to humanity is an almost obligatory premise of these "technological ghost stories." But Pluto is just another cry of "wolf."

First, even if AGI (strong artificial intelligence) is achieved, intelligence and consciousness are not the same thing. For a long time to come, intelligent machines will only simulate human emotional expression through voice, facial expression, posture, and so on, without genuinely experiencing emotion. What's more, we are still far from strong AI: in a recent Turing-test study of GPT-4, its score fell well short of humans'. The fear of "machine consciousness awakening" is little more than another instance of the uncanny valley effect.

Second, even supposing a robot really did have consciousness, would it necessarily be evil? Apparently not. Just as Astro Boy and Pluto represent two opposing camps, robots' behavior can likewise be divided into "good" and "evil." Given today's understanding of AI technology and the current relationship between humans and robots, robots are unlikely to feel "threatened by humans," and so unlikely to become killers of humans or a rebel legion. As in recent sci-fi works such as "The Wandering Earth 2" and "Pluto," most robot threats stem from network vulnerabilities, malicious human operation, or algorithmic results that lead humans to "do bad things with good intentions."

(Screenshot from the Pluto anime)

Whenever an exceptionally capable intelligent agent appears, such as ChatGPT, the myth of machine "superintelligence" resurfaces, selling ideas like "machine intelligence will surpass human intelligence" and "machines will replace humans." None of these arguments hold up: however powerful a machine may be, it is not about to defeat human intelligence.

It's just that people like to believe thrilling stories and imagined dangers. Put another way, this sensitivity to danger may be engraved in the DNA of a species that survived in the wild. So from "The Boy Who Cried Wolf" to "The Killing Machine Awakens," such stories never get old; they simply restart with a different protagonist.

The real problem swept under the rug: the crises of a highly symbiotic society

While people talk about the false risk of intelligent machines awakening, the real dangers may be obscured, and those real risks lie in the problems of human-machine symbiosis.

The advent of large models is enabling machines to evolve from automated systems to highly autonomous systems.

An automated system, like a robot vacuum or a thermostat, only needs to carry out preset tasks toward a fixed goal in a controlled environment. Embodied intelligence such as household robots and self-driving cars is obviously not so simple: these systems must detect obstacles accurately and promptly in a dynamically changing physical environment and make appropriate decisions and actions in real time. Such highly autonomous systems exhibit intelligence closer to that of humans, and large models are making this embodied intelligence a reality.
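To make the distinction concrete, here is a minimal illustrative sketch (not from the article; the class names, thresholds, and toy "policy" are all invented for the example): an automated system follows one fixed rule toward a fixed goal, while an autonomous system perceives a changing environment and computes its decision in real time.

```python
# Illustrative sketch only: contrasting an automated system with an autonomous one.
# All names and thresholds are hypothetical, not a real product API.

class Thermostat:
    """Automated system: a fixed rule applied in a controlled environment."""
    def __init__(self, target_temp=22.0):
        self.target_temp = target_temp

    def step(self, current_temp):
        # The goal and the rule never change.
        return "heat_on" if current_temp < self.target_temp else "heat_off"


class AutonomousAgent:
    """Autonomous system: perceives a dynamic environment and decides in real time."""
    def __init__(self, policy):
        self.policy = policy  # e.g. a learned model mapping observations to actions

    def perceive(self, sensors):
        # Placeholder for obstacle detection / state estimation from noisy readings.
        return {"obstacle_distance": min(sensors), "num_readings": len(sensors)}

    def step(self, sensors):
        observation = self.perceive(sensors)  # fuse changing sensor data
        return self.policy(observation)       # decision is computed, not preset


# Usage: the thermostat always follows its one rule; the agent's behavior depends
# on whatever policy it carries and on the environment it currently observes.
thermostat = Thermostat()
print(thermostat.step(20.5))  # -> "heat_on"

agent = AutonomousAgent(
    policy=lambda obs: "brake" if obs["obstacle_distance"] < 2.0 else "cruise"
)
print(agent.step([5.3, 1.4, 7.9]))  # -> "brake"
```

In practice, the "policy" of an embodied system would be a learned model rather than a one-line rule, which is exactly where large models enter the picture.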

It sounds wonderful, but this future of symbiosis between humans and autonomous systems carries several risks:

1. Algorithmic black boxes. AI possesses "dark knowledge": it can make decisions without explicit theory or mathematical analysis and outperform human experts in areas such as weather forecasting. But AI does not truly understand the scientific principles or mathematical models behind its outputs, and if safety-critical systems cannot be verified, safety in operation is hard to guarantee. For example, an aircraft autopilot designed from a white-box mathematical model can be certified against safety standards, whereas a black-box algorithm cannot be shown to be sufficiently safe and reliable.

2. Unemployment. The higher the degree of automation and the more widespread the use of robots, the greater the pressure on employment. Not every coachman learned to drive a car; many jobs replaced by autonomous systems will not return, and occupations that require creativity and are hard for AI to replace, such as programming and artistic creation, can only absorb a minority of workers. Changes in the occupational structure will inevitably bring the pain of unemployment, and the resulting backlash can hold back the AI industry, as the large number of artists opposing and resisting AIGC over the past year makes evident.

3. Technological dependence. For most people, AI applications and robots will certainly make life more comfortable: autonomous delivery vehicles and hotel service robots greatly reduce manual labor, and ChatGPT helps students with homework and writing. But this also means humans will gradually lose certain skills. Future children may no longer need to memorize Tang and Song poetry; robots will deliver everything straight to their hands; daily physical activity will shrink, bringing hidden risks such as obesity. As algorithms become ever more accurate, we will increasingly let AI make decisions for us instead of listening to the people around us or to our own feelings and intuition, interrupting the positive loop of deciding, acting, and receiving feedback, and leaving people less and less self-confident. Human-machine coexistence is like boiling a frog in warm water: the worry about over-reliance on technology is not unfounded.

4. Political conflict. Another potential threat comes from the mutual suspicion and rivalry among governments and business organizations. Control of intelligent technology is already a political issue, and the technological competition centered on AI is setting off a political contest reminiscent of the Cold War.

In "Why the Godfather of A.I. Fears What He's Built," published by The New Yorker on November 20, 2023, Turing Award winner Geoffrey Hinton bluntly states, "We don't know what AI will become." But when the reporter asked, "Why don't you just unplug it?" Hinton replied, "Because of the competition between different countries." He had earlier refused to sign the open letter calling for a moratorium of at least six months on artificial intelligence research, saying, "China will not stop research and development for six months." There is reason to believe that, for the sake of political rivalry, some governments, companies, or scientists will simply let the risks of artificial intelligence go unchecked.

As autonomy shifts from humans to machines, the operating mechanisms of a human-machine symbiotic society must change accordingly. The shift may not directly lead to disasters such as killings, but its impact on employment, culture, and political order will still create real risks.

From this moment on, the "Pandora's box" is opening

You may ask: since the real risks have been identified, can they be prevented in advance to reduce the friction of human-machine symbiosis?

Technically and socially, this is not a realistic aspiration.

At the technical level, AI is unlike any technological risk in human history. As algorithms that learn and evolve on their own, AI systems constantly chew through vast amounts of data, performing complex internal transformations and "unsupervised learning," and the rules and hidden bugs inside may not be caught by human engineers in time. Programmers like to joke: if the code runs, don't touch it; nobody wants to dig through that mountain of legacy mess. So humans may not be able to hold AI's growth in the palm of their hand. A year ago, who would have thought that ChatGPT would "emerge" with intelligence and turn the whole world upside down?

(Screenshot from the Pluto anime)

At the social level, mainstream countries tend to handle crises not through "prevention in advance" but through "dynamic equilibrium." Pre-emptive prevention, which requires assessing the worst-case scenario ahead of time and then investing in defenses, is very costly and is generally reserved for medicine, natural disasters, war, and the like. The risk of strong AI is not seen as that urgent; some even joke that "you can always pull the plug at the critical moment." So preventing this kind of crisis is weighed by cost and benefit: rather than sacrificing benefits in pursuit of absolute safety, societies seek a dynamic equilibrium between economy and security. That equilibrium, however, leaves vulnerable groups exposed to greater risk: repetitive laborers, low-level white-collar workers, producers of simple knowledge work, and others face the pressure of being replaced.

The road of large models is opening the "Pandora's box" of AGI, and it is still unknown whether it will release Pluto or Astro Boy, or both.

Since a society of deep integration and symbiosis between humans and robots will inevitably come, what should we rely on to maintain optimism and confidence? I think "Pluto" may have already given the answer: "love."

The seeds of humanity will sprout tenaciously in the flames of war and scorched earth, rebuilding new worlds again and again.
