What is the AI "doomsday value" that everyone in Silicon Valley likes to talk about? And how high is yours?

Updated on 2024-01-31

What is your "doomsday value"? It is said to have become the most common greeting among Silicon Valley's tech upstarts of late, and for those in the AI industry it is a topic that cannot be dodged or answered vaguely. What exactly is a "doomsday value"? And what is the point of asking someone about theirs?

The so-called doomsday value, or "probability of doom" (often written p(doom)), refers to the probability that you personally believe AI could bring about the end of humanity.

If you do not believe AI will cause the end of the world, your doomsday value approaches 0. Conversely, if you believe AI will eventually lead to the demise of human society, your doomsday value approaches 100.

According to Kevin Roose, a technology columnist for The New York Times, the concept and the term probably originated on LessWrong, an online forum popular with Silicon Valley tech insiders, and can be traced back to a post from more than ten years ago.

At that time, several users who had begun studying AI technology posted threads sharing various scenarios in which "AI leads to the extinction of mankind", along with what they considered their own "doomsday values". Surprisingly, what began as a half-joking thread has become a topic people take seriously and are keen to discuss since the rise of generative AI such as ChatGPT.

LessWrong's founder, AI scholar Eliezer Yudkowsky, is a well-known "AI doomer" who puts his doomsday value as high as 95.

Tech celebrities have different doomsday values

So what are the doomsday values of people across the technology and AI industries?

In his New York Times column, Roose noted that AI scholars Geoffrey Hinton and Yoshua Bengio, known as "godfathers of deep learning", have called on the industry to be more vigilant and to tighten controls on AI technology.

Hinton believes that, if it is not strictly regulated, there is a 10% chance that AI will cause human extinction within the next 30 years, while Bengio has said in an interview that the probability of AI causing a catastrophic event is about 20%.

According to the business news site Business Insider, Paul Christiano, a former researcher at ChatGPT developer OpenAI who was named to Time magazine's 2023 list of the 100 most influential people in AI, once admitted on a podcast: "I think the probability of AI completely taking control of human society is about 10% to 20%. When that happens, many, if not the vast majority, of human beings will die. For me, this is something that needs to be taken seriously."

Fast Company, a U.S. business news magazine, reported that a recent survey of 40 AI engineers found an average doomsday value of 40, meaning the industry broadly believes AI has a 40% chance of causing human extinction.

For example, Tesla founder Elon Musk, who has just become the person whose wealth grew the most in 2023, has publicly stated that his doomsday value is about 20 to 30. Dario Amodei, CEO of AI startup Anthropic, puts his somewhere between 10 and 25.

Musk has publicly stated that his doomsday value is around 20 to 30. (Source: Flickr/Steve Jurvetson, CC BY 2.0)

Interestingly, Fast Company describes Lina Khan, chair of the US Federal Trade Commission, as one of those who most often raises the anti-AI banner, yet her doomsday value is quite low: she puts the chance of AI causing the end of the world at only 15%.

Doomsday values can help gauge other people's attitudes

Strictly speaking, doomsday values are highly subjective; there is no quantifiable way to measure them, nor even an agreed convention within the industry.

However, Roose pointed out in his column that the significance of doomsday values does not lie in how accurate they are; after all, when people ask others for their doomsday values, they do not agree on a definition of "doomsday" beforehand.

If half of humanity dies because of the rise of AI, does that count as "doomsday"? What if no one dies because of AI, but everyone ends up collectively unemployed and unhappy; is that a kind of apocalypse? And how exactly would AI conquer the world? Roose wrote.

Rather than having any real academic value, the doomsday value is more like a ruler in people's minds, one that helps a person gauge how those around them perceive and feel about AI. Roose describes it as a sort of "social marker" that lets people in a conversation quickly find like-minded peers on AI issues.

For example, when OpenAI's board announced that Emmett Shear, co-founder of the live streaming platform Twitch, would replace Sam Altman as CEO, employees soon surfaced a podcast episode in which Shear had put his doomsday value as high as 50. That left staff worried that the parachuted-in CEO, out of distrust of AI technology, would ask the team to slow down the pace of developing new products.

It is clear, then, that a person's doomsday value more or less directly shapes the personal image they project, and ultimately determines which circles they can fit into.

(Header image source: Shutterstock)
