In the face of AI doomerism, intelligence is not everything

Mondo Technology Updated on 2024-01-29

AI doomers have long since disappeared into a religion masquerading as Bayesian analysis, the CTO of Oxide said in an interview with TNS.

Translated from "Bryan Cantrill on AI Doomerism: Intelligence Is Not Enough" by David Cassel, a proud resident of the San Francisco Bay Area who has been covering tech news for more than two decades. Over the years his articles have appeared everywhere from CNN and MSNBC to the Wall Street Journal Interactive. When historians look back at our time, they will see a species confronting new technologies. They will see new anxieties, with some even raising the existential fear that AI could exterminate humanity.

However, they may remember other things as well. A man stood on a stage in Portland, Maine, defending the honor of humanity, because humans possess unique attributes that AI can never replicate. As William Faulkner once said, humanity will not merely endure, it will prevail. The defense came from Bryan Cantrill, co-founder and CTO of Oxide Computer Company. At the 11th annual Monktoberfest, a developer conference that "examines the intersection of social trends and technology," Cantrill delivered a forceful rebuttal to the highly hypothetical "existential threat" scenarios. In an email interview, Cantrill told us: "My talk wasn't aimed at the AI doomers; they have long since disappeared into a religion masquerading as Bayesian analysis."

In Portland, Maine, he told his audience that the talk was "instigated by a whole bunch of Internet garbage, and I was incredibly outraged by it."

How exactly would AI exterminate humanity? Cantrill dismissed some of the usually suggested scenarios as implausible, even ridiculous. ("You can't just assert that a computer program will seize control of nuclear weapons.")

Okay, but what if the AI somehow develops a new kind of bioweapon? "I think that reflects a misconception of just how complex biology is." What if a superintelligent AI developed a new kind of molecular nanotechnology?

"I'm embarrassed to say that I read an entire book on nanotechnology before realizing that none of it had been reduced to practice. All of it was actually hypothetical."

"As my daughter likes to say whenever it comes to AI taking over the world, 'It doesn't have arms or legs. '”

Cantrill then reacted more briefly to what he considered a ridiculous hypothesis: "Oh my God, nanotechnology is back."

Cantrill was happy to air his skepticism about AI exterminating humanity, noting that even as a hypothesis it raises countless questions. For example, why would that be the AI's motivation? Where would it acquire the means of production? "As my daughter likes to say whenever AI taking over the world comes up: 'It doesn't have arms or legs.'"

Cantrill elaborated, to the audience's amusement: "When you want to kill all humans, the lack of arms and legs becomes really critical."

And how exactly would the AI respond to the threat of human resistance? "Honestly, it's kind of fun to fantasize about," Cantrill said. "Can you imagine what would happen if we all joined forces to fight a computer program? Think about what it would be like if all of humanity focused its efforts on making a single piece of software fail.

"That would be great!"

As an example of AI doomerism, Cantrill pointed to a well-meaning person who "reluctantly" supports a moratorium on all AI: AI is scary, so all AI research must be suspended.

In September, Flo Crivello, founder of the AI assistant company Lindy, argued in a tweet that "intelligence is the most powerful force in the world," and that we are about to hand something like nuclear weapons to everyone on the planet without giving it much thought. Crivello also complained that "no substantial argument against existential risks is provided," and derided AI's proponents as "unscrupulous people."

First of all, Cantrill was offended by the cautious people in this scenario, who took to Twitter to "equate computer programs with nuclear weapons." And these so-called serious people have gone so far as to toss out their own estimates of our "probability of extinction," that is, the total annihilation of all humanity.

"Can we have a little more reverence for our common humanity?"

But Cantrill argued that this "exaggeration" and its unfounded assumptions can themselves lead to frightening scenarios. For example, a moratorium on AI development would require "authoritarianism the likes of which we have never seen"; it would have to. Cantrill began by noting that even "restricting what a computer program can do" is rather scary, and violates what many consider a natural right.

Further down this slippery slope, as one slide pointed out, "the accompanying rhetoric is often uncomfortably violent." Some of those who argue that AI poses an existential threat to humanity go on to justify actual violence as protecting humanity.

In Cantrill's view, the belief in an existential threat to humanity leads people to say: "We should control GPUs. And what about those who violate the international GPU embargo? Yes, we should bomb their data centers. In fact, we should preemptively strike their data centers."

Cantrill derided this as an overreaction, all "because of a computer program." If this debate needs a "serious" rebuttal, Cantrill offered one of his own:

"Please don't bomb the data centers."

Cantrill's presentation was titled "Intelligence Is Not Enough: The Humanity of Engineering."

Here the audience realized they were listening to a presentation from the proud CTO of a company that had just launched a new server architecture. "I want to focus on what it takes to actually carry a project out. I do have some recent experience building something very large and very difficult as a collective engineering effort." Sharing stories from the real world, Cantrill showed off their finished server, then told tales from the most terrifying dystopia of all horror stories: production.

They spent weeks debugging a CPU that refused to reset – only to find out that the problem was a bug in their firmware.

Another week was spent on a network interface controller that also wouldn't reset. Again, the fault was theirs; it involved the specification of one of their key resistors.

There was even a period they later called "Data Corruption Week," when corruption began to appear sporadically in their OS boot image. (One slide explained the maddeningly obscure cause: their microprocessor was speculatively loading through a mapping left over from early boot.) Cantrill said that only a lone human, guided by intuition, knew where to look. "It was their curiosity that led them to find the coal fire burning beneath the surface."

Importantly, what these bugs have in common is that they were emergent: not designed into any one part, but appearing only when the parts were put together. "For each one, there is no documentation. In fact, for several of them, the documentation is positively incorrect. The documentation will mislead you. The breakthrough often came from trying something that shouldn't have worked.

"It's not something some superintelligent being would suggest."

Cantrill put up a slide reading "Intelligence alone doesn't solve problems like these," arguing that his team at Oxide drew on something uniquely human. "Our ability to solve these problems had nothing to do with our collective intelligence as a team," he told his audience. "We had to summon the elements of our character. Not our intellect: our resilience.

"Our team spirit. Our rigor. Our optimism."

Cantrill said he was sure that you, the (human) engineers in the audience, do the same.

He drove home the key point of his talk: "These are human attributes. When we hire, we don't just look for intelligence; we look for collaboration and teamwork, and most importantly, shared values." And: "This fascination with intelligence comes from people who, honestly, don't get out much."

They need to do more hands-on things, like taking care of children or going hiking.

Cantrill arrived at what he called a profound truth: "Intelligence is great, but it's not everything. There's also humanity."

To be clear, AI is still useful to engineers, but it lacks three key attributes: will, desire, and drive. "When we pretend they can engineer autonomously, we do a disservice to our own humanity," Cantrill said.

"They can't. We humans can."

While Cantrill believes the risk of human extinction is too remote to be a cause for concern, he acknowledged that AI carries real risks. But the "bad news," Cantrill said, is that "it's a risk you already know. It's racism. It's economic dislocation. It's class; it's all the problems we've been dealing with for as long as we've been human.

"AI is a force multiplier for these problems, and we need to take that very, very seriously. Because AI is going to be misused – and it's already being misused. AI ethics are very important."

Cantrill noted a silver lining here: there are already laws, regulations, and entire regulatory regimes in place for things like nuclear and biological weapons, and even self-driving cars. "Let's enforce them," Cantrill said. "Use your fear to push for the enforcement of regulations."

But this makes it all the more important to push back on what he sees as overhyped "AI doomerism." As Cantrill put it in a recent blog post, "The fear that AI will autonomously destroy humanity is worse than nonsense, because it distracts us from the very real possibility that AI will be misused." In his talk, Cantrill even hinted that people secretly prefer contemplating an exaggerated dystopia: "We're all going to go extinct anyway. So it's like, 'We're all post-singularity anyway; we don't actually have to care about the world.'"

"Some of us do care about this planet, this life, and this world. This is the world we live in.

"We should not let fear – vague, unspecified fear – stop us from making this world a better place."
