Protest storm at OpenAI: Could artificial intelligence destroy humanity?

Mondo Technology Updated on 2024-02-24

In San Francisco, more than 30 activists gathered outside OpenAI's offices to protest the company's work with the U.S. military.

Last month, the Sam Altman-led OpenAI quietly removed the ban on "military and warfare" uses from its usage policy, a change that was first spotted by The Intercept.

A few days later, OpenAI confirmed that it was working with the U.S. Department of Defense to develop open-source cybersecurity software.

Holly Elmore, one of the organizers of this week's protest at OpenAI, told Bloomberg that the problem goes deeper than the company's willingness to work with military contractors.

"Even if companies set very reasonable limits, they can change them at any time," she said.

OpenAI insists that despite this new flexibility in its rules, it still prohibits the use of its AI to develop weapons or harm people.

Last month, Anna Makanju, OpenAI's vice president of global affairs, said in a Bloomberg talk at the World Economic Forum in Davos, Switzerland, that the company's collaboration with the military is "very much in line with what we want to see in the world."

An OpenAI spokesperson told The Register at the time: "We have partnered with the Defense Advanced Research Projects Agency (DARPA) to facilitate the creation of new cybersecurity tools to protect critical infrastructure and open-source software that industries rely on."

This quiet reversal of OpenAI's policy is what angered the organizers of this week's demonstration.

Elmore leads a community of volunteers in the U.S. called PauseAI, which calls for a halt to "the development of the largest general-purpose AI systems" because they have the potential to be an "existential threat."

It is not only PauseAI: even top AI executives have expressed concern that AI is becoming a significant threat to humanity, and recent polls have found that a majority of voters believe AI could unexpectedly trigger a catastrophic event.

"You don't have to be a genius to understand that it might be a bad idea to build powerful machines that you can't control," Elmore told Bloomberg. "Maybe we shouldn't be counting entirely on the market to protect us from such threats."

Altman, however, believes the key is to develop the technology in a safe and responsible way, rather than to oppose the concept of AI altogether.

"It's easy to imagine scenarios where things go really badly," he said at the World Governments Summit in Dubai this week. "And I'm not interested in the sight of killer robots wreaking havoc in the streets."

"I'm more concerned with very subtle social dislocations, where we just roll these systems out into society without any particular bad intentions, and things go really wrong," he added.

For Altman, who has apparently heard enough calls for a pause in AI, the question is a very simple one.

"You can work to help secure our shared future, or you can write Substacks about why we're failing," he tweeted over the weekend.