Musk sued OpenAI, the company he helped create, and his reasons are thought-provoking

Mondo Entertainment Updated on 2024-03-02

Recently, Musk formally filed a lawsuit in San Francisco Superior Court, claiming that OpenAI's recent relationship with tech giant Microsoft has betrayed the company's founding commitment to public, open-source artificial general intelligence (AGI). "OpenAI has transformed into a de facto closed-source subsidiary of Microsoft, the world's largest technology company," Musk said. Under the new board of directors, he argues, it is not just developing but actually refining AGI to maximize Microsoft's profits rather than to benefit humanity.

Musk said in the lawsuit that Altman and OpenAI reneged on an agreement reached when the AI research company was founded: to develop the technology for the benefit of humanity rather than for profit.

Court documents show that Musk brought claims against OpenAI for breach of contract, breach of fiduciary duty, and unfair business practices, and demanded that the company return to open-sourcing its technology. Musk also asked the court for an injunction prohibiting OpenAI, its president Gregory Brockman and CEO Sam Altman (co-defendants in the case), and Microsoft from profiting from the company's artificial general intelligence technology.

In fact, as early as 2012, Elon Musk met Demis Hassabis, co-founder of DeepMind, a for-profit AI company. Around this time, the two met at SpaceX's factory in Hawthorne, California, and discussed the biggest threats facing society. In that conversation, Hassabis highlighted the potential dangers that advances in AI could pose to society.

After this conversation with Hassabis, Musk became increasingly concerned about the potential for AI to become superintelligent, surpass human intelligence, and threaten humanity. Musk was not the only one alarmed by DeepMind's AI research and the dangers of AI: after meeting Hassabis and DeepMind's investors, one investor reportedly commented that the best thing he could have done for the human race was to shoot Hassabis on the spot.

Musk began discussing AI and DeepMind with people in his circle, such as Larry Page, then CEO of Google's parent company, Alphabet, Inc. Musk often raised the dangers of AI in conversations with Page, but to Musk's shock, Page was not worried. In 2013, for example, Musk spoke passionately with Page about the dangers of AI, warning that unless safety measures were taken, "AI systems could replace humans and make our species irrelevant or even extinct." Page responded that this was merely "the next stage of evolution" and called Musk a "speciesist," that is, someone who prefers the human species over intelligent machines. Musk responded: "Yes, I support humanity."

By late 2013, Musk was deeply worried about Google's planned acquisition of DeepMind, at the time one of the most advanced AI companies in the industry. He was concerned that DeepMind's AI technology would end up in the hands of someone who took its power so lightly and who might hide its design and capabilities behind closed doors.

To keep this powerful technology out of Google's hands, Musk and Luke Nosek, his fellow PayPal co-founder, tried to raise money to buy DeepMind. The effort culminated in an hour-long phone call in which Musk and Nosek made a last-ditch attempt to convince Hassabis not to sell DeepMind to Google. Musk told Hassabis: "The future of AI should not be controlled by Larry [Page]."

Musk and Nosek's efforts were unsuccessful: in January 2014, it was reported that DeepMind would be acquired by Google. But this did not stop Musk from continuing to push for the safe development and practice of AI.

After Google's acquisition of DeepMind, Musk began hosting his own series of dinner discussions on ways to counter Google and promote AI safety. Musk also reached out to then-U.S. President Barack Obama to discuss AI and AI safety. In 2015, the two met, and Musk explained the dangers of AI and advocated for regulation. Musk felt that Obama understood the dangers of AI, but regulation never materialized.

Despite these setbacks, Musk continued to advocate for safe AI practices. In 2015, he seemed to find someone who shared his concerns about AI and his desire to keep the first AGI out of the hands of a private company like Google: defendant Sam Altman.

At the time, Altman was the president of Y Combinator, a startup accelerator in Silicon Valley. Prior to that, Altman was involved in various start-up ventures.

Altman appeared to share Musk's concerns about AI. In a public blog post dating back to 2014, Altman claimed that if AGI were built, "it would be the biggest development ever in technology." Altman noted that many companies were moving toward achieving AGI, but admitted that the good ones were "very secretive about it."

On February 25, 2015, Altman also voiced concern about the development of "superhuman machine intelligence," which he believed "may be the greatest threat to the continued existence of humanity," stressing that "as a human programmed to survive and reproduce, I feel we should fight it." Altman also criticized those who consider superhuman machine intelligence dangerous yet regard it as "never going to happen or certainly very far off," calling this "dangerously lax thinking."

Indeed, in early 2015, Altman praised regulation as a means of ensuring the safe creation of AI, and suggested that "a group of very smart people with a lot of resources," likely "involving U.S. companies in some way," would be the most likely to achieve superhuman machine intelligence first.

Later that month, Altman reached out to Musk to ask whether he would be interested in drafting an open letter to the U.S. government about AI. The two began preparing the letter and reaching out to influential figures in technology and AI to sign it. Soon, word of the letter spread throughout the industry.

In April 2015, for example, Hassabis contacted Musk to say he had heard from multiple sources that Musk was drafting a letter to the government calling for the regulation of AI. Musk defended the idea of AI regulation to Hassabis: "If done right, this could accelerate the development of AI in the long run. Without the public peace of mind that regulatory oversight provides, it is likely that AI research will be banned after AI causes great harm, because it poses a danger to public safety."

Five days after Hassabis contacted Musk about the open letter, Hassabis announced the first meeting of Google's DeepMind AI ethics committee, which Google and DeepMind had promised to establish when Google acquired DeepMind. Musk was invited to be a member and proposed that the first meeting be held at SpaceX in Hawthorne, California. After that first meeting, it was clear to Musk that the committee was not a serious effort but an attempt to slow the push for AI regulation.

The open letter was later published on October 28, 2015, and was signed by more than 11,000 people, including Musk, Stephen Hawking, and Steve Wozniak.

On May 25, 2015, Sam Altman emailed Elon Musk with his thoughts on whether it might be possible to stop humanity from developing AI. Altman thought the answer was almost certainly no; if AI's evolution was unavoidable, it would be best for some organization other than Google to get there first. Altman had an idea: Y Combinator could launch a "Manhattan Project" for AI (a name that might be very apt). He suggested that the technology could be made available to the world through a non-profit organization and that, if the project succeeded, those involved could be compensated in a startup-like way. Obviously, they would comply with and actively support all regulation. Musk responded that it was "well worth talking about."

After further exchanges, on June 24, 2015, Altman sent Musk a detailed proposal for the new "AI lab": "The mission will be to create the first general AI and use it for individual empowerment, i.e., the distributed version of the future that looks the safest. More generally, safety should be a first-class requirement." The technology would be owned by the foundation and used for the "benefit of the whole world." He proposed starting with a group of 7-10 people and expanding from there, and also proposed a governance structure. Musk responded: "Agree on all fronts."

Shortly thereafter, Altman began recruiting others to help develop the project. Notably, he turned to Gregory Brockman, and in November 2015 he connected Brockman with Musk by email. Regarding the project, Brockman told Musk: "I want us to enter this space as a neutral group, seek broad collaboration, and shift the conversation toward humanity winning rather than any particular group or company winning. (I think that's the best way to make ourselves a leading research institution.)" Optimistic about the possibility of a neutral AI research group focused on the interests of humanity rather than of any particular individual or company, Musk told Brockman he would commit funding.

Musk came up with a name for the new lab that reflected the founding agreement: the "Open AI Institute," or "OpenAI" for short.

Guided by the principles of the founding agreement, Musk joined forces with Altman and Brockman to officially launch and move the project forward, and he was actively involved even before it was publicly announced. For example, Musk advised Brockman on employee compensation packages, sharing his strategies for compensating and retaining talent.

On December 8, 2015, OpenAI, Inc.'s Certificate of Incorporation was filed with the Delaware Secretary of State. The certificate records the founding agreement in writing: "The Company is a nonprofit corporation organized exclusively for charitable and/or educational purposes within the meaning of Section 501(c)(3) of the Internal Revenue Code of 1986, as amended, or any future equivalent provision of the U.S. Internal Revenue Code. The specific purpose of the company is to fund the research, development and distribution of AI-related technology. The resulting technology will benefit the public, and the company will seek to open source the technology for the public benefit when applicable."

OpenAI, Inc. was announced to the public on December 11, 2015. In the announcement, Musk and Altman were named co-chairs and Brockman was named CTO. The announcement stressed that OpenAI aims to "benefit humanity" and that its research is "free from financial obligations": "OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact."

Back in early June 2023, Max Tegmark, a physicist and artificial intelligence expert at the Massachusetts Institute of Technology, said in an interview with Sweden's national broadcaster SVT that there is a 50% chance that artificial intelligence will wipe out humanity, and that we will not know how or when it will happen because machines will be "much smarter than us."

Professor Tegmark was reportedly one of the signatories of a recently published 22-word statement warning of the risk of human extinction posed by AI.

"Mitigating the extinction risk posed by AI should be a global priority, alongside other society-scale risks such as pandemics and nuclear war," the statement read. ”

In the same interview, Professor Tegmark said history shows that humanity, the most intelligent species on Earth, has driven less intelligent species, such as the dodo, to extinction.

He warned that if AI becomes smarter than humans, the same fate could easily befall us.

What's more, he said, we will not know when we might die at the hands of artificial intelligence, because a less intelligent species has no way of knowing.

AI may be used to create autonomous weapons or robots that can kill people.

It is reported that some of the world's top scientists believe that, in the near future, artificial intelligence may be used to create autonomous weapons or robots that can kill people, with or without human intervention.

Perhaps, for Musk, who helped create OpenAI, today's closed-off AI could bring catastrophic consequences for humanity in the future, and suing OpenAI is a way to make the development of artificial intelligence more controllable.
