Sidelining the Chief Scientist: OpenAI Teaches China's Large-Model Players a Lesson

Mondo Workplace | Updated 2024-01-19

Image source: Visual China

Text | Wendao Business Academy; Author | "Shoot the Wolf"

OpenAI's Chief Scientist vs. CEO

The stretch from November 17 to the end of the month was the most tense and dramatic period for AI watchers around the world.

OpenAI CEO Sam Altman was abruptly "dismissed" and then swiftly "reinstated", an episode that rattled a great many people: not only OpenAI's employees and management, but also its loyal users, its investors, and AI practitioners and observers at large.

A brief recap of the key moments in this twist-filled boardroom drama of the year:

1. On November 17, Sam Altman was unilaterally dismissed as CEO by the board of directors, represented by Ilya Sutskever.

2. Microsoft, OpenAI's major shareholder and main investor, said it had no advance knowledge of the move, and in subsequent actions sought to unite OpenAI's management, employees, and other investors to push for Sam's return. Sam set a condition for returning: dissolve the existing board; otherwise he would start a new company.

3. On November 20, OpenAI's board announced a new CEO, effectively dismissing Sam a "second time". The same day, Microsoft CEO Satya Nadella announced that Microsoft would continue to cooperate with OpenAI and invited Sam to join Microsoft to lead a new AI team.

4. On November 29, Sam "unexpectedly" returned as CEO of OpenAI, with a newly formed board of directors. Ilya, who had earlier ousted Sam, left the board, and Microsoft gained a seat as a board observer.

With that, the OpenAI power struggle triggered by Sam's ouster came to a temporary close. But this ending, which most people welcomed, did not disperse the crowd of onlookers. On the contrary, it stoked ever greater interest in Sam and Ilya, and further deepened reflection on the development path of AI.

In Silicon Valley, Sam is regarded as a young talent, and his ambition is well known. Paul Graham, the founder of YC, once remarked of Sam that his "ambition exceeds the boundaries that Silicon Valley can accommodate".

Before becoming CEO of OpenAI in 2019, Sam had led the YC startup incubator since 2014, when he was not yet 30. Although Chinese readers may not be very familiar with YC, the incubator is famous in the US AI community.

In 2015, Sam wanted to build an AI lab through the YC incubator, with the goal of "building safe human-level AI". This became a rehearsal for his later creation of OpenAI. Under Sam's leadership, YC also posted strong results, with a portfolio valuation of nearly 150 billion yuan by 2019.

Also in 2015, Sam co-founded OpenAI together with Elon Musk, Greg Brockman, and others. But until he became CEO in 2019, Sam's role at OpenAI, though nominally that of a founder, was more like that of an investor.

After he became CEO, an investor's instincts slowly began to shape OpenAI. Especially after ChatGPT took off at the end of 2022, Sam accelerated OpenAI's path to commercialization.

Entering 2023, OpenAI launched commercial products such as paid subscriptions and customized enterprise versions. In October, Sam told employees that the company's annual revenue could reach $1.3 billion, up from less than $30 million the year before.

It was around this time that the conflict between Sam and the board began to intensify. At its heart, Sam's aggressive commercialization strategy led the board, headed by Ilya, to believe he was violating OpenAI's founding mission of being "non-profit, safe, and beneficial to all humanity".

This brings us to Ilya, the other protagonist of this episode. It should be noted that Ilya, as a member of OpenAI's board, was not "management" in the traditional sense but a genuine technical master. Beyond his board seat, Ilya is OpenAI's co-founder and chief scientist.

The role of chief scientist better captures Ilya's identity and positioning. The Israeli-Canadian computer scientist studied under Turing Award winner Geoffrey Hinton and is deeply accomplished in machine learning. He was previously a research scientist at Google and was elected a Fellow of the Royal Society in 2022.

At this point, we can better understand what lies behind the power struggle between Sam and Ilya at OpenAI.

This is a battle between philosophies of AI development and practical paths: Ilya leans toward technological neutrality and non-profit ideals, while Sam focuses on commercial application. Judging by the outcome, the commercialization camp decisively prevailed over the pure technology camp.

But defining it as a victory of business over technology would be too superficial. In reality, it is more a triumph of commercial realism over technological idealism.

Even with the backing of Microsoft, its deep-pocketed patron, OpenAI's continued R&D requires a steady stream of financing. On the extremely cash-burning path of large models, merely keeping ChatGPT running is enormously expensive. In Sam's view, commercialization is the unavoidable route to ultimately "benefiting all humanity".

On this decision, investors, partners, OpenAI employees, management, and users alike voted with their feet and stood on Sam's side.

Many AI observers offered more rational and direct feedback: as CEO, Sam understands better than Ilya what OpenAI needs most right now; Ilya is a little too idealistic and takes too much for granted.

OpenAI's boardroom drama also offers us a revelation: AIGC innovation around large models is permanently a "number one project". It cannot rest solely on a few scientists, nor can it be pinned on a single technical department.

First, the top leader can understand front-end market demand better than purely technical staff, and can better grasp the core demands of partners, users, and investors.

Second, only by working backward from this real demand can a company's top leader make forward-looking judgments and choose the right implementation path for a large AI model.

In addition, a large model is a long-term strategic business with enormous costs, one that only the top leader can push forward and afford.

Robin Li's speech at the Xili Lake Forum followed this logic. He argued that building 100 large models is a waste of resources; only AI-native applications can improve key business metrics, and top leaders should personally embrace the AI era and get into the trenches, rather than leaving the IT chief to complete the technical homework of "building a large model".

"Because CTOs and IT leaders pay more attention to the technology itself, they treat building a large model as an assignment; the result is not only a waste of resources but something unusable, ending in a mess. Only the top leader truly cares about how a new technology can improve key business metrics, and only the top leader can make the new technology genuinely usable by the enterprise."

This is not mere rhetoric: on many occasions Robin Li has led by example, pushing a comprehensive "AI reconstruction" of products and treating AI-native applications as the "number one project".

Just as OpenAI employees voted with their feet to support Sam's return, the market has responded positively to Robin Li's choices.

According to the third-quarter financial report, a new round of growth is underway: data for AI-native applications such as search, the document library, and the network disk grew significantly, and the Wenxin large model began to drive revenue growth. After the earnings release, the stock price rose for three consecutive days.

By contrast, some good and hard technologies slow down or even stall for lack of a concrete implementation path and strategic support from the top leader.

Earlier, it was reported that for budget and profitability reasons, the quantum laboratory of Alibaba's DAMO Academy may have been disbanded, with more than 30 people laid off in total.

According to a statement from DAMO Academy, in order to further promote the coordinated development of quantum technology, the academy will donate the quantum laboratory and its transferable quantum experimental instruments and equipment to Zhejiang University, opening them to other universities and research institutions for shared development.

DAMO Academy reportedly belongs to Alibaba's Cloud Intelligence Group. As early as May 2018, its quantum laboratory developed "Taizhang", then the world's most powerful quantum circuit simulator, which at the time even surpassed Google's. Yet even with such leading technology, the quantum lab could not escape the fate of being transferred.

Returning to large models: whether it is China's "battle of a hundred models" or OpenAI's boardroom drama, both point to one thing. Large models have entered deep water and have begun to rapidly differentiate and divide by field.

After an early stage of scaling technological peaks and achieving extraordinary results, the industry has slowed and entered a transition period of "path" selection. In other words, what determines the life and death of a large model at this stage is not the technology itself, but the more concrete path of business development.

According to incomplete statistics, 238 large models had been released in China as of October, up from 79 at the end of June: a threefold increase in four months.

Every enterprise aiming to build a foundation model must go through a cash-burning process of purchasing chips, building an intelligent computing center, and training from scratch. Even so, because training scale and parameter counts are not large enough, most models will never exhibit emergent intelligence.

So we see that, beyond a few dozen foundation models, everyone else is busy with applications. Abroad, for example, there are already thousands of AI-native applications, while domestically there are still only a handful.

Of course, we should also look past some of the large-model numbers to see something different.

On the one hand, big manufacturers, AI companies, and even leading players in certain verticals are indeed embracing large models, even to the point of claiming "a large model for everyone". On the other hand, many of the large models in these enterprises' hands are just a gimmick: the "large models" built by some fintech companies, for example, are more like data-driven marketing applications, and are application logic at their core.

This is precisely the proof that large models are being seen, recognized, and accepted by the industry at the application level.

From the perspective of industry development: the PC era had essentially one operating system, but a great deal of software was built on Windows; the mobile internet era has only two mainstream operating systems, Android and iOS, yet as many as 8 million mobile applications.

In Robin Li's view, the large model is itself a foundational base similar to an operating system, and developers need to build a variety of native applications on top of it. Repeatedly developing foundation models not only wastes social resources but also causes enterprises to miss excellent development opportunities.

Perhaps we should ponder repeatedly the true meaning of Robin Li's remark: "In the AI-native era, we need one million AI-native applications, but we don't need 100 large models." As he said, only a few foundation models will survive in the end, but AI-native applications will certainly flourish.

For the top leaders of enterprises targeting large models, the lesson is to not only watch OpenAI's boardroom drama, but also to see AIGC's more certain, achievable path: AI-native applications are the way to go.
