Recently, the new model Sora released by OpenAI has attracted widespread attention, and the New York Times reported that OpenAI's valuation may now reach about $80 billion. How has the market reacted to this new model, and what hidden risks does it carry?
CNBC Jiang Yu: From the chatbot ChatGPT, to the text-to-image model DALL-E, and now to the text-to-video model Sora, OpenAI, riding the crest of the AI wave, has become the focus of both the technology sector and the capital markets. Although the videos Sora generates still have shortcomings, Macquarie analysts believe the model already represents a breakthrough technology. Industry insiders said the academic principles behind Sora are well understood across the industry, but between the principle and a working product stand the two mountains of "data" and "model", and that is precisely where OpenAI's advantage over its competitors lies.
Recently, the New York Times quoted sources as saying that OpenAI has closed a deal that could value the company at $80 billion or more. That means OpenAI's valuation has nearly tripled from about $29 billion in less than 10 months.
According to a report by the tech data platform CB Insights, OpenAI is now one of the world's most valuable tech startups, behind only ByteDance and SpaceX.
On the one hand, the new Sora model has stunned the content production industry; on the other, the market is watching two hidden risks. First, an advertising company executive said this is a huge turning point for the advertising industry: in the past, producing video ads was so expensive that usually only big brands could afford it, but now the Sora model gives small and medium-sized enterprises new opportunities to create video ads. Doubts remain, however, about content copyright. So far, OpenAI has not disclosed how much training material was involved or where it came from, saying only that all of it comes from public channels or licensed content.
Another concern is deepfakes. This is a major election year worldwide, with voting in more than 40 countries affecting over 4 billion people. AI deepfakes can generate large volumes of fake voices, videos, and images to influence elections.
According to a survey by YouGov, about 85 percent of Americans are very or somewhat concerned about this.
The president of global affairs at Meta, Facebook's parent company, said bluntly that there is no one-size-fits-all way to simply ban AI-generated content from spreading on social networks, because a "whack-a-mole" approach will always leave loopholes. The current approach relies mainly on disclosure: any AI-generated content must be watermarked so that users who see it are informed. How to identify AI content generated on different platforms, however, remains a major difficulty.
Nick Clegg, President, Global Affairs, Meta, USA: Identifying AI-produced content requires a great deal of engineering and technical work on the back end, to ensure that AI content generated on one platform can still be identified as AI-generated when it is distributed on another. Companies need to work together to close this identification gap.
The technological revolution set off by OpenAI continues to move forward. Bloomberg, citing sources, reported that Sam Altman is seeking approval from the U.S. government and hopes to raise billions of dollars from the Middle East to expand global AI chip production capacity.
*Please credit CCTV Finance when reprinting.
Intern Editor: Li Cong.