With the advent of ChatGPT, 2023 has been widely hailed as "the first year of AI large models." In this global wave of technology, the American startup OpenAI is leading the industry forward with its far-sighted vision and sustained investment. Yet as Chinese companies step into this space hoping to replicate, or even surpass, OpenAI's success, one question is hard to avoid: can China catch up in this race for large AI models?
OpenAI's success is no accident, and the strength and investment behind it should not be underestimated. From massive financial backing to the gathering of top talent and years of sustained commitment, OpenAI has nearly every ingredient of success in technological innovation. According to Tianyancha data, OpenAI has attracted funding from well-known companies and investment institutions since its founding, including Microsoft and Khosla Ventures, giving it solid backing for its spending on computing power, data, and talent.
On computing power, OpenAI spares no expense. SemiAnalysis, a third-party research firm, estimates that OpenAI uses about 3,617 HGX A100 servers containing nearly 30,000 NVIDIA GPUs (each HGX A100 server typically houses eight A100 GPUs, which works out to roughly 28,900). These GPUs run efficiently on the dedicated large-model computing cluster built by its investor Microsoft. On data, OpenAI has invested continuously in every link of the chain, from collection, labeling, and cleaning to curation and optimization, to ensure the quality and efficiency of model training.
However, even with such strength and investment, it still took OpenAI roughly eight years to produce a breakthrough product, GPT-4. More notably, even with GPT-4's significant progress, the model still suffers from "hallucinations": it can answer off the point or confidently produce fabricated content. This shows that the development of large AI models remains full of challenges and uncertainty.
For Chinese companies, which excel at engineering optimization, the research and development of large AI models still brings great pressure and challenges. Tianyancha data shows that although Chinese companies released more than 130 large models in 2023, a sizable gap remains between these models and OpenAI's GPT-4 in both quality and application.
In short, the research and development of AI large models is a long-distance race that demands ample patience and strength. OpenAI spent eight years proving this, and Chinese companies will need even more effort and ingenuity if they are to catch up. Despite the many challenges, with continued technical progress and sustained investment, more Chinese companies may yet reach the forefront of large AI models. (Data support: Tianyancha)