Under the wave of large language models, how far away are we from AI native applications?

Mondo Technology Updated on 2024-01-29

As large language models proliferate, the era of artificial intelligence has quietly arrived. Like the ages of steam, electricity, and information before it, it is sweeping the world in its own way and changing people's lives step by step. But you may ask: what exactly do these advanced AI large language models bring us?

In fact, the development of artificial intelligence can be traced back more than 70 years. Over that time the field has been through repeated booms and busts, and applications such as Go-playing programs and facial recognition drew wave after wave of attention. Yet each craze faded once it became clear that the practical applications of these technologies were limited, or that the technical barrier was low enough for them to be easily replicated.

Google's AlphaGo, for example, rose to fame in 2016 by defeating world champion Lee Sedol, drawing global attention to its achievements in Go. Once the attention faded, however, the technology's practical value proved to be quite limited. In computer vision, although there have been many practical and commercially valuable applications, the scattered application scenarios made it difficult to form standardized products, so companies in this field have produced neither especially large businesses nor standardized offerings.

Over the years, artificial intelligence has been through many ups and downs, but the emergence of large language models such as ChatGPT brings an entirely new possibility. Unlike previous waves of AI, the versatility of large models allows them to quickly generate valuable applications across a wide variety of scenarios. No such capability has appeared in the past 70 years of AI development, so this is widely regarded as a completely new opportunity.

The more important problem, however, is that ChatGPT-style large language models are only foundation models and cannot create value directly. Moreover, in the year since ChatGPT appeared, technology companies such as Microsoft, Google, Amazon, X, and Meta have invested heavily in training and running foundation models, while far fewer resources have gone into developing AI-native applications.

So far, the only product to use large language models to generate significant new revenue from an existing product is Microsoft's Office 365 Copilot, which currently earns roughly 5 billion US dollars a year, several times the roughly 1.3 billion US dollars in annual revenue of OpenAI, the developer of ChatGPT. Evidently, even training a large language model more advanced than ChatGPT can be far less valuable than building a successful AI-native application.

It follows that the current practice of simply pouring massive resources into model training and chasing benchmark scores is likely to change. Competition will turn on efficiency: lower cost at the same quality, or better quality at the same cost. Inference cost and application effectiveness will become the main lines of future competition.

At this stage, the large model is only a foundation; its true value will be reflected either in new "super AI-native applications" or in the transformation of existing applications. In the meantime, it will be necessary to cultivate and attract large numbers of well-rounded people who combine the ability to learn with product sense and market sense, to meet the demands of the new technologies and applications.

In short, as long as we keep experimenting and exploring in this era full of possibilities, it may not be long before more AI super applications emerge that fundamentally change how we live, work, and learn.
