Caption: Shen Dou, Executive Vice President of Baidu and President of the Baidu Intelligent Cloud Business Group. Photo provided by the interviewee.
On January 10, at the Honor MagicOS 8.0 launch and developer conference, Zhao Ming, CEO of Honor Device, announced the "100 Model Ecological Plan" and, together with Shen Dou, Executive Vice President of Baidu and President of the Baidu Intelligent Cloud Business Group, announced that Baidu Intelligent Cloud has become a strategic partner of the Honor large-model ecosystem.
In his on-site speech, Shen Dou said that "device-cloud collaboration" is an innovative paradigm for bringing large models into device-side applications. The device-side model better understands user intent, while the cloud-side model excels at handling complex problems and meeting users' deeper needs. Not only will the existing 8 million mobile applications be upgraded and rebuilt on large models, but many new AI-native applications will be born.
Device-cloud collaboration forms a new paradigm for device-side applications of large models.
The competition among large models has gradually shifted from a battle of technology to a battle of application deployment and ecosystem building, and the real value lies in bringing large models into thousands of households.
Shen Dou said: "Since the second half of 2023, competition among foundation models has entered a survival-of-the-fittest stage. It no longer makes sense to keep piling into foundation models this year; what everyone really cares about now is how to 'use' large models. In the era of large models, what truly brings value to enterprises is the depth with which they use large models and the speed at which they polish AI-native applications."
As large models move from research into real deployment, applying them on the device side has become a trend. Device-side application means the large model uses the device chip's own computing power to generate results directly. However, models with hundreds of billions of parameters demand enormous computing power, storage, and energy, placing high requirements on device-side chips, while device-side users also demand high performance, low latency, and data privacy. Therefore, combining large models deployed on the cloud with models running on the device is the best way to balance performance, cost, power consumption, privacy, and speed.
Baidu Intelligent Cloud and Honor have made an innovative attempt at device-cloud collaboration on MagicOS. The cloud-side general-purpose Wenxin model empowers Honor's YOYO assistant to deliver more professional user services, such as localized text creation, knowledge Q&A, and life suggestions. The cloud-side Wenxin model is paired with Honor's platform-level device-side AI model, the Magic model.
The Magic model is responsible for understanding the user's intent, turning a simple user prompt into a more professional prompt in the background; the Wenxin model then provides professional services such as knowledge Q&A and life suggestions. For example, when a user asks "help me make a fitness plan", the Magic model analyzes the user's health information, automatically generates specific prompts, and then dispatches the Wenxin model to generate a more comprehensive, personalized fitness plan. Throughout this process, the Magic model filters out sensitive information through a device-side protective net, ensuring that personal data never reaches the cloud and privacy is protected. This cooperation demonstrates a new ecological cooperation model between terminal vendors and cloud service vendors, and a close fit between Honor's and Baidu's technology ecosystems.
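The division of labor described above can be sketched in a few lines of Python. This is a hedged illustration only: the function names, the keyword-masking rule, and the canned cloud response are all hypothetical stand-ins, not part of any actual Honor or Baidu SDK.

```python
import re

# Hypothetical sketch of the device-cloud collaboration flow: the
# device-side model enriches and sanitizes the prompt, and only the
# sanitized version is sent to the cloud-side model.

SENSITIVE_PATTERNS = [r"\b\d{11}\b"]  # e.g. mask 11-digit phone numbers on device


def device_filter(text: str) -> str:
    """Device-side 'protective net': mask sensitive tokens before anything
    leaves the phone, so personal data is never uploaded to the cloud."""
    for pat in SENSITIVE_PATTERNS:
        text = re.sub(pat, "[MASKED]", text)
    return text


def device_rewrite(user_prompt: str, profile: dict) -> str:
    """Device-side model role: turn a simple user prompt into a more
    professional prompt, enriched with local context the user allows."""
    context = ", ".join(f"{k}={v}" for k, v in profile.items())
    return f"User context: {context}. Task: {user_prompt}"


def cloud_generate(prompt: str) -> str:
    """Stand-in for the cloud-side general model (knowledge Q&A, planning);
    here it simply echoes the prompt it received."""
    return f"Plan generated for: {prompt}"


def answer(user_prompt: str, profile: dict) -> str:
    enriched = device_rewrite(user_prompt, profile)
    safe = device_filter(enriched)   # sensitive info is masked on the device
    return cloud_generate(safe)      # only the sanitized prompt reaches the cloud


print(answer("help me make a fitness plan",
             {"age": 30, "phone": "13800138000"}))
```

The key design point mirrored here is that sanitization happens on the device, before any network call, which is how the article describes personal privacy being kept off the cloud.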
2024 will be the first year of the explosion of AI-native applications.
Large models will also revolutionize mobile applications. The mobile Internet era produced more than 8 million mobile applications, and in recent years the number and volume of apps on the shelves had gradually plateaued, but the rise of AI-native applications driven by large models last year has opened a new round of growth. According to statistics, the number of generative AI mobile apps grew ninefold in 2023, with AI chatbot apps growing 72-fold.
In Shen Dou's view, large models will be the key force driving operating systems and mobile applications into the next wave of growth, and will surely bring about a second explosion of mobile applications. As the threshold for using large models continues to fall, not only will the existing 8 million mobile applications be upgraded and rebuilt on large models, but many new "AI-native applications" will be born.
Being an intelligent assistant for rebuilding applications: "Copilot for Building Copilots".
Shen Dou said that there are two key steps to building an AI-native application. First, finding the right model is crucial. The Wenxin model has strong general capabilities in comprehension, generation, reasoning, and memory, and in most cases it can be called directly on the Qianfan large model platform. In scenarios where personalization, industry knowledge, or efficiency matter, dedicated models play an indispensable role. In the future, we will see foundation models such as Wenxin, platform-level device-side AI models such as Honor's Magic model, and dedicated models fine-tuned from large models, that is, a "MoE"-style mix. Dedicated models incorporate specialized knowledge, understand context better, and handle specialized tasks in a specific domain; the foundation model is more intelligent and handles more comprehensive, complex problems.
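The division between dedicated and foundation models can be illustrated with a minimal routing sketch. Everything here is hypothetical: the model names, the keyword-based router, and the bracketed outputs are illustrative stand-ins, not any real Qianfan API.

```python
# Hedged sketch of the routing idea: a dedicated model handles queries in
# its specific domain, and the general foundation model handles the rest.

SPECIALIZED = {
    "medical": lambda q: f"[medical model] {q}",
    "legal":   lambda q: f"[legal model] {q}",
}

# Toy domain detector; a real system would classify intent with a model.
KEYWORDS = {"diagnosis": "medical", "contract": "legal"}


def base_model(q: str) -> str:
    """Stand-in for the general foundation model."""
    return f"[base model] {q}"


def route(query: str) -> str:
    """Send the query to a dedicated model when a domain keyword matches;
    otherwise fall back to the more capable general foundation model."""
    for kw, domain in KEYWORDS.items():
        if kw in query.lower():
            return SPECIALIZED[domain](query)
    return base_model(query)


print(route("review this contract clause"))  # routed to the legal model
print(route("plan a weekend trip"))          # falls back to the base model
```

The point of the sketch is the trade-off the article describes: dedicated models are cheaper and more context-savvy within their domain, while the foundation model covers everything else.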
Shen Dou predicted that there will be millions of dedicated models in the future. The Baidu Intelligent Cloud Qianfan large model platform provides a complete toolchain for large-model development, Qianfan "ModelBuilder", on which users can fine-tune dedicated models economically and simply. To date, Qianfan ModelBuilder has been used to fine-tune a total of 10,000 models.
With a good model in hand, the next step is to develop the AI-native application. Large models are reconstructing the entire technology stack, data flow, and business flow of mobile applications, which poses new challenges for AI-native application development. Baidu Intelligent Cloud's AI-native application workbench, Qianfan AppBuilder, provides developers with a more professional and convenient AI development kit and resource environment, lowering the threshold for AI-native application development.
Shen Dou said: "2024 will be the first year of the explosion of AI-native applications in China. These applications are not just tools; they will become intelligent assistants in people's work and life, indispensable 'copilots'. In this process, Qianfan ModelBuilder and Qianfan AppBuilder will become the 'intelligent assistants' with which everyone rebuilds applications, that is, Copilot for Building Copilots, helping everyone bring a revolutionary and delightful experience to users."
Xinmin Evening News reporter Jin Zhigang.