Tongyi Qianwen's 72-billion-parameter model announced as open source, with some of its performance surpassing GPT-3.5 and GPT-4

Mondo Technology Updated on 2024-01-19

Alibaba Cloud Tongyi Qianwen's 72-billion-parameter model Qwen-72B was recently announced as open source. The model is trained on 3T tokens of high-quality data and achieved the best open-source-model scores in 10 authoritative benchmark evaluations, surpassing the closed-source GPT-3.5 and GPT-4 in some of them.

On English tasks, Qwen-72B achieved the highest open-source-model score on the MMLU benchmark. On Chinese tasks, Qwen-72B surpassed GPT-4 on benchmarks such as C-Eval, CMMLU, and GaokaoBench. In mathematical reasoning, Qwen-72B leads other open-source models on the GSM8K and MATH evaluations. In code tasks, Qwen-72B's performance on HumanEval, MBPP, and other assessments improved substantially, a qualitative leap in capability.

According to reports, Qwen-72B can handle long text inputs of up to 32K tokens, surpassing ChatGPT-3.5-16k on the long-text comprehension test set L-Eval. The R&D team optimized Qwen-72B's instruction-following and tool-use skills so that it can be integrated more easily into downstream applications. For example, Qwen-72B ships with a powerful system-prompt capability: with a single prompt, users can customize an AI assistant, instructing the large model to play a given role or perform a specific kind of response task.
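As a rough illustration of how such role customization works, Qwen chat models use the ChatML conversation format, in which a single "system" message at the start of the prompt fixes the assistant's role. The helper below is a minimal sketch that only assembles the prompt string; the template follows the published ChatML convention, and the example role text is invented for illustration.

```python
# Sketch: build a ChatML prompt with a custom system role, the mechanism
# behind "customize an AI assistant with a single prompt". The special
# tokens follow the ChatML format used by Qwen chat models.

def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a ChatML prompt whose system message sets the assistant's role."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

# Hypothetical role: the same model now answers only as a contract-law aide.
prompt = build_chatml_prompt(
    system="You are a careful legal assistant. Answer only questions about contract law.",
    user="Summarize the key clauses of an NDA.",
)
print(prompt.startswith("<|im_start|>system"))  # → True
```

Everything after the trailing `<|im_start|>assistant\n` is then generated by the model, so the system message constrains every later turn of the conversation.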

Alongside Qwen-72B, Tongyi Qianwen also open-sourced the 1.8-billion-parameter model Qwen-1.8B and the audio model Qwen-Audio. So far, Tongyi Qianwen has open-sourced four large language models with 1.8 billion, 7 billion, 14 billion, and 72 billion parameters, as well as two multimodal large models for visual understanding and audio understanding, realizing "full-size, full-modality" open source. Based on Qwen-72B, large and medium-sized enterprises can develop commercial applications, and universities and research institutes can carry out scientific research such as AI for Science.

From 1.8B to 72B, Tongyi Qianwen took the lead in realizing full-scale open source.

If Qwen-72B "reaches upward", raising the size and performance ceiling of open-source large models, then the other model open-sourced at the conference, Qwen-1.8B, "reaches downward": it is the smallest open-source model in China, needing only 3GB of video memory to run inference over 2K of text, so it can be deployed on consumer-grade devices.
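A back-of-envelope calculation makes the 3GB figure plausible. The sketch below assumes int4-quantized weights, an fp16 KV cache, and Qwen-1.8B-like dimensions (24 layers, hidden size 2048); these figures are illustrative assumptions, not numbers from the article.

```python
# Rough VRAM estimate for a 1.8B-parameter model serving a 2K context.
# Assumptions (not from the article): int4 weights, fp16 KV cache,
# 24 transformer layers, hidden size 2048.

def weight_bytes(n_params: float, bits_per_weight: int) -> float:
    """Memory for model weights at the given quantization width."""
    return n_params * bits_per_weight / 8

def kv_cache_bytes(layers: int, hidden: int, tokens: int, bytes_per_elem: int = 2) -> int:
    """K and V each store layers x tokens x hidden elements."""
    return 2 * layers * tokens * hidden * bytes_per_elem

weights = weight_bytes(1.8e9, 4)        # ~0.9 GB of int4 weights
kv = kv_cache_bytes(24, 2048, 2048)     # ~0.4 GB of fp16 KV cache
total_gb = (weights + kv) / 1e9
print(f"{total_gb:.2f} GB")             # → 1.30 GB
```

Even with activation buffers and framework overhead on top of this ~1.3GB, the total stays comfortably under the quoted 3GB budget.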

From 1.8 billion through 7 billion and 14 billion to 72 billion parameters, Tongyi Qianwen has become the industry's first "full-scale open source" large model family. Users can try the Qwen-series models directly in the ModelScope community, call the model APIs through Alibaba Cloud's Lingji platform, or build custom large-model applications on the Alibaba Cloud Bailian platform. Alibaba Cloud's AI platform PAI has been deeply adapted to the full Tongyi Qianwen lineup, offering services such as lightweight fine-tuning, full-parameter fine-tuning, distributed training, offline inference validation, and service deployment.
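To illustrate the API path mentioned above, the sketch below assembles a request body in the shape used by the DashScope (Lingji) text-generation HTTP endpoint. The endpoint URL, field names, and the model identifier `qwen-72b-chat` are assumptions based on the public DashScope documentation, not details given in the article, and an API key would be required to actually send the request.

```python
# Sketch: request body for the DashScope (Lingji) text-generation API.
# Field names and the model identifier are assumptions; sending the
# request additionally needs an Authorization: Bearer <api-key> header.
import json

DASHSCOPE_URL = "https://dashscope.aliyuncs.com/api/v1/services/aigc/text-generation/generation"

def build_request(model: str, system: str, user: str) -> dict:
    """Build the JSON body for a single-turn chat completion."""
    return {
        "model": model,
        "input": {
            "messages": [
                {"role": "system", "content": system},
                {"role": "user", "content": user},
            ]
        },
        "parameters": {"result_format": "message"},
    }

body = build_request("qwen-72b-chat", "You are a helpful assistant.", "Hello")
print(json.dumps(body, ensure_ascii=False)[:30])
```

The same body shape works for the other hosted Qwen sizes by swapping the `model` field.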

Alibaba Cloud was the first major Chinese technology company to open-source a self-developed large model, releasing Qwen-7B, Qwen-14B, and the visual understanding model Qwen-VL since August. These models have repeatedly topped the HuggingFace and GitHub trending lists and are favored by small and medium-sized enterprises and individual developers, with cumulative downloads exceeding 1.5 million and more than 150 new models and applications built on them. At the press conference, several developer partners shared their experience using Qwen to build exclusive models and specific applications.

Zhou Jingren, CTO of Alibaba Cloud, said that the open source ecosystem is crucial to promoting the technological progress and application of China's large models, and Tongyi Qianwen will continue to invest in open source, hoping to become "the most open large model in the AI era" and work with partners to promote the construction of the large model ecosystem.

The Tongyi Qianwen base model continues to evolve, and its multimodal exploration leads the industry.

Tongyi Qianwen's exploration of multimodal large models is also a step ahead of the industry: on the same day, Alibaba Cloud open-sourced the audio understanding large model Qwen-Audio for the first time.

Qwen-Audio can perceive and understand various audio signals such as human speech, natural sounds, animal sounds, and music. Users can input a piece of audio and ask the model for its understanding of it, or even use the audio as a starting point for literary creation, logical reasoning, story continuation, and so on. Audio understanding gives large models something close to human hearing.

The Tongyi models can both "listen" and "see". In August, Tongyi Qianwen open-sourced the visual understanding model Qwen-VL, which quickly became one of the best practices in the international open-source community. The conference also announced a major update to Qwen-VL, greatly improving its basic capabilities in general OCR, visual reasoning, and Chinese text understanding; it can also process images of various resolutions and specifications, and even solve problems posed in pictures. In both authoritative evaluations and real human experience, Qwen-VL's Chinese text comprehension greatly exceeds that of GPT-4V.

Tongyi Qianwen's closed-source model also continues to evolve: version 2.0 was released a month ago and has recently advanced to version 2.1, in which the context window was expanded to 32K, while comprehension and generation, mathematical reasoning, Chinese and English encyclopedic knowledge, and resistance to induced hallucination improved by nearly 5%, nearly 5%, and 14% across these dimensions. Users can try the latest closed-source model for free in the Tongyi Qianwen app.

Author: Xu Jinghui.

Text: Xu Jinghui Editor: Bo Xiaobo Responsible Editor: Rong Bing.

*Please indicate the source of this article.
