China's open-source model topped the HuggingFace rankings


On December 8, it was reported that Hugging Face, the world's largest open-source large-model community, had announced the latest edition of its open large-model leaderboard, on which Alibaba Cloud's Tongyi Qianwen beat Llama 2 and other domestic and foreign open-source large models to take the top spot.

Tongyi Qianwen-72B topped HuggingFace's Open LLM Leaderboard

HuggingFace's Open LLM Leaderboard is currently the most authoritative ranking in the large-model field, covering hundreds of open-source models worldwide; its test dimensions span six major evaluations, including reading comprehension, logical reasoning, mathematical calculation, and question answering. Tongyi Qianwen (Qwen-72B) performed strongly with an overall score of 73.6, ranking first among all pre-trained models.

At the beginning of December, Alibaba Cloud announced that it had officially open-sourced Qwen-72B, the 72-billion-parameter Tongyi Qianwen large language model. The model achieved the best results on 10 authoritative benchmarks, becoming the strongest open-source large model in the industry, surpassing the open-source benchmark Llama 2-70B and most commercial closed-source models, and it is suited to enterprise-level and research-grade high-performance applications.

Alibaba Cloud was the first technology company in China to open-source its self-developed large models. Since August this year it has successively open-sourced Qwen-7B, Qwen-14B, and Qwen-1.8B, along with the visual-understanding model Qwen-VL and the audio-understanding model Qwen-Audio, becoming the first to achieve "full-size, full-modality" open-sourcing of large models. Several of these models have topped HuggingFace and GitHub large-model lists and are widely favored by small and medium-sized enterprises and individual developers, with cumulative downloads exceeding 1.5 million and giving rise to more than 150 new models and applications.
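
For developers who want to try one of these open-sourced checkpoints, a minimal sketch of loading a Qwen model through the Hugging Face transformers library looks roughly like the following; the repository id "Qwen/Qwen-7B", the prompt, and the generation settings are illustrative assumptions, not details from the report.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen-7B"  # assumed repository id; other Qwen sizes follow the same pattern

# Qwen repositories ship custom modeling code, so trust_remote_code is required
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",       # spread the weights across available GPUs
    trust_remote_code=True,
)

inputs = tokenizer("Give a short introduction to large language models.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))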
