Alibaba Cloud, the cloud computing subsidiary of Alibaba Group, is open-sourcing its 7-billion-parameter large language models (LLMs), Qwen-7B and Qwen-7B-Chat, to interested developers and organizations.
Alibaba Cloud is working with ModelScope and the collaborative AI platform Hugging Face in open-sourcing the LLMs.
“By open-sourcing our proprietary large language models, we aim to promote inclusive technologies and enable more developers and SMEs to reap the benefits of generative AI,” said Jingren Zhou, CTO of Alibaba Cloud Intelligence.
Earlier this year, in April, Alibaba Cloud introduced its Tongyi Qianwen language model, designed to generate content in both English and Chinese. The model comes in various sizes, including versions with more than 7 billion parameters.
Alibaba Cloud has made the models freely available for commercial use by companies with up to 100 million monthly active users. Larger organizations can request a license from Alibaba Cloud.
Large Language Models
Qwen-7B was pre-trained on over 2 trillion tokens, including Chinese, English and other multilingual materials, code, and mathematics, covering general and professional fields, and supports a context length of 8K tokens. The Qwen-7B-Chat model was additionally aligned with human instructions during training. Both Qwen-7B and Qwen-7B-Chat can be deployed on cloud and on-premises infrastructures, allowing users to fine-tune the models and build their own high-quality generative models effectively and cost-efficiently.
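As a rough illustration of what such a deployment can look like, the sketch below loads the chat model for local inference through the Hugging Face transformers library. The repository id "Qwen/Qwen-7B-Chat", the trust_remote_code flag, and the example prompt are assumptions about how the open-sourced weights are published, not details confirmed in this article.

```python
# Minimal sketch: local inference with Qwen-7B-Chat via Hugging Face transformers.
# The repository id and the trust_remote_code flag are assumptions about how the
# open-sourced weights are distributed, not details stated in the article.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen-7B-Chat"  # assumed Hugging Face repository id

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",        # place weights on available GPU(s) or fall back to CPU
    trust_remote_code=True,   # the repo may ship custom modeling code
)

# Generate a short completion from a plain prompt.
prompt = "Explain what a large language model is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same loading path works for fine-tuning workflows: once the weights are downloaded, they can be adapted on private data and served entirely within a company's own infrastructure.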
The pre-trained Qwen-7B model distinguished itself in the Massive Multi-task Language Understanding (MMLU) benchmark, scoring a notable 56.7 and outperforming other major open-source pre-trained models of similar scale, as well as some larger models. This benchmark assesses a text model’s multitask accuracy across 57 varied tasks, encompassing fields such as elementary mathematics, computer science, and law.
According to Alibaba Cloud, Qwen-7B achieved the highest score among models with equivalent parameters on the leaderboard of C-Eval, a comprehensive Chinese evaluation suite for foundation models covering 52 subjects across four categories: humanities, social sciences, STEM, and others. Qwen-7B also performed strongly on mathematics and code-generation benchmarks such as GSM8K and HumanEval.