Alibaba Cloud’s latest large language model, Qwen2.5-Max, has secured high rankings on Chatbot Arena, a platform that evaluates AI chatbots. The model placed seventh overall, ranking first in math and coding and second in handling complex prompts.
Qwen2.5-Max is a Mixture of Experts (MoE) model, pretrained on over 20 trillion tokens and further refined with Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF). These enhancements have strengthened its performance in coding, knowledge tasks, and alignment with human preferences. The model also achieved leading scores on industry benchmarks such as MMLU-Pro, LiveCodeBench, and Arena-Hard.
Businesses and developers can access Qwen2.5-Max through Model Studio, Alibaba Cloud’s AI development platform. The model is also available for use on Qwen Chat.
Alibaba Cloud has been expanding its AI offerings over the past year. Last month, it introduced Qwen2.5-VL, an open-source visual-language model designed to assist with tasks on computers and mobile devices. The company also released Qwen2.5-1M, which can process long text inputs of up to 1 million tokens.

Earlier this year, during its Global Developer Summit in Jakarta, Alibaba Cloud introduced new AI development tools and infrastructure upgrades. The company continues to build on its AI capabilities to meet growing demands from developers and businesses worldwide.