Xiaomi announced on Tuesday the open-source release of its MiMo-V2.5 series of AI models under the MIT license, allowing commercial inference deployment and secondary training without additional authorization. The initiative is led by Luo Fuli, Xiaomi's AI lead.
MiMo-V2.5 Pro, a Mixture of Experts (MoE) model, has 1.02 trillion total parameters with 42 billion active parameters. It supports a context length of up to 1 million tokens and integrates a lightweight multi-token prediction module that triples output throughput. The model was trained on 27 trillion tokens using mixed-precision FP8.
According to Xiaomi's published evaluation results, the model ranked first overall across multiple benchmarks, including GDPVal-AA and Claw-Eval, outperforming DeepSeek's latest open-source DeepSeek-V4-Pro and several mainstream closed-source models.
A separate MiMo-V2.5 model, a 310 billion-parameter sparse MoE multimodal model with 15 billion active parameters, was trained on 48 trillion tokens and equipped with proprietary vision and audio encoders.
On the first day of release, MiMo-V2.5 Pro was already supported on seven chip platforms: Alibaba's T-Head, Amazon Web Services (via Trainium2), AMD's ROCm open-source stack, Baidu's Kunlun, Enflame, MetaX and Iluvatar CoreX, as well as by the SGLang and vLLM inference frameworks.
In a separate interview on a business podcast, Luo said her 100-person AI team has adopted an ultra-flat structure. To encourage use of the OpenClaw AI agent, she set a target of at least 100 dialogue rounds per person per day, which she said compressed roughly 40 weeks' worth of research output into three to four weeks.
Luo expressed optimism about AGI, predicting it could be achieved within two years, saying current progress is at about 20 percent and could reach 60 to 70 percent by the end of this year.