
Chinese cloud giant Alibaba's Qwen family of open-weight models has overtaken Meta's Llama models on Hugging Face.
Stanford HAI
ZDNET Highlights
- Chinese AI models have surpassed American models in power and performance.
- China is the leader in model openness.
- Most of the world can adopt freely available Chinese technology.
American artificial intelligence startup OpenAI started out with a mission of transparency in AI, a mission it abandoned in 2022 when the company began withholding details of its technology.
Chinese companies and institutions have stepped into the breach.
Also: A new Chinese AI model claims to outperform GPT-5 and Sonnet 4.5 – and it’s free
"Leadership in AI now depends not only on proprietary systems, but also on the reach, adoption, and normative influence of open-weight models around the world," wrote Caroline Meinhardt, policy research manager at Stanford University's Human-Centered AI Institute (HAI) and lead author of a report released last week, "Beyond DeepSeek: China's diverse open-source AI ecosystem and its policy implications."
(Disclosure: ZDNET’s parent company Ziff Davis filed a lawsuit against OpenAI in April 2025, alleging it infringed Ziff Davis copyrights in the training and operation of its AI systems.)
"Today, Chinese-made open-weight models are indispensable in the global competitive AI landscape," Meinhardt and colleagues said.
The report shows that Chinese large language models (LLMs), such as Alibaba's Qwen family of models, are statistically competitive with the Claude family of large language models from Anthropic, another US startup, and not far behind the best models from OpenAI and Google.
Also: As Meta fades in open-source AI, Nvidia senses a chance to lead
Looking more broadly, the growing power of Qwen, DeepSeek, and other Chinese models is fueling a "global diffusion" movement, the HAI scholars wrote: countries around the world, especially in the developing world, are adopting Chinese models as a cheaper alternative to trying to build their own AI from scratch.
This rise comes as Meta, the former leader in open-source AI, has slipped in the AI rankings and is now moving toward the closed-source approach of OpenAI, Google, and Anthropic.
As a result, according to HAI, "widespread global adoption of Chinese open-weight models could reshape global technology access and dependence patterns and have implications for AI governance, security, and competition."
A technological giant leap
The picture has shifted: DeepSeek AI's R1 large language model made waves around the world earlier this year because of its low development cost, but attention has now turned to a growing group of technology powerhouses from Alibaba and Asian startups, including Singapore-based Moonshot AI, maker of Kimi K2, and China's Z.ai, maker of GLM, wrote Meinhardt and team.
Also: What is DeepSeek AI? Is it safe? Here's everything you need to know
China’s AI labs have operated under a US export ban that restricts the country’s access to the most cutting-edge technology from the US, such as Nvidia’s best GPU chips.
That constraint has imposed a discipline that increased efficiency in Chinese labs, and it is now translating into concrete technological advances.
"Chinese open-weight models now perform at nearly state-of-the-art levels on major benchmarks and leaderboards, including general reasoning, coding, and tool use," Meinhardt and team wrote, citing data from the popular LMArena site.
And the top 22 Chinese open models outperform OpenAI's own "open-weight" model, GPT-OSS, they wrote.
Although benchmarks and rankings have many issues, such as potential "gaming" of scores, the authors note that other indices, such as the Epoch Capabilities Index and the Artificial Analysis Intelligence Index, "show Chinese models competing with their American and other international counterparts."
There is another measure by which Qwen and the rest are gaining ground: uploads of their code to the Hugging Face model-hosting platform.
"In September 2025, 63% of all new fine-tuned or derived models released on Hugging Face were Chinese fine-tuned or derived models," the authors wrote. "Along with anecdotal stories about adoption, these data points suggest a variety of contexts and geographic areas where Chinese models have been adopted."
Also: DeepSeek AI could be about to shake up the world again – as we know it
Also in September, "Alibaba's Qwen model family became the most downloaded LLM family on Hugging Face, surpassing (Meta's) Llama."
By those measures, "Chinese open models now appear to be surpassing their American counterparts when it comes to downstream access," they wrote.
More openness from China
Not only rising technological efficiency but also greater “openness” is fueling China’s rise.
What constitutes an “open” AI model can vary depending on a number of factors. Traditionally, Meta and others only offered the “weights” of their trained AI models, such as Meta’s Llama family of models. They did not disclose or post the terabytes of training data they used. Such models are considered “open-weight” models, but not truly open-source.
Data availability is important as it enables developers to deploy AI models effectively and increases the reliability of their outputs.
Also: Alibaba’s Quen AI chatbot claims 10 million downloads in its first week — here’s what it offers
While data disclosure is still relatively rare, HAI said, Chinese companies, after initial reluctance, are offering increasingly permissive licenses for their open-source models.
"Both Qwen3 and DeepSeek R1 are more capable and are released with more permissive licenses (Apache 2.0 and the MIT License), which allow broader use, modification, and redistribution," they wrote.
The authors noted that the CEO of Chinese search giant Baidu, which produces the Ernie family of models, was once "one of the strongest voices in China" extolling the advantages of proprietary models, but has since "made a U-turn," open-sourcing Ernie in June 2025.
A global spread
As a result of their technical efficiency and greater openness, Chinese models are increasingly becoming a way for developers around the world to access free code and create efficient, tunable models for a variety of purposes.
"Distillation" refers to taking an existing AI model and using it to train a smaller, more efficient one. A developer effectively piggybacks on the large budget invested by Alibaba or another major developer, endowing a small model with capabilities learned from the larger one.
That distillation is now leading to the “proliferation” of Chinese AI, the authors wrote.
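The core mechanic of distillation can be illustrated in a few lines: the teacher's output logits are softened with a temperature and the student is penalized, via KL divergence, for deviating from that soft distribution. This is a minimal, framework-free sketch using NumPy; the function names and toy logits are illustrative, not drawn from the HAI report.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Numerically stable softmax with temperature scaling."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # shift for stability; does not change the result
    e = np.exp(z)
    return e / e.sum()

def distill_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student
    distributions, scaled by T^2 as in standard distillation recipes."""
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's predictions
    kl = np.sum(p * (np.log(p) - np.log(q)))
    return float(kl) * temperature * temperature

# Toy logits over three classes (hypothetical values).
teacher = [4.0, 1.0, 0.5]
# A student that matches the teacher incurs essentially zero loss...
loss_match = distill_loss(teacher, [4.0, 1.0, 0.5])
# ...while a student with reversed preferences is heavily penalized.
loss_mismatch = distill_loss(teacher, [0.5, 1.0, 4.0])
print(loss_match, loss_mismatch)
```

In practice this loss is minimized by gradient descent over the student's parameters, often mixed with an ordinary cross-entropy term on ground-truth labels; the sketch only shows how the teacher's "dark knowledge" enters the objective.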
Also: AI’s scary new trick: carrying out cyberattacks instead of just helping
“The widespread availability of high-performance Chinese AI models opens up new avenues for organizations and individuals in less computationally resourced parts of the world to access advanced AI,” Meinhardt and team wrote, “thereby shaping global AI diffusion and cross-border technological dependency patterns.”
The authors predict that the diffusion trend will become self-perpetuating, as economics outweigh the benchmark achievements of OpenAI and other closed frontier AI models.
“With model performance approaching limits, AI adopters with limited resources to build advanced models, particularly in low- and middle-income countries, may prioritize affordable and reliable access to enable industrial upgrading and other productivity gains,” they wrote.
And it's not just the developing world. "American companies, from established big tech companies to some of the most highly publicized AI startups, are widely adopting Chinese open-weight models," the authors observed. "The existence of open-source Chinese models at a sufficient scale could reduce the reliance of global actors on US companies providing models through APIs."
Lots of warnings
There are several caveats to growing Chinese dominance. Open-weight models still do not provide enough transparency to ease many concerns about the Chinese government's involvement in their development.
While open-weight models can be run on any computer of sufficient power, many users, HAI said, “will use apps, APIs, and integrated solutions offered by DeepSeek, Alibaba, and others.”
Also: The best free AI for coding – only 3 available now
As a result, “this typically means that user data is under the control of these companies and can physically travel to China, potentially exposing the information to legal or extra-legal access by the Chinese government or corporate competitors.”
And, the authors stressed, Chinese developers like DeepSeek appear to have fewer concerns about guardrails and other "responsible AI" parameters. "An evaluation conducted by CAISI, the US government's AI testing center, found that DeepSeek models were, on average, 12 times more vulnerable to jailbreaking attacks than comparable US models," they wrote.
“Other independent assessments by security researchers also show that DeepSeek’s guardrails can be easily bypassed.”
Those concerns mean China's ultimate influence remains uncharted territory. Still, the report matches comments from seasoned observers who see the rise of China, and the slowing of AI benchmark gains, as signs that the primacy of US commercial firms is waning.
Also: Is OpenAI doomed? Expert warns open-source models could crush it
As AI scholar Kai-Fu Lee observed earlier this year, large language models have now become commodities, making OpenAI's business model vulnerable to the economics of open-source AI like DeepSeek.
More broadly, the report offers strong evidence that China will continue to have a role in global AI, and that the West may have less of a role in controlling the technology in the coming years than it did when OpenAI’s ChatGPT made headlines.

