Anthropic Alleges Chinese AI Firms Used Claude to Boost Their Own Models
Artificial intelligence company Anthropic has accused several Chinese AI firms of improperly using its Claude chatbot to enhance their own AI systems, raising fresh concerns about AI security, intellectual property, and global technology competition.
According to the company, three prominent Chinese AI developers — DeepSeek, Moonshot AI, and MiniMax — created thousands of fake accounts to interact with Claude and extract capabilities that could be used to train their own AI models.
Anthropic says the activity violated its terms of service and highlights the growing tension between the United States and China over the future of artificial intelligence.
Alleged “Distillation” Campaign Involving Millions of Interactions
In a blog post outlining the issue, Anthropic claimed the companies conducted more than 16 million interactions with Claude using roughly 24,000 fraudulent accounts.
The purpose of these interactions was to carry out a process known as AI distillation, where a smaller or less advanced model learns by studying the outputs generated by a more powerful system.
While distillation is widely used within companies to optimize their own models, many AI firms prohibit external developers from using their systems in this way.
Anthropic said the alleged campaigns were designed to replicate advanced capabilities, including reasoning, coding, data analysis, and the use of agent-based tools.
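In broad strokes, distillation means querying a stronger "teacher" model and fitting a "student" model to its responses, without ever seeing the teacher's internals. The toy sketch below illustrates the idea only; the teacher function, the linear student, and the data are all invented for illustration and bear no relation to how Claude or the named companies' models actually work.

```python
# Toy illustration of distillation: a "student" learns to imitate a
# "teacher" purely from the teacher's input/output behavior.

def teacher(x):
    # Stand-in for a powerful model: here, just a fixed linear function.
    return 2.0 * x + 1.0

# Step 1: query the teacher many times and record its answers,
# analogous to collecting model outputs at scale.
queries = [float(i) for i in range(100)]
transcripts = [(x, teacher(x)) for x in queries]

# Step 2: fit the student (a line y = w*x + b) to the recorded outputs
# with ordinary least squares. The student never sees the teacher's
# internals -- only its responses.
n = len(transcripts)
sx = sum(x for x, _ in transcripts)
sy = sum(y for _, y in transcripts)
sxx = sum(x * x for x, _ in transcripts)
sxy = sum(x * y for x, y in transcripts)
w = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - w * sx) / n

def student(x):
    # The student now reproduces the teacher's behavior on new inputs.
    return w * x + b
```

Real distillation trains a neural network on teacher-generated text or probability distributions rather than fitting a line, but the pipeline is the same: collect outputs at scale, then train a cheaper model to match them.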
DeepSeek, Moonshot, and MiniMax Named in the Allegations
The company identified three AI developers as key participants in the activity.
- DeepSeek reportedly targeted reasoning tasks and policy-sensitive prompts.
- Moonshot AI focused on advanced reasoning abilities, coding functions, and tool integration.
- MiniMax allegedly experimented with agent-based coding and orchestration capabilities.
Anthropic stated that in one case involving MiniMax, the company shifted its strategy within 24 hours after Anthropic released a new model update, redirecting traffic to extract features from the latest system.
At the time the activity was detected, Anthropic said MiniMax had not yet released the model it was training.
As of publication, the companies named in the report had not publicly responded to the allegations.
AI Security and National Risk Concerns
Anthropic warned that models built through improper distillation could lack the safety guardrails and content moderation systems present in leading commercial AI systems.
Without these safeguards, the company said such models could potentially be used for harmful purposes, including cyberattacks, disinformation campaigns, or other security threats.
The firm also suggested that the issue underscores the importance of export controls on advanced computer chips, which are necessary for training high-performance AI systems.
Anthropic argued that restricting access to powerful chips could limit both direct AI development and the ability to scale large distillation campaigns.
Growing AI Rivalry Between the US and China
The allegations come amid intensifying global competition in artificial intelligence.
Earlier this month, OpenAI also warned U.S. lawmakers that Chinese AI developers might be attempting to replicate capabilities from systems like ChatGPT.
Meanwhile, Chinese startups such as DeepSeek have surprised industry observers by releasing highly capable models that rival Western systems while reportedly using fewer computing resources.
These rapid advancements have sparked debate about whether current export controls and technological restrictions are sufficient to maintain the United States’ leadership in AI development.
Anthropic says the growing sophistication of distillation campaigns highlights how quickly the global AI race is evolving.