Anthropic says DeepSeek, Moonshot, and MiniMax ran coordinated distillation campaigns on Claude AI

In a major development shaking the artificial intelligence world, U.S.-based AI company Anthropic has accused three Chinese AI labs of running massive operations to extract knowledge and capabilities from its advanced AI model, Claude. According to the company, these attacks involved tens of thousands of fake accounts and millions of interactions, highlighting the challenges of protecting sophisticated AI systems in a competitive global market.

The attacks reportedly aimed to copy Claude’s advanced reasoning, coding, and problem-solving abilities without authorization. The scale, coordination, and technical sophistication of these campaigns have raised alarms across the AI industry, as companies struggle to secure their models against theft or misuse. Experts say such incidents could undermine the safety and ethical use of artificial intelligence worldwide.

What Happened and How It Worked

Anthropic revealed that three Chinese labs, DeepSeek, Moonshot AI, and MiniMax, ran coordinated campaigns to exploit Claude using a technique known as distillation. Distillation is a common AI method in which a smaller or less powerful model learns from the outputs of a larger, more advanced model. While this is typically used legitimately to create faster or more efficient versions of a company’s own AI, applying it to another company’s model without permission violates that company’s terms of service.
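In practice, the first stage of distillation is simply data collection: the attacker queries the stronger model at scale and records its answers as training examples for a smaller "student" model. The sketch below illustrates that idea in miniature; all names and the stand-in `teacher` function are hypothetical and not taken from Anthropic's or the labs' actual systems.

```python
# Minimal sketch of the data-collection stage of distillation.
# "teacher" stands in for calls to a large model's API; in a real
# attack this would be the target model, queried at massive scale.

def teacher(prompt: str) -> str:
    # Placeholder for the large model: returns a detailed answer,
    # including the step-by-step reasoning the attacker wants to copy.
    return f"Step-by-step answer to: {prompt}"

def collect_distillation_data(prompts):
    # Each prompt/response pair becomes one supervised training
    # example for the smaller student model.
    return [{"prompt": p, "completion": teacher(p)} for p in prompts]

dataset = collect_distillation_data(
    ["Sort a list in Python", "Explain recursion"]
)
```

A student model fine-tuned on millions of such pairs can absorb much of the teacher's reasoning style and capability, which is why the scale of the reported campaigns (tens of thousands of accounts, millions of exchanges) matters.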

The attackers reportedly created approximately 24,000 fraudulent accounts to interact with Anthropic’s Claude, generating over 16 million exchanges. They masked their activities using networks of fake accounts and commercial proxy services, making detection extremely difficult. These “hydra clusters” allowed the attackers to continue their operations even if some accounts were blocked or flagged.

By feeding Claude carefully crafted prompts, the labs were able to extract step-by-step reasoning, coding logic, and problem-solving methods. This allowed them to train their own AI systems with capabilities that normally take years and millions of dollars to develop.

Anthropic emphasized that this method of capability transfer bypasses many of the safeguards designed to prevent misuse. AI models developed in the United States often include protections against harmful applications, such as cyberattacks, surveillance abuse, or the creation of dangerous biological agents. Illegally distilled copies may lack these controls, potentially allowing sensitive AI capabilities to fall into unsafe hands.

The Scale of the Campaigns

The attacks were conducted in three main campaigns, each focusing on different aspects of Claude’s intelligence. DeepSeek’s campaign involved over 150,000 interactions and targeted advanced reasoning and step-by-step problem solving while avoiding politically sensitive queries. Moonshot AI engaged in more than 3.4 million exchanges, focusing on coding, data analysis, tool use, and computer vision. Their efforts were so sophisticated that they attempted to reconstruct Claude’s internal reasoning patterns.

MiniMax ran the largest campaign, with over 13 million interactions, concentrating on coding, tool orchestration, and agent-like reasoning. The lab reportedly adjusted its strategy within 24 hours whenever Anthropic released a new model, redirecting nearly half of its traffic to the updated system.

Anthropic confirmed it could identify these attacks with high confidence using account behavior, IP addresses, request metadata, and infrastructure fingerprints. In several instances, the data even matched the public profiles of researchers at the labs, providing further evidence of coordinated activity.

The scale of these campaigns demonstrates that advanced AI systems are increasingly seen as strategic assets. The attacks were not isolated incidents but part of highly organized and continuous efforts to capture proprietary intelligence at an industrial scale.

Risks and Company Response

Anthropic warned that illegally distilled copies of Claude are unlikely to include the safety protections present in the original system. Without these safeguards, AI capabilities could be misused in dangerous ways, including military operations, surveillance, or cyberattacks. The company is investing heavily in new detection systems, including behavioral fingerprinting and specialized classifiers to identify coordinated attempts to extract reasoning patterns.

In addition to internal safeguards, Anthropic is sharing technical indicators with other AI labs, cloud providers, and authorities to prevent similar attacks. The company stressed that no single organization can handle the problem alone and called for coordinated industry-wide action to secure advanced artificial intelligence systems.

These disclosures come as part of broader concerns about intellectual property theft and safety in the fast-growing AI sector, which continues to see intense global competition.

The attacks have also renewed debates about the ethics of AI sharing, international regulation, and export controls. Restricting access to advanced hardware and software may help prevent illegal training or large-scale distillation attacks, but companies and governments alike are facing challenges in keeping pace with the rapid evolution of AI technology.
