Critics Mock Anthropic’s Claims Chinese AI Labs Are Stealing Its Data

In brief

  • Anthropic says three Chinese AI labs extracted Claude outputs at scale using fraudulent accounts.
  • The company claims the activity undermines export controls and strips safety safeguards.
  • Critics on X are accusing Anthropic of hypocrisy over how AI models are trained.

Anthropic has accused three Chinese AI labs of extracting millions of responses from its Claude chatbot to train competing systems, a move the company claims violates its terms of service and weakens U.S. export controls.

In a blog post published Monday, Anthropic said it identified “industrial-scale campaigns” by AI developers DeepSeek, Moonshot, and MiniMax to extract Claude’s capabilities through model distillation. The company alleged the labs generated more than 16 million exchanges using roughly 24,000 fraudulent accounts.

Anthropic’s announcement drew skepticism and mockery on X, where critics questioned its stance given how major AI models, including Claude itself, are trained. The reaction reflects a broader ongoing debate over intellectual property, copyright, and fair use in AI training.

“You trained on the open internet and then call it ‘distillation attacks’ when others learn from you,” wrote Tory Green, co-founder of AI infrastructure firm IO.Net. “Labs that like to preach ‘open research’ suddenly crying about open access.”

“Ohhh nooo not my private IP, how dare someone use that to train an AI model, only Anthropic has the right to use everyone else’s IP nooooo, this cannot stand!” another X user wrote.

Distillation is an AI training method in which a smaller model learns from the outputs of a larger one.

In cybersecurity contexts, it can also describe model extraction attacks, in which an attacker with legitimate access systematically queries a system and trains a competing model on its responses.
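To illustrate the core mechanism, here is a minimal toy sketch of distillation: a "student" model's output distribution is nudged toward a "teacher" model's softened output distribution. This is a simplified illustration of the general technique, not Anthropic's or any lab's actual training code; the logits, temperature, and learning rate are arbitrary toy values.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to a probability distribution, optionally softened
    by a temperature > 1 (higher temperature flattens the distribution)."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distill_step(teacher_logits, student_logits, temperature=2.0, lr=0.5):
    """One gradient step moving the student's distribution toward the
    teacher's. For a softmax + cross-entropy objective against the
    teacher's soft targets, the gradient with respect to the student's
    logits is proportional to (student_probs - teacher_probs)."""
    t_probs = softmax(teacher_logits, temperature)
    s_probs = softmax(student_logits, temperature)
    return [z - lr * (sp - tp)
            for z, sp, tp in zip(student_logits, s_probs, t_probs)]

# Toy example: a 3-class teacher and an untrained (uniform) student.
teacher = [4.0, 1.0, 0.5]   # teacher strongly prefers class 0
student = [0.0, 0.0, 0.0]   # student starts uniform

for _ in range(200):
    student = distill_step(teacher, student)

# After training, the student's (softened) distribution closely
# approximates the teacher's, without ever seeing the teacher's weights.
```

The key point, and the reason extraction at scale works, is that only the teacher's *outputs* are needed: in a real extraction campaign, the "teacher distribution" would come from querying a deployed model's API rather than from local logits.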

“These campaigns are growing in intensity and sophistication,” Anthropic wrote Monday. “The window to act is narrow, and the threat extends beyond any single company or region. Addressing it will require rapid, coordinated action among industry players, policymakers, and the global AI community.”

“Distillation can be legitimate: AI labs use it to create smaller, cheaper models for their customers,” Anthropic wrote in a separate X post. “But foreign labs that illicitly distill American models can remove safeguards, feeding model capabilities into their own military, intelligence, and surveillance systems.”

In June, Reddit sued Anthropic, accusing it of scraping more than 100,000 posts and comments and using the data to fine-tune Claude.

The case joins lawsuits against OpenAI, Meta, and Google over the large-scale scraping of online content without permission.

“[There’s] the public face that attempts to ingratiate itself into the consumer’s consciousness with claims of righteousness and respect for boundaries and the law, and the private face that ignores any rules that interfere with its attempts to further line its pockets,” the Reddit lawsuit said.

Anthropic said it is expanding detection, tightening account verification, sharing intelligence with other labs and authorities, and adding safeguards to limit future distillation attempts.

“But no company can solve this alone,” Anthropic wrote. “As we noted above, distillation attacks at this scale require a coordinated response across the AI industry, cloud providers, and policymakers.”

Source: https://decrypt.co/358911/critics-mock-anthropics-claims-chinese-ai-labs-stealing-data