Musk Calls Out Anthropic for Hypocrisy in AI Data Theft After $1.5B Deal

Musk calls out Anthropic over Claude training data

Anthropic, the company behind Claude AI, recently accused three Chinese AI labs (DeepSeek, Moonshot AI, and MiniMax) of industrial-scale distillation attacks. Anthropic claims these companies used its AI models to train their own less capable ones through a technique called distillation, and that they created around 24,000 fake accounts to generate over 16 million exchanges.

The episode raised concerns about AI data theft and training practices, many of which fall into a legal grey area. Many users, including Elon Musk, called out Anthropic for hypocrisy, since the company has itself faced serious allegations over how Claude was trained.


Why Musk Calls Out Anthropic Over AI Data Theft and Its $1.5B Settlement

Anthropic hypocrisy and AI data theft
Source: Reuters

Responding to the developments, Elon Musk argued that Anthropic itself is guilty of stealing training data and has paid a multi-billion-dollar settlement for it.

While making the claims, Musk reposted a post containing two screenshots of proposed Community Notes. They stated that Claude's training data was stolen, that Anthropic settled a $1.5 billion lawsuit for pirating over 7 million books from shadow libraries, and that it faces another $3 billion lawsuit for torrenting songs.

According to a Reuters report, Anthropic agreed in September 2025 to a $1.5 billion settlement in a copyright infringement lawsuit. Another report, from January 2026, confirms that Anthropic was hit with a $3 billion infringement lawsuit by the music companies Universal Music Group, Concord, and ABKCO.

Anthropic has yet to respond to these latest accusations. But the entire episode highlights a growing ethical issue in the AI industry.

Many major AI developers rely on public-domain datasets as well as copyrighted works and licensed material. Last year, Reddit filed a lawsuit against Anthropic, accusing it of unlawfully using user-generated content from the platform to train its AI models without permission.


At the same time, many of these companies have become increasingly protective of their own systems, prohibiting other models from training on their outputs.

As the AI race intensifies, we are likely to see more such disputes over training data, with large settlements becoming the norm. The Anthropic hypocrisy episode, as many call it, is neither the first such instance, nor will it be the last.
