OpenAI, Anthropic, Google team up to stop Chinese AI distillation threat

By Ahmad Junaid · Blog · April 7, 2026


Three of America’s biggest artificial intelligence companies, OpenAI, Anthropic and Google, are working together to stop foreign actors, mainly from China, from copying the capabilities of their AI models, according to a Bloomberg report.

The three companies are sharing information through the Frontier Model Forum, an industry body they set up along with Microsoft in 2023, to spot and block what are called ‘adversarial distillation’ attempts. In simple terms, this means catching people who are trying to illegally clone their AI systems.

What is AI distillation?

Distillation is a technique in the AI industry in which a large, powerful AI model, called the 'teacher', is used to train a smaller, cheaper model called the 'student.' The student model learns to behave like the teacher, but costs far less to build and run.

This practice is perfectly legal when AI companies do it themselves to make their own products more efficient. The problem arises when outside developers use distillation to copy a company’s proprietary AI without permission, essentially getting the benefits of years of expensive research for free.
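The core idea can be sketched in a few lines. The toy below is illustrative only, not how frontier labs implement distillation: a hypothetical linear "teacher" produces softened probability distributions over classes, and a "student" of the same shape is trained to match those soft outputs rather than hard labels. All names (`TEACHER_W`, `teacher_logits`, the temperature value) are made up for the sketch.

```python
import math
import random

random.seed(0)

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities; higher temperature softens them."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical "teacher": a fixed linear scorer (2 features, 3 classes).
TEACHER_W = [[2.0, -1.0], [-1.5, 2.5], [0.5, 0.5]]

def teacher_logits(x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in TEACHER_W]

# "Student": same shape here for simplicity, learned from scratch.
student_w = [[0.0, 0.0] for _ in range(3)]

def student_logits(x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in student_w]

# Distillation: train the student to match the teacher's *soft* outputs
# (temperature-scaled probabilities), not ground-truth hard labels.
T, lr = 2.0, 0.5
data = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(200)]

for _ in range(300):
    for x in data:
        target = softmax(teacher_logits(x), T)  # teacher's soft labels
        pred = softmax(student_logits(x), T)
        # Gradient of cross-entropy(target, pred) w.r.t. student logits
        for k in range(3):
            grad = pred[k] - target[k]
            for j in range(2):
                student_w[k][j] -= lr * grad * x[j]

# After training, the student should agree with the teacher on new inputs.
x_new = [0.3, -0.7]
t_pred = max(range(3), key=lambda k: teacher_logits(x_new)[k])
s_pred = max(range(3), key=lambda k: student_logits(x_new)[k])
print(t_pred, s_pred)
```

The adversarial variant described in the article works on the same principle, except the "teacher" is a proprietary model queried through its public API, and the soft outputs are harvested at scale without the provider's permission.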

US government officials have estimated that such unauthorised copying costs American AI companies billions of dollars in lost profits every year, according to Bloomberg.

The DeepSeek shock that started it all
 
The development comes against the backdrop of Chinese startup DeepSeek's release of an AI model called R1 in January 2025, which matched the performance of leading American systems at a fraction of the cost. The release stunned the global technology industry and briefly wiped hundreds of billions of dollars off the stock market valuations of US AI companies.

Microsoft and OpenAI investigated whether DeepSeek had improperly used large amounts of data from OpenAI's models to build R1. By February 2025, OpenAI was warning US lawmakers that DeepSeek kept finding new and more sophisticated ways to extract information from its systems, despite the company's attempts to stop it.

In a memo to the House Select Committee on China, OpenAI accused DeepSeek of trying to ‘free-ride on the capabilities developed by OpenAI and other US frontier labs.’

Anthropic and Google sound the alarm

Anthropic has taken some of the toughest public stances on the issue. Last year, it blocked Chinese-controlled companies from using its Claude AI chatbot altogether. In February, it named three Chinese AI companies (DeepSeek, Moonshot, and MiniMax) as having illegally copied Claude's capabilities through distillation.

This year, Anthropic went further, warning that the problem is not limited to any one company or country and poses a national security risk. The concern is that copied AI models often strip out the safety features that prevent misuse.

Google has also published a statement saying it has seen a significant rise in attempts to steal its AI models’ capabilities.

Why full cooperation is still difficult

Despite the gravity of the threat, the intelligence-sharing effort remains constrained. The three companies are uncertain about the scope of information they can legally share under existing antitrust guidelines, and have indicated they would benefit from greater regulatory clarity from the US government, Bloomberg reported, citing people familiar with the discussions.

The Trump administration has signalled its support for formalising such collaboration. The AI Action Plan unveiled by President Donald Trump included a proposal to establish an information-sharing and analysis centre, in part to address adversarial distillation.
