What is Anthropic's message in alleging Chinese distillation efforts?
National security, rights, closed-source, and export controls
Anthropic yesterday published an announcement saying the company had detected the “industrial-scale” use of its services to improve other large language models. It attributed the activity to three prominent Chinese AI labs: MiniMax, Moonshot AI, and DeepSeek.
The announcement is written in the style of a cybersecurity firm that has uncovered a major hacking network and is here to dutifully report current events while, by the way, offering cyber defense services. In other words, there is the news and then there is the message. Here, there is also the context of a company at odds with the US administration.
The news is that Anthropic says its “terms of service and regional access restrictions” were violated in service of “distillation” efforts, where the outputs of one model are used in the training of another.[1] That’s a fair complaint for a company that has strong views on proper, safe use of its technology and is in a very competitive market. The company provides interesting and informative details about the types and volume of the detected activity. Nathan Lambert has a good read this morning about what it all means and doesn’t mean from a technical perspective.
For me the message is more interesting. The title calls the efforts “attacks,” though there is no indication of service disruption or data breach. The moral valence of the language—“attacks,” “illicit,” “fraudulent,” “threat”—aligns with the company’s broader stance that Chinese AI labs must be constrained lest their capabilities empower a geopolitical rival and enable authoritarian rights abuses.
The context of Anthropic’s policy advocacy with the US government is distinctive and important for the company. During my conversations in Washington last week with people in government, research, and corporate roles, the company’s voice kept coming up, favorably and unfavorably. The current stakes are especially high. Axios reported yesterday that Pentagon officials have called in CEO Dario Amodei with a $200 billion contract in question. “This is not a friendly meeting,” a DOD source reportedly said. “This is a sh*t-or-get-off-the-pot meeting.” Earlier, Secretary of Defense Pete Hegseth reportedly threatened to declare Anthropic a supply chain risk.

So what is Anthropic’s message, amidst these high stakes?
Under “why distillation matters,” the company argues that “national security risks” result from a lack of safeguards in models produced through this kind of activity. They invoke “bioweapons” or “malicious cyber attacks.” Anthropic earlier announced that it had attributed use of Claude Code for cyber offense to a “Chinese state-sponsored group.”
They raise the possibility of “enabling authoritarian governments to deploy frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance.” More national security plus some human rights.
If models are “open-sourced,” they write, “this risk multiplies.” Anthropic wants advanced models to be closed.
With distillation in the picture, they argue, “the apparently rapid advancements made by these labs are incorrectly taken as evidence that export controls are ineffective and able to be circumvented by innovation.” Anthropic is widely seen as a key advocate for export controls targeting China, and they seek to refute arguments that the controls don’t work.
What their message isn’t:
Anthropic doesn’t emphasize the potential rise of competitors if distillation contributes to useful products in the same market.
They do not emphasize theft of intellectual property or industrial secrets. (Perhaps this is because AI companies are not exactly morally pure on the IP front?)
In sum, the message is that Anthropic cares about national security and human rights, and, in service of this, wants the most capable models closed-source while maintaining export controls designed to prevent Chinese labs from meeting or exceeding US labs’ capabilities. And they portray themselves, accurately in my perception of the field, as a leader in this worldview—talking about “intelligence sharing” with other labs.
I find myself asking here about the mix of principled stance and strategic communication. On the principled side: Whether one agrees or not with the principles—and many disagree from many angles—there’s a lot of continuity here. Anthropic and Amodei have long been vocal about their view of safety and the geopolitical valences of AI. On the strategic side, it’s reasonable to ask whether the national security and “attack” language was tuned or timed to ease the Pentagon discussions. At the same time, in “what their message isn’t” above, we see a forgone opportunity to pander to a skeptical administration where influential factions care about US firms leading AI globally. The company simply doesn’t make this about its own, or the country’s, bottom line—a frame that has been useful for others projecting alignment with the US government against China.
About Here It Comes
Here It Comes is written by me, Graham Webster, a lecturer and research scholar at the Stanford Program on Geopolitics, Technology, and Governance, and editor-in-chief of the DigiChina Project. It is the successor to my earlier newsletter efforts U.S.–China Week and Transpacifica. Here It Comes is an exploration of the onslaught of interactions between US-China relations, technology, and climate change. The opinions expressed here are my own, and I reserve the right to change my mind.
[1] The term “distillation” is used in a dizzying variety of ways. I found Nathan Lambert’s clarification extremely helpful: “The word itself is derived from a more technical and specific definition of knowledge distillation (Hinton, Vinyals, & Dean 2015), which involves a specific way of learning to match the probability distribution of a teacher model. The distillation of today is better described generally as synthetic data. You take outputs from a stronger model, usually via an API, and you train your model to predict those. The technical form of knowledge distillation is not actually possible from API models because they don’t expose the right information to the user.”
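Lambert’s distinction can be made concrete with a toy numerical sketch (the numbers are invented for illustration, not drawn from any lab’s actual pipeline): classic knowledge distillation needs the teacher’s full probability distribution over tokens, while API-based “distillation” sees only sampled text and so trains on hard targets with ordinary cross-entropy.

```python
import math

# Toy vocabulary of 3 tokens; hypothetical teacher and student
# probability distributions over the next token.
teacher_probs = [0.7, 0.2, 0.1]
student_probs = [0.5, 0.3, 0.2]

# Knowledge distillation (Hinton, Vinyals, & Dean 2015): the student
# matches the teacher's full distribution via KL divergence. This
# requires the teacher's probabilities/logits, which commercial APIs
# generally do not expose.
kl = sum(p * math.log(p / q) for p, q in zip(teacher_probs, student_probs))

# "Distillation" as synthetic data: only the teacher's sampled output
# is visible, so the student trains with plain cross-entropy on that
# single hard target (here, the teacher's most likely token).
sampled_token = teacher_probs.index(max(teacher_probs))  # token 0
ce = -math.log(student_probs[sampled_token])

print(f"KL (needs full distribution): {kl:.4f}")
print(f"Cross-entropy (needs only sampled text): {ce:.4f}")
```

The point of the sketch is structural: the first loss is uncomputable without information that API access withholds, which is why Lambert says the technical form of knowledge distillation “is not actually possible from API models.”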

