Why the US-China 'AI race' narrative confuses everyone
It's not because no one in the US is competing against anyone in China
Why shouldn’t we think of what’s going on in AI development as a race between the United States and China? Numerous journalists, scholars, and friends have asked me this in recent months. One person observed that specialists don’t seem to like the “AI race” frame—but they didn’t see what our problem was.
There’s no sense in picking on particular journalists or pundits. The shorthand of a “global AI race” is so omnipresent that it’s a natural crutch. (Brevity is a virtue in headlines and broadcast scripts.) We certainly could pick on some specific actors who have intentionally pushed this frame, but that’s not my purpose here. As US President Donald Trump prepares to travel to Beijing for a state visit and the “race” rhetoric soaks the media, I want to outline why I think it confuses everyone about what is actually going on.
Here, in brief, is my answer, in three parts.
1. There is no comprehensive ‘AI race,’ because AI is not one thing.
AI can mean facial recognition, social media feed algorithms, deepfake video production, LLM-based chatbots, autonomous robotic systems, and more. A race for such a thing can imply competition for technical excellence, engineering toward practical applications, actual deployment, revenue, or geopolitical advantage. If people want to talk about being ahead in an AI race, they need to specify which one. Which technology, and what aspect of its role in the world?
The idea of nation-vs-nation AI competition between the United States and China has been present in public policy debates at least since the Chinese government’s New Generation AI Development Plan (AIDP) in 2017 declared:
[B]y 2030, China’s AI theories, technologies, and applications should achieve world-leading levels, making China the world’s primary AI innovation center, achieving visible results in intelligent economy and intelligent society applications, and laying an important foundation for becoming a leading innovation-style nation and an economic power.1
US commentators reacted to this ambition with alarm, though some also pointed out that the AIDP seemed to react to Obama administration plans from the year before. (This observation was later confirmed to me in a US-China dialogue when one of the people who worked on China’s plan directly said the US plans helped drive China’s own planning.) If nothing else, we might observe a tit-for-tat dynamic in AI competition rhetoric.
Yet in 2017, AI was not large language models. It meant various forms of machine learning, largely of the “deep learning” paradigm, focused on pattern matching. Facial recognition and computer vision for medical diagnosis were on the rise. A DeepMind system had beaten the Go champion Lee Sedol the previous year. The AIDP and other future-oriented documents contemplated all kinds of other things, but no one knew what would emerge.
Fundamentally, if you’re going to talk coherently about “winning” a race, or a competition or battle or war or whatever metaphor, you’ve got to know what you’re talking about. The “AI race” idea, unless precisely narrowed, leaves it unclear how to judge success or failure. It’s evocative, but empty.

2. A ‘race’ has a finish line. Unless you specify one, the term frustrates careful thinking.
Even if you define an “AI race” in terms of being “ahead” or “behind” on some specific metric, say the best performance of any system in each country on a chosen benchmark, you still have the problem of the race metaphor implying a destination. If there’s no finish line specified, it will be impossible to judge the truth value of any speculation about who will win, or who is winning. The lesson of the tortoise and the hare is that being “ahead” doesn’t mean anything until the finish line is reached; if no destination is specified, you can’t measure victory.
Defining a specific goal is a good, serviceable way to use the evocative and sometimes accurate language of “racing” to develop something. I have no beef with anyone talking about a race between firms to be first to market with a product that reliably performs a given task, or to be the first to make such a thing profitable. If you talk about a race to develop a system that aces a particular benchmark, that too is specific. You can talk about a race to build an autonomous driving company with 10 million vehicles operating per month. Yet these are not “the AI race”; they are “a race to _____.” If anything, the simple exercise of asking about specific goals reveals that there are many, many races in “AI”—and that only some of them make sense as a nation-vs-nation contest.
Now in many cases the implied destination is artificial general intelligence (AGI). Yet the concept of AGI has always been speculative and diffuse. Some ideas of the thing have clearly been achieved. Others are so far outside of what we witness today that it’s hard to see any “race track” ahead. A race to AGI is simply not a well-defined thing unless a particular definition is agreed upon.
3. What’s good for a US company is not necessarily good for America.
Suppose we agree that one or more labs, or constellations of organizations, are in a race to achieve a specific outcome. If a well-defined finish line is achieved first in a given country, does it mean that country has won? Not necessarily.
Let’s say Waymo’s deployment of highly autonomous ride-share vehicles that do quite well in a lot of different contexts has just reached a well-defined finish line first. Waymo, or Alphabet, could naturally be said to have won this specific race. Some of the many companies Waymo beats in this hypothetical are Chinese.2 Has America won and China lost? The US government doesn’t own Waymo, nor do the American people. Some shareholders, who are spread around the world, might gain. Some riders enjoy the service. If it is wildly successful, already-struggling gig workers likely will not enjoy the increased squeeze on their finances. And Chinese life proceeds largely unaffected.
How about a more specific and complex example? Say that we were aliens with advanced knowledge of human computer systems, monitoring for the moment an LLM system (and the specialists at the company that made it) could discover the specific OpenBSD vulnerability Anthropic’s Mythos reportedly uncovered. And say no one in a cyber offense research group has been able to secretly use other available systems to discover the vulnerability so far. Anthropic has just “won”! And the company is making some efforts to spread the benefits of this achievement among US actors and not Chinese ones (though Chinese actors also win when they implement the OpenBSD patch). Anthropic appears to be trying to turn its achievement into a win for the United States, and/or it is trying to encourage the perception that this is the case. Here you can make a coherent argument that the United States may be winning marginal cybersecurity advantages, one model at a time, while China is denied this edge. But that doesn’t tell us anything about the overall situation with “AI.” The problems of AI’s many forms (#1 above) and the general absence of specific finish lines (#2) remain.
Again with the AGI: Many people who feel urgency about a US-China “AI race” will tell you they are concerned about self-improving, potentially catastrophically capable systems. Sober-minded concerns about systems that may be deployed in ways that lead to terrible outcomes are blended with singularity, Terminator, Matrix, and human-extinction imaginaries. The fear of unknown capability leads to the fear that an adversary might get to the unknown first, leaving our team, whoever that is, in peril. So shouldn’t we throw out all these objections to the “race” metaphor and unleash all US actors to blunder forward into the unknown lest someone in China get somewhere first? To this I simply ask: If the fear is of unknowable, unintended, uncontrollable consequences, doesn’t it take a healthy dose of hubris to imagine that just because a system is made by OpenAI or Anthropic, or xAI, it will naturally operate to the US benefit? Which benefit specifically, and for which US citizens? We have longstanding, imperfect institutions designed to align the US government with the plural interests of the US people, and those are under extraordinary strain. What precisely should an idealized, “aligned” system of this kind conform to?
Conclusion
Talking about a single, generalized “AI race” between the United States and China distracts from the multiplicity of endeavors under the AI banner. It implies a clear finish line when most often the discussion at hand has specified no such thing. And it inaccurately assumes the national identity of an actor determines which national interests their work will benefit.
So if you find yourself reaching for the “AI race” in a headline or a commentary, ask yourself if you can be more specific. Ask in what realm competitors might be operating, ask “race to where,” and ask who benefits if someone gets there first.
About Here It Comes
Here It Comes is written by me, Graham Webster, a lecturer and research scholar at the Stanford Program on Geopolitics, Technology, and Governance, and editor-in-chief of the DigiChina Project. It is the successor to my earlier newsletter efforts U.S.–China Week and Transpacifica. Here It Comes is an exploration of the onslaught of interactions between US-China relations, technology, and climate change. The opinions expressed here are my own, and I reserve the right to change my mind.
This is our DigiChina translation, by Rogier Creemers, Elsa Kania, Paul Triolo, and myself. The original says “到2030年人工智能理论、技术与应用总体达到世界领先水平,成为世界主要人工智能创新中心,智能经济、智能社会取得明显成效,为跻身创新型国家前列和经济强国奠定重要基础。” I have debated whether we should have expressed translation ambiguity around the definite article in “the world’s primary AI innovation center.” It could also be read as “a primary AI innovation center of the world.” In any event, for many in the United States, an ambition to reach parity was alarming enough.
I would be curious to hear if anyone thinks the hypothetical would better reflect reality if I’d picked a Chinese company. If so, which one? In any case, just an example for discussion.

