Hi everyone. Thanks for reading Here it Comes. For a year, I abstained from microblogs of all sorts. Recently, I have returned and am posting on Bluesky at @gwbstr.com. Join me there if you want to connect!
The case for a TikTok ban has been multifaceted, but for the most part it has relied on imagined potential future harms. Today, writing a thread on what’s next for the ban, I found myself imagining some scary potential outcomes that could follow once a ban takes effect.
First, to recap, many ban proponents imagine:
The Chinese government might exfiltrate data at scale and use it somehow to harm US national security. Location data, browsing data, the social graph, misuse of access to device microphones or cameras: all kinds of possible concerns. The fact that such large-scale misuse has not been documented isn’t relevant here, because it’s about what China might do. (And the fact that the company has done shady things, such as trying to track journalists’ locations during a leak investigation or leaving data access open to Beijing-based engineers, doesn’t help its case.)
The Chinese government could use physical access to ByteDance systems or personal leverage over staff to try to manipulate US public opinion. Evidence that this kind of thing is under way is quite thin, but some bleed between China-based content moderation practices and the global app has been reported, and the company’s assurances that mistakes were made and fixed aren’t convincing if you already don’t trust it. Moreover, you don’t need evidence of such a campaign now if you’re imagining it would happen during a future crisis, the classic example being a Taiwan contingency.
Persistent integration between Chinese and global software teams could give Chinese government actors a chance to insert or exploit cyber vulnerabilities.
I have long argued that these risks are indeed imaginable and worth taking seriously, but that they are largely not unique to TikTok and ought to be dealt with on a comprehensive basis covering all such services. Where there are risks specific to TikTok, I have argued that the Project Texas mitigation proposals, while not publicly detailed in full, looked like they might address them well. I’ve never found it useful when people try to refute the possibilities, because you can’t prove a speculated future wrong. Anyway, the law passed, the DC Circuit says it can stand, and comprehensive legislation isn’t forthcoming. You don’t have to use your imagination about that.
Now for some imagined possibilities stemming from a ban. Allowing that we don’t know yet whether a ban will actually happen (SCOTUS, Trump, a divestiture, or further lawsuits could all intervene), here are some slightly screwball[1] but nonetheless plausible results that could come about assuming a ban. Imagine:
Apple and Google could comply by removing TikTok from their app stores, leading large numbers of committed users to learn how to jailbreak their mobile operating systems and side-load an internationally available copy of the app. That would mean losing the (imperfect but important) security protections of an up-to-date iOS or Android version, along with the companies’ similarly imperfect efforts to vet the apps they distribute. Users could then be compromised by cyber criminals and adversary governments. Any effort to embed hidden malware in a TikTok app would be more potent on a jailbroken OS, which would be more vulnerable in other ways as well.
If the ban is enforced such that DNS resolution for TikTok’s servers is blocked or unreliable, users may turn to VPNs to tunnel into countries where access is unencumbered. Yet many or most would not want to pay for a VPN, increasing the chance that free VPN providers would themselves act as data exfiltrators or attack vectors.
In anticipation of a potential Project Texas arrangement with the US government, TikTok said it would base its US user data on US-based Oracle infrastructure. Under the ban, Oracle would be prohibited from providing these services, and US user data would be stored wherever convenient for TikTok. For those who worry about where and how US user data is stored, the upshot is that US authorities would lose visibility.
The purpose of this is not to argue over the ban. There’s been plenty of that, and the die is cast. Instead, I wrote this up because I have been thinking a lot about the imagined potential futures that animate the policies I track on technology issues in the United States and China. What AI might be, what military contests might emerge, what might decide who “leads” in what, and why that matters… Speculating about uncertain futures is unavoidable in policymaking, but the guesses we make are a big deal, and often the imaginaries seem to be the least contested part of the discussion.
I can imagine some good outcomes of a ban. Maybe people would spend less time on a media product designed to be addictive. Maybe they’d garden and read books and go to the coffee shop or the pizza parlor to sing and dance at each other. Or not: today’s stock performance suggests many investors expect the outcome to be good for Meta’s products.
About Here It Comes
Here It Comes is written by me, Graham Webster, a research scholar and editor-in-chief of the DigiChina Project at the Stanford Program on Geopolitics, Technology, and Governance. It is the successor to my earlier newsletter efforts U.S.–China Week and Transpacifica. Here It Comes is an exploration of the onslaught of interactions between US-China relations, technology, and climate change. The opinions expressed here are my own, and I reserve the right to change my mind.
[1] It’s hard to say whether this stuff is more screwball than the imaginaries that justify the ban. But these scenarios seem screwball, even to me, because we’re not constantly hearing about them and having these imagined futures affirmed by grave pronouncements.