The Court's strict scrutiny dodge, and a US surrender to Internet censorship
Two perspectives as the TikTok ban saga unfolds
The TikTok saga is not over in the United States, but the Supreme Court’s unanimous decision today is an occasion for reflection. I’d like to offer two perspectives, progressing from the micro level of the legal mechanics to an international and longer-term viewpoint. Each of these could be developed further, but since events may still unfold with action by the Biden and/or Trump administrations, I offer them in brief.
Reminder: Thanks for reading. Please don’t hesitate to share reactions or objections with me directly or on Bluesky, and please consider sharing with interested friends and colleagues. If you haven’t subscribed, sign up here for my ongoing work to understand the interplay of US-China relations, technology, and climate. If you especially wish to support this work and have the means, please consider a paid subscription.
1. The Court’s choice to ignore the divest-or-ban proponents’ concerns about information manipulation may be legally sound, but it isn’t good history.
Standard disclaimer: I am not a lawyer or a freedom of expression scholar, and although I have spent a great deal of time surrounded by lawyers and law professors, I’m an advanced amateur and welcome correction. (I think I have this bit right, though.)
In order to judge whether the First Amendment should block the TikTok divest-or-ban bill, the Court needed to decide whether to apply “strict scrutiny” or “intermediate scrutiny” to the law’s provisions. Under strict scrutiny, among other things, precedent would call for the government’s goals to be accomplished with the “least restrictive” means in terms of limits on speech in order to be constitutional. Under intermediate scrutiny, the law merely need not “burden substantially more speech than is necessary” to advance those goals.1
The lower court assumed strict scrutiny applied and decided the law passed muster. SCOTUS instead applied intermediate scrutiny. In doing so it said “the challenged provisions are facially content neutral and are justified by a content-neutral rationale” (10). This matters because applying intermediate scrutiny frees the Court from evaluating whether the law is the least speech-restrictive option for achieving government goals. Concretely, this means the justices can ignore the question of whether a law mandating something like Project Texas—which would have installed controls on user data and algorithm management at TikTok US ultimately overseen by the US government—would have addressed the goal of protecting national security.
The Court makes a pretty slick move to apply only the lower standard. First, it reasonably argues that one of the two driving concerns behind the law is content neutral: protecting US user data from possible Chinese government access. Second, it dodges the question of whether the other main concern is content neutral: the risk that China's government might manipulate feeds for propaganda or information interference goals by pressuring TikTok's Chinese parent company or its employees. The Court simply declares the second justification irrelevant, ignoring its much more content-related logic, and thereby avoids applying strict scrutiny.
To eject the feed manipulation concern from their analysis, the justices assert that "Congress would have passed the challenged provisions based on the data collection justification alone" (18). On Bluesky, University of Chicago Law School Professor Genevieve Lakier wrote that this "gives govts space to whitewash bad content-based motivations by tacking on plausible-sounding content-neutral ones."
I’ll leave that to legal experts, but I do have something to say about the argument itself that the law would have passed and been signed without the further propaganda concern. As someone who has watched the TikTok ban story closely for nearly five years, this does not sound like a strong argument. Indeed, while both data protection and algorithmic manipulation fears have been present all along, the data concern was dominant in the discourse at first. That argument met significant friction, however. Many experts have made the cogent argument (which I have also advanced) that banning TikTok would at best only partially address the real risk that China’s government might gain access to large-scale US personal data and try to leverage it. Addressing this risk calls for more comprehensive data security legislation that would curb risks of personal information transfer through the data broker economy or through badly secured platforms Chinese security services could simply hack into. A comprehensive policy would certainly have something to say about TikTok’s structure and data access provisions, but it would focus on the data exfiltration problem holistically rather than zeroing in on one of many sources of risk.
As more and more people understood the data problem was too broad to justify a unique focus on TikTok, proponents of the bill before it was passed last spring shifted their attention to the argument that in an emergency, for instance a Chinese invasion of Taiwan, ByteDance might tweak the algorithm to deliver content to millions of US users supporting China’s position. The risk of covert influence was the big red flashing light in the discourse as the bill passed. Data security was still there, but it’s just not reasonable to exclude the other, more plausibly “content-based,” justification from a history of what got this long-running effort over the finish line. Maybe the Court has blinders here due to the record, but its argument is not valid outside that narrow context. Both justifications motivated Congress and Biden. (Whether that means SCOTUS should have applied strict scrutiny I leave to First Amendment scholars.)
2. The United States doesn’t get to claim that it’s against national security–driven Internet censorship anymore.
It’s not clear today whether TikTok will indeed be banned, or if so for how long, but the three branches of the US government have spoken: Speculative national security concerns based on nationality of a company’s ownership are legitimate grounds to erect formidable barriers preventing US citizens from accessing information on or expressing themselves through that company’s product. In other words, Congress, the president, and the Supreme Court endorse censoring websites or apps on the Internet inside the United States if they say there’s a national security risk.
To be fair, there are several arguments against the way I have just framed this:
First, one could argue the Supreme Court was at pains not to endorse censoring anything but TikTok. This is true. They write: “A law targeting any other speaker [i.e., other than TikTok] would by necessity entail a distinct inquiry and separate considerations” (12–13). Yet the law at issue gives the president the ability to designate other entities for the divest-or-ban treatment, and today’s decision signals broad deference to the government’s assessment of national security risks. There’s no obvious barrier to growing the censored list.
Second, one could argue the result is not censorship, because there was the option of divesting. We could debate whether divestiture was ever a real option, but at present there is no concrete sign that it will happen. (Who knows, though!) At least for the moment, the effective outcome of the law is massive-scale censorship in terms of US citizens’ access to information and their choice of venue for speech.
Third, one could argue that the national security concerns were so intense and specific, including necessarily undisclosed classified evidence, that this is a narrow action to address a serious threat and doesn’t signal a new comfort with Internet censorship. There are multiple weaknesses to this objection. At the Supreme Court level, the justices explicitly state that they relied only on the public record and not on the classified evidence (13), and the public record is pretty speculative. Meanwhile, the national security dangers identified are broader than TikTok, and it’s hard to take this seriously as an effort to address them absent a corresponding comprehensive effort. Finally, to deny that a TikTok ban is censorship is to deny that cutting off access to a huge range of information for one-third or more of Americans is censorship, and I just don’t accept that. You can argue that it’s justified, but the interference with freedom of expression and access to information is undeniable.
So I believe my framing stands, and proponents of the law, the rest of us in the United States, and people around the world need to reckon with a massive shift on the part of the US government regarding Internet censorship. The US government has long been a loud critic of efforts like China’s Great Firewall, which maintains an online border to cut off citizens from large amounts of information in the name of public safety, national security, and sovereignty. Now, the US government is on the verge of erecting the first span of what could become our counterpart. (Call it a Cyber Border Wall, perhaps—a national security–motivated effort, still porous, but aimed at keeping out a reputed foreign menace while sacrificing erstwhile US ideals of openness.) The TikTok law is one tool the government might use. The ICTS rule developed over the Trump I and Biden administrations is another.
As I said this morning in resharing a post by David Kaye, a UC Irvine law professor and former UN special rapporteur on freedom of opinion and expression, this watershed moment in the US system’s willingness to censor the Internet is the biggest take-away for me today. Kaye said it better than I could: “can the United States ever again, with a straight face, argue that another country's internet shutdown, website blocking, app-banning, etc is a violation of the global right to freedom of expression? that it's unnecessary for national security? no. imo its prior moral authority is getting shredded.”
—
That’s enough for now. Like it or not, there will be more to discuss. And whether one likes or dislikes today’s outcome, it’s certainly a big deal.
About Here It Comes
Here It Comes is written by me, Graham Webster, a research scholar and editor-in-chief of the DigiChina Project at the Stanford Program on Geopolitics, Technology, and Governance. It is the successor to my earlier newsletter efforts U.S.–China Week and Transpacifica. Here It Comes is an exploration of the onslaught of interactions between US-China relations, technology, and climate change. The opinions expressed here are my own, and I reserve the right to change my mind.
1. This is laid out in the decision, but I found this CRS report a good reference to gut check my layperson’s understanding. See pp. 4–8. https://crsreports.congress.gov/product/pdf/R/R47986