A quick thought on 'AI safety' and corporate governance
OpenAI's leadership turmoil clarified something, at least for me
Let’s set aside disagreement among experts as to whether the cutting-edge large language models behind products like ChatGPT are likely to be a step on a path toward “artificial general intelligence,” which OpenAI defines as “a highly autonomous system that outperforms humans at most economically valuable work.” Set aside the question of whether attempts to build such a system truly might lead to catastrophic outcomes for humanity as a species, or rather will result in a more mundane variety of benefits and harms distributed in some uneven way.
Suppose for the sake of argument the “risk of extinction from AI” is, as prominent industry figures including OpenAI’s at-least-for-now deposed CEO Sam Altman declared in a short open letter this year, something that “should be a global priority alongside other societal-scale risks such as pandemics and nuclear war” (but not, one notes, climate change).
Assume—as do many of the people building and marketing these products based on heaps of data and dazzling amounts of computation and electricity—that these projects or their successors may pose a dire threat to humanity, whether through accidents, malicious use, or some apparatus becoming self-aware and deciding to kill us all.
Granting all that, and without any need to take a position on whether Altman has been insufficiently concerned with preventing these outcomes or whether OpenAI’s board members are justified in their reported concerns, there’s one big thing that bothers me:
If the relevant people believe these are the stakes, what makes them believe that OpenAI’s unusual institutional architecture, topped by a six-member board including three employees and three outsiders, whatever their qualifications, is enough to ensure that catastrophes on the scale of nuclear war or global pandemics do not result? What if, even assuming best efforts, this small group simply fails to spot the mechanism by which extinction-level disasters might emerge? Is it not hubristic to suppose that just a few people, arguably from just one community, are enough of a check? Shouldn’t people with these genuine concerns err on the side of radical transparency and widely distributed veto power over model development and deployment, extending far beyond their firm to include people with many different specialties and viewpoints?
I caught wind of Altman’s firing by the OpenAI board on Friday while listening to talks at a conference on “AI safety,” a field packed with people trying to figure out how to build systems that prevent harms both expected and unexpected while also providing both expected and unexpected benefits. There are so many technical, institutional, philosophical, and policy ideas in this discussion, which overlaps with and sometimes conflicts with discussions framed around concepts like “AI governance” or “AI ethics.” It’s a huge diversity of concerns and ideas about how to address them, and that’s even though the discussions do not significantly include people from most countries and cultural backgrounds.
There are more and less reasonable arguments for why development of these products continues even when developers believe they can’t really know whether they’re building Armageddon: the potential economic, social, and scientific benefits are too large not to pursue; others will build even less safely, so best that we go first; YOLO, and it’s nice to have a big house…
Some developers will not be concerned that their work has massive potential downsides, whether because they just don’t think the potential is there or because they see it as someone else’s role. But for those who are concerned, I just can’t help but chuckle at the idea that one small board in one corporate structure would be enough.
Whatever happened and whatever happens next at OpenAI—the company currently most closely associated with cutting-edge “AI”—from now on it’s going to be hard for me to take seriously the most dire safety warnings unless the developers doing the warning are building more ambitious controls and operating with more transparency, caution, and pluralism. If they don’t, I think we’re quite justified in speculating about their real bottom lines.
About Here It Comes
Here It Comes is written by me, Graham Webster, a research scholar and editor-in-chief of the DigiChina Project at the Stanford Program on Geopolitics, Technology, and Governance. It is the successor to my earlier newsletter efforts U.S.–China Week and Transpacifica. Here It Comes is an exploration of the onslaught of interactions between US-China relations, technology in China, and climate change. The opinions expressed here are my own, and I reserve the right to change my mind.