The Media Today

Q&A: John Mecklin on AI as an existential story

June 7, 2023
John Mecklin

Last week, leading figures in the artificial-intelligence field put out a statement warning that AI poses a potentially existential threat to humanity. “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement read. That was it—the signatories limited their warning to twenty-two words because they don’t necessarily agree on what the threat from AI might look like, or how to stop it. “We didn’t want to push for a very large menu of 30 potential interventions,” Dan Hendrycks, the executive director of the Center for AI Safety, told the New York Times. “When that happens, it dilutes the message.” Other signatories included Demis Hassabis, the CEO of Google DeepMind, and Sam Altman, of OpenAI, who also recently appeared before Congress and pleaded with lawmakers to regulate the technology that his company is developing.

Dire warnings about the destructive potential of AI—if not for the whole of humankind, then at least for our job market and information ecosystem—have become more common of late, thanks in no small part to the rapid recent development of tools that can generate convincing text (like ChatGPT) or images. But not everyone shares in the alarmism. Many observers see AI as having the potential to greatly improve our lives. This view, of course, is not incompatible with fearing that very bad outcomes could also come to pass, but many experts nonetheless see existential-level warnings as misplaced, or at least premature. And voices on different sides of the debate agree that some of the doom-mongers have a credibility problem: industry leaders, they say, are hyping the threat of AI to make themselves sound all-powerful, yet also responsible in their wielding of that power—all while continuing to push a technology that already causes manifold social harms further and faster. “When the AI bros scream ‘Look a monster!’ to distract everyone from their practices (data theft, profligate energy usage, scaling of biases, pollution of the information ecosystem),” Emily Bender, a computational-linguistics expert at the University of Washington, tweeted last week, “we should make like Scooby-Doo and remove their mask.”

Bender’s tweet was cited in a story about the twenty-two-word warning statement that appeared last week in the Bulletin of the Atomic Scientists. The story was written by Sara Goudarzi and appeared under a tab marked “Disruptive Technologies,” at the top of the Bulletin’s homepage, which itself sits next to tabs marked “Nuclear Risk,” “Biosecurity,” “Climate Change,” and “Doomsday Clock”—the latter linking to the Bulletin’s famous annual assessment of how vulnerable we all currently are to a man-made global cataclysm. As E. Tammy Kim reported for CJR in a 2020 profile of the publication, the Bulletin was “founded in Chicago in 1945 by conscience-stricken alumni of the Manhattan Project” and is best known for covering “all things nuclear.” Earlier this year, the Clock moved closer than ever before to “midnight”—it is now just “ninety seconds” away—largely as a result of “the mounting dangers of the war in Ukraine.”

Largely, but not exclusively. Even the Bulletin’s assessment of the threats emanating from Ukraine went beyond that of imminent nuclear war, encompassing, for example, the conflict’s deleterious effect on the climate crisis as it drives expanded investment in fossil fuels. As Kim wrote in 2020—and as the tabs at the top of its homepage suggest—the Bulletin, these days, brings a rare, explicitly existential lens to stories beyond nuclear weapons, not least that of climate change. The Bulletin covers “the most important things first and most strongly,” John Mecklin, the editor in chief, told Covering Climate Now, a climate-journalism initiative founded by CJR and The Nation, last year. “All those other issues that we pretend are important and fixate on—many of these issues are important, but if we don’t pay attention to the existential ones first, there won’t be a civilization for those issues to play themselves out in.”

In 2020, Mecklin told Kim that the Bulletin, at the time, was about 40 percent focused on nuclear and 40 percent on climate, with its third area of focus—“Disruptive Technologies”—seen as a sort of “threat multiplier of the first two.” Kim’s article already name-checked AI as part of this third focus, and the technology also gets a mention on the Bulletin’s online “About” page.

Experts may not agree that AI poses an existential-level threat, but the fact that they’re debating it would at least seem to make AI an existential question. We all may have to get used to covering it on such terms—terms that the Bulletin has put at the heart of its coverage since its inception. After the AI industry leaders issued their statement last week, CJR’s Mike Laws checked in briefly with Mecklin about the Bulletin’s coverage of it, and how AI fits into the publication’s mission more broadly. (They also chatted about their favorite eschatological movies.) Their emailed conversation has been edited for clarity.

ML: I’m sure there were strong feelings at the Bulletin about the twenty-two-word statement issued by tech industry leaders and researchers warning of the “risk of extinction from AI.” Your colleague Sara Goudarzi, for example, sounded somewhat skeptical. What was your take, and that of the newsroom more broadly?

JM: Sara wrote a short piece that, to my way of thinking, simply said the statement had been issued and mentioned that there are differing “takes”—to use your word—on the various existing and potential dangers of AI technologies. I think the recent warnings about artificial intelligence will be useful if they influence governments to take the potential downsides of AI development seriously and begin crafting management regimes for the broad range of technologies described as being part of AI.  

In her article, Goudarzi criticizes “those creating the technology” for now sounding the alarm on the same systems they’ve worked to develop. But is that different from the Bulletin of the Atomic Scientists itself, which was founded in 1945 by veterans of the Manhattan Project? 

Sara’s article merely noted that AI technologies are already causing significant problems, and that some observers wonder whether some of the warnings about future existential-level threats are meant to distract from efforts to regulate those existing problems. I don’t see the discontinuity you seem to; Bulletin staff and contributors have always offered a variety of views—some conflicting—on each of the subjects we cover. See this piece, coauthored by the Bulletin’s CEO, for example [appearing in Newsweek last month and headlined “AI Is the Nuclear Bomb of the 21st Century”].

Back in 2020, E. Tammy Kim wrote for CJR about the Bulletin’s “original remit” having “broadened considerably. It now devotes equal attention to the climate crisis, including in the setting of the [Doomsday] clock.” You now have the “Disruptive Technologies” and “Biosecurity” verticals on your website, with articles largely centering on, respectively, artificial intelligence and public health in the wake of the pandemic. Are those concerns also figuring into the setting of the Doomsday Clock? Does the Bulletin see the risk posed by AI as being tied up with nuclear—automation in weapons systems and the like—or is AI-qua-AI a threat worthy of the calculus for setting the clock? (You’ll forgive me, I hope, for having the Terminator movies’ Skynet in mind here.)

The Doomsday Clock is set by the Bulletin’s Science and Security Board, which includes top experts in each of the threat areas we cover. The board’s discussions on those threats are extensive and include intersections among the threats, such as the one you mention involving automation of weapons systems and of command-and-control technology. There are significant differences of opinion among scientists and policy experts about whether artificial general intelligence—that is, the machine superintelligence that might pose an existential threat to humanity, à la Skynet—will happen and, if AGI is ever achieved, how long it might take for such an advance to become reality. Those differences of opinion have been discussed by SASB members.

Since you took me up on my Skynet reference: Do you have a favorite eschatological movie?

Cormac McCarthy’s The Road is my favorite apocalyptic book; I haven’t seen the film and so can’t choose it to answer your question. I’m not sure it qualifies as eschatological, but Fail Safe is the most realistically frightening and heartrending film about nuclear weapons that I’ve seen. There’s something genuinely horrifying about Henry Fonda ordering the nuking of New York City—where his wife is—to prevent all-out nuclear war with Russia.


Other notable stories:

  • Last night, Tucker Carlson, who was ousted from Fox in April, debuted his new show on Twitter. As the New York Times noted, the show resembled Carlson’s Fox program—he delivered a monologue expressing sympathy for Vladimir Putin, bashing the media, and claiming that aliens are real—albeit significantly “stripped-down”: there were no guests, it lasted only ten minutes, and Carlson appeared to be manually scrolling his teleprompter.
  • David Enrich, of the Times, reports on a convergence of physical and legal threats that Lauren Chooljian, a journalist at New Hampshire Public Radio, has faced since reporting on allegations of sexual misconduct against the founder of a network of rehab centers: the founder sued Chooljian, while an as-yet-unidentified perpetrator vandalized her home and those of her parents and editor. NHPR is out with a new podcast about its story.
  • Last month, the agency that runs prisons in New York quietly handed down new rules governing the expression rights of incarcerated journalists, writers, and artists, who must now go through a strict vetting process before publishing their work outside prison walls, and can no longer be paid without permission. The policy, Chris Gelardi reports for New York Focus, fits a trend of “US prisons aggressively censoring people in their custody.”
  • Yesterday and today, Prince Harry testified in a UK court in a lawsuit that he and others have brought against the publisher of the Daily Mirror over alleged violations of privacy, including phone-hacking. He is thought to be the first royal to have been cross-examined in a British court since 1891, but Stephen Castle writes, for the Times, that he didn’t seem fazed: Harry “kept his cool and his focus, and handled tough questions with poise.”
  • And—ten years on from Edward Snowden identifying himself as the source of a massive leak exposing the surveillance practices of the US government—The Guardian’s David Smith asks how much has really changed as a result of the disclosures. Jameel Jaffer, of Columbia’s Knight First Amendment Institute, told Smith that Snowden made a “huge difference” to public debate, but wishes that “things had changed more than they have.”

ICYMI: John Mecklin, of the Bulletin of the Atomic Scientists, on climate change and other existential threats

Jon Allsop and Mike Laws are the authors of this article. Jon Allsop writes CJR’s newsletter The Media Today. Mike Laws is a freelance copy editor, occasional writer, and cohost of CJR’s Red Pen podcast.