The Media Today

What the chaos at OpenAI says about the future of the industry

November 30, 2023
Sam Altman, CEO of OpenAI, at the AI Safety Summit, the first global summit on the safe use of artificial intelligence, at Bletchley Park in Milton Keynes, Buckinghamshire, on November 2, 2023. (Press Association via AP Images)

The now-he’s-in, now-he’s-out drama of the past couple of weeks at OpenAI, arguably the world’s leading artificial intelligence company, feels almost like an episode of the satirical TV show Silicon Valley. To recap: on November 17, Sam Altman, the widely admired CEO and public face of OpenAI, was suddenly removed in a surprise coup by the company’s board, which suggested, in a statement accompanying Altman’s ouster, that he had been “less than candid” with them (without saying what this lack of candor involved). A few days later, Microsoft, one of OpenAI’s largest investors, announced that Altman and Greg Brockman—the former president of OpenAI, who quit when Altman was fired—were joining the company to run a new AI research unit. Meanwhile, OpenAI employees started circulating a letter in which they threatened to quit unless Altman was given his job back; within three days, it had more than seven hundred signatures. Days after he was fired, Altman was reinstated, and the three board members who ousted him—as well as an interim CEO who had replaced him for less than seventy-two hours—were gone.

In the meantime, speculation swirled as to what had driven Altman’s ouster. Reuters suggested that a troubling new development in OpenAI’s research might have played a part in his sudden firing—according to anonymous sources, a new feature known as Q* (pronounced Q Star) seemed to show that the company’s “large language model” AI engine was capable of doing basic math without external help. This might not sound impressive, but it implied that OpenAI’s engine was capable of logical reasoning, which some observers suggested would be a major step toward “artificial general intelligence,” or AGI—the idea that an AI engine could become intelligent enough to reason like a human, which is a holy grail in the AI field. A number of sources, however, subsequently poured cold water on the idea that this advance represented a quantum leap forward for the industry, or that the development—or Altman’s secrecy about it—might have led to his removal.

Another theory about Altman’s ouster had less to do with technical advancements in artificial intelligence and more to do with (very human) infighting among members of OpenAI’s board. According to a number of reports, including in the New York Times, Helen Toner, a now former member of the OpenAI board who is the director of strategy at Georgetown University’s Center for Security and Emerging Technology, was central to the drama. Toner is seen as a leading figure in the “effective altruism” movement, which has gained momentum in Silicon Valley over the past few years. Some of the movement’s proponents believe that altruism—in the sense of doing the greatest good—requires them to first make as much money as possible, so they can donate it to good causes and use it to fund science that helps humanity, including developments in artificial intelligence.

According to some observers, there was growing tension between Altman—who was trying to raise as much external funding as possible in order to accelerate OpenAI’s research—and Toner, who wanted to take things much more slowly, for fear that this acceleration might result in the development of an advanced artificial intelligence engine that could pose a danger to mankind. After Toner wrote an academic paper looking at the potential risks of AI research, Altman reportedly confronted her about it, arguing that the paper was too critical of OpenAI’s work and too complimentary about Anthropic, a competitor founded by former OpenAI staffers. Altman allegedly tried to oust Toner from the board. Toner and Ilya Sutskever, a co-founder of the company and its chief scientist, then reportedly used this attempted ouster as evidence that Altman didn’t have the best interests of OpenAI at heart.

This tension between Toner’s take-it-slow, safety-oriented approach to AI research and the move-fast-and-break-things attitude associated with Altman is reflected in the corporate makeup of OpenAI itself. Unlike most corporations, which typically have a board of directors that manages the company on behalf of shareholders, OpenAI is a beast with two masters. It has a for-profit arm, which Altman was (and is now once again) in charge of and which aims to raise money to fund the company’s research. (Microsoft owns shares in this part of the company.) But this for-profit arm is ultimately controlled by a nonprofit parent entity, whose responsibility is to manage AI research in the best interests of society. According to the open letter from staffers urging Altman’s reinstatement, the OpenAI board suggested to the leadership team that allowing the company to be destroyed would be “consistent with [its] mission,” since doing so would theoretically prevent anyone associated with it from developing a dangerous AI.

The reported tension between Toner and Altman may smack of personal politics, but it is also a microcosm of a broader tension in the world of AI research as to the field’s goals and the best—or least dangerous—ways to get there. As I wrote recently in this newsletter, there are, broadly, two schools of thought when it comes to the potential dangers of AI research. One focuses on the risk that people will unwittingly give birth to an all-powerful artificial intelligence, with potentially catastrophic results for humanity. (Many believers in effective altruism fall into this camp.) Geoffrey Hinton, seen by many in the field as the godfather of modern AI research, said recently that he left Google specifically so that he could raise the alarm about the dangers of super-intelligent AI. Last month, President Biden issued an executive order in an attempt to set boundaries for the development of AI; this week, eighteen countries, including the US, agreed to a set of common guidelines for developing AI systems securely.


The opposing school of thought—led by Yann LeCun, another top AI researcher, who works for Meta—argues that the prophets of AI doom are overstating the dangers of such research. Proponents of this position are often called “accelerationists,” because they believe that artificial intelligence will primarily be a force for good and that society should therefore facilitate its rapid development. In some cases, critics of the slow-and-steady approach go even further, arguing that this doom-saying plays into the hands of giant companies such as Microsoft and Google by implying that AI research is so dangerous that only a small number of large corporations should be allowed to do it. This, critics say, could lead to regulatory capture, with governments cementing the monopolies of a few companies by implementing rules designed by the industry itself.

Compounding this problem is the fact that AI research requires extremely large amounts of computing power, which in turn requires extremely large amounts of money. This is said to be the main driving force behind Altman’s attempts to bring in larger sources of funding, both through partnerships and investment from companies such as Microsoft, and through a planned sale of employee shares that at one point was expected to value the company at nearly ninety billion dollars. In a sense, this need for funding also dictated the awkward corporate structure of OpenAI. The company was originally founded as a nonprofit with the goal of opening up AI research. But once it became obvious how much money would be needed to develop a large computing infrastructure, Altman and others decided that it was necessary to have a for-profit arm that could strike large-scale funding deals.

The release last year of OpenAI’s flagship chatbot, ChatGPT—which quickly became one of the most popular software launches of all time—seems to have crystallized divisions within the company. As Karen Hao and Charlie Warzel wrote in The Atlantic last week, the launch “sent OpenAI in polar-opposite directions, widening and worsening the already present ideological rifts.” (In a 2019 email to staff, Altman referred to “tribes” at the company.) A source told Hao and Warzel that, once it became obvious how quickly ChatGPT was growing, OpenAI “could no longer make a case for being an idealistic research lab,” because “there were customers looking to be served here and now.” Within a matter of months, OpenAI had two million users, including many Fortune 500 companies, and had signed an investment deal with Microsoft for thirteen billion dollars in funding. Just before he was ousted, Altman was said to be trying to raise funds to start a computer chip company that could supply OpenAI’s insatiable demand for computing power.

So what will happen to OpenAI now that Altman has returned? While it might appear that Altman and his supporters got exactly what they wanted, his return is not an all-out victory. For one thing, Altman and Brockman no longer have seats on the board, reducing the amount of power they can wield over the company’s future. As part of the agreement to reinstate Altman as CEO, the new board of OpenAI—which includes Adam D’Angelo, an early Facebook staffer and co-founder of Quora; Lawrence Summers, a former US treasury secretary; and Bret Taylor, a former co-CEO of Salesforce—also agreed to conduct an investigation into the details of and reasons behind his ouster. And it’s unclear whether customers of OpenAI will be as sanguine about relying on its technology after all of the upheaval and revelations of the past two weeks.

Whatever happens, one thing seems obvious about OpenAI’s rapid unscheduled disassembly (and equally rapid unscheduled reassembly): those in favor of accelerating AI research appear to have won the day. Toner and Sutskever, two of the strongest advocates for taking a slow and cautious approach, are gone from the board. As Benedict Evans, a technology analyst, wrote recently in his newsletter and in the Financial Times, all of the drama around Altman’s departure “came from trying to slow down AI, and instead it will accelerate it.” In the end, Evans wrote, when it comes to determining the future of AI research, a “half-baked coup by three people might be more consequential than all the government AI forums of the last six months combined.”


Other notable stories:

  • Today marks the start of COP28, an annual United Nations climate summit, in the United Arab Emirates. Such conferences are often rich in media stories (we reported live from COP26 in Glasgow in 2021), and this year’s is no exception: ahead of time, journalists and civil-society groups expressed concerns, pointing to the UAE’s spotty record on press freedom and human rights; yesterday, Sultan Ahmed al-Jaber—the official overseeing the summit, who also leads the state oil company—lashed out at media reports that the UAE intended to use its role as host to strike oil and gas deals. Writing for Heated, Emily Atkin argues that while COP28 “sucks,” we should pay attention to it anyway—because fossil-fuel interests would love us to look away, and because the “populations most vulnerable to climate change do not have the privilege of tuning out.”
  • Earlier this month, G/O Media, a private-equity-backed company, shuttered Jezebel, the pioneering feminist news site, and laid off its entire staff, citing “economic headwinds.” Amid an outpouring of anger at G/O’s management of the site and eulogies for its impact on online culture, it emerged that reports of its death may have been exaggerated, with a clutch of potential buyers interested in acquiring and resurrecting it. This week, Paste Magazine, which covers music and culture, did just that—and also acquired Splinter, a politics site that G/O shuttered in 2019. Josh Jackson, Paste’s editor, told the New York Times that “the idea of there not being a Jezebel right now just didn’t seem to make sense,” and that he plans to revive “all of the best things from all of the eras” of the site.
  • In recent months, we’ve written in this newsletter about Meta’s decision to block news from its platforms in Canada, after the country’s government passed a law mandating that big tech companies compensate publishers for their content. Google threatened to follow suit—but this week, the company reached a compromise with officials that will keep news content on Google’s services in Canada. The company has agreed to pay publishers around a hundred million Canadian dollars per year, on the condition that it be able to negotiate with a single representative of the media industry, rather than individual publishers. (Meta told the CBC that its position on the legislation remains unchanged.)
  • In the UK, the BBC announced a restructuring of its news programming that, among other things, will lead to major cuts at Newsnight, a flagship current-affairs show. The broadcaster cast the changes as part of a pivot to a more digital future. In other British-media news, the parent company of The Guardian announced a partnership with Sony Pictures Entertainment, giving the latter exclusive rights to develop Guardian journalism into movies, TV shows, and other forms of creative production. And the singer Peter Andre will host a show on the upstart right-wing network GB News.
  • And Henry Kissinger, the former US secretary of state and national security adviser, has died at the age of a hundred. In an obituary for Rolling Stone—headlined “Henry Kissinger, War Criminal Beloved by America’s Ruling Class, Finally Dies”—Spencer Ackerman indicts the mainstream press for whitewashing Kissinger’s legacy over the years, and predicts that “no infamy” will attach to him in the coverage of his death: a “demonstration of why he was able to kill so many people and get away with it.”

ICYMI: Cinthia Membreño on the global network helping journalists in exile

Mathew Ingram is CJR’s chief digital writer. Previously, he was a senior writer with Fortune magazine. He has written about the intersection between media and technology since the earliest days of the commercial internet. His writing has been published in the Washington Post and the Financial Times as well as by Reuters and Bloomberg.