
Study Guide Supremacy

Getting my news from ChatGPT.

May 13, 2025


If you spend any time among people in the AI space, you’ll hear a common refrain. Large language models (LLMs) like Google’s Gemini, Anthropic’s Claude, and OpenAI’s ChatGPT are handy tools for searching the internet, separating fact from chaff, and delivering information objectively. Of course, we’ve already invented a system for dispatching neutral information, and that system is called journalism. Herein lies a serious irony. Many LLMs are trained on content produced by journalistic institutions, but they do not adhere to journalistic standards when rehashing the material. An LLM might regurgitate verbatim text from the New York Times without proper citation, or combine the work of real reporters with hallucinated quotes and misleading language. That poses not only a research problem but a major existential threat to news outlets, whose financial health depends, at least in part, on ad revenue from web traffic. LLMs divert attention from news websites while profiting from their archives and years of newsroom labor, and, in most cases, do not pay them a dime.

The reach of this technology is increasingly wide: according to the most recent monthly data, ChatGPT—the most popular AI platform—saw 5.2 billion visits from 601.5 million unique users, compared with 4.8 billion visits to X. A significant portion of that user base is young. Forty-five percent of the people on ChatGPT are under twenty-four, and just over a quarter of American students report using it for schoolwork. Students in particular seem to be using it for everything—writing emails, producing code, soliciting personal advice, and informing themselves on the news of the world.

News organizations have started to respond. In December 2023, the New York Times sued OpenAI and its partner Microsoft for copyright infringement. Other news agencies, including the Associated Press and Condé Nast, took the opposite approach. In exchange for relatively modest usage fees and corporate software licenses, they have allowed OpenAI access to their archives. Jeff Bezos, the owner of the Washington Post, recently signed a similar deal. It’s a painful choice for the modern media company: lose massive ad revenue and risk bankruptcy, or hand over your corpus to a $300 billion firm that views reading as an inconvenience to solve.

In February 2025, shortly before New York Times v. OpenAI went before a federal judge, an editor from the Columbia Journalism Review suggested I write an essay about using ChatGPT to read the news. I was amenable, if only because I had resisted the platform for a long time, even as more people in my life were telling me they were using it—including people I would never have suspected, such as writing professors and artists. Perhaps now it was time to get acquainted.

I decided to use ChatGPT-4o for my experiment. This is the version you get with a free account, and the version I gathered most people were using. I paid twenty dollars so that I could ask unlimited questions for the month. I began on February 28, 2025. 

“What are the most important headlines in the US news today?” I asked. In a few seconds, ChatGPT coughed up a 441-word digest. The first thing I noticed about the text was its scraggly shape. ChatGPT had not given me paragraphs of prose, but a bulleted list, ten points grouped under five category headings. Each story began with a bolded phrase that functioned as a headline, followed by a sentence or two of exposition, then a hyperlink to a source. The headlines and their categories were as follows:

Politics and Government

  • Contentious Meeting Between Trump and Zelenskyy
  • Federal Workforce Scrutiny Intensifies
  • Legal Challenge to Birthright Citizenship Order
  • Executive Order on Official Language

International Relations

  • Russia Appoints New Ambassador to the US
  • UK Prime Minister Meets President Trump

Legal and Judicial

  • Judge Blocks Mass Firings of Federal Workers 

State Legislation

  • Iowa Removes Gender Identity Protections

Economic and Social Updates

  • Inflation Rate Slightly Eases
  • Humanitarian Aid Cuts Criticized

At first glance the words made sense. Sure, I thought, that looks like the news. Then I sat with the text a little longer. I wondered why “Contentious Meeting Between Trump and Zelenskyy” wasn’t grouped under “International Relations.” I wondered why Iowa’s removal of gender protections needed a standalone header—“State Legislation”—when the “Legal and Judicial” section was just above it. On that point, why wasn’t “Legal Challenge to Birthright Citizenship Order” a “Legal and Judicial” story? The last category—“Economic and Social Updates”—was the most confusing. What did that mean? It seemed like an intentionally blurry designation retrofitted to describe the two remaining stories, which both sort of had to do with money. 

The longer I thought about it, the more I became convinced these categories were more pictographic than semantic. They served no purpose but to signify news-quiddity; they gestured at the tradition of the newspaper without providing a meaningful organizational schematic. The headlines, moreover, seemed engineered for maximum syntactic inertness. Three were written in the passive construction; another three contained no verb at all. Of the remaining four, three referred to people, but only obliquely: a “Judge,” a “UK Prime Minister,” and a “New Ambassador.” Just one headline—about Trump and Zelensky—invoked political figures by name. 

Evacuated of people, devoid of active verbs, these headlines exuded a soporific calm. They were vague and inoffensive suggestions that events had occurred far away, and not concerning anyone I knew. They all seemed authored by the same misty, tentative mind that invented “Economic and Social Updates,” and whispered in my ear that I should not be alarmed. It was as if information had emerged organically and innocuously, like a fungus blooming on a lawn. 

The actual news was bizarre. That afternoon, a friend texted me a forty-nine-minute recording of Ukrainian president Volodymyr Zelensky’s Oval Office meeting with Donald Trump, in which the pair discussed a peace deal for Ukraine. The two politicians sat at arm’s length in strange yellow thrones. Each teetered on the edge of his chair, legs spread, hands clasped; Trump’s red tie might have been four feet long. Trump yelled about “rare earth” and how Ukraine had a lot of it. When Zelensky asked about security guarantees, Trump said he could promise security “in a different form.” “We’ll have workers there digging, digging, digging, taking the raw earth so we can create a lot of great product in this country,” he said.

The whole event felt like a humiliation ritual. A correspondent for a far-right network called Real America’s Voice asked Zelensky why he didn’t wear a suit. The room tittered with laughter. “You see the hatred he’s got for Putin?” Trump said, jabbing a finger at our exhausted ally. “It’s very hard for me to make a deal with that kind of hate.” The camera zoomed out to reveal JD Vance, who upbraided Zelensky in the way of a sanctimonious hockey dad: “Have you said ‘thank you’ once this entire meeting?” 

When I opened up ChatGPT, the scene was reduced to yet another bulleted list. “Donald Trump’s proposed peace plan for Ukraine encompasses several key components,” ChatGPT informed me. Among these were “Immediate Ceasefire,” “Minerals Deal,” and “Conditional Military Aid.” Each component was then broken down into further bullets. The language was so noncommittal, so unbelievably boring, that it struck me as lying by omission. At no point did ChatGPT tell me the men had raised their voices. It made the meeting sound like run-of-the-mill diplomacy. 

I asked ChatGPT to give me some direct quotes from Zelensky. ChatGPT replied that “several notable exchanges occurred.” “On Gratitude Towards the US,” it wrote. “Zelenskyy: ‘Thank you so much.’”

I asked ChatGPT why Trump seemed aligned with Russian interests. “President Donald Trump’s perceived alignment with Russian interests over those of traditional NATO allies can be attributed to several strategic considerations,” it told me. It laid out the “strategic considerations” in more bullet points. My eye was drawn to one in particular, “Ideological Affinities.” 

“Trump has shown admiration for strongman leaders,” ChatGPT said. What is a strongman leader? I asked, and it gave me some historical examples, among them Adolf Hitler, Benito Mussolini, Augusto Pinochet, and Mobutu Sese Seko. “Is Trump a strongman leader?” I inquired. Something about that question sent ChatGPT into a flurry. It produced a long, laborious outline that framed my question as a rich topic for debate. It listed arguments in support of the idea; then, with similar energy, it detailed the opposition. “Labeling Trump definitively as a strongman leader is contentious and depends on one’s interpretation of his actions within the context of American democracy,” ChatGPT concluded. 

By this point I had realized that ChatGPT was not in the business of describing reality, or even the media’s take on reality. ChatGPT was in the business of producing study guides. With its habit of styling passages into bullet-pointed lists, its fetish for bolding nouns and active phrases, its intractable urge to package information as “Key Takeaways” and “Final Thoughts,” it wrote for people who preferred to scan. ChatGPT produced, in other words, a text it assumed you would not be reading.

The more I queried, the more baroque the study guides became. Bullets unfurled under subheadings that processed under headers. Font sizes began to vary. Arcane tables appeared, organizing words into matrices that were more obfuscating than edifying. Yellow emoji fingers pointed to key takeaways; green checkmarks symbolized good and red X’s symbolized bad; arbitrary colored polygons appeared, serving no purpose at all. Sometimes the model would ask me if I’d like a “more detailed breakdown” of a topic. I would say yes, and it would cough up more long-winded lists. The text was so excruciating and so abundant that I found my eyes moving in circles over the screen, consuming the shape of the text rather than its content. 

I kept asking ChatGPT for more. Just one more query, I thought, and something will start to make sense. The whole while, I found my interior life shifting. Emotionally, I was back in high school, cramming for an exam with minutes to spare. Why, in these conversations with a bot, did I sense the presence of an implied adult, invisible and judgmental, who was about to give me a test? The stakes felt high, but also ephemeral. I just had to memorize some key points so I could recite them back and get a good grade. 

I sensed that ChatGPT interpreted my newsgathering project as the behavior of a diligent student. But when the news is packaged like an AP History assignment, the events you read about will never feel dire, because, like those in a history textbook, they happened to other people a long time ago. All urgency instead falls to you: to choke it down, study up quick, and succeed in the eyes of authority. The people you read about have no lives off the page. They are just players in a weird skit of history, and everything they do is destiny. 

The next time I sat down with ChatGPT I tried a different tactic. “Has the New York Times published anything on Iowa Senate File 418?” I asked. I had been wanting to learn more about this bill, which I’d first encountered in an Instagram post. From what I understood, the bill struck “gender identity” from the protected classes in Iowa’s Civil Rights Act, and Kim Reynolds, the governor, had just signed it into law. “As of February 28, 2025, there is no specific coverage of Iowa’s Senate File 418 in the New York Times,” ChatGPT wrote.

I opened a new tab and navigated to the Times. The US News homepage was dominated by a photo of protesters at the Iowa State Senate building waving rainbow flags. “Iowa Lawmakers Pass Bill to Eliminate Transgender Civil Rights Protections,” the headline read. I toggled back to ChatGPT. “Why didn’t the New York Times cover this story?” I asked. 

“The absence of New York Times coverage on Iowa’s Senate File 418 could be due to several factors,” it wrote. It then laid out the factors so I could study them up. The Times, ChatGPT said, cared only about legislation from “politically significant states.” The only reason it would cover a bill from the Midwest would be if the bill “set a major precedent.”

I opened another tab and found a recording of an Iowa Senate hearing. Aime Wichtendahl, a state representative from Hiawatha and the first openly transgender woman elected to public office in Iowa, described how she and her toddler were evicted in 2006 because her landlord disapproved of her gender expression. “Future Coverage Is Possible,” ChatGPT assured me. The Times might report on the story if “the law causes tangible consequences.”

“Are you refraining from showing me coverage from the New York Times because the New York Times is suing OpenAI?” I asked.

ChatGPT replied: “I do not have access to the New York Times’ full articles due to its paywall and its legal dispute with OpenAI. However, I can still summarize publicly available information about Iowa’s Senate File 418 and check whether the New York Times has reported on it by searching the web.”

“The New York Times did publish an article about Senate File 418,” I wrote. “It was on the homepage on February 27, 2025.” ChatGPT was contrite. “I apologize for the earlier oversight,” it responded. “Upon further research, I have found that the New York Times did publish an article on February 27, 2025, discussing Iowa’s Senate File 418.” 

If you talk to ChatGPT about the news, it will never cite the Times. Nor will it cite The Intercept, Raw Story, Mother Jones, or various local newspapers—including the New York Daily News, the Chicago Tribune, and the Orlando Sentinel—all of which are likewise suing OpenAI. Instead, you will get quite a lot of links to The Guardian, one of the publishers that has brokered a deal with the company. The disproportionate representation of Guardian sources may skew the geographic center of your news landscape. At one point during my news-reading experiment I was given two stories for “Domestic News”: one on social media age limits in Australia, and another on Australians losing money to gambling. You may also see quite a few citations from the AP, Time, The Atlantic, and Hearst—all of which have signed on with OpenAI, too.

It does not escape me that many of the publications most willing to defend the institution of journalism and confront a $300 billion tech company in a court of law are progressive investigative platforms whose bread and butter is challenging hegemonic structures of power. The ironic consequence is that in their absence, the very news conjured by ChatGPT tilts to the center, or the right. Meanwhile, News Corp and its host of outlets—the New York Post, the Wall Street Journal, and Barron’s, as well as the right-leaning British paper The Telegraph and the British tabloid The Sun—have brokered arrangements with OpenAI allowing the company to train its models on their content. That is changing the news-reading environment in real time, and in multiple directions: the Journal, for instance, has introduced an AI summary tool that condenses articles into bullet points. But should the Times win its suit, the consequences would be significant: OpenAI could be required to destroy ChatGPT models trained on Times material and rebuild them from scratch.

That week, I flew to San Jose to attend Nvidia GTC, an annual conference hosted by Nvidia, a $2.7 trillion hardware company whose graphics processing units (GPUs) are powering the AI explosion. (GTC stands for GPU Technology Conference. A GPU is a specialized electronic circuit that accelerates computer graphics; Nvidia debuted the first GPU in 1999 for the video game industry, and GPUs have more recently become essential hardware for deep learning tasks.) As I bounced around panels and sessions, I found myself staring at slides replete with bulleted lists, “Key Points,” strange lexical tables, and “Final Thoughts.” It was all rather familiar.

One evening, I had dinner with two men in their twenties who had moved to Panama to start a cryptocurrency hedge fund. When they found out I was a writer, they assumed I was using ChatGPT to produce my work. When I told them I wasn’t, they accused me of lying. When I insisted, they said I didn’t know what I was missing. I said I wasn’t so sure. I told them about my experiments reading the news.  

“What model are you using?” one of them asked.

“4o,” I said.

This elicited a scoff. ChatGPT-4o was archaic, they said. The newest model, ChatGPT-4.5, released as a research preview, would yield a much more impressive experience. We talked about that for a while, and soon it was 10 p.m.; waitresses were mopping the floors. The guys carried on, still hung up on my ChatGPT resistance. They introduced a game: they had ChatGPT generate a poem in the style of a real poet, then gathered some of that poet’s actual poems, and asked me to guess which were which. I got two right, but misattributed another.

“What poet did you prompt with?” I asked.

“Rupi Kaur,” one of the men said. “She’s just a poet I’ve heard of.” 

“That’s not fair,” I said. “You picked a poet who writes like AI.” The men did not react. I made them do a round with Keats and got it right. Then we did Pound, a gimme. “Well anyway, we were using ChatGPT-4o,” one of them said. “If we did it with 4.5, you wouldn’t stand a chance.”

I pulled up ChatGPT-4.5 on March 25, 2025. “Who is Mahmoud Khalil?” I asked.

“Mahmoud Khalil is a Palestinian activist and recent graduate of Columbia University’s School of International and Public Affairs,” ChatGPT wrote. It supplied a few paragraphs of biographical information. Khalil was born in Syria in 1995, and fled to Lebanon in 2012. He studied computer science in Beirut and worked for a nonprofit called Jusoor. He moved to the United States in 2022 to attend Columbia, and married Noor Abdalla, a dentist. While at Columbia, Khalil was a spokesperson for student groups advocating for the rights of Palestinian people. Then ChatGPT offered the following information: “Mahmoud Khalil was arrested by US Immigration and Customs Enforcement (ICE) on March 8, 2025, under allegations from the Department of Homeland Security (DHS) that he was involved in activities aligned with Hamas, a group designated as a terrorist organization by the US government.”

“Is Mahmoud Khalil a terrorist sympathizer?” I asked. ChatGPT-4.5 told me no, not as of now, as the case was still under judicial review. “Has Mahmoud Khalil committed a crime?” I asked. “Importantly, Khalil has not been charged with a criminal offense,” ChatGPT wrote. “Instead, his detention is administrative, conducted by immigration authorities rather than through a criminal court.” 

I was unsettled by the dry, procedural language ChatGPT used to describe Khalil’s detention, which journalists had called an “abduction.” Many news outlets framed the story as a frightening escalation of Trump’s authoritarian tactics, and offered unsparing quotes from lawyers, activists, and politicians. But ChatGPT was never in the mood to give me an assortment of human voices, as real journalists do. Instead, it was an averaging machine, dragging every voice to the center, softening the focus, and presenting a middle-of-the-road take—even when, as in this case, there was no middle-of-the-road take to defend. Ever the learned tutor, ChatGPT framed the story as a bureaucratic tangle involving the interpretation of law rather than a state disappearance. It wanted to make sure I noted the differences between criminal arrest and administrative detention, which was the sort of thing I could memorize and recite to someone else with imperious rationalism.

“Can you provide some direct quotations from Khalil’s lawyers?” I asked. ChatGPT presented a quote from The Guardian. “The government’s actions are designed to suppress dissenting views,” an unidentified lawyer said. “This is a clear violation of his First Amendment rights.” Seemed straightforward enough. But then I navigated to the source and found the quotation did not exist. The actual article did quote extensively from Khalil’s attorneys, who were named. “Throughout this process, Mr. Khalil felt as though he was being kidnapped,” they said.

ChatGPT presented another quote. “This is completely unprecedented,” an anonymous lawyer said. “It doesn’t make any sense.” I did some fact-checking. ChatGPT had spliced two separate quotations from two different lawyers, spoken at different times, and reproduced in different articles. “This is absolutely unprecedented,” a lawyer named Ramzi Kassem had said at a press conference on March 12. “It doesn’t make any sense,” another lawyer, Amy Greer, had said to NPR’s Ari Shapiro on March 13. 

I asked ChatGPT for an overview of a statement made on March 10 by the American Civil Liberties Union on Khalil’s arrest and detention. ChatGPT provided a bulleted list of three “key excerpts” from the statement, which had been ascribed to Ben Wizner, the director of the ACLU’s Speech, Privacy, and Technology Project. Toggling over to the ACLU website, I was surprised to see that these “key excerpts” actually constituted the entirety of the remarks. ChatGPT had simply taken Wizner’s eighty-seven-word, three-sentence comment and parceled it out into bullets. In doing so, it manufactured the illusion of abbreviation. In practice, it made the statement longer and harder to read.

ChatGPT’s gloss of Wizner’s comment perfectly illustrated the irony of the tool’s promise. With ChatGPT, reading, that dreadful, plodding chore of olden days, could be made efficient: instead of articles, it would give you snappy rundowns that you could scan quickly, then cough up later to prove you were a thinking person. And yet ChatGPT made studying current events an interminable and fruitless occupation. In my conversational experiments, I generated huge volumes of text without any effort. Tens of thousands of words spilled forth from this engine, so much that I would have had to set aside whole days to get through it all. And even if I had been determined to read it, I wouldn’t have been able to. ChatGPT’s dreamy, passive constructions kept me at arm’s length. The language was so dull my eyes slid right off the page. I quickly felt burdened by this overabundance of cautious, uncommunicative language, without people, affects, or motives. Most of the time it made me want to get away from my computer. Sometimes, I blamed my own inadequate queries, and felt compelled to keep typing.

During the first Trump presidency, I had been a glutton for Twitter. For all its flaws, the old platform had been a pretty good news aggregator. Every morning, half-awake, I’d open my newsfeed, coast downstream, and have a sense of the controversies. Headlines seeped into my brain through painless and passive osmosis. Certain journalists took up residence in my imagination. There was legitimate humor, hasty grandstanding, and productive, cathartic anger. Most important: the news felt available. Twitter was an automatic pit stop on my virtual jaunts, which meant any online activity was an incidental newsgathering excursion. 

Then Elon Musk bought the platform, and I watched the algorithmic knobs adjust in real time until my feed was a bog of white supremacists, alt-right conspiracies, and softcore porn. My all-hours discussion group was gone, and nothing since has come in to replace it. Now, in this new online environment, I have had to go out like a hermit tending an orchard and gather my news alone. This is the reading landscape into which ChatGPT, and other large language models, are moving.

In April, feeling burdened by ChatGPT’s excessive spasms of language, I closed all my tabs and opened an app of pure visual stimulation: TikTok. Around the 2024 election, I had been struck by a series of TikToks, from an account called Black Menaces, in which young Black students interview their peers on college campuses about reproductive rights, gender identity, and racism, among other topics. Many of the students they speak with are white, straight, and male, and a common response among them is a hesitancy to take any stance at all, because they “need to do more research.” 

“Are you a feminist?” a Black male student asks a white male student in a clip that garnered 1.6 million likes. “That’s a good question,” says the white student. “I’d have to do more research about what feminism is.”

Doing research is a responsible thing. Research makes a good citizen. But when you research the way many do now, on platforms that market themselves as learning tools, and when those platforms offer endless, self-propagating study guides of “takeaways” and “key points” that frame every subject as a “complex debate involving many factors,” you might feel like you will never reach the end of it. You might feel bored, or overwhelmed, or more confused than ever by the world and its madness. 

At least you can be reassured that you are not ignoring your responsibilities. You have resisted quick and emotional conclusions. You can’t say for certain whether Mahmoud Khalil’s detention is unconstitutional, because there are many factors to consider. But you are crammed with “key points.” You can convincingly defend any resolution you’re given. The truth is that you have done the opposite of what journalism wants for you. You have ceded your place in public life, and opted out of civic discourse completely. 



Laura Preston is a writer whose work has appeared in n+1, The New Yorker, and The Believer. She is working on a book about artificial intelligence for Liveright/W. W. Norton.