The Media Today

A New Report Takes On the Future of News and Search

The Tow Center for Digital Journalism interviewed news and tech industry representatives about AI’s impact on platforms and publishers. They expressed some hope and a lot of trepidation.

May 15, 2025
Image: Fish Reversed, created by Better Images of AI


All this week, CJR is running a series of pieces, on our website and in this newsletter, about how AI is transforming the news media ecosystem. Today we debut a campaign in collaboration with TBWA\Chiat\Day: the PSAi, which we hope you’ll check out here.

Last week, Eddy Cue, Apple’s senior vice president of services, triggered publishers’ anxieties. As part of an antitrust trial involving a different tech giant, Google, he testified that, over the past two months, search usage on Safari, Apple’s Web browser, had declined for the first time in twenty-two years. The likely cause, according to Cue: the rising adoption of AI-powered search tools, which provide synthesized answers to users’ queries, reducing the need to click through multiple websites.

Cue’s remark played into a broader fear that has loomed over journalism in recent years and affects the online information ecosystem as a whole: the specter of Google Zero, a term coined by The Verge’s Nilay Patel for “that moment when Google Search simply stops sending traffic outside its search engine to third-party websites.” Google’s longtime dominance in search is being destabilized by a perfect storm: the explosion in AI-powered search products and mounting regulatory challenges in the US and Europe. Earlier this month, Tom Rubin, chief of intellectual property and content at OpenAI, told journalists at WAN-IFRA’s World News Media Congress that “users increasingly like answers to be delivered quickly, in a conversational and context-aware manner, as opposed to the traditional land of an inefficient list of ten blue links.”

For publishers who rely heavily on search for discovery and traffic, this shift is seismic. How AI’s impact on the search landscape will affect the distribution, presentation, and consumption of news online is a central theme of a new Tow Center report, out today, that I worked on with Dr. Peter Brown. Between May and October of last year, we interviewed several dozen representatives from the news and technology industries about AI’s impact on the relationship between platforms and publishers. They expressed some hope and a lot of trepidation. “I have concern that, as we did in the past, we might be taking some short-term steps without paying close enough attention to their long-term ramifications,” one news executive told us.

As Jon Passantino wrote for Status last week, “we may be heading for a post-search web—one where content is not surfaced by keywords and ranking formulas, but synthesized and summarized by machines. In that world, publishers risk becoming invisible and their revenue models disrupted.” Indeed, many news publishers have been experiencing sharp declines in referral traffic from traditional search engines, particularly Google, which has been expanding its AI Overviews feature and experimenting with AI-only search results. Meanwhile, data from Comscore and Similarweb indicate that generative AI platforms like ChatGPT and Perplexity are contributing a negligible share of visits to news sites. A February report by TollBit, a marketplace for publishers and AI firms, found that AI search bots drive, on average, 95.7 percent less click-through traffic than traditional Google searches. This drop may stem from users’ growing preference for “zero click” experiences; a Bain & Company survey published the same month found that 80 percent of consumers rely on AI-generated summaries or search-page previews, without clicking through, at least 40 percent of the time. As Axios reported in April, the decline in traditional search referrals is “unlikely to be offset by new AI search platforms in the foreseeable future, if ever.”

In this new era, it appears that the best publishers can hope for is accurate attribution and compensation from AI companies that use their data to train and ground their models—and that the compensation will make up for lost revenue from the decline of traditional search. While formal partnerships between news and AI companies purportedly help to ensure greater accuracy, only a handful of publishers have formalized any sort of compensation arrangement with AI companies; OpenAI and Perplexity have been among the only firms to sign such deals, and even those remain few and far between. (Google and Meta have so far signed only a single licensing deal apiece with publishers.) It is not clear whether the scarcity of deals is more a reflection of AI companies’ lack of interest in them or of publishers’ unwillingness to sign them. (It should be noted that some middleware companies, like ProRata and TollBit, claim to be developing technological solutions to protect and monetize publisher content against AI scraping. The Financial Times reported this week that these kinds of content-licensing and data-marketplace startups have secured two hundred and fifteen million dollars in funding since 2022.)

One of the people we interviewed for our report last summer expressed concern about the implications of smaller outlets being overlooked by AI companies that do not recognize what they can bring to the communities they serve: “If you’re a newspaper in Paducah, Kentucky, for example, and you’re the only one in a four-county area, your content is really valuable because if somebody queries a question about that part of the world, [you’re] the only game in town. [You’re] the ones that the content that gets served up comes from. And [you’re] unlikely to see any money from that because [you] don’t have Sam Altman’s email address.” (Altman is the CEO of OpenAI.)


For publications unable or unwilling to enter a licensing agreement, opting out does not appear to be a real choice, either. As my colleague Aisvarya Chandrasekar and I found in research that we published in March, many chatbots appear to be accessing content from sites that have blocked their crawlers. Testimony during the Google antitrust hearings also revealed that the search giant can use publisher content for AI Overviews even if publishers have opted out of having their content used to train Google’s AI products. This effectively means that publishers must choose between allowing their content to be used in Overviews and disappearing from Google Search entirely.
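(For readers curious about the mechanics of that blocking: publishers typically declare crawler permissions in a robots.txt file, which sets rules for named bot tokens such as OpenAI’s GPTBot or Perplexity’s PerplexityBot. The short sketch below is illustrative rather than drawn from our report; it uses Python’s built-in robotparser to check what a given site declares, with a placeholder domain. And, as our research suggests, a disallow rule only records a publisher’s preference; it does not guarantee that a crawler will honor it.)

```python
# Illustrative sketch only: check what a site's robots.txt declares for a few
# AI-related user-agent tokens. The tokens below have been published by the
# respective companies (Google-Extended is a control token rather than a
# crawler); the domain is a placeholder, not a site we tested.
from urllib import robotparser

AI_BOT_TOKENS = ["GPTBot", "PerplexityBot", "Google-Extended", "CCBot"]

def check_ai_bot_access(site: str, path: str = "/") -> dict[str, bool]:
    """Return whether each listed bot token is allowed to fetch `path` on
    `site`, according to the site's robots.txt (which bots may or may not
    actually respect)."""
    parser = robotparser.RobotFileParser()
    parser.set_url(f"{site.rstrip('/')}/robots.txt")
    parser.read()  # fetch and parse the live robots.txt
    url = f"{site.rstrip('/')}{path}"
    return {agent: parser.can_fetch(agent, url) for agent in AI_BOT_TOKENS}

if __name__ == "__main__":
    print(check_ai_bot_access("https://example.com"))  # placeholder domain
```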

As one of our interview participants pointed out, rethinking how we present news is not necessarily a bad thing: “I actually think it’s fine if we change what we produce based on the way that the information landscape changes due to AI.… I don’t think it’s that we shouldn’t change; it’s that we shouldn’t change because [platforms] ask us to change. We should change because our users demand something different from us.” Yet the aggressive push toward chatbot-style search has clear limitations for journalism. As Laura Preston wrote for CJR earlier this week, “Many LLMs are trained on content produced by journalistic institutions, but do not adhere to journalistic standards when rehashing the material.” 

Furthermore, generative AI tools present an unprecedented abstraction of news content, effectively decoupling it from the people who make it. Journalism is far more than the act of gathering data. Yet the companies driving the development of chatbots and generative search tools show little interest in recognizing, valuing, or providing adequate transparency about how journalism is made. Some of our interviewees lamented how diminishing opportunities to reach audiences through platforms, combined with a rise in generative news summarization, posed a serious threat to news discovery: “If you believe that social is zero and search is zero, then what are you left with? Genuinely, where do people find out about news? How are you even aware as a news consumer of any brand, any [journalist]? Is it just all TikTok creators?”

All the same, some of our interviewees remained cautiously optimistic: “I think publishers shouldn’t lose hope that they don’t have the tools and the ability to stop all of their content being taken and used without consent,” a policy executive at a global outlet said. “They should have confidence that if they produce great journalism, it still has incredible value in the context of these new technologies. It’s just a question of: How do we establish the right frameworks to recognize that value and ensure that those revenues should be flowing back into creating more great journalism?”

Other notable stories:

  • Also out this morning in CJR’s series on AI, Roberto Ferdman profiles Civic Sunlight, an AI-driven service founded in Maine that “uses large language models to comb through hours of city council meetings and then output the most important or relevant bits.” (Civic Sunlight has expanded to nearby states, and similar services have sprung up in New York and California.) The service’s founders soon learned that they had a problem—their model “had a habit of coming up with facts and events that weren’t quite right, or hadn’t happened at all,” even if the information it provided was mostly accurate—and so they “came up with a decidedly non-technological solution: human intervention,” partnering with a local news organization. “Perhaps surprisingly,” Ferdman writes, “Maine’s existing local news community seems receptive to the idea” behind Civic Sunlight.
  • In an op-ed for the Times, the longtime Supreme Court reporter Linda Greenhouse assessed whether journalists should identify judges according to which president appointed them. In the early 2000s, Greenhouse was against the practice, arguing that it wrongly implied that “a given judge was doing politics rather than law,” but as judges increasingly sorted themselves into ideological camps, it perhaps became more defensible. Since Trump returned to office, such identifications have become “essential,” Greenhouse writes—they demonstrate that “the rule of law is not a partisan project,” since both Democratic- and Republican-appointed judges have stood in Trump’s way.
  • Erik Wemple, the media critic at the Washington Post, watched MSNBC for eighteen hours to test his thesis that, unlike CNN and even Fox, the network is rarely platforming vigorous debates these days—and observed only one that met his criteria in that time. “In pre-Trump times, MSNBC’s current programming model would have been outright journalistic fraud,” Wemple writes—but nowadays, segments featuring Trump boosters, as evidenced on CNN, “are loud, chaotic and poisoned by frequent distortions.” Wemple nonetheless leans “toward the CNN model, but not enough to strain my calves.”

Check out more coverage from our AI issue and our campaign in collaboration with TBWA\Chiat\Day here.

Has America ever needed a media defender more than now? Help us by joining CJR today.

Klaudia Jaźwińska is a journalist and researcher for the Tow Center who studies the relationship between the journalism and technology industries. Her previous affiliations include Princeton University’s Center for Information Technology Policy, the Berkman Klein Center’s Institute for Rebooting Social Media, and the Our Data Bodies project. Klaudia is a Marshall Scholar, a FASPE Journalism Fellow, and a first-generation alumna of the London School of Economics, Cardiff University, and Lehigh University.