Bots are dominating internet traffic, and it’s possible they’ll soon be standing in for news audiences as well. AI platforms are releasing agentic tools like ChatGPT Pulse and Huxe, which generate personalized news briefings based on information the platforms have stored about us—our calendars, emails, interests, and preferences. And according to the Reuters Institute’s Trends and Predictions 2026 report, more than 75 percent of news executives expect this new breed of agentic apps to have a “large” or “very large” impact on news publishers: in a future where AI users primarily consume news via agents, a publisher’s main “readership” might come from bots crawling for content, not humans. But when readers get their news using AI tools, publishers have little control over how their coverage is summarized—and no visibility into who is reading it or how they’re engaging—and are therefore unable to act on those insights. That gap between readers and publishers is only getting wider.
But publishers could get back some control. At the International Journalism Festival in Perugia, Italy, last month, Lucky Gunasekara, a media technologist and the founder of Miso.ai, and Florent Daudens, a cofounder of Mizal AI, said that a set of emerging protocols could help build a new monetization stack. Through open-protocol tools such as the Model Context Protocol (MCP) and Skill.md, developed by Anthropic, and open collaborative initiatives like the Really Simple Licensing standard, Gunasekara and Daudens envision a future in which publishers have greater control over what coverage gets accessed by agents, under what terms, how it is represented, and how they are compensated.
Publishers could use protocols like MCP to retain some control over how their coverage is formatted and who can access it (even if an LLM ultimately delivers that information). Skill.md allows publishers to provide instructions on how their material should be represented by AI agents, such as preserving editorial tone, clearly attributing quotes, or including citations in summaries. The Really Simple Licensing standard, for its part, enables publishers to define the legal terms governing how their coverage can be accessed by AI systems, including what can be used for training, what can be summarized or cited, and what compensation they are owed.
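As a rough illustration of what publisher-authored instructions of this kind could look like, below is a sketch of a skill file in the SKILL.md format—a YAML header followed by plain-language directions for an agent. The publisher name (“Example Tribune”), the file’s contents, and the specific rules here are hypothetical; the authoritative format is defined in Anthropic’s Agent Skills documentation, and any real deployment would follow the publisher’s own editorial policies.

```markdown
---
name: example-tribune-coverage
description: How AI agents should represent Example Tribune reporting
  when summarizing, quoting, or citing it.
---

# Representing Example Tribune coverage

When summarizing an Example Tribune article:

- Preserve the article's editorial tone; do not add opinion or
  speculation beyond what the text supports.
- Attribute every direct quote to the named speaker, not to the
  publication as a whole.
- End each summary with a citation that names Example Tribune and
  links to the original article URL.
- Do not reproduce more than two consecutive sentences verbatim;
  paraphrase and cite instead.
```

The point of a file like this is that the rules live with the publisher, in a format agents can read, rather than being negotiated one platform at a time.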
Tools like these could also give publishers insight into who their users are and what information interests them, making news organizations better equipped to serve their audiences. Since some people engage with AI more freely than they do with other human beings (Shuwei Fang, a fellow at Harvard’s Shorenstein Center, calls this the “intimacy dividend”), the information that publishers could draw from these queries might be more revealing than what’s been possible via search—and could help publishers identify “data voids,” or topics where demand exists but reliable information is scarce, and produce coverage to fill those gaps. In Perugia, Daudens urged publishers to experiment with these protocols. “If you have a supplier mindset, you sign a one-off deal with OpenAI to distribute your content and you don’t get any signals from the demand,” he said. “If you have an expansion and builder’s mindset, you will explore the protocols.”
There are obstacles, however, to making use of these protocols. To work at scale, open protocols require widespread adoption by both news publishers and AI companies. For publishers, that means in-house or outsourced technical investment in, for instance, making coverage machine-readable. AI companies also have to buy in. There is, in theory, an upside for them—compared with data that is scraped by third parties, structured data provided directly by publishers could be cleaner, more reliable, and less prone to confabulation. But past experience complicates this assumption: even though Wikipedia had a paid API available for years, for example, AI companies scraped its content without compensation, putting a significant strain on its infrastructure. Only after publicly calling out these practices did Wikipedia begin to secure licensing agreements—despite being one of the largest sources of data for language models.
Fundamentally, technical solutions alone cannot resolve what is a systemic problem: many AI companies are continuously accessing news organizations’ coverage without consent, disclosure, credit, or compensation. As Gunasekara put it in Perugia, “We’re not going to get to a fair market if a black market exists, and these people are stealing from you, and they’re making a buck off of reporters’ backs.” Meaningful change will require regulation and enforcement. And, as Courtney Radsch and Karina Montoya of the Center for Journalism & Liberty argued in a recent report analyzing the AI content-licensing market, much will depend on “whether the journalism industry collectively can make a unified case that its contribution to the AI economy deserves recognition not as a favor or as a product of platform goodwill, but as a matter of economic viability, legal obligation, and democratic necessity.”
There are some encouraging signs of coordination among publishers to this end—notably, the Standards for Publisher Usage Rights (SPUR) coalition out of the UK. In March, Emily Bell, the director of the Tow Center, wrote about the promise of SPUR, and how “the collegiality to be found in resisting the worst, uncompensated excesses of AI overrides most existing differences.” (The Belgian company Mediahuis became its first non-UK member this week.) Last month, a newsletter called Charting Gen AI reported on a coalition of German news organizations, which identified an insufficient legal framework allowing AI developers “to exploit” journalism “without investing in research, information gathering or journalistic content themselves.” The coalition proposed an alternative, giving news publishers full control “over the use of their content by AI providers and platforms” and “clear, enforceable rights” forcing AI platforms to “provide appropriate compensation.” In Indonesia, more than five hundred members of the Indonesian Cyber Media Association are similarly pursuing a collective effort to ensure that AI companies pay up.
Public opinion may support intervention: a recent survey of more than twenty-four hundred adult Canadians conducted for News Media Canada found that 71 percent agreed that their country’s government should take action to prevent AI companies from taking and repackaging news coverage without permission or compensation.
Compensation is just the first step. As the digital ecosystem rapidly evolves, publishers need to understand changing audience behavior and needs. Existing deals from AI companies may pay news organizations, but they don’t guarantee accurate representation or consistent attribution, nor do they always provide granular insight about how audiences engage. Securing greater control over representation, attribution, and audience insight can help publishers better serve their audiences while preserving their editorial priorities.