Seven years ago, the Yale law and philosophy professor Scott J. Shapiro launched a defense clinic for documentary filmmakers. Donald Trump had recently come to power, and Shapiro, like many, feared greater assaults on press freedom. Already, powerful corporate and political entities were shutting down films with threats of lawsuits, even before they made it to cinemas. Shapiro quickly assembled law students to handle cases pro bono. The venture was so successful that it continued into the Biden era. It has helped about thirty-five filmmakers, many of them award-winning, navigate potential legal minefields, from trespassing to libel to invasion of privacy.
With Trump returning to power, Shapiro is at it again.
This time he’s supervising law and computer science students to develop AI tools for legal purposes. A top priority at the Yale CyberSecurity Lab he heads is an application that can detect defamatory material that could land someone in court.
This feels urgent to Shapiro: Trump 2.0 has vowed to remove guardrails that currently protect the media from libel lawsuits and surveillance. The past and future president wants to go after reporting he doesn’t like, a threat made all the more probable if his pick for FBI chief, Kash Patel, is anointed as his attack dog.
Especially vulnerable are freelancers who lack the legal resources of larger outlets. That pool is expanding as the media industry moves away from legacy organizations toward independent writers and filmmakers. Few lawyers have the bandwidth to provide free help to the likes of Substack writers and small nonprofit outlets (although the Reporters Committee for Freedom of the Press, the leading provider of legal services to journalists, is stepping up its support for nonprofit newsrooms). Documentarians often work on their own for years, with limited budgets.
Shapiro believes his new technology could make a big difference by speeding up legal vetting. It would take minutes to run a rough-cut film through the program, and maybe an hour to double-check that the information is correct; attorneys could otherwise spend days doing that work themselves.
“You could have a media attorney increase their output to help clients pro bono,” he explained.
Shapiro, an expert on international law who codes as a hobby, dreamt up the AI idea as his documentary clinic accumulated memos on different areas of law. The same sorts of issues kept cropping up. “So the idea was, why don’t we take this high-quality data that has been vetted by multiple lawyers and students and load it into a model that’s trained to think like a media lawyer?”
The result? “The outputs are better than what a lawyer would do.”
Here’s how his creation works.
First, Shapiro fine-tuned a large, open-source language model that assigns probability to words, much like ChatGPT does. To train the program, his team fed it legal prompts and accurate responses. Then Shapiro inputted memos so it could synthesize all the research that the doc clinic had done since its inception, in 2017. An example: a fine-arts photographer takes pictures of a family with children across the street and uses them as part of his exhibition. The family sues him for invasion of privacy. The three-thousand-word response, which is automatically generated and pops up on the screen, says that the photographer can get away with it under local laws.
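For readers curious what that training step looks like in practice, here is a minimal sketch of supervised fine-tuning on vetted prompt-response pairs, using the open-source Hugging Face libraries. It is an illustration, not Shapiro’s actual code: the base model, the file name, and the hyperparameters are all assumptions.

```python
# A minimal sketch of supervised fine-tuning on vetted legal Q&A pairs.
# The base model, file name, and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

MODEL = "meta-llama/Llama-2-7b-hf"  # any open-source causal language model

tokenizer = AutoTokenizer.from_pretrained(MODEL)
tokenizer.pad_token = tokenizer.eos_token  # causal LMs often lack a pad token
model = AutoModelForCausalLM.from_pretrained(MODEL)

# Each record pairs a legal prompt with a lawyer-vetted response, e.g.
# {"prompt": "...", "response": "..."} stored one per line in a JSONL file.
dataset = load_dataset("json", data_files="clinic_memos.jsonl")["train"]

def to_text(example):
    # Fold the prompt and its vetted answer into a single training string.
    return {"text": f"Question: {example['prompt']}\nAnswer: {example['response']}"}

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=2048)

dataset = dataset.map(to_text)
dataset = dataset.map(tokenize, batched=True,
                      remove_columns=["prompt", "response", "text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="media-law-model",
                           num_train_epochs=3,
                           per_device_train_batch_size=1),
    train_dataset=dataset,
    # mlm=False means standard next-token (causal) language modeling.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The key idea is that every training record pairs a legal question with an answer already vetted by the clinic’s lawyers and students, so the model learns the memos’ reasoning rather than generic web text.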
To avoid the “hallucinations” that plague ChatGPT, in which the model simply makes things up, Shapiro’s tool has a second application embedded in it that verifies information. This checker ensures that the cases cited exist, are quoted correctly, and are about the right thing. If it glows green, everything’s kosher. If it turns yellow, part of the answer is wrong. Red alerts flesh-and-blood lawyers to go back to the database of cases drawn from the memos and correct the mistake.
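The article doesn’t detail how the checker works internally, but a traffic-light citation check can be sketched in a few lines. Everything below is hypothetical: the case database, the matching threshold, and the function names are assumptions for illustration.

```python
# A hypothetical sketch of the green/yellow/red citation check described above.
# The case database, threshold, and function names are assumptions, not Yale's code.
import difflib

def check_citation(citation: str, quote: str, case_db: dict) -> str:
    """Classify one cited case as 'green', 'yellow', or 'red'."""
    case_text = case_db.get(citation)
    if case_text is None:
        return "red"     # the cited case does not exist in the vetted database
    if quote in case_text:
        return "green"   # the case exists and is quoted verbatim
    # Fuzzy check: how much of the quotation actually appears in the opinion?
    match = difflib.SequenceMatcher(None, quote, case_text).find_longest_match(
        0, len(quote), 0, len(case_text))
    if match.size >= 0.8 * len(quote):
        return "yellow"  # close, but part of the answer is wrong
    return "red"         # wrong quote or wrong case: a human lawyer steps in

def vet_answer(citations, case_db):
    # One red citation flags the whole answer for human review.
    verdicts = [check_citation(c, q, case_db) for c, q in citations]
    for color in ("red", "yellow"):
        if color in verdicts:
            return color
    return "green"
```

A real verifier would query an authoritative case-law database rather than an in-memory dictionary, but the green/yellow/red logic would look much the same.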
“We want the human to have as little friction as possible but always have the ability to check,” explains Shapiro. For instance, one time the model mixed up the name “Finkelstein” with “Stern,” in what Shapiro dryly calls “a Jewish microaggression.” The error was caught and resolved.
Shapiro expects the anti-defamation application to be ready this spring, for use in the doc clinic and eventually perhaps beyond. It’s meant for involved works like movies or long-form narratives that demand a lot of attention.
Many news executives worry the Trump administration will go after leakers of confidential information. The Yale tools can’t help with that. But the defamation detector could help if Trump acts on his threats to loosen libel laws. Or if Supreme Court justices Clarence Thomas and Neil M. Gorsuch succeed in their push to have the Court reconsider a bedrock of press freedom: the 1964 New York Times v. Sullivan decision, which secured strong First Amendment protections for reporting on public figures.
Just as the doc clinic outlasted Trump’s first reign, so Shapiro believes his anti-defamation effort will last for years to come. “We want to train the next generation of media attorneys to use advanced technology to handle a greater workload—and help more people than they otherwise would be able to.”
News of the invention is generating buzz in the media safety world. “This is exactly what we need in this space,” says Harlo Holmes, director of digital security at the Freedom of the Press Foundation.
“So much of AI as it’s being pitched to media organizations has to do with embedding AI to spark engagement and otherwise monetize journalism as a product. This is an exciting innovation because it uses AI to protect media makers as they do their jobs.”