Tow Center

Get Ready for More Big Tech Lawsuits About Design, Not Content

A new wave of cases pins blame for tech’s harms on product design, keeping the First Amendment and Section 230 out of the conversation.

March 19, 2026
Adobe Stock / Illustration by Katie Kosma


For the past month, you may have seen stories about Mark Zuckerberg testifying before a jury—or about judges admonishing people for wearing Meta’s “pervert glasses” in court. Meta and YouTube are on trial at the Spring Street Courthouse in Los Angeles, where a jury will soon decide whether they should be held accountable for getting children addicted to social media and harming their mental health. But much of the coverage of the trial has missed a significant milestone: for the first time in a case scrutinizing social media’s harms, the First Amendment isn’t part of the discussion. Instead, the case is the first to go to trial as part of a nationwide wave of litigation that seeks to hold technology companies accountable for harm by using theories of tort law (which, as any first-year law student knows, is normally the province of ambulance chasers and too-hot McDonald’s coffee). 

It’s a remarkable strategy, and it sidesteps the pitfalls that have kept platform accountability tangled up in questions about freedom of speech. Up to now, the First Amendment has been liberally interpreted within the United States to give broad protections for acts of speech by people, and even by companies. A piece of legislation called Section 230, originally part of the Communications Decency Act enacted in 1996, provides broad immunity to the companies behind social media platforms, meaning that technology companies are not liable for the things that users may post or say, even when that content incites genocide or insurrection.

But the complaint in this trial draws analogies to cigarettes and casinos to allege that the teenagers represented by the suit “are the direct victims of the intentional product design choices made by each Defendant.” A twenty-year-old plaintiff identified as “KGM” took the stand, testifying that she started using social media around age six, and it “made me give up a lot—my hobbies and old interests. It prevented me from making friends, because I was on my phone at school. It caused me to compare myself to other people, and that made me feel very depressed.”

The case was chosen as a test to represent the approximately sixteen hundred plaintiffs, spanning three hundred and fifty families and two hundred and fifty school districts, with similar claims. While we await the verdict in LA, similar cases built on tort-law theories are waiting in the pipeline. Attorneys general in more than forty states have sued Meta, claiming the company deliberately designed features to addict children. This summer, an Oakland court will be asked to decide if platforms like Meta, TikTok, Snap, and YouTube are defective products designed to encourage addictive behavior in adolescents.

Next these legal theories will take on generative-AI chatbots, a realm where speech questions still linger. In a wrongful-death lawsuit, parents allege a company called Character.AI created a chatbot product with a defective design that led to the suicide of Sewell Setzer III, a fourteen-year-old boy in Florida. In legal filings, a lawyer for Character.AI revived a familiar refrain: that the content produced by the chatbot was protected speech. Set to go to trial in November, the case would have been one of the first to test a theory of a “nonhuman speaker” in a torts context. However, the parties settled in January—without Character.AI admitting liability or a judge ruling on the First Amendment considerations. 

But these questions will likely see their day in court, as the Character.AI suit was not alone. Last year, OpenAI was hit with four wrongful-death lawsuits alleging that ChatGPT is “defective and inherently dangerous” after family members of the plaintiffs died by suicide. OpenAI is facing another lawsuit that claims ChatGPT is “a defective product that validated a user’s paranoid delusions about his own mother,” leading to a murder-suicide. 

These cases will take time to unfold, through trials and inevitable appeals. In the end, companies found liable could be required to redesign their products in ways that might stop our infinite scrolling, or to add warning labels. (There’s potential for hefty damages awarded to plaintiffs, too. After all, as lawyers for KGM and the plaintiffs argued in the Los Angeles trial: “What is a lost childhood worth?”) Regardless of this first verdict, creative lawyers have tried their first case on the road toward payouts for those claiming social media harmed their mental health—without involving laws that also protect freedom of speech.


About the Tow Center

The Tow Center for Digital Journalism at Columbia's Graduate School of Journalism, a partner of CJR, is a research center exploring the ways in which technology is changing journalism, its practice, and its consumption, as we seek new ways to judge the reliability, standards, and credibility of information online.
