Remember Digg? It started in 2004, as an experiment to crowdsource the Web—to “digg” something was to upvote it—and became known, to some forty million unique monthly visitors, as the “homepage of the internet.” It was a social news site, a community-driven answer to the question of how to navigate a sudden explosion of content—similar to Reddit, which appeared about a year later, calling itself the “front page of the internet.” In time, Digg was bought, sold, and widely forgotten. The competition made rivals of Kevin Rose, a founder of Digg, and Alexis Ohanian, a founder of Reddit—until recently, when they announced they were teaming up to stage a Digg comeback. (“I really disliked you for a long time,” Ohanian tells Rose in a launch video. “Rightfully so,” Rose replies.) But this time around, instead of human-powered moderation, they would use artificial intelligence.
“Just recently we’ve hit an inflection point where AI can become a helpful co-pilot to users and moderators, not replacing human conversation, but rather augmenting it, allowing users to dig deeper, while at the same time removing a lot of the repetitive burden for community moderators,” Rose, who will serve as the new Digg’s board chair, declared in a press release. Per Ohanian, now a founder and general partner at Seven Seven Six, which is backing the reboot, “AI should handle the grunt work in the background while humans focus on what they do best: building real connections. No one dreams of spending their day hunting down spam or playing content police—they want to create, connect, and build thriving communities.”
How that would work, exactly, wasn’t clear—until Rose started buying up thousands of dollars’ worth of ads on Reddit, targeting content moderators with questionnaires that asked about the biggest difficulties they faced managing their subreddits. As Rose explained to the New York Times, he then ran the answers through an unspecified AI program and asked it to create new ways to address the moderators’ problems. With that, one of Web 2.0’s darlings entered the new age of AI-driven content moderation.
Digg’s media representatives declined to provide details, but pointed me to an interview with Rose and Ohanian at the Wall Street Journal’s Future of Everything Festival, a couple of weeks ago. Ohanian confirmed, without elaborating, that on the new Digg, moderation that was historically handled by humans is being done by AI. He described a scenario in which a user had a “terrible day” but was “actually reformable.” In that event, Ohanian said, AI would somehow intervene, deescalate, and dole out requisite punishments. As Rose put it, Digg is using “AI to do all the dirty, heavy lifting like the moderation, in a very transparent way.”
Content moderation experts seem to approve. “I don’t think it’s hype,” said Vaishnavi J, a founder of Vys, a trust and safety advisory firm that helps companies implement AI safeguards against youth harms. Olivia Conti, a trust and safety consultant who previously worked at Twitch and Twitter, agreed: “Machine learning has been used for content moderation for years, and as LLMs have come to the forefront, companies have adapted as the technology has gotten more powerful.”
Musubi, a startup founded in 2023, has already started using AI for moderation. “There are many creative and innovative ways that AI can be used for trust and safety solutions that are just beginning to be explored, which is exciting,” Fil Jankovic, Musubi’s cofounder and chief AI officer, told me. Alice Hunsberger, the head of trust and safety at Musubi, said that AI excels in certain areas of moderation: “repeatable tasks defined by clear, comprehensive policies.”
That could include spotting AI-generated content or spam that clogs up a feed. “Machine learning AI systems alone or in conjunction with LLMs are excellent at pattern recognition and holistic review,” Jankovic said. “These are helpful for mitigating risk from bots, fraud, or other adversarial threats.” It could also involve reviewing content that clearly violates a platform’s rules without the need for contextual interpretation, such as child sexual abuse material. Using AI in these situations, according to Vaishnavi, means “humans don’t have to be subjected to seeing that horrific and dark content, and it can now be removed in an automated way.”
Beyond that, Conti said, “I think that AI can help guide people through understanding and accepting the consequences of their actions, and better understanding platform and community rules.” But even if AI “can extend the capabilities of human moderators, particularly frontline moderators who are bearing the brunt of reviewing reported content,” it can’t replace them. “Most people will not accept being ‘calmed down’ by an AI chatbot,” she said. “A system that works well in theory might backfire in practice if users feel like they’re being punished or gaslit by a machine. There’s a fine line between deescalation and condescension, especially when it’s automated.”
Hunsberger agreed: human beings have their place. “People should always be responsible for defining and owning policies” and checking on how things are going, she said. “They’re critical for inputting relevant data and policy clarifications or instructions on emerging events, cultural nuance, and knowing when to make an exception to a rule or create a new rule.”
That may be especially true in a volatile situation—consider social media in a conflict zone. “Humans are really valuable when it comes to anticipating new threats and sourcing offline evidence to inform predictions,” Vaishnavi told me. “Humans are important for informing the right kind of prompts. The model will only return what you ask of it. And you need to know what to ask of it.”