
Who’s afraid of a big bad algorithm?

June 11, 2015

“IF YOU USE FACEBOOK to get your news,” Caitlin Dewey wrote recently in The Washington Post, “please—for the love of democracy—read this first.”

Dewey’s central claim is that since most millennials now get their news from Facebook, and Facebook has an algorithm that dictates what we see based in part on our biases, millennials in particular will not get a full picture of the news. Even worse, these algorithms are controlled by giant corporations out to make money, and seem completely unaccountable to the public interest.

But many current arguments about the dangers of algorithms oversimplify how they work. First, algorithms are made by people, meaning they are often more sophisticated than we might think; this built-in human influence also offers a layer of quality control that is often ignored. Second, a good algorithm will show you what you want to read, but it will also continually refine its suggestions, introducing new content that lies outside your immediate interests.

Algorithms seem scary because they are the result of sequential decisions made by a machine: Under the Facebook model, we offer input, and based on mysterious weights over which we have no control, a computer makes a series of step-by-step decisions that presume to predict our interests better than we could on our own.

But here’s the thing: algorithms are weighted in very deliberate ways. Google News’ algorithm, for example, favors particular types of news over others: news from big outlets like The New York Times, news with more context like original quotes, and timely news, to name a few. In essence, the people behind Google News are working to create a responsive algorithm to deliver what they feel is the best selection of quality news.
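To make that idea of deliberate weights concrete, here is a minimal sketch of how such a ranking might be computed. The signals and the numbers are hypothetical, invented for illustration; this is not Google News’ actual formula.

```python
# A toy news-ranking score with hypothetical, human-chosen weights.
# Illustrative only -- not Google News' actual algorithm.
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    source_reputation: float  # 0-1: higher for established outlets
    originality: float        # 0-1: original quotes and reporting
    freshness: float          # 0-1: decays as the story ages

# The weights are editorial judgments encoded by people, not values
# the machine discovered on its own.
WEIGHTS = {"source_reputation": 0.4, "originality": 0.35, "freshness": 0.25}

def score(story: Story) -> float:
    return (WEIGHTS["source_reputation"] * story.source_reputation
            + WEIGHTS["originality"] * story.originality
            + WEIGHTS["freshness"] * story.freshness)

stories = [
    Story("Wire rewrite of a press release", 0.5, 0.2, 0.9),
    Story("Original investigation with new quotes", 0.9, 0.95, 0.6),
]
for s in sorted(stories, key=score, reverse=True):
    print(f"{score(s):.2f}  {s.title}")
```

The point is simply that every one of those numbers reflects a human judgment about what counts as quality news; change the weights and you change the front page.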

Critics often miss just what news-filtering algorithms have in common with how news has traditionally been selected. News outlets have long been black boxes, with editors determining the top stories of the day behind impenetrable walls. The “weights” used to make these decisions are not mathematical, but the result of a qualitative debate.


Yet with the exception of the odd visitor in a Page One meeting, readers have no clue about what goes into crafting the day’s news. Thousands of decisions about what to cover, what not to cover, how long to stay on a story, how to allocate resources, and now, in a digital environment, what to put on the homepage or push out over social media, take place inside newsrooms that most people will never visit.

The fact that a machine is choosing our news disturbs a lot of people. But my research suggests that a number of news startups are thinking quite seriously about how to surface quality content that adds to people’s information environments.

At News360, a news aggregator, founder Roman Karachinsky explained to me that he had designed his startup’s algorithm to look for information a reader might care about and mix it up with information he or she might not otherwise see.

And at Medium, an executive explained to me that the company’s algorithm took into account both personalization and the importance of finding new information. “Our algorithm is designed to surface great things to read,” she told me. “They might be things everyone else is also reading, and they might be things that you have given signals that you are interested in.”

In other words, what you like matters, but so does what everyone else is reading. The latter will influence what you see.
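A rough sketch of that blend might look like the snippet below. The inputs and the 60/40 split are assumptions made for illustration, not Medium’s actual system.

```python
# A rough sketch of blending a personal signal with overall popularity.
# The 60/40 split and the example values are illustrative assumptions,
# not Medium's actual recommendation system.
def blended_score(personal_affinity: float, global_popularity: float,
                  personal_weight: float = 0.6) -> float:
    """Both inputs are normalized to 0-1; a higher score ranks the story higher."""
    return (personal_weight * personal_affinity
            + (1 - personal_weight) * global_popularity)

# A story you have shown little interest in can still surface
# if enough other readers are engaging with it.
print(blended_score(personal_affinity=0.2, global_popularity=0.95))  # 0.50
print(blended_score(personal_affinity=0.8, global_popularity=0.10))  # 0.52
```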

We may not know what goes into algorithms or how journalists decide what becomes news, but at least journalists are concerned with showing us information that contributes to our role as citizens, or so the argument goes. Mark Zuckerberg, meanwhile, has acknowledged that he’s not too worried about this. “A squirrel dying in your front yard may be more relevant to your interests right now than people dying in Africa,” he famously told a reporter.

Zuckerberg’s approach sends chills down our spines when we think about an entire generation getting most of their news from Facebook. But anxiety about losing out on opportunities to be a good citizen because you might not see relevant content assumes that all it takes for us to be good citizens is exposure to good information. By that logic, as newspapers and television news providers have professionalized over the past 50 years, adding sophisticated investigative reporting to the mix, we should have seen some obvious signs of increased civic engagement: higher voter turnout, greater political engagement, more community activism. Instead, there’s plenty of evidence to suggest that such engagement is on the decline.

The reality is that people have always had the ability to pick and choose news and information of their own accord—and to do so in a way that reinforces their own beliefs and opinions. Researchers sometimes call this “selective perception.”

Studies show that partisans, when faced with even neutral information they don’t agree with, tend to think the media is biased against them. People sort and choose information of their own accord and ignore what they don’t wish to see; those who watch Fox News or MSNBC are choosing partisan news, in part because it reinforces what they already believe. People interested in sensation will choose salacious news over quality civic information, as Walter Lippmann bemoaned 90 years ago in his classic, Public Opinion.

You might argue that selective perception is porous, whereas an algorithm doesn’t even give you the opportunity to stumble across information with which you disagree. This rests on an old presumption that traditional journalism—with its bundling together of entertaining news and civic information—will lead to accidental discoveries of civic information that will catch our eye and inform us.

While I’m sure that this happens sometimes, we don’t really have any hard data to prove that it happens regularly, or any more often than the Facebook algorithm might introduce new information that you wouldn’t ordinarily read.

Consider the numbers from a blockbuster Science study Dewey mentions, on how Facebook’s algorithm responds to users’ political views. The study found that the algorithm cuts out 5 percent of liberal-leaning articles from a conservative’s feed and 8 percent of conservative-leaning articles from a liberal’s feed. I wonder whether the chance of such users stumbling across and actually reading an article that didn’t line up with their political views was ever any higher, even when they were reading a print newspaper.

And most important of all, we tend to forget that a good algorithm won’t continue to show you exactly what you want all the time. Good algorithms learn, constantly refining how they serve you information by offering content that goes beyond what you have already indicated you like. Otherwise, you’d get bored and stop coming back.

You may have noticed this in your Netflix recommendations. You may be a horror movie fan, but Netflix might begin to ask whether you want to see sci-fi movies, then documentaries about the occult, and then documentaries on scientific mysteries. Gradually, a good algorithm not only gives you what you expect, but also helps you discover what you didn’t know you wanted to see.
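In recommender-system terms, this is the familiar tradeoff between exploitation (show more of what you already like) and exploration (try something adjacent). The snippet below is a schematic epsilon-greedy sketch of that idea; the genre map and the 20 percent exploration rate are invented for illustration, and this is not Netflix’s actual recommender.

```python
import random

# Schematic epsilon-greedy exploration: mostly recommend the genre you
# already engage with, but occasionally try an adjacent one.
# The genre map and 20 percent exploration rate are illustrative
# assumptions, not Netflix's actual system.
ADJACENT = {
    "horror": ["sci-fi", "occult documentaries"],
    "sci-fi": ["horror", "science documentaries"],
}

def recommend(favorite_genre: str, explore_rate: float = 0.2) -> str:
    if random.random() < explore_rate:
        return random.choice(ADJACENT.get(favorite_genre, [favorite_genre]))
    return favorite_genre

picks = [recommend("horror") for _ in range(10)]
print(picks)  # mostly "horror", with the occasional adjacent pick mixed in
```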

There are certainly reasons to be concerned about algorithms. As Nicholas Diakopoulos points out in a report for the Tow Center for Digital Journalism at the Columbia Graduate School of Journalism, algorithms can have nefarious effects on our lives, from making it easier for companies to vary prices based on where you live and your past purchasing habits to affecting government aid. He and other researchers call for more journalism focused on “algorithmic accountability.”

But to be most aware of what should concern us, we need to separate what is new and different about an algorithm from the techno-dystopian hype. That means knowing a little more about the history of news and media effects, and a little more about algorithms.

Then we can begin to understand precisely how (and if) our news environment is really changing as much as we think it is.

Nikki Usher is an associate professor at The George Washington University in the School of Media and Public Affairs. She is the author of two books, Interactive Journalism: Hackers, Data, and Code and Making News at The New York Times.