Analysis

What to look for before writing a story about an academic study

February 19, 2019
 

Facebook is 15 years old. And there are certain things, if we believe the reams of headlines we see based on social science studies, that we now think we know about its effect on human behavior. It can cause people to commit hate crimes, one set of headlines suggests. Older Americans spread more fake news, suggested another. (But luckily the quantity of fake news has diminished since 2016, according to still others.) And many of us were happy to read that abstaining from Facebook, for even a month, will make a person less politically polarized.

As with most buzzy headlines, the specific details are not exactly forthcoming. The limitations of social science are crucial to understand, but they are often underexpressed in coverage. So when faced with a seemingly explosive research finding, especially one that feeds a moral panic, we must all learn to beware the viral social-science study and our own role in perpetuating misinformation about research on misinformation.

This is a boom time for academics who are used to having their research—on politics, journalism, and social media, in particular—ignored by a wider public. It’s also a fraught time. Facebook, often the focus of such studies, but by no means the only culprit or area of interest, offers very limited access to its data.


And academics face new pressures. Sometimes, they just want to get work out to test the waters, build momentum, or provide proof of concept for future funding. That means that even work that comes from some of the strongest institutions in the world, funded by the most prestigious of grants, posted on sites that have the imprimatur of elite academia, is sometimes released before it has been subjected to peer review.

Peer review has its own problems, but it is an important quality stopgap before research is published in a formal academic journal. Other academic peers, often anonymous, provide rigorous and exacting critique with the aim of rooting out methodological flaws, missed connections with existing research, clarity, and overbroad claims, among other concerns. This process is critical for social scientists, and for the journalists who later rely on their work.


Unfortunately, it is often slow. On more than one occasion, the peer-review process for my work lasted for more than a year. For journalists, reading research findings about the 2016 elections after the 2018 midterms can seem quite dated. But reporting on pre-published, pre-reviewed research presents its own dangers.

The first study I mentioned argued that anti-refugee hate crimes in Germany increased disproportionately with higher Facebook use during times of high anti-refugee sentiment. The New York Times called it a “landmark study” and based a major feature story on it. But, as Felix Salmon pointed out, the study “was written by a pair of post-docs without any peer review, and there’s no particular reason why it should have been ready for the social-media klieg lights that suddenly got trained on it.”

The second study, which blamed those over 65 for spreading most of the fake news on Facebook, was quickly picked up by dozens of outlets; within a few hours of its publication, I found myself talking about its merits on a public radio news talk show.

That study was peer reviewed, but not by a top journal in the field whose imprimatur would give it the kind of credibility that would merit the viral attention. Though I first thought the study had been published in Science, which was founded in 1880 and is among the top three journals in the world, it actually appeared in the open-access journal Science Advances, which I had not encountered in my work as a social scientist. While it’s published by the American Association for the Advancement of Science, which also publishes Science, Science Advances raised some red flags for me.

It has far less support from academics than its sister journal. It is a pay-to-publish open-access journal, which charges $4,500 to publish a paper and an extra $1,500 if the paper exceeds 15 pages. And no one on the editorial board appears to be doing work related to political science.

The third study, which suggested that quitting Facebook for even a month would make us all happier and a little less polarized, was temptingly viral. It has what scholars call “face validity”—it rings true based on common-sense expectations. But though the two co-authors are well regarded, the study was not peer reviewed. And the authors are economists, not communication or political scientists, which means they may be leaving out critical foundational insights.

Journalists on deadline tend to gravitate towards familiar big names and work from prestigious universities—even if this work hasn’t yet been through peer review. As journalists, it makes sense to go to the most authoritative institutions and voices when sourcing a story, but in the case of academic research, relying on someone’s scholarly reputation isn’t a good enough proxy for ensuring sound research.

Not all academic research is created equal, and introducing half-baked studies into public discourse can have lingering effects. So I have created a checklist to help journalists parse complex studies in order to decide whether to report on them.

Two quick notes: First, this is intended as a basic guide for some best practices, not as advice to academics. Second, sometimes the most exciting research challenges conventional methodological approaches—which is to say, if a study looks super interesting, dig deeper.

 

Is the study peer reviewed?

Peer review guarantees that independent reviewers have looked at the study, assessed its rigor, and alerted the researchers behind the project to issues they may not have spotted themselves.

But peer review is not a catch-all, and not all levels of peer review are equal. If a study has been accepted for a conference, check the conference’s acceptance rate—a 60 percent acceptance rate, for example, would suggest that most submitted papers make it in. If it’s in a journal, look up the journal’s impact factor or check Google Scholar’s journal rankings; and if the journal is open access, be careful about “pay-to-play” publishing.

Other studies with similar findings might diminish the news value of a particular paper claiming groundbreaking results, but they are a good sign for its reliability. And as with all journalism, it’s important to check the biases of the researchers—particularly any corporate connections to the tech companies themselves—and make sure the research was conducted ethically.

 

Statistics are not all-powerful

Correlation does not imply causation. All sorts of combinations of variables might seem to have relationships, but that doesn’t mean that one thing causes another. Big data findings are especially prone to this, where correlations can appear due to the size of the data.

Equally, how a particular variable is defined can be key. What does “trust” mean? How are scholars defining “fake news”? When those variables are then put into mathematical models that make their own assumptions, it’s quite possible to get spurious correlations.
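To make the point concrete, here is a minimal sketch in Python, using invented data rather than any real study: generate variables that are pure noise and, with enough of them, some pairs will still clear the conventional threshold for “statistical significance” by chance alone.

```python
# Hypothetical illustration: 200 columns of pure noise for 500 "people."
# No variable actually relates to any other.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_people, n_variables = 500, 200
data = rng.normal(size=(n_people, n_variables))

spurious = 0
total_pairs = 0
for i in range(n_variables):
    for j in range(i + 1, n_variables):
        total_pairs += 1
        r, p = stats.pearsonr(data[:, i], data[:, j])
        if p < 0.05:  # the conventional "statistically significant" cutoff
            spurious += 1

print(f"{spurious} of {total_pairs} pairs look 'significant' despite being noise")
# Expect roughly 5% of ~19,900 pairs -- about a thousand spurious "findings."
```

The same logic scales up: the more variables a dataset contains, the more chance relationships a determined analyst can surface.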

 

Be wary of surveys and polls

Some of the surveys and polls that journalists rely on for quick-hit stories are constructed with highly complex methodologies that belie their simple results. Others may not employ sophisticated methodologies at all—meaning journalists should be wary.

Being aware of the kind of sample—the group of people to whom the questions are asked—is vital. Most non-academics don’t know that Amazon’s Mechanical Turk is a popular place to recruit research participants. But the people hanging out on Mechanical Turk are a distinct subset, not necessarily representative of the population a study purports to describe.

And nearly every aspect of the survey design and implementation—question order, answer order, the internal consistency of the surveys, the technology used to administer the surveys, and the use of proprietary scales and indices for measuring certain outcomes—can affect results. Surveys and polls are best seen as a starting point, rather than an ending point, for stories.
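As a rough illustration of why the sample matters, here is a small Python sketch with made-up numbers (not drawn from any actual survey): if a behavior differs by age, a convenience sample that skews young will misstate the population-wide rate.

```python
# Invented population: 20% of people are over 65 and share news at a higher rate.
import numpy as np

rng = np.random.default_rng(1)

def simulated_survey(n, share_over_65):
    """Estimate the news-sharing rate from a sample with a given age mix."""
    over_65 = rng.random(n) < share_over_65
    # Hypothetical rates: 25% of over-65s share, 10% of everyone else.
    shares = np.where(over_65, rng.random(n) < 0.25, rng.random(n) < 0.10)
    return shares.mean()

true_rate = 0.2 * 0.25 + 0.8 * 0.10  # population-wide rate = 13%
print("true population rate:       ", round(true_rate, 3))
print("representative sample (20%):", round(simulated_survey(5000, 0.20), 3))
print("skewed online sample (2%):  ", round(simulated_survey(5000, 0.02), 3))  # underestimates
```

Pollsters use weighting and careful sampling frames to correct for exactly this kind of skew, which is why the methodology section matters as much as the topline number.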


 

Be wary of experiments, too

Findings from experiments done in an academic setting don’t always translate to the real world; an experiment using mockups of Facebook pages rather than real ones, for example, doesn’t replicate the actual experience of browsing Facebook in your own home. But for all sorts of privacy reasons, not to mention Facebook’s terms of service, that might be the best researchers can do. Plus, most of the time, people know full well they are taking part in an experiment. So in fact-checking research, for example, scholars have to figure out how to account for issues such as people reading more carefully than usual, or trying to impress the researchers by accepting a fact check.

Results from such experiments are vulnerable to “low ecological validity”—put simply, the findings may not hold up outside a lab context. That does not mean the study is not important; rather, its limitations must be addressed.

You may have encountered the term “natural experiment”; this method is intended to correct for some of these problems, on the theory that if you test something in the real world, you’ll actually see how people behave in the real world. Natural experiments have limitations, too: there are often many factors outside the researcher’s control that could affect results—a threat to what’s known as “internal validity,” if you want to be fancy.

It’s also worth checking whether there’s a control group not subjected to the experimental manipulation, or a set of variables that are held constant, to help establish a comparison point and suggest a causal effect.
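A toy Python example, with invented numbers rather than any study’s actual data, shows why that comparison point matters: a simple before-and-after measurement on the treated group can be confounded by changes that would have happened anyway, while comparing against a control group isolates the effect of the manipulation.

```python
# Toy setup: a "polarization" score (arbitrary units) drifts for everyone over
# the study period, and the manipulation has a true effect of -5 points.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
treated_before = rng.normal(50, 10, n)
control_before = rng.normal(50, 10, n)

drift, true_effect = 3, -5
treated_after = treated_before + drift + true_effect + rng.normal(0, 2, n)
control_after = control_before + drift + rng.normal(0, 2, n)

naive = (treated_after - treated_before).mean()  # mixes drift in with the effect
with_control = naive - (control_after - control_before).mean()

print("before/after on treated only:", round(naive, 2))         # ~ -2: misleading
print("difference vs. control group:", round(with_control, 2))  # ~ -5: the true effect
```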

 

Think about replicability

If the same experiment were repeated, would it generate the same results? There is a “replicability crisis” in the social sciences—particularly in psychology. A group based at the University of Virginia tried to replicate 100 psychology experiments and succeeded with only about a third of them.

Sadly, replication work is not often encouraged, because it does not generate new findings. But asking “Have other people raised this question?” and “Are these findings in line with what others have found?” can help you put a study in context.
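One reason replications fail so often is statistical power, which a short Python simulation (with invented parameters) can illustrate: even when an effect is real but modest, small-sample studies will frequently fail to find it a second time.

```python
# Invented parameters: a modest true effect (d = 0.3) studied with 30 people per group.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
true_effect, n_per_group, n_attempts = 0.3, 30, 1000

def study_finds_effect():
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(true_effect, 1.0, n_per_group)
    return stats.ttest_ind(control, treated).pvalue < 0.05

successes = sum(study_finds_effect() for _ in range(n_attempts))
print(f"{successes / n_attempts:.0%} of identical follow-up studies reach p < 0.05")
# With these numbers, only about one in five attempts "replicates" a real effect.
```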

 

Avoid clickbait

Some research studies take years to complete. Reducing them to clickbait headlines can spread the very misinformation that journalists and social scientists are concerned about in the first place. So choosing headlines wisely is perhaps the most important part of the whole process of communicating social science.

If the study wasn’t socially relevant, then you probably wouldn’t be covering it anyway. But it bears stating that journalists have a special responsibility to make sure that the headline reflects the results of the research—not the spicy statistic or possible extension of the research, but the actual findings of the research question the scholars tried to answer and the hypothesis they posed.

 

Have hope

Many of these issues used to proliferate in science journalism. In some cases, they still do. But science educators have worked hard to help journalists understand how to write about their research.

Now it’s time for the political and technology journalists to take the same care with the explosion of interest in social science research on information, politics, and technology.


Nikki Usher, PhD, is an associate professor at the University of Illinois in the College of Media.