Carl Bialik writes the weekly Numbers Guy column for the online edition of the Wall Street Journal. A former technology reporter for the online Journal, he co-writes a daily column about sports for the online Journal, and is co-editor of and writer for Gelf Magazine, which covers sports, the media and international news.
Bryan Keefer: Why do you think it is that journalists often fail to treat numbers with the same sort of skepticism — or same sort of enthusiasm, sometimes — as human sources, or as documents?
Carl Bialik: For some journalists, it could be that math was never their favorite subject, or that they never took a statistics class. But in a lot of cases, the numbers being thrown around don’t take any real advanced training to look into; it’s just a matter of reading the source documents. So I think in those cases, there are a number of reasons. And one of them is deadline pressure — it’s easier to just call the other side for five minutes and get a quick comment than to really put the responsibility on yourself to fact-check.
And in other cases I think, for some reason numbers seem to carry some sort of weight of authority, and if you can cite estimates and give a number, then it seems to stand on its own and be a very solid fact, even if what lies beneath the number isn’t much at all.
BK: I noticed a number of your pieces take the angle of challenging the conventional wisdom. Do you think that part of it might be that reporters get caught in the trap of assuming that conventional wisdom?
CB: Yeah, and I think part of that is the Nexis effect, or Google News effect. If you see a number, and it’s been reported somewhere authoritative, or if it’s just been reported in lots of places, then since numbers seem to have this air of certainty, reporters can start to accept it as fact. And even if the original article was responsible about saying where the number came from and giving some caveats and cautions, often when you’re writing the follow-up story, and you just have a paragraph to devote to what came before, a lot of the nuance gets lost.
BK: What do you think the biggest source of these bad numbers is? Is it politicians and interest groups and PR people promoting their agendas, is it just reporters’ misinterpretations of data, or is it something else entirely?
CB: I think it’s more the first category. Because people who are smart about dealing with the press are aware that reporters can often take numbers and repeat them, it’s just a good strategy on the part of advocacy groups or interest groups to put numbers into press releases and speeches, into other documents, into interviews. That just seems to become a part of the standard media strategy.
There are definitely cases, though, especially in science reporting or health reporting, where reporters are interacting directly with a less biased source, like a scientific researcher who really is just trying to share their findings, and who doesn’t have an agenda. And because of deadline pressures, and the story going through several edits by people who haven’t directly interacted with the researchers, you can get some misinterpretations. But of the examples I’ve seen so far, that seems to be the less common reason.
BK: Do you think that science and medical journalists, or any particular segment of journalists, get spun by these numbers more often? You’ve been a technology reporter, and you co-write a sports column — do you find that any particular segments of journalism are better with numbers or particularly worse with numbers?
CB: It’s hard for me to say anything too authoritative, because I haven’t really looked at it in an in-depth way; I do it on an anecdotal basis. I do read a ton of sports writing, and it’s interesting you mention that. I think sports writers do tend to be among the most numbers-savvy, because numbers are so intrinsic to sports, and they’ve been really analyzed and dissected. There are still baseball writers who rely on outdated stats, but for the most part you don’t see too much spin — the numbers kind of speak for themselves.
Business reporters tend to be really good at dealing with numbers because, again, that’s the core of their story. And if they interpret the numbers one way, and the stock market reacts differently, they’ll know quickly that they didn’t get the story right.
On the other hand, it seems like feature writing, that sort of consumer news, can be more manipulated, because [reporters] are often dealing with PR firms, trying to establish trends for things that are very hard to measure. And the numbers really aren’t core to the process. So there may be only one number out there, and it’s often coming from an advocacy group. And it’s become kind of a standard part of the structure of feature stories to get some numerical evidence somewhere near the top, and then move on. So the writer doesn’t want to dwell too long on where the numbers came from, but does want to have some numbers in there to point to if somebody questions whether this is really a trend, whether this is really a story. So that is a situation that makes number problems more likely.
BK: How do you think the Journal does with numbers? Better or worse than the competition?
CB: I guess I’m something of a biased source. Because the Journal is so business-heavy compared to other publications, I think it does pretty well there. The Journal in general — I’m now freelance, but I was working there as a staff reporter, [and] I’m pretty young in my career, so it’s hard for me to compare my personal experience with other places — but it just seemed like there was always a very healthy, high level of skepticism. If you did want to demonstrate a trend, you really needed to get both quantitative backing, but also get people who were involved in the industry, or the area that you were writing about, to back this up. In fact, it was the managing editor of the online Journal whose idea it was to do this column, because his skepticism had developed into somewhat of a fascination with these numbers, and numbers getting repeated, and he really thought it would be a good idea to regularly look into numbers.
BK: Which do you think is the bigger problem: stories that misuse or misinterpret numbers, or stories that neglect quantitative data and numbers altogether? I’m thinking in particular of those attention-grabbing trend pieces that string together a couple of anecdotes but don’t really use any data.
CB: I think the first category is more harmful. It seems sort of contradictory, maybe, for somebody who writes a column called Numbers Guy, but I’d be happier if news contained fewer numbers, rather than more. It just seems like there are more numbers being reported than there are good numbers. And if you write a trend story, and you are honest with readers and don’t cite any numbers because no credible numbers exist, then readers have a better chance to decide on their own if this makes sense to them. Sometimes you need to make a qualitative argument, because there aren’t any valid quantitative arguments to be made.
BK: How can readers tell the difference between a valid statistic and a questionable one? Are there signs that tip you off that something is fishy?
CB: To really know, you would need to go to the source, and read through the study yourself, and if you don’t understand it, find somebody to explain it. That’s not really practical for somebody scanning the newspaper on their commute.
There are a few signs, which aren’t sure things but which help give you a sense of the likelihood [of whether] you should believe this. Among them are: If there’s no source attributed, then you should start out very skeptical, because you just have to take the reporter’s word for it. If there is a source attributed, try to think about what that source’s interest is in the number. If it’s an industry group saying that piracy is a big problem, well, you wouldn’t expect the industry group to say that piracy isn’t a big problem.
Think about what the number is, what it says, and how you would go about measuring it. There are some things that are actually pretty credible, that somebody could measure well. For instance, I wrote a column about a market research group that reported on how many iPods were being bought every month, and how many video games. And because so many stores use electronic scanners, and there are bar codes, that really is something that could be collected. Whereas, if you see a statistic on how many cell phones are left behind in taxis — if somebody wanted to measure that properly, it would be more expensive than just buying everybody a new cell phone.