Analysis

Why political journalists shouldn’t report on internal polling

August 10, 2015

In the waning days of the 2014 midterm elections, as the political sands shifted decidedly away from many Democratic candidates, The New York Times published a roundup of the party’s decisions to abandon races once considered competitive to shore up vulnerable incumbents. Deep in the piece, reporter Jonathan Weisman quoted the campaign spokeswoman for Andrew Romanoff, the Democratic challenger to incumbent GOP Rep. Mike Coffman in Colorado, insisting the campaign’s internal poll showed Romanoff down by just a single point.

“This remains one of the closest races in the country,” Denise Baron told Weisman, who wrote that the campaign was providing the data because it was “eager to show the Democrat was not out of the race.”

Romanoff lost two weeks later by 9 points.*

The fact that Baron got her numbers, however specious, into the Times at the moment she wanted to assert Romanoff had a political pulse is to her credit as a publicist. But for the news media to allow such data to appear, especially without any scrutiny or rebuttal, illustrates a troubling trend in political journalism. Where, critics ask, was the comparison to other polling, perhaps even Coffman’s internals? Or an explanation of the polling methodology, a reference to its margin of error, or any other details that provide context?
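
A quick bit of arithmetic shows why that context matters. Assuming the simple random sampling the campaign never documented, a poll’s 95 percent margin of error on a single candidate’s share is roughly 1.96 × √(p(1−p)/n). For a hypothetical survey of 400 likely voters with support near 50 percent, that works out to about plus or minus 4.9 points, and the margin on the gap between two candidates is roughly twice that. By that math, a reported one-point deficit was statistically compatible with the 9-point loss that followed.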

Internal polling is “being reported more often because it’s really easy journalism,” said Thomas Patterson, Bradlee Professor of Government and the Press at the Shorenstein Center at Harvard’s Kennedy School of Government. “In this environment when people are trying to crank out a lot of stuff, sometimes they do it. It’s almost always unfortunate.”

Weisman defended his decision, saying the story’s theme, that many Democrats were slipping out of contention, made clear that this use of the poll data showed a campaign “grasping for bright spots.” Yet the Times’ new director of polling, John Broder, who was not in his post when the Romanoff piece ran, said the paper has a written policy that leans heavily against using such material. “If the reporter had consulted with us–as he should have when citing any poll–we would have discouraged him from citing it,” said Broder via email.

It is telling, though, that even at a publication with a written policy on the matter, campaign data can seep past the gatekeepers and into the public domain. “The proliferation of media sources and the need for copy has made this process of publicizing internal polls easier, it is true,” said Fritz Wenzel, an Ohio-based GOP pollster who until recently worked with Kentucky Sen. Rand Paul’s political action committee.**

The problems with such data are vast and fundamental. As Broder noted in his response to CJR, “The polls are obviously partisan, subject to bias, and released only when in the self-interest of the candidate. Any experienced political reporter will raise an eyebrow, or two, when a campaign operative says, ‘Psst, our internals show us up six over so-and-so in Cuyahoga County, but you can’t attribute that to me.’ ”

Reporters can be misled by internal data in myriad ways. Campaigns sometimes have access to multiple polls and “leak” the most favorable version. Questions can be asked in a certain order or with a certain tone that inflates a candidate’s numbers. Sometimes the “ballot test,” or horse-race question, is asked several times to gauge the effectiveness of certain political messages, but the figure provided to reporters may be the most favorable outcome.
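
To see the cherry-picking problem in statistical terms, consider a minimal simulation, sketched here in Python with entirely hypothetical numbers rather than any campaign’s actual data. Even if every poll a campaign fields is honestly conducted, releasing only the most favorable one systematically overstates the candidate’s standing:

    import random
    import statistics

    # Hypothetical illustration: the candidate truly trails by 5 points,
    # and each individual poll carries about 3 points of sampling noise.
    random.seed(42)
    TRUE_MARGIN = -5.0   # the candidate's real deficit, in points
    POLL_NOISE = 3.0     # rough standard error of a single poll
    NUM_POLLS = 4        # surveys fielded before one is "leaked"

    polls = [random.gauss(TRUE_MARGIN, POLL_NOISE) for _ in range(NUM_POLLS)]

    print("All polls:        ", [round(p, 1) for p in polls])
    print("Honest average:    %+.1f" % statistics.mean(polls))
    print("Leaked best poll:  %+.1f" % max(polls))  # the figure a reporter sees

Run it repeatedly and the “leaked” figure lands above the true margin far more often than the honest average does; that selection effect, not any flaw in a single survey, is what makes the practice misleading.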

A case study in what not to do is a July 22 Politico piece, “Internal poll shows Jindal gaining ground in Iowa.” In it, writer Eli Stokols reported numbers provided by the campaign of Louisiana Gov. Bobby Jindal that showed Jindal garnering 8 percent of the vote in Iowa and placing fourth among the 16 GOP presidential hopefuls in that all-important first-caucus state. Stokols backed the Jindal camp’s notion that this apparent surge was the result of the candidate’s “strong performance” at a forum the prior weekend and quoted Jindal pollster Wes Anderson gushing, “Bottom line, Gov. Bobby Jindal has taken off in Iowa.” 

Nowhere did Stokols reference any independent, non-partisan polling, almost all of which showed Jindal with about 2 percent support and wallowing near the bottom of the crowded field. The reporter explained neither the polling methodology nor the campaign’s likely motives for sharing the numbers, and didn’t warn readers to be skeptical. By contrast, James Q. Lynch of Iowa’s Quad City Times wrote about the same data, pointing out the struggling Jindal campaign’s reasons for releasing the poll and noting important background information Anderson declined to disclose.

Veteran political handicapper Charlie Cook of the Cook Political Report said Stokols failed to inform readers that the data, already out of step with other polling and the conventional wisdom, may have been inflated by a strategically timed advertising buy. “This is about what you would expect from a campaign of someone within the margin of error of zero in national polling who just spent about $700,000 on TV in Iowa to goose their numbers, then took and released a quick and dirty poll,” Cook said.

Politico Senior Politics Editor Charlie Mahtesian declined to comment on the Jindal piece because he was on vacation when it ran, and requests for comment sent through Politico’s communications staff and to Stokols went unreturned. Mahtesian, however, noted that Politico has no blanket policy on using internal polls. “You apply the smell test,” he said. “You try and figure out what they have to gain by giving you this information and figure out what your readers have to gain by exposing that to them.”

The media and campaigns alike have been burned by campaign-sponsored polls that seemed plausible. On June 6, 2014, The Washington Post’s Sean Sullivan reported on a poll from House Majority Leader Eric Cantor’s team indicating the Virginia Republican led primary challenger David Brat by 34 points. Sullivan also cited an independent poll commissioned by the Daily Caller that had Cantor up by 11.

Brat won by 12 points in the June 10 primary, prompting public mockery of Cantor’s poll and of the journalists who reported it. The following day, the Post’s Philip Bump wrote a piece, “Beware Internal Polling,” advising, “If we take only one polling-related lesson away from this surprise, it is this: Internal polls don’t tell you anything about the state of an election.”

Another cautionary tale emerged from former Massachusetts Gov. Mitt Romney’s 2012 presidential campaign. In the months leading up to Election Day, Romney partisans lashed out at journalists because the campaign’s polling data showed the Republican winning in several key states even as public polling aggregated by Nate Silver’s FiveThirtyEight.com showed leads for President Barack Obama. After Obama’s resounding re-election, GOP pollsters admitted they had mistakenly assumed that the enthusiasm among minority and younger voters behind Obama’s 2008 victory was a one-time surge that had since waned.

In a post-election analysis, Silver wrote, “When public polls conducted by independent organizations clash with the internal polls released by campaigns, the public polls usually prove more reliable.” He said both sides can be misled, pointing to Democratic polls showing a tied race in Wisconsin in June 2012 as Gov. Scott Walker fought off a recall effort. Walker won by 7 points, in line with independent polling.

Many internal polls, of course, are accurate, and most are never released to the public. Campaigns spend a lot of money on them to gather the intelligence needed to plot strategy, and they can deepen journalists’ understanding of a race when strong reporter-source relationships exist. Such was the case in the 2010 Nevada race between Senate Majority Leader Harry Reid and his GOP challenger, Sharron Angle. Then-Las Vegas Sun columnist Jon Ralston was the only prominent journalist to predict Angle’s defeat; he later explained that the Reid campaign had shared its internal numbers with him, and he concluded the campaign was sampling the Nevada electorate more accurately than public pollsters were. Ralston used that information to inform his reporting but didn’t publish specific stories about it.

Cook wishes more journalists were as skilled at reading sophisticated data and exercised that kind of restraint. “I would say 95 percent of all journalists and 80 percent of all political journalists don’t know how to read polls and need to be very, very careful in reporting polls,” said Cook. “I wish J-schools would do a better job teaching how to interpret polls, how to use polls. There’s a certain amount of ‘Kids don’t try this at home.’ ”

Yet the kids are going to try it, and they can avoid being snookered by following some best practices, the pollster Wenzel said. Journalists ought to demand to see the entire poll, not “just one or two random questions pulled out,” and refuse to write about it if they can’t.

“Even if I couldn’t publish all of it, I would want to see it off the record,” Wenzel said. “And then you say that in the story: ‘We were able to talk directly to the pollster who conducted the survey and agreed with the campaign not to publish certain aspects of their data. However, we can ascertain that the ballot test was conducted in such a way so as to not bias the respondents.’ Readers have the right to know what the context of these polls is. If an internal poll’s all you got, that’s fine.”

In many races, especially down-ballot contests, internals are the only survey data that exists. Joey Garrison, who is covering the ongoing Nashville mayoral race for The Tennessean, cited the dearth of independent data as his rationale for writing several pieces based on internal polls provided by candidates vying to finish in the top two in the Aug. 6 primary and advance to the Sept. 10 runoff. Most of those surveys showed four candidates clustered near the top in varying orders, and each candidate received headlines touting their results.

Garrison conceded that reporting such data is “a difficult decision” but noted that neither his newspaper nor Vanderbilt University polled the race. “One campaign’s poll by itself, I wouldn’t report on, but we have reported on multiple polls from various campaigns,” he said. “If you put it all together, it’s interesting to see some of the themes played out among various candidates. … I’m just trying to give the reader an idea based on multiple polls out there.”

Garrison said he recruited Vanderbilt pollster John Geer to review data with him and comment on the surveys’ credibility. Geer also co-wrote an op-ed for The Tennessean that warned readers to “beware of internal polls” because “these tools can attract attention for contenders but can be severely flawed.”

Beyond the Times’ general prohibition, the media landscape is dotted with mixed signals for using internal data. Bump told CJR the Post’s approach is, “If you have numbers and you can’t validate them, you shouldn’t be reporting them.” BuzzFeed Executive Editor Ben Smith–who as Politico’s media columnist in 2010 described internal polls as “basically, news releases, cherry-picked at best and fudged at worst”–declined to say whether the outlet he runs has any policy.

And despite an Associated Press reporter telling Baron, when she shopped the Romanoff data, that the AP “just doesn’t do that,” AP National Political Editor David Scott offered CJR a murkier reply. “When the AP looks at polling data, it assesses the results based on several factors–from the methodology of the poll to the timing of its publication to, yes, who is paying the pollster,” Scott said via email. “There are some polls we dismiss based on methodology alone, since scientific validity is the bar every survey must clear. But after that, our assessment of the poll on its merits determines how we look at and, ultimately, use the data in crafting the AP report.”

That response, like Mahtesian’s “smell test,” troubles George Washington University political science professor Danny Hayes, a Washington Post contributor and an anti-internals absolutist. Hayes contends that opening the door to any use of these polls gives reporters too many opportunities to use them badly.

“Journalists have incentives to make mountains out of molehills, to the detriment of the public and the political process,” Hayes said. “You need something to throw up on the website. You need hits. Your instinct is to write it up because at least there’s something that’s happening. But that implies to voters and readers the thing that you’re reporting on is truly substantively important in some way. Most of the time, it’s just not.”

 

*Correction: This article originally stated that Andrew Romanoff lost by 11 points. He lost by 9 points.

**Correction: This article originally stated that Sen. Rand Paul represents Tennessee in the United States Senate. He represents Kentucky. 

Steve Friess is a freelance journalist based in Ann Arbor and a journalism instructor at Michigan State University. Follow him at @SteveFriess.