
Margin of Ignorance

August 4, 2004

Early on the morning of July 12, CNN anchor Heidi Collins turned to CNN political analyst Carlos Watson for an explanation of the latest numbers spilling over from the boiling pot of Bush v. Kerry tracking polls. Displayed on screen were recent Newsweek numbers showing the Kerry-Edwards ticket with the support of 51 percent of the electorate, compared to 45 percent for Bush-Cheney. Nearly a week after John Kerry’s selection of John Edwards as his running mate, Collins wanted to know if this was proof of the anticipated VP bump.

Watson replied, “It’s the real Edwards bounce. They are now up by six points and maybe more significantly, Heidi, the last several times that you’ve seen John Kerry up, it still was within the margin of error. But this six-point lead now puts him outside the margin of error.”

For Watson, the six-point gap itself had little analytical value; what mattered was that Kerry-Edwards had crossed the threshold from a statistical tie within the margin of error to a legitimate lead outside it. And that would have been a noteworthy observation, had Kerry’s numbers actually fallen outside the margin of error. But they hadn’t.

On that Tuesday morning, Watson fell prey to a trap that persistently plagues the political press. Watson is not the lone CNN culprit. On January 18 of this year, long-time CNN anchor Wolf Blitzer made a similar mistake discussing a potential Hillary Clinton v. Rudy Giuliani face-off for Senate in 2006. Then on July 9, CNN senior White House correspondent John King hit the same pothole while reporting on the latest Kerry v. Bush Time poll. Other press outlets, small and large alike, are equally guilty, including the St. Louis Post-Dispatch (December 10, 2003), Time (January 12 of this year), United Press International (January 18, April 20, and May 4), The New York Times (June 12), Hearst News Service (July 9), and the Associated Press (also July 9).

This “innumeracy,” as Columbia University journalism professor Todd Gitlin puts it, stems from a general misunderstanding of the definition of margin of error, exacerbated by a competitive and essentially unmonitored polling culture in the U.S. media.

So what is the margin of error?


Every poll has a margin of error, a consequence of the central limit theorem on which polling rests. The margin of error defines a range around each reported number, and it comes paired with a confidence level (usually 95 percent in the U.S.). The confidence level means this: if many random samples of voters were drawn from the same population, then in 95 percent of those samples the range set by the margin of error would contain the true number for all voters.
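To make the arithmetic concrete, here is a minimal sketch in Python of the standard formula behind a figure like “plus or minus 4 percent.” It assumes a simple random sample, and the sample size of 600 is hypothetical, chosen only because it produces roughly the four-point margin Newsweek reported; the poll’s actual sample size isn’t given here.

```python
import math

def margin_of_error(p, n, z=1.96):
    """95 percent margin of error for a proportion p estimated
    from a simple random sample of n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# n = 600 is a hypothetical sample size, chosen only because it
# yields roughly the +/-4-point margin Newsweek reported; the
# poll's actual sample size is not given in this article.
moe = margin_of_error(0.5, 600)  # p = 0.5 is the worst case (widest margin)
print(f"Margin of error: +/-{moe * 100:.1f} points")  # about +/-4.0
```

The p = 0.5 input is the conventional worst case: a 50-50 split produces the widest possible margin, which is why pollsters quote a single figure for the whole survey.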

Lost? Feel free to skip over the nerdy math stuff.

As an example, let’s take the Newsweek numbers Watson attempted to explain on July 12. In that poll, 51 percent supported Kerry-Edwards, while 45 percent supported Bush-Cheney. The poll had a margin of error of plus or minus 4 percentage points at a 95 percent confidence level. The simple six-point gap between the two tickets led Watson to believe that the difference surpassed the margin of error.

However, the margin of error was plus or minus four points for each number. If we subtract four points from the Kerry-Edwards ticket and add four points to the Bush-Cheney ticket, the result is Bush-Cheney at 49 percent, two points ahead of Kerry-Edwards at 47 percent.
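Expressed as code, the check Watson needed is the overlapping-ranges test just described: build each ticket’s range and see whether the two ranges intersect. This is a sketch of the rule of thumb the article uses, not a formal significance test on the six-point gap itself.

```python
def interval(share, moe):
    """The range a poll result could plausibly occupy: share +/- margin of error."""
    return (share - moe, share + moe)

kerry = interval(51, 4)  # (47, 55)
bush = interval(45, 4)   # (41, 49)

# The ranges intersect (47 through 49 is common to both), so the
# six-point gap does not put Kerry-Edwards outside the margin of error.
overlaps = kerry[0] <= bush[1] and bush[0] <= kerry[1]
print(f"Kerry {kerry}, Bush {bush}, overlap: {overlaps}")
```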

“Margin of error, Smargin of Smerror,” you say to yourself; the Kerry-Edwards six-point advantage is still a lead, regardless of these calculations. Well, not really.

Dick Bennett, president of the polling firm American Research Group, told me, “If [the numbers] are within the margin of error, you can’t tell who is in the lead.” He added, “In theory, it’s tied.”

Furthermore, Bennett points out that reporters routinely misinterpret the confidence intervals that accompany the margin of error. Since 1976, Bennett has included the following line in all his polling reports:

Different random samples of this population will produce different confidence intervals for the results of this survey. If this survey were repeated among this population, the actual population results will fall between the confidence intervals for the results 95 percent of the time. It is impossible, however, to determine if the actual population results fall within the confidence intervals for the results of this particular survey.

Huh? Bennett puts it this way: Imagine a roulette wheel with 100 slots, 95 of them green and 5 red. Before the wheel is spun, there is a 95 percent chance the ball will land in a green slot. Once the wheel is spun, the ball lands in one slot or the other, and we know for sure which color. “The problem with polling,” Bennett notes, “is that we cannot see the color of the slot that the polling number ends up in.”

In practice, going back to the Newsweek example, this means that if the poll were repeated 100 times, roughly 95 of those repetitions would yield ranges containing the true numbers; this particular poll puts Kerry-Edwards between 47 and 55 and Bush-Cheney between 41 and 49. The other five times out of 100, the truth falls outside the range.
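Bennett’s unseen-slot problem can be made concrete with a simulation: invent a population whose true support level we know, poll it over and over, and count how often each poll’s range captures the truth. Roughly 95 of every 100 ranges will, but no single poll can tell you whether it was one of the 95 or one of the 5. A minimal sketch, with a hypothetical true Kerry-Edwards share of 51 percent and a hypothetical sample of 600:

```python
import math
import random

TRUE_SHARE = 0.51  # hypothetical "true" Kerry-Edwards support
N = 600            # hypothetical sample size
TRIALS = 10_000

covered = 0
for _ in range(TRIALS):
    # One simulated poll: each respondent backs Kerry-Edwards
    # with probability TRUE_SHARE.
    share = sum(random.random() < TRUE_SHARE for _ in range(N)) / N
    moe = 1.96 * math.sqrt(share * (1 - share) / N)
    if share - moe <= TRUE_SHARE <= share + moe:
        covered += 1

print(f"Ranges containing the true value: {covered / TRIALS:.1%}")  # ~95%
```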

The bottom line, says Bennett, is “Either the number is within the margin of error or it’s outside the margin of error … but I can’t tell you” whether it’s one of the 95 or one of the 5. And that’s something the average American news consumer is rarely told.

Aficionados of Canadian politics are treated to a more expansive explanation of polling information. As much as we Americans might not want to admit it, our northern neighbor is well ahead of us (perhaps at a price) in disclosing polling information.

John Wright, senior vice president of the polling firm Ipsos-Reid, which conducts polling in Canada, says the Canadian government “has moved prudently … in making sure the voter has optimum transparency.”

Unlike the United States, Canada mandates disclosure by law. Under the Canada Elections Act, “The first person to release the results of an election opinion survey to the public during an election, and any other person broadcasting or publishing them during the following 24 hours, are required to indicate the poll’s methodology, including sponsorship, who conducted it, when it was held, the population from which the survey sample was drawn, the number of people contacted to participate, and the margin of error.” These laws are overseen by Elections Canada, an independent body that reports directly to the Canadian Parliament.

The result is a reporting culture that appreciates the ins and outs of polling methodology. Vaughn Palmer, a veteran political columnist with The Vancouver Sun, was schooled in polling under strict British Columbia laws (no longer on the books) that required reporters to disclose 14 details about every poll.

The laws, Palmer says, were “pitched as a consumer awareness thing,” but eventually faced court challenge from a press wary of the government telling it how to report. Still, Palmer says, the laws, while they lasted, “heightened awareness about the significance” of polling methodology.

Born out of these laws were clear and concise phrases for explaining polling to the public, phrases that remain foreign to the U.S. media. For example, in a July 9, 2004 story explaining the differences between two recent polls, Palmer wrote, “Pollsters, mindful of the principles of random sampling, always record that their results will fall within the margin of error 19 times out of 20. Perhaps this was the twentieth time, when the true picture was somewhere outside the box.” (italics added) “19 times out of 20” is a phrase that shows up repeatedly in polling stories throughout Canada.

Furthermore, the mere fact that Palmer used his print space on page A3 of The Vancouver Sun to help his readers digest the two differing polls is evidence enough of how differently reporters in Canada and the U.S. approach polling. Two recent polls in North Carolina produced starkly different results: a CNN/Gallup poll showed Bush up by as much as 15 points, while a Mason-Dixon poll measured the incumbent’s lead at only 3 points. You might expect the Associated Press to pass along these numbers, but you’d be amazed if it took the time to explain how two polls delivered such radically different results. As was the case with the more recent post-convention polls, the explainer articles gave us Bush and Kerry campaign talking points instead of relevant methodology information.

“Context is always important,” Palmer added. “People who report on politics regularly are usually struggling with this issue of why I am reporting on this poll and what does it say that [the] other five polls [don’t say].”

As for the American press, Palmer senses it’s a “wash”; with a new poll out every day, “I don’t know how you sort it out.”

That thankless task falls, in part, to Cliff Zukin, vice president of the American Association for Public Opinion Research (AAPOR), a professional organization with 1,700 members that sets standards for how news organizations disseminate polls. These standards go beyond simple instructions to report the margin of error; they also prescribe how a poll should be conducted, following what Zukin calls “the best practices.”

Still, Zukin admits, AAPOR “doesn’t have a lot of teeth to enforce” its standards, and “compliance is voluntary.” (AAPOR does adjudicate member complaints, like other professional organizations.)

The problem, says Zukin, is that “journalists are poorly armed” to report on polling. In response, AAPOR is gearing up to embark on an “aggressive” outreach program to educate journalists. “What we want to tell reporters is that not all polls are equal and [educate] them [on] how to tell a good poll from a bad poll.”

Also conscious of the “poorly armed” journalist is the Quinnipiac University Polling Institute, which, according to assistant director Clay Richards, hired him and colleague Mickey Carroll primarily because of their backgrounds in political reporting. Their job, says Richards, is to “minimize misinterpretation by the reporters.”

As for the margin of error specifically, Zukin thinks that the “public has no clue” what it means. Producers and editors, however, should understand it. “They have a gatekeeper function and they should be able to tell that 42-42 and 48-42 are within the margin of error and should be able to tell the audience there hasn’t been a change.”

In an informal survey of polling coverage, Campaign Desk found very few journalists who demonstrated a working knowledge of the margin of error. While most reporters understand that a lead of only a few percentage points falls within the margin of error, there is system-wide ignorance of the fact that even larger leads often fall within the margin of error as well, and are therefore statistical ties.

As of now this deficiency is exacerbated by a media frenzy driven by speed and an emphasis on the horse race. As Zukin put it, the “media tends to own the polling industry in our country, and speed is valued over other considerations.” He pointed to the polls that appeared overnight after Edwards’ selection as Kerry’s VP candidate; it’s doubtful, Zukin believes, that those polls were conducted in accordance with AAPOR’s “best practices.”

Feeding into this frenzy is the “competitiveness of the pollsters,” says ARG’s Bennett, noting that “a lot of pollsters want to characterize the results … whether it’s accurate or not.”

He adds, “People don’t like equivocation. It’s all winners or losers.”

But, unfortunately for all the horse-race writers out there, there’s no clear winner. In fact, of 32 national polls conducted over the last month by 21 reputable polling firms, only two have revealed a candidate (in both cases Kerry) with a clear lead outside the margin of error.

Keep that in mind when the inevitable next barrage of seemingly conflicting polls bursts into the news cycle.

Thomas Lang was a writer at CJR Daily.