Early on the morning of July 12, CNN anchor Heidi Collins turned to CNN Political Analyst Carlos Watson for an explanation of the latest numbers spilling over from the boiling pot of Bush v. Kerry tracking polls. Displayed on screen were recent Newsweek numbers showing the Kerry-Edwards ticket with the support of 51 percent of the electorate, compared to 45 percent for Bush-Cheney. Nearly a week after John Kerry’s selection of John Edwards as his running mate, Collins wanted to know if this was proof of the anticipated VP bump.
Watson replied, “It’s the real Edwards bounce. They are now up by six points and maybe more significantly, Heidi, the last several times that you’ve seen John Kerry up, it still was within the margin of error. But this six-point lead now puts him outside the margin of error.”
For Watson, the six-point gap itself had little analytical value; what mattered was that Kerry-Edwards had crossed the threshold from a statistical tie within the margin of error to a legitimate lead outside it. That would have been a noteworthy observation, had Kerry's numbers actually fallen outside the margin of error. But they hadn't.
On that Tuesday morning, Watson fell prey to a trap that persistently plagues the political press. Watson is not the lone CNN culprit. On January 18 of this year, long-time CNN anchor Wolf Blitzer made a similar mistake discussing a potential Hillary Clinton v. Rudy Giuliani face-off for Senate in 2006. Then, on July 9, CNN senior White House correspondent John King hit the same pothole while reporting on the latest Kerry v. Bush Time poll. Other press outlets, small and large alike, are equally guilty, including the St. Louis Post-Dispatch (December 10, 2003), Time (January 12 of this year), United Press International (January 18, April 20, and May 4), The New York Times (June 12), Hearst News Service (July 9), and the Associated Press (also July 9).
This “innumeracy,” as Columbia University journalism professor Todd Gitlin puts it, stems from a general misunderstanding of the definition of margin of error, further exacerbated by a competitive and essentially unmonitored polling culture in the U.S. media.
So what is the margin of error?
The central limit theorem, on which polling is based, dictates that every poll has a margin of error: a range around each reported number within which the true value for the whole electorate is likely to lie. A confidence level accompanies the margin of error. The confidence level (usually 95 percent in the U.S.) indicates that, if many random samples of voters were drawn from the same population, the range set by the margin of error would contain the true number for roughly 95 percent of those samples.
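For readers who want to see where the number comes from, the standard formula behind a poll's margin of error can be sketched in a few lines. The sample size of 600 below is an assumption for illustration (the article does not report the Newsweek poll's actual sample size); it happens to produce roughly the familiar plus-or-minus-four-point margin.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of the confidence interval for a polled proportion.

    p = 0.5 is the conservative worst-case choice pollsters typically
    report; z = 1.96 corresponds to a 95 percent confidence level.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical sample of 600 respondents yields roughly a
# +/- 4-point margin of error at 95 percent confidence.
print(round(100 * margin_of_error(600), 1))  # 4.0
```

Note that the margin shrinks only with the square root of the sample size, which is why polls rarely get much tighter than a few points.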
Lost? Did you skip over the nerdy math stuff?
As an example, let’s take the Newsweek numbers Watson attempted to explain on July 12. In that poll, 51 percent supported Kerry-Edwards, while 45 percent supported Bush-Cheney. The poll had a margin of error of plus or minus four percentage points, at a 95 percent confidence level. The simple six-point gap between the two tickets led Watson to believe that the difference surpassed the margin of error.
However, the margin of error was plus or minus four points for each number. If we subtract four points from the Kerry-Edwards ticket and add four points to the Bush-Cheney ticket, the result is Bush-Cheney at 49 percent, two points ahead of Kerry-Edwards at 47 percent.
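The arithmetic above can be made explicit in a short sketch: apply the four-point margin to each candidate's number separately and check whether the two ranges overlap.

```python
# Each candidate's support carries its own +/- 4-point margin of error.
moe = 4.0
kerry, bush = 51.0, 45.0

kerry_low, kerry_high = kerry - moe, kerry + moe  # 47.0 to 55.0
bush_low, bush_high = bush - moe, bush + moe      # 41.0 to 49.0

# The two ranges overlap (47.0 <= 49.0), so the six-point gap
# does not fall outside the margin of error.
overlap = kerry_low <= bush_high
print(overlap)  # True
```

This is exactly the scenario the article describes: at the edges of the ranges, Bush-Cheney could plausibly sit at 49 percent and Kerry-Edwards at 47.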
“Margin of error, Smargin of Smerror,” you say to yourself: the Kerry-Edwards six-point advantage is still a lead, regardless of these calculations. Well, not really.
Dick Bennett, president of the polling firm American Research Group, told me, “If [the numbers] are within the margin of error, you can’t tell who is in the lead.” He added, “In theory, it’s tied.”
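Bennett's rule of thumb reduces to a simple test: a lead is only distinguishable under the overlap heuristic if the gap exceeds twice the margin of error. The helper below is a minimal sketch of that heuristic (not a formal significance test, which would be computed differently).

```python
def is_statistical_tie(a, b, moe):
    """True if two poll numbers are indistinguishable under the simple
    overlap rule: a gap no larger than twice the margin of error."""
    return abs(a - b) <= 2 * moe

# Kerry 51, Bush 45, +/- 4 points: a 6-point gap against an 8-point
# threshold, so, as Bennett puts it, "in theory, it's tied."
print(is_statistical_tie(51, 45, 4))  # True
```

By this rule, Kerry-Edwards would have needed a lead of more than eight points in that poll before anyone could credibly call it a lead outside the margin of error.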
Furthermore, Bennett points out that reporters routinely misinterpret the confidence intervals that accompany the margin of error. Since 1976, Bennett has included the following line in all his polling reports: