This article from CJR's archives is presented as part of our 50th anniversary celebration.
Elmo Roper was one of the early giants of American opinion polling. His survey work for Fortune magazine, beginning in 1935, is claimed to be the “first national poll based on scientific sampling techniques.” Roper’s 1962 CJR article makes a complaint not often voiced by today’s press critics: political journalists rely on polls too little.
It may be human to err, but to err time and time again, in precisely the same way, is folly of divine dimensions.
I am talking about journalists—when they tackle the job of predicting elections. Though their impressionistic predictions often land them in electoral soup, journalists keep on preferring the intuition of a backroom “political expert” to the full, exhaustive reporting of the public’s intentions by any source so dry and uninspiring as public-opinion polls. Journalists run about state or nation, talking to people, people, people everywhere in the dozens or even in the hundreds, ignoring the fact that scientific sampling procedures are available to determine which people should be chosen to represent the nation and that the results are available to all.
Journalists therefore give pollsters the opportunity after each election to write fastidious articles showing the journalists’ low batting averages. But the issue is more important than who is right, how many times out of how many elections. What is important is that journalists come to realize just what things their reporting techniques are equipped to do well and what things they should leave to others.
In criticizing journalists’ predictions, I am not suggesting that the polls are always right. Well-done polls have an inevitable statistical margin of error, and they can be wrong. But they rarely have been. (This generalization does not apply to private polls done for political clients, some of which are done with questionable competence or integrity, and with inadequate samples.) Everyone remembers that the published polls were wrong in 1948, but not everyone remembers that so was almost everyone else, including reporters and political pundits. And how many also remember that 1948 was the only time the published polls erred in predicting a presidential election?
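Roper's point about the "inevitable statistical margin of error" can be made concrete. For a simple random sample, the standard approximation puts the 95 percent margin near ±2.5 points for a typical national sample of 1,500. The sketch below is a modern illustration of that textbook formula, not anything from Roper's article; the function name and figures are illustrative.

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Approximate margin of error for a simple random sample.

    n: sample size; p: observed proportion (0.5 is the worst case);
    z: normal critical value (1.96 for 95% confidence).
    """
    return z * math.sqrt(p * (1 - p) / n)

# A poll of 1,500 respondents carries roughly a +/- 2.5 point
# margin at 95% confidence -- small, but never zero, which is
# why a close race stays "too close to call."
print(round(100 * margin_of_error(1500), 1))  # ~2.5
```

Note that the margin shrinks only with the square root of the sample size: quadrupling the sample merely halves the margin, which is one reason a race inside a few points cannot be "predicted" no matter how diligently one polls.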
In point of fact, no one can predict elections. Polls can assess the probabilities, but the situation can always change between the time the last poll is in and the time the first voting booth is open. When voting sentiment is extremely close, there is simply no way of telling the outcome beforehand. Nevertheless, an examination of the record of the polls shows them to be for the most part an extremely reliable gauge of voting intentions.
There is one serious difficulty in applying the polls to elections, and it lies in the realm of finances. The costs of scientific state-by-state polling of the entire nation are astronomical, out of the reach of any commercial organization and of all but a handful of extremely wealthy individuals. Therefore the polls are forced to restrict themselves to national and regional statistics. Yet most of our press is local, and if a journalist wants information on a senatorial or gubernatorial race, or wants to know which way his state is leaning for the presidency, more often than not the polls cannot help him.
Some good state polls are conducted, but many others are private and unavailable except when a candidate “leaks” an occasional tidbit. Such “leaked” polls are probably the least reliable guide of all to the electoral future. But the good ones—like the California Poll, the Minnesota Poll, or the Iowa Poll—should be used more widely.
Just as important as the use of polls is the requirement that they be used properly and consistently. A notable example of inconsistency occurred in New Jersey’s gubernatorial election of 1961, in which the former Secretary of Labor, James P. Mitchell, ran on the Republican ticket against a little-known Democrat, Richard J. Hughes. Hughes was billed throughout the campaign as the underdog, struggling to win votes against a well-known opponent. Many were amazed when he won. They were surprised because journalists, in making their pre-election analyses, had paid attention to some polls but had arbitrarily ignored others.
During the campaign a series of polls was conducted for Hughes by the firm of John F. Kraft, Inc. At the time of the primary, Kraft found, few people knew anything about Hughes, and Mitchell was well in the lead. This poll was not released. In July another Kraft poll showed Mitchell to be in an even stronger position. The July figures were obtained in some manner by the Mitchell forces, who made sure the press was informed of them. This confirmed the journalists in their “underdog” view of Hughes—which they clung to until the election.
But here was their error. In September another poll showed that Hughes was catching up fast. Wishing to counteract the effect of the release of the July poll, the Democrats released this new information. When October interviewing showed a continuing movement toward Hughes, this information, without specific figures, was also made available to the press. But nobody was listening. Many reporters had apparently made up their minds that Mitchell was the strong favorite and shrugged off the later poll material as propaganda.
There may conceivably be journalists who deliberately create an impression that such-and-such an election is going such-and-such a way; there are certainly politicians and some poll-takers who adopt that technique. But the more frequent case is that the press uses an approach to elections shot through with all the flaws of impressionistic thinking when more scientific and reliable approaches are available.
I am aware of the press’s problems—not all of which opinion research can solve by any means. Deadlines must be met and articles written. When toward the end of a campaign a newspaperman is eager to write an authoritative, enlightening, and stimulating story, to hear from a pollster that “it’s a toss-up” must be dispiriting indeed. The pressures to predict are strong, as any pollster knows.
A preference for certainty over doubt, for the plausible over the proved, for drama over accuracy, for hunch and intuition over the hard-to-assemble facts, is a common human tendency. I suspect that we all tend to believe that what we personally feel, deep in our hearts, must be true—no matter how many times we have been disillusioned by life’s errant ways. But this tendency to trust our own intuitions, or those of our pal across the street, is precisely what we must avoid in attempting to measure the opinions of millions who do not live across the street.
Any attempt to analyze the complexities of modern public opinion must have solid statistical backing. Business, risking millions, knows this; so increasingly does government. But the press, presumably risking only a forgotten statement in yesterday’s discarded newspaper, often seems to behave as if it were operating in a simpler yesterday, when everybody knew everybody and the “labor vote” (or the “Italian vote” or whatever) could be “delivered,” and the electoral process could be grasped and analyzed in one man’s mind. I don’t know that it ever was that simple; but I know it is not that simple in 1962.
There is a great need, an unquestionable need, for all the fine political analysis that fine journalists can make. Journalists have much to tell us about the activities of politicians and the workings of political power. And the more eminent men of the press go beyond analyzing what is to suggest what should be; they are among our best critics of the political process. But when journalists want to put their finger on the public pulse and tell us with precision what men or women or farmers or factory workers or suburbanites or members of the upper middle class are thinking, feeling, and planning to do, they had better turn to the polls. The polls have for some time had their fingers on that pulse, and this is their small contribution to democracy.