United States Project

Iowa’s Ann Selzer on what journalists need to know about polling

November 17, 2015
Photo: Iowa Public Television

The profession of political polling is “teetering on the edge of disaster,” Jill Lepore writes in this week’s New Yorker. It’s a stark diagnosis, but not a fringe opinion.

As voters continue their migration from landlines to hard-to-reach cell phones, response rates for public-opinion surveys have dipped under 10 percent. A wave of high-profile polling misfires—in the 2014 midterms; in the UK, Greece, and Israel; and most recently in this month’s off-year elections in Kentucky and other states—has prompted many political pundits to question the relevance and utility of a once-vaunted social science. Gallup, the organization founded by the godfather of political polling, recently announced it was abandoning the horse race altogether.

At least one pollster, however, has emerged from recent election cycles with her esteem among pundits and poll-watchers undiminished, even enhanced. Ann Selzer, director of the Des Moines Register/Bloomberg Politics Iowa Poll, has been in the field for nearly three decades, beginning as an in-house pollster at the Register in the 1980s before launching her own public-opinion firm in the early ’90s. In recent years, Selzer’s reputation was cemented with her controversial—but, it turned out, accurate—forecasts in the 2008 Iowa caucuses and the 2014 Iowa Senate race. FiveThirtyEight places Selzer’s firm at the very top of its national poll ratings. Chuck Todd of Meet the Press tweeted on Election Day 2014, “Once again, it is Ann Selzer’s polling world in Iowa, we’re just lucky to live in it.”

With another round of Iowa caucuses looming, these accolades can’t insulate Selzer from the self-reflection and anxiety shared by many of her colleagues. But, as other major media outlets experiment with online polling and other novel techniques, she remains a traditionalist in her approach. Selzer spoke by phone with CJR from her Des Moines office last week about the challenges pollsters face, whether her profession is still relevant in the cell-phone era, and what journalists need to know when trying to distinguish good polls from bad. The transcript here has been edited for length and clarity.

What has changed the most in the years you’ve been working as a pollster?

Oh, goodness. Everything’s gotten faster, but there’s so much that has stayed the same, which is that the general premise is to approach it as a science—that you want every person in your meaningful universe, whether those are caucus-goers or general-election voters or the general population, to have an equal chance of being contacted for your poll.
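
[Editor’s note: The equal-chance principle Selzer describes is, at bottom, simple random sampling. A minimal sketch in Python, with an invented frame and sample size standing in for the “meaningful universe”:]

    import random

    # Stand-in frame for the "meaningful universe"
    # (hypothetical: 50,000 likely caucus-goers).
    universe = [f"person_{i}" for i in range(50_000)]

    # Simple random sampling: each person has the same
    # 800-in-50,000 chance of being selected.
    sample = random.sample(universe, k=800)

    print(len(sample), sample[:3])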


We’re seeing headlines about polling being “broken,” given the misfires in last week’s off-year elections in Kentucky and elsewhere, and in the recent UK elections, and in the 2014 midterms. Is there a sense that the profession is in crisis?

I think by nature we’re worriers, so we worry about all of these things.

You know, the crisis will come when everybody gets it wrong. And I don’t think we’re there just yet. I think that we have a proliferation of polls, and so it looks like a lot of people are getting it wrong, but there are a lot of people who are getting things right.

Certainly there are challenges that we’re facing with response rates—I mean, there are a lot of things that we’re worried about. And we really won’t know if everything is broken until everything has crashed. And not everything has crashed. So you can kind of separate the people who are getting along OK from the people who aren’t.

What about the issue of cell phones and declining response rates? Isn’t that going to continue to get worse? How do pollsters begin to counteract that problem?

Well, you do the best that you can. What I don’t want to do is be in a position where we are substituting our own judgment of what the electorate is going to look like for what the electorate is telling us it’s going to look like. So we rely on our method to take our best shot.

And yes, do I worry more about it? Of course. But I’m always worrying about things. [Laughs] It’s just part of my makeup to worry, so I can’t really say that I’m worrying more than I used to, because I always worry about how things are going to go. There are very few professions where you do a piece of work, it sits out there for a couple of days, and then you find out whether or not it was accurate, whether things stayed the same and what you measured turns out to be what actually happens. That’s a very, you know, nervous-making proposition.

You drew praise following the 2014 midterms. Is there something you and your team were doing that you think too many pollsters are not doing, that they’re missing? 

It seems as though they make a decision about what the electorate is going to look like based on guesses—and I don’t know how you would do more than guess—and then the electorate doesn’t turn out to look like that. Well, that’s kind of a hole you dug yourself.

So when I see people saying, ‘Well, you know, we’ve got to figure out the size and scope of what the electorate is going to look like,’ I go, ‘Well how do you do that exactly? What are you deciding here?’

Consider a low-incidence event, like a caucus…. In 2008, our final poll said that for 60 percent of the people who were going to show up on the Democratic side, this would be their first caucus. Well, there isn’t a model in the world, based on past data, that would suggest that 60 percent would be first-time caucus-goers. You just wouldn’t create a model that would look like that.

We trusted our data, and as it turned out, the entrance poll said it was 57 percent, or something in that realm. So to me, that says if you substitute your own experience and say, ‘We’ve never had, ever, anything like 60 percent show up on a caucus night,’ then you’re just saying that history is everything and that things will behave as they’ve always behaved.

And in fact, the best predictor of future behavior is past behavior—until there is change.

But if you have this broader problem of declining response rates to polls, is there a way to do your job without some level of guesswork, for lack of a better word?

Two things to say about that. Let me talk about a [2012] Pew study, first of all. Maybe you know this study. They said, ‘Our response rates are terrible, so let’s see what that’s doing to the quality of our data.’

Normally they get a 9 percent response rate. So they said, ‘Now let’s do the same study but let’s spend a whole lot more money, re-contacting people, giving incentives, doing whatever we can’—they could raise it to 22 percent. But their findings were no more accurate.

So for now—and that will go on my gravestone, ‘for now’ [laughs]—it appears as though the lower response rate is not harming the quality of the data.

And so this leads to the point that I make when people say they don’t respond to polls, so how can I be accurate? I say, ‘Well, for now, there is apparently a doppelganger who is just like you who is willing to participate in these polls in a representative number.’ For now.
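
[Editor’s note: The Pew figures Selzer cites also imply a steep cost curve. A back-of-envelope sketch in Python; the target of 1,000 completed interviews is invented for illustration:]

    # Contact attempts needed to reach a fixed number of completed
    # interviews at the two response rates from the Pew study.
    target_completes = 1_000

    for rate in (0.09, 0.22):
        attempts = target_completes / rate
        print(f"response rate {rate:.0%}: ~{attempts:,.0f} attempts needed")

    # Roughly 11,111 attempts at 9 percent versus 4,545 at 22 percent:
    # more than double the effort and expense, yet, as Selzer notes,
    # Pew found the findings were no more accurate.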

What are the most important things you think journalists need to understand when writing about polls?

We’ve jumped the shark, really, on the press being very discerning about which polls they’re going to cover and which polls they’re not going to cover.

I don’t see anybody saying, ‘This is a bad poll; we’re not going to mention it.’ Some of the polling aggregators I think have [influenced] this—they throw everything in. So without a reason to distinguish, everything is out there.

You don’t really have many reporters, in my experience, who are doing the work of looking at the methodology. This is how I spend a lot of my time: reporters calling to say, ‘Well, what do you think of this poll?’ And I look at the methodology and say, ‘Well, here’s what they did; here’s how that would skew things one way or another.’ So they rely more and more on pollsters to explain things to them that, in theory, would be their job, wouldn’t you think?

It does seem as if, despite the widely varying type and quality of polls out there, in a lot of news stories all polls are treated the same.

There was a poll last week, just for example, that was looking at likely Democratic caucus-goers. It started with a list of registered Democrats and then further screened the people it would call by whether they had voted in at least one of the last two statewide primaries.

Not everybody who shows up on caucus night will meet those criteria; you’ve left out a fair number of people. My rule is that each person who is going to show up on caucus night should have an equal chance to be contacted. Well, that poll didn’t start with that assumption.

So they find—I think it was a 41 [percentage point] gap between support for Hillary Clinton and support for Bernie Sanders. Well, if their polling method was skewing toward registered Democrats versus independents—OK, that’s one way it’s going to skew toward Hillary Clinton. If it’s going to skew toward people who participated before, they’re going to be older, not younger. So there are things in the method right away that say, well, this explains why it has a 41-point advantage for Hillary Clinton. You see what I’m saying? But I don’t think the reporter who called me had looked at the methods at all. [Editor’s note: The director of the poll responded to a similar critique here.]
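
[Editor’s note: A hypothetical simulation of the frame-exclusion skew Selzer describes. All numbers below are invented to show the mechanism, that screening on past primary voting inflates the margin for the candidate favored by repeat voters:]

    import random
    random.seed(1)

    def voter():
        # Assumed: 60% of the true caucus universe has voted in a past
        # primary; past-primary voters lean Clinton, first-timers Sanders.
        past_primary = random.random() < 0.6
        clinton_prob = 0.70 if past_primary else 0.40
        return past_primary, random.random() < clinton_prob

    universe = [voter() for _ in range(100_000)]

    def clinton_margin(people):
        share = sum(c for _, c in people) / len(people)
        return 100 * (2 * share - 1)  # Clinton minus Sanders, in points

    # The poll's screen: only people with a past primary vote are called.
    screened = [p for p in universe if p[0]]

    print(f"full caucus universe: {clinton_margin(universe):+.0f}-point margin")
    print(f"screened sample:      {clinton_margin(screened):+.0f}-point margin")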

In the wake of these high-profile polling misfires, we’ve seen Gallup announce that it’s giving up horse-race polling altogether, and some critics have lately questioned the whole concept of horse-race polling and what value it has for the democratic process.

The Gallup organization has changed dramatically in the last few years in terms of what exactly it does and doesn’t do. They do a lot more research that is not election polling: private polling and health-related work, and that’s where they’re making their money. So to the extent that they had something that was very visible and was detracting from their brand, I don’t think it’s terribly surprising that they decided, ‘We’re not going to do it.’

In terms of whether it’s worth doing at all: When I’m giving a speech, I invite people to imagine what the election would be like if you didn’t have a poll to know what was going on. Would you expect Ben Carson and Donald Trump to be leading? How would your perception of what is happening be different if you didn’t have some sort of gauge of public opinion? To me, it’s still a very valuable exercise.

Deron Lee is CJR’s correspondent for Iowa, Missouri, Kansas, and Nebraska. A writer and copy editor who has spent nine years with the National Journal Group, he has also contributed to The Hotline and the Lawrence Journal-World. He lives in the Kansas City area. Follow him on Twitter at @deron_lee.