Polls apart

I’m sure some people must think I spend the bulk of my time hanging out on Internet comment boards. I’ll admit I spend more time checking them out than I should for the sake of my blood pressure (because blatant untruths, partisan or not, make it rise), but it doesn’t take long to get a read on what people are ticked off about.

But I usually don’t wear a frilly collar when I roll my eyes.
GIF found on Giphy.

What often has my eyes rolling is the schizophrenic treatment of polls from people who have very little understanding of what polls actually are and can do: if they're positive for their guy, the pollsters can do no wrong, but if they're negative, all pollsters should be hanged by their pinky fingers.

True, some polling outfits are terrible, and perhaps pinky nooses should be prepared. Those would mostly be polls that, for example, use a sample that's too small or nonrepresentative (say, one that oversamples certain segments of the population), employ online-only opt-in polling that enables people to weigh in multiple times, or use questions designed to lead to predetermined results. But the majority of old hands in the polling game are responsible and transparent, and do valuable service.

Thank you, bad pollsters, for having those pinkies ready for us!
Image found on Coastal Orthopedics.

Yet so many pollsters are castigated simply because their results don't reflect what hyperpartisans think they should.

There’s a reason Gallup dropped out of the prediction part of election polling in 2015, choosing instead to focus on how voters felt about issues. As Time’s Daniel White wrote at the time: “When it comes to election polling, it’s the best of times and the worst of times. On the positive side, there is more polling than ever from private universities, news media and small independent shops. Sites like HuffPost Pollster, RealClearPolitics and FiveThirtyEight also provide sophisticated analysis of what the polls mean. On the negative side, the glut of polls often doesn’t add up to much, while problems with getting accurate results are starting to hurt the polling industry’s reputation.”

When your no-account brother-in-law conducts a poll by talking to his beer buddies, of course it's going to make all polling look bad. And no, those user polls on Twitter aren't exactly reliable … or at all reliable.

Some would argue that it was a bad day for just about everyone.
Editorial cartoon by Steve Kelley, Creators Syndicate.

What many people get wrong about the polls in the last election is that most established polls were accurate within the margin of error on the popular vote count, and that, not the Electoral College result, is what those national polls measure. To gauge the Electoral College count, Frank Newport of Gallup said, you would need to rely more on state-level polling in swing states, but that polling can have its own accuracy issues (sample size, quality, etc.). Attempts to predict how people will vote can also be brought low by unexpected Election Day turnout, or by people who don't know or don't want to say whom they're voting for. Somehow the USC Dornsife/Los Angeles Times poll managed to get the final electoral result right, but I'm not yet willing to get behind its online panel approach. When it gets a few more successes under its belt, I'll reconsider.

No win either way we went. A friend suggested we should go back to 2015 and start over. We need to make that happen.
Image found on Boston Globe.

In a close race like this last one, especially with two such unlikable candidates (yet still more likable than Congress or Vladimir Putin—toe fungus might give all of them a run for their money), you have to remember that polls, which capture how respondents feel at a particular moment in time, don’t deal in certainties, but rather probabilities. As Bill Whalen, a research fellow at Stanford’s Hoover Institution, said after the election, “Ultimately, pollsters are not Nostradamus. When it comes to polls, at best, you’re showing the current state of the race but you have no idea who will show on the actual election day.”

Yeah, I know, hard to believe. Maybe that’s why outfits like RealClearPolitics and FiveThirtyEight aggregate and average polls, and generally can be a bit more accurate. Of course, if you don’t care about accuracy … well, you’re probably the people annoying me on those comment boards. My boy is glaring at you from cat heaven right now.

If you sense a stinkeye from above, it’s my boy.
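Back to the aggregators for a second: "averaging" really is about as simple as it sounds, at least at its core. Here's a minimal sketch of a sample-size-weighted polling average in Python; all the numbers are invented, and the real outfits layer on extras like recency weighting and corrections for each pollster's house effects.

```python
# A minimal sketch of a poll average, weighted by sample size.
# RealClearPolitics and FiveThirtyEight do far more than this
# (recency weighting, house-effect adjustments, etc.).
# Every number below is invented for illustration.

polls = [
    # (candidate_share_percent, sample_size)
    (47.0, 1200),
    (44.5,  800),
    (48.5, 1500),
    (45.0,  600),
]

total_respondents = sum(n for _, n in polls)
weighted_average = sum(share * n for share, n in polls) / total_respondents

print(f"Simple average:   {sum(s for s, _ in polls) / len(polls):.1f}%")
print(f"Weighted average: {weighted_average:.1f}%")
```

The idea is that one noisy poll matters less when it's pooled with its peers, and bigger samples get a proportionally bigger say.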

So how can you tell if a poll is good or bad? There's too much to cover in this space, but most good polls have some things in common, including transparency about methodology and questions when reporting results: if I can't access the sample size, margin of error, questions as asked with tallied responses, and so on, I tend not to put much faith in them.

Writing on the Post Calvin blog, Ryan Struyk (a fellow nerd, of the math variety rather than word, and a data reporter for CNN) said the building blocks of good opinion polls include whether the poll randomly selects participants (the preferred method) or the participants select themselves. Self-selection typically happens with online opt-in polls (you know, the ones you get pop-up or email invitations for), and it's more likely to skew results; a rough simulation of that skew appears below. Whether interviews are live or automated is also a consideration: it's easier to lie to a machine, and because it's illegal in most cases to robo-dial cell phones, anyone who only has a cell phone wouldn't be able to participate. The USC/LA Times poll is conducted online with the same 3,000 or so panel members, so people with no Internet access couldn't participate when the original panel was recruited; no live interviews are conducted.
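To see why self-selection matters so much, here's a minimal sketch in Python. Every number is made up: a fictional electorate where 52 percent back candidate A, but B's supporters are twice as eager to answer an opt-in poll.

```python
import random

random.seed(42)

# Hypothetical population: 52% support candidate A, 48% candidate B.
# In this made-up scenario, B supporters are twice as likely to
# respond to an opt-in online poll as A supporters.
POPULATION = 100_000
voters = ["A"] * int(POPULATION * 0.52) + ["B"] * int(POPULATION * 0.48)

def random_sample(n):
    """Randomly selected respondents: every voter equally likely."""
    return random.sample(voters, n)

def opt_in_sample(n):
    """Self-selected respondents: B supporters opt in twice as often."""
    sample = []
    while len(sample) < n:
        voter = random.choice(voters)
        respond_prob = 0.10 if voter == "A" else 0.20
        if random.random() < respond_prob:
            sample.append(voter)
    return sample

def support_for_a(sample):
    return sample.count("A") / len(sample)

print("True support for A:      52.0%")
print(f"Random sample of 1,000:  {support_for_a(random_sample(1000)):.1%}")
print(f"Opt-in sample of 1,000:  {support_for_a(opt_in_sample(1000)):.1%}")
```

The random sample lands near the true 52 percent; the opt-in sample misses by double digits, and piling on more self-selected respondents doesn't fix it.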

This is how research nerds get out of arguments. It doesn’t work, really.
Editorial cartoon by Nate Beeler, Columbus Dispatch.

One should also consider how phone numbers for the poll are picked—the best coverage comes, Struyk wrote, from random-digit dialing to blocks of known residential numbers. Polls that use only numbers from voter registration are more problematic; as we’ve seen from voter rolls in Arkansas and elsewhere, clearing out old and incorrect information can be a massive task.

Weighting of the data is also sometimes necessary, to account for differences between the sample and census demographics. Struyk noted that really good polls use an "iterative weighting model" to weight individual participants, perhaps by age and gender. He cautioned against weighting by political partisanship.
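Struyk doesn't spell out the model, but one common iterative method is "raking": keep rescaling each respondent's weight until the weighted sample matches the census targets on every dimension at once. Here's a minimal sketch, with made-up respondents and made-up targets; treat it as an illustration of the general idea, not any particular pollster's recipe.

```python
# A minimal sketch of raking (iterative proportional fitting), one
# common form of iterative weighting by age and gender. All respondents
# and target percentages below are invented for illustration.

respondents = [
    # (age_group, gender) for each hypothetical respondent
    ("18-44", "F"), ("18-44", "F"), ("18-44", "M"),
    ("45+",   "F"), ("45+",   "M"), ("45+",   "M"),
    ("45+",   "M"), ("45+",   "M"),
]

# Hypothetical census targets: share of the population in each category.
targets = {
    "age":    {"18-44": 0.45, "45+": 0.55},
    "gender": {"F": 0.51, "M": 0.49},
}

weights = [1.0] * len(respondents)

def adjust(dim_index, dim_targets):
    """Scale weights so weighted totals match one dimension's targets."""
    total = sum(weights)
    for category, share in dim_targets.items():
        current = sum(w for w, r in zip(weights, respondents)
                      if r[dim_index] == category)
        factor = (share * total) / current
        for i, r in enumerate(respondents):
            if r[dim_index] == category:
                weights[i] *= factor

# Alternate between dimensions until the weights settle down.
for _ in range(20):
    adjust(0, targets["age"])     # match the age distribution
    adjust(1, targets["gender"])  # match the gender distribution

for r, w in zip(respondents, weights):
    print(r, round(w, 3))
```

Each pass nudges the sample toward the census on one dimension without wrecking the others, which is why the process is run repeatedly rather than once.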

And about that margin of error, Struyk wrote: “You just need a few hundred people to get a pretty good picture of what the whole country looks like if you have good sampling—and that’s probably why you’ve never been called for a poll. But the more people you ask, the more exact your answer is going to be. So the margin of error says, hey, we know we are pretty close.”
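The arithmetic backs him up. The standard back-of-the-envelope margin of error at 95 percent confidence is 1.96 times the square root of p(1−p)/n, with p = 0.5 as the worst case; real polls adjust for design effects, so this is a rough guide only. A quick sketch:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample of n.

    Uses the textbook formula z * sqrt(p * (1 - p) / n), with p = 0.5
    as the worst case. Real polls adjust for design effects, so treat
    this as a back-of-the-envelope number only.
    """
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 400, 1000, 2500):
    print(f"n = {n:>5}: ±{margin_of_error(n):.1%}")
```

Note how slowly it improves: going from 400 respondents to 2,500 (more than six times the work) only tightens the margin from roughly ±5 points to roughly ±2, which is why those few hundred people really do go a long way.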

So the next time someone complains about a poll and says he's never been called (because clearly that proves polls are wrong), you'll know he has no idea how polls are done. Just hold the eye roll until you get away.

Mr. Wonka, yet again you speak for me.
GIF found on BuzzFeed.
