September 29, 2021 | Reading Time: 8 minutes

Why we can’t quit polling

Our love of public opinion surveys and data science reflects an aspect of American politics that’s at least skewed, at most broken.


Lee Drutman — a scholar in worthy pursuit of a fix for America’s vicious polarization — recently offered an analysis in the Times that inadvertently demonstrates an aspect of American politics that’s at least skewed, at most broken.

The core of his article is a sensible argument that America needs a more balanced, flexible party system. To help us understand, he offers a 20-question survey of major policy issues (Question 1: “Marijuana should be legal,” offering five response options ranging from “strongly agree” to “strongly disagree”). The aim is to sort readers into different political home bases. Drutman contends that these political home bases should be the foundation of a future six-party system.

Drutman’s vision sounds better than our current neck-on-throat two-party standoff. (He literally wrote the book on it.) The problem is the survey itself. Can it really support the weight placed on it? 

Consider that first question on marijuana. I chose “somewhat agree.” I think we shouldn’t send people to overcrowded prisons and ruin their lives for possessing marijuana, especially given the stark racial disparities in arrests. But I don’t think marijuana should be exactly “legal” either, given our limited understanding of the drug.

But what conclusion can be drawn from my “somewhat agree”? Especially when it comes to the article’s project of placing me into a political home base with like-minded people? Is this an issue I’ve thought a great deal about, or a spur-of-the-moment reaction? Is this an issue that will affect my vote? Do I not care much about this particular question, but still use a candidate’s views on marijuana as a signifier of their ideology, which is something that I do care about? 

These kinds of issues haunt survey research, as well as the broader enterprise of understanding and measuring how Americans form and act on political opinions. A 2017 meta-analysis of public opinion research concluded that “there is no agreement among political scientists on how to best measure public opinion through polls,” and quoted a famous observation that “to speak with precision of public opinion is a task not unlike coming to grips with the Holy Ghost.”

Most people have heard of the high-profile polling mishaps of recent years. But the less prominent, and likely more insidious, problem is that we’re not measuring what Americans actually think or how they will act politically with anything near the accuracy that we believe we are. What we end up with is a murky lens through which to view our voters, our politics and our governance. Cloudy viewing leads to cloudy thinking: tautological, motivated reasoning about what people want, how we should be campaigning and what our leaders should do.

In the last 40 years, we have undergone a revolution in understanding how people make the judgments we think we are testing in surveys. Much in the way that physicists for hundreds of years based their thinking on Newton’s laws of motion, political scientists long built theories of political behavior on the idea that people thought through public policy issues and made rational voting decisions based on their preferences.

But in founding the field of behavioral economics more than 40 years ago, Nobel laureate Daniel Kahneman and his colleague Amos Tversky proved that that’s not really true. People make decisions via a whole bag of mental tricks — shortcuts and biases and heuristics that help them turn the complex into the familiar. 

In polling, though, we still work in a pretty Newtonian world. If someone is asked for her opinion on issue X, we presume she will answer in a way that more-or-less accurately reflects her opinion. And if she is given the opportunity to vote in a democratic election, we likewise assume that she will rationally vote in a way that lines up.

But we really don’t know the degree to which that is consistently the case. There’s good reason to think that often it’s not. The reality of our minds is much more complicated, and the way we react to questions and the link to our subsequent behavior is a lot more convoluted. 

To continue the physics analogy, it is probably closer to quantum mechanics: in the same way that physicists believe that particles don’t really have a definite position until someone directly observes them, many voters don’t have a definite position on many issues until forced by some outside influence (e.g., being asked in a poll or being confronted with a voting decision) to express it.

After all, do most people outside elite political circles really spend their time thinking about health policy, let alone sub-issues like universal coverage, choice of doctors, or prescription drug price negotiations? No (one-third of Americans don’t know that Obamacare and the Affordable Care Act are the same thing). And at the point that these issues are presented, the circumstances surrounding the question will go a long way to determining the answer they give.

There’s no simple reality
This is why it can make such a huge difference in polling to make minor changes in how questions are worded or the order in which they are asked. One example: Pew found a seemingly straightforward question on whether “jobs are easy to find” in someone’s area yielded a roughly even split in “yes” and “no” responses. But that turned into a yawning 27-point gap in favor of “no” when a single word (“good” jobs instead of just “jobs”) was added. One version says the public is divided almost in half. The other describes a 60-33 landslide. How does one draw reliable conclusions about policy or the political environment from that?

One can see this complication in Drutman’s and many other surveys. Question 16: “Should the government raise taxes on incomes above $200,000?” What can one confidently conclude from a “yes” or a “no” answer about the respondent’s views and ideology? Support for increasing the gas tax in surveys can run anywhere from a paltry 20 percent to a thumping 70 percent, depending on whether the question explains how the revenue will be used and how large the increase would be.

Ditto for questions like, “Do you favor or oppose providing a way for undocumented immigrants already in the United States to become citizens?” It depends on which immigrants the respondent has in mind. Other polls find support at 71 percent for farmworkers, but only 44 percent for all undocumented immigrants. And even those numbers reflect embedded complexity, since they are a mix of “strongly support” and “somewhat support,” which, as my marijuana answer shows, could be expressing very different underlying thinking. 

When surveys can plausibly be used to support different takes on what people think, they tend to become fodder for advocates. Two years ago, progressives cited support for the Green New Deal at 80 percent, Medicare for All at 70 percent (including 52 percent of Republicans), and Free College for All at 60 percent. Moderate Democrats responded that support for Medicare for All dropped to 48 percent when voters were informed that it is the same as single-payer coverage, and 34 percent when told that it might raise taxes. Support for the Green New Deal similarly wobbled if brushed with a light feather of context. 

The reality of what Americans truly thought on all of those questions? There’s no simple reality. Each result was conjured out of the context of the poll: who was being surveyed, under which methodological choices, with what question wording and ordering, and in what general context. When political campaigns use polling to simulate how issues like this will resonate with the voting public, they do a more sophisticated version of this exercise. But it is still a simulation rife with assumptions that may or may not play out as intended.

What are we getting?
And of course, we can’t forget the elephant in the polling room — the inaccuracy of polls when it comes to the most basic of political questions: who will win an election. Gobs of virtual ink have been spilled on this topic, and it is not worth belaboring. Suffice it to say that it remains a persistent and troubling problem. When the American Association of Public Opinion Research issued a report this summer looking into why national polling on the 2020 election was the least accurate in 40 years (state surveys were the worst in the last 20), the association concluded that it is “impossible” to say for sure.

But it may come as a surprise that even before the high-profile shortcomings of 2020 and 2016, campaign “horse race” polling was a lot less accurate than people realize. Over the last two decades, the average margin of error of all polls has actually been a whopping 14 points. So if you see a poll reported showing a dead heat, statistically speaking it could also be showing a total blowout. Nor are we solving our polling problems. In 12 of the last 13 elections, the “generic ballot” has consistently underestimated Republican support, a continuing issue that pollsters can’t quite account for or fix. 

And while opinion research experts say they believe that issue-based polling is more accurate and less prone to these kinds of baseline errors (and spectacular misses) than candidate head-to-head polling, it’s really not clear. Issue-based questions do have advantages. Probing for views on health care policy or taxes may not introduce the same set of biases in respondents. On the other hand, a horse-race question is a much simpler proposition for the voter to consider.

An additional layer of complexity comes from who is conducting opinion surveys. The website FiveThirtyEight famously brought polling averages into vogue, not only because they were supposed to smooth out the known statistical variations that come with individual polls, but also because pollsters are subject to all kinds of additional biases, methodological differences (we haven’t even delved into the weighting and likely-voter models pollsters apply — the “secret sauce,” as one pollster described it to me — that represent the survey administrator’s own judgment about what a “true” representative sample should be), wording preferences and general errors. FiveThirtyEight primly euphemizes all of this under the catch-all label “house effects.”

Not to mention that after polling is released, shorn of the careful context that survey experts tend to apply and characterized by journalists and political operatives with various levels of expertise and agendas, it is really hard to know what you are getting.

Be careful how you read polls
The bottom line is that neither quiz-like surveys like Drutman’s, nor research probing for voter views, nor their more sophisticated cousins that political campaigns use to calibrate campaign messaging, are measuring what people actually think about a given issue. 

They are measuring how people respond to deliberately formulated question wording in a very particular and artificial format (i.e., in a poll or a focus group). They are frequently not asking questions in the terms that the voter themselves would use. These are questions designed by professional opinion researchers and/or political operatives who may be, intentionally or unintentionally, introducing terms and ideas that political elites tend to use. 

But it is from this miasma that our political leaders and the professional class of political operatives, journalists, commentators and policy designers draw their conclusions about what people want and what kind of politics or communications will be effective.  

To illustrate how this can go awry: Drutman’s survey concluded that my views fit comfortably into a new “American Labor Party.” That’s … wrong. The survey clustered me with people who “focus on economic populism, with an appeal to working-class Democrats who don’t have college degrees and don’t follow politics closely.” I’m an economically moderate former political operative with a master’s degree who writes about politics. But funny examples aside (and I don’t mean to pick on Drutman’s survey, which is likely intended to be more illustrative than exacting), the basic problem is pervasive.

None of this is to say that all survey research — campaign-generated or not — is bunk. Carefully done surveys can measure changes in reactions to consistently worded questions over time, and that tells us something. For example, that Americans’ trust in government to “do the right thing” has gone from 75 percent to 25 percent in the last 60 years is a fairly robust insight into our general thinking about government. One can draw reasonable inferences from that.

Where we get into trouble in our campaigns and our political discourse is when we take survey research as a literal, or even especially precise, guide to Americans’ thinking. Any survey result is worth querying. Is the finding relatively consistent over multiple surveys, done by different groups, with different wording, and on an issue that respondents clearly understand as intended? Is it being cited to advance an agenda? Is there a different way to construe it?

Our politics have become more deeply mired in polarization, anger, and misinformation. We use opinion surveys and polls as our compass. If we aren’t more careful with how we read them, we may be doomed to continue wandering through this barren political wilderness. 

Matt Robison covers public policy and governance for the Editorial Board. The host of the Beyond Politics and Great Ideas podcasts for WKXL in Concord, NH, he lives with his family in Amherst, Mass.