Apr 29
Citizen Hack – Deciphering Polling – and Survey – Results (Civic Mind)
Filed Under Civic Mind
“Dewey Defeats Truman!”
This headline, from the 1948 presidential election, has become a historical lesson on the potential dangers of polls. In that case, due to the lack of a representative sample and the inability to determine whose votes were decided and would actually be cast, the results did not accurately reflect the intentions of the larger voting public. Flash forward to the 2016 presidential election, when some polls leading up to Election Day did not quite square with the results in some states. Just as with any other means of gathering data to further knowledge, knowing what to look for when reading polling results can help you determine how much stock to put in the findings. But what is a poll, and in what ways does it differ from a survey?
Both formats of questions allow us to gather information from individuals, known as respondents, in order to analyze patterns in the public. Surveys provide a means for professionals to ask people directly – in a variety of formats – about their opinions, behaviors, experiences and personal characteristics, whereas polls focus on one or a few narrow questions. (Think about a field survey or survey course, which can offer a view of the larger landscape.) Surveys typically take much longer to craft and administer; polls can be run quickly and frequently. Each, then, serves a different purpose even though their methods may be very similar. However, in order to benefit from the availability of any results, we need to be aware of several key factors that shape whether we should put credence in them, as well as the inherent limits of this type of data.
Although surveying, as a means of gathering data beyond opinions, can be used more extensively to capture all members of a group – like the US Census – our focus here is on the use of smaller subsets of individuals, known as samples, in order to draw conclusions about a larger group of people, known as a population. Except for rarer – and expensive – types of data collection from the public, such as the census, almost all poll data comes from subsets of the population. The form and size of the sample help us judge how reliable the findings may be.
Both surveys and polls come in many forms, only some of which provide valid information that you can generalize to the larger population. In general, scientific polls provide the most useful results, because respondents are chosen through a random sample, in which each person has an equal likelihood of being selected. Such an approach is the most likely to produce a sample that is representative of the larger whole. Therefore, to have the most reliable information possible, you want to see that the poll was conducted with this method.
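To make the idea concrete, here is a minimal sketch – in Python, using hypothetical respondent IDs – of how a simple random sample gives every member of a population the same chance of being selected:

```python
import random

# Hypothetical population of 10,000 respondent IDs.
population = list(range(10_000))

# Draw 1,500 respondents without replacement; random.sample gives
# every member an equal chance of being chosen.
sample = random.sample(population, k=1_500)

print(len(sample))        # 1500
print(len(set(sample)))   # 1500 -- no one is picked twice
```

Real pollsters face the harder problem of building a complete, unbiased list of the population in the first place; the sketch only illustrates the equal-likelihood principle.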
Non-scientific polls, which rely on convenience or other non-random means of capturing responses (e.g., clicking a link on a website), may offer some useful information for limited purposes, but their results cannot be treated as representative of a larger whole. One of the least reliable types of non-scientific polls is a push poll, in which callers contact respondents under the guise of polling and use the opportunity to press (push) ideas upon them using leading language. (Individual questions in a poll or larger survey can also be misleading, but push polls are designed to sway opinion as a whole, rather than measure it. For that reason, researchers often consider them faux polls.) However, the means of selecting respondents is just one element to consider; the number of individuals responding also matters to the reliability of results.
Sample size also helps us understand how seriously to take results. In essence, the larger the sample, the more reliable the results. It generally takes about 1,500 responses in a random sample to get helpful results. More is better, of course, but gathering more takes more time and money, so researchers and other pollsters are always balancing accuracy against cost. Reputable pollsters, however, report information that allows you to assess how accurate the results may be given the number of respondents.
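The standard margin-of-error formula shows why roughly 1,500 responses is a common target, and why ever-larger samples yield diminishing returns. A minimal sketch, assuming a simple random sample and the worst-case proportion of 50%:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (500, 1000, 1500, 5000):
    print(f"n = {n}: about ±{margin_of_error(n) * 100:.1f} points")
```

Note how going from 1,500 to 5,000 respondents shrinks the margin only from about ±2.5 to ±1.4 points, even though the sample more than triples in size – exactly the accuracy-versus-cost trade-off described above.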
Poll results for specific questions should include a margin of error, which indicates the range of confidence in the results. The smaller the margin of error, the more reliable the results. The closer the results – for example, between two candidates – the greater the chance that an outcome, such as an election win, is unpredictable. If there is a 2–3 point difference between candidates and a 3% margin of error, then the leading candidate could be ahead by more or could actually be behind the other candidate; that uncertainty is why some races are considered “too close to call”. The same could be said for differences of opinion on issues or events, all of which presume that we have well-formed questions.
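The “too close to call” logic can be sketched as a simple rule of thumb (the poll numbers below are hypothetical; analysts often apply an even stricter test, since the margin of error on the difference between two candidates is larger than the margin on either candidate alone):

```python
def race_call(a_pct, b_pct, moe):
    """Rule of thumb: a lead inside the margin of error
    should not be treated as a real lead."""
    lead = abs(a_pct - b_pct)
    return "clear leader" if lead > moe else "too close to call"

# Hypothetical poll: 48% vs 45% with a 3-point margin of error.
print(race_call(48, 45, 3))   # too close to call
print(race_call(52, 44, 3))   # clear leader
```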
Even with a reliable sample form and size, survey or polling results can still be affected by the wording of the questions asked. Individuals who use polls for research purposes spend a good deal of time refining the wording of a question, both to ensure that everyday respondents can understand it and to ensure that the question itself does not push a respondent toward a particular response (a leading question). A brief discussion of the factors that researchers take into consideration when drafting questions may be found here. Other question-wording issues should be noted as well. Some research on surveys has even examined the order in which questions and response options are presented, finding that it can affect the ways that people answer.
Ultimately, as with any data collection on any topic, we are best served by looking at the collective results of multiple polls or studies that use reliable methods to gather and process the data. For this reason, we often see sources report moving averages of results across multiple polls collecting the same data over the same period of time. In addition, looking at the results of larger-scale research projects, not simply those focused on electoral horseraces or other sensational items, will yield more helpful information. For example, the American National Election Studies and the General Social Survey have been conducted on a regular basis for decades – though the researchers gathering these data sets do not focus on prediction the way news outlets do, their findings allow users to make deeper meaning of results and their likely causes.
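A moving average of the kind poll aggregators report can be sketched in a few lines (the support numbers below are hypothetical):

```python
def moving_average(values, window=3):
    """Trailing moving average: each output is the mean of the
    current value and the (window - 1) values before it."""
    return [
        sum(values[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(values))
    ]

# Hypothetical support numbers from six successive polls:
polls = [44, 47, 45, 49, 46, 48]
print(moving_average(polls))
```

Averaging across several polls smooths out the poll-to-poll noise that any single result carries, which is why aggregated averages tend to be steadier than individual headlines.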
By becoming better informed about these elements, we can become more critical evaluators of data, relying on more than our own existing opinions to judge it. Herbert Asher’s Polling and the Public: What Every Citizen Should Know provides a concise overview of the key elements discussed in this post and more. In addition, the American Association for Public Opinion Research (AAPOR) provides ethical guidelines, including disclosure expectations, and best practices for professionals who conduct polling; published results should always include these basic pieces of information. Finally, this recent episode of WITF’s Smart Talk (an NPR affiliate) offers some great insights on polling from Berwood Yost, currently the Director of the Center for Opinion Research at Franklin and Marshall College, and a former colleague of mine when I taught at Millersville.
We also need to keep in mind that there are limits to survey data and the conclusions that we can draw from it. Unless data is captured from the same respondents over time – usually referred to as a panel – the results are simply a snapshot, and we need to consider the context in which the responses were gathered. In addition, unless data is gathered on other opinions, behaviors and respondent characteristics, we cannot draw larger conclusions as to the cause and effect of a specific opinion.
Polling and survey data, like other methods of gathering information, are not perfect. However, they do provide a much more reliable source of evidence than our own impressions based on people around us, who might not represent the larger whole, or on assumptions that our opinions reflect those of a larger portion of the public (known as false consensus). Ultimately, we should push ourselves to ask for data beyond electoral horseraces and simplistic issue stances. By harnessing effective data drawn from reputable sources, we can not only better understand the public mood, but we can better craft solutions to societal issues.
Comments
1 Comment so far
I’m saving this for future reference when I do my research methods class.