The closer we come to election day on 14 October, the more media focus we’ll see on political polls. Poll results are often used to project the makeup of parliament, despite being snapshots of the polling period rather than reliable predictions.
Indeed, the accuracy of past political polls in New Zealand has been open to question. And the way results are framed can sometimes create confusion rather than provide useful context.
No poll is perfect. But understanding the quality of a poll and the results it produces requires knowing something about how the poll was designed and carried out.
We recently completed a guide to understanding public opinion polling in New Zealand that describes the important features of polls to look out for. These factors determine a poll’s quality and should be considered when drawing conclusions from the results.
Technical details (sample size, margin of error and so on) about a given poll are usually available. So what information is important to consider? Here are 10 things to think about when evaluating a political poll.
1. Sample size
Contrary to common misconceptions, good quality results about the New Zealand population can be obtained from polls with as few as 500 to 1,000 participants, if the poll is designed and conducted well.
Bigger samples lead to less random variation in the results (that is, smaller differences between the sample’s results and those of the whole population). But bigger samples are more expensive to collect, and they don’t make up for poor sampling design or polling process.
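To see why modest samples can be good enough, note that under a standard statistical approximation the 95 percent margin of error for a simple random sample shrinks only with the square root of the sample size, so doubling the sample does not halve the error. A minimal Python sketch (the sample sizes are illustrative, not from any real poll):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p from a simple random sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

# Margin of error (in percentage points) for a 50% result at various sample sizes.
for n in (500, 1000, 2000, 4000):
    print(n, round(100 * margin_of_error(0.5, n), 1))
```

Quadrupling the sample from 500 to 2,000 only halves the margin of error, which is why a well-designed poll of 500 to 1,000 people can already give useful results.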
2. Target population
It should be clear which group of people the results are about. Results about sub-groups (for example, women in a certain age group) should be treated more cautiously, as these are associated with smaller samples and therefore greater error.
3. Sampling method
Sampling design is crucial. It determines how well the poll sample matches the target population (such as people intending to vote). Polls should be conducted with an element of choosing people at random (random sampling), as this achieves the best representation of the population.
Polls that allow for self-selection and that do not control who can participate—such as straw polls on media and social media sites—will end up over-representing some groups and under-representing others. This leads to biased, inaccurate results.
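The effect of self-selection can be simulated. In this hypothetical sketch, true support for a party is 30 percent, but its supporters are twice as likely to respond to a self-selected straw poll; the straw poll then badly overstates support (all figures are invented for illustration):

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

# Hypothetical population: 30% support party X (1 = supporter, 0 = not).
population = [1] * 300 + [0] * 700

# Self-selection: supporters respond at a 60% rate, others at 30%.
responders = [v for v in population
              if random.random() < (0.6 if v else 0.3)]

true_support = sum(population) / len(population)   # 0.30 by construction
straw_poll = sum(responders) / len(responders)     # inflated well above 0.30
print(round(true_support, 2), round(straw_poll, 2))
```

No amount of extra responses fixes this: the bias comes from who chooses to take part, not from the sample size.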
4. Sample weighting
When characteristics about the population and the sample are known—such as the percentage of women, age or region—“weighting” increases the contribution of responses from groups under-represented in the sample to better match the population of interest.
This is achieved by making responses from under-represented respondents count more towards the results of the total poll.
Weighting cannot be used, however, to correct for unknown differences between the poll sample and the total population. The distribution of population characteristics like gender and age is known through the census, and can be adjusted for in the sample with weighting.
But we don’t have known population characteristics for other things that may affect the results (such as level of interest in politics). Good sampling design, including elements of random sampling, is the best way to ensure these important but unknown characteristics of the poll sample are similar to those of the whole population.
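As a hypothetical sketch of how weighting works (the population shares and responses below are invented, not census or poll figures): if women make up 51 percent of the population but only 40 percent of the sample, their responses are scaled up so they count for 51 percent of the result.

```python
# Hypothetical weighting example: all figures are invented for illustration.
pop_share = {"women": 0.51, "men": 0.49}   # assumed known (e.g. from a census)
sample = {
    "women": {"n": 400, "support": 0.40},  # 400 of 1,000 respondents
    "men":   {"n": 600, "support": 0.50},
}

total_n = sum(g["n"] for g in sample.values())

# Unweighted: women count for 40% of the result, matching the sample.
unweighted = sum(g["n"] * g["support"] for g in sample.values()) / total_n

# Weighted: each group counts by its population share instead.
weighted = sum(pop_share[k] * g["support"] for k, g in sample.items())

print(round(unweighted, 3), round(weighted, 3))
```

The weighted estimate shifts toward the under-represented group’s responses; note it can only correct for characteristics, like gender here, whose population distribution is actually known.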
5. Poll commissioner and agency
Knowing who paid for the poll is useful, as there may be vested interests at play. Results could be released selectively (for example, just those favourable to the commissioning organisation). Or there may be a hidden agenda, such as timing a poll around particular events.
Equally, we can be more confident in poll results when the polling agency has a strong track record of good practice, particularly if they follow national and international codes of best practice.
6. Poll timing
Knowing when the poll was conducted, and what was happening at the time, is important. Poll results describe public opinion at the time the poll was conducted. They aren’t a prediction of the election outcome.
7. Margin of error
Margins of error are a natural consequence of taking a sample. The margin depends on both the size of the poll sample or sub-sample, and the proportion of the sample selecting a given option. The margin of error is largest for a proportion of 50 percent and smaller at more extreme values—such as 5 percent and 95 percent.
This makes knowing the margin of error for smaller results very important. Minor parties, for example, may be close to the five percent threshold for entering parliament. Knowing the margin of error therefore provides a better picture of where they stand relative to this important threshold.
Considering the margin of error is also vital for assessing changes in poll results over time and differences within polls.
But the margin of error does not account for other sources of error in poll results, including those due to poor sampling methods, poorly worded questions or poor survey process.
The total error in a political poll consists of these other sources of error as well as the sampling error measured through the margin of error. Unless a poll is perfectly conducted (which is highly unlikely), the total survey error will always be larger than the margin of error alone would suggest.
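To make the minor-party point concrete with a hypothetical sketch (a poll of 1,000, with invented figures): the margin of error for a party polling near 5 percent is much smaller than for one near 50 percent, yet its interval can still straddle the 5 percent threshold.

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p from a simple random sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

n = 1000
# A 50% result carries a wider margin than a 4.2% result...
major = margin_of_error(0.50, n)   # roughly +/- 3.1 percentage points
minor = margin_of_error(0.042, n)  # roughly +/- 1.2 percentage points

# ...but the minor party's interval can still straddle the 5% threshold.
low, high = 0.042 - minor, 0.042 + minor
print(round(100 * low, 1), round(100 * high, 1))
```

So a reported 4.2 percent could plausibly mean anything from about 3 percent to just over 5 percent; and as noted above, this interval reflects sampling error only, not the other sources of error in a poll.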
8. Precise question wording
Responses to a poll question can vary markedly depending on how it is asked. So, pay attention to what specifically was asked in the poll and whether question phrasing could influence the results.
9. Percentage of ‘don’t knows’
Large percentages of “don’t know” responses can indicate questions on topics that poll respondents aren’t well informed on, or that are difficult to understand. For example, the percentage of “don’t know” responses to preferred prime minister questions can be as high as 33 percent.
10. The electoral context
The composition of parliament is determined by both general and Māori electorate results. Pay attention to the Māori electorates, where polls are often harder to conduct.
Māori electorate results are important, as candidates can win the seat and bring other MPs (proportionate to their overall party vote) in on their “coat tails”.
Finally—watch the trends
Making sense of polls can be challenging. Readers are best placed to interpret results alongside other polls, past and present. Keeping margins of error in mind while doing so helps reveal the overall trend in public opinion.
This article was originally published on The Conversation.
Nicole Satherley is an honorary academic in Psychology at the University of Auckland, Andrew Sporle is an honorary associate professor at the University of Auckland, and Lara Greaves is an associate professor at Te Herenga Waka—Victoria University of Wellington.