Recently, Pew Research Center released a study evaluating methods for modeling likely voters, with an eye toward determining if, and how, they can be improved. Notably, Pew found that in the 2014 midterm election, as well as in a number of other recent elections, there has been a significant difference in accuracy between public and private pollsters' predictions. Where public pollsters were significantly off in predicting the results, TargetPoint and other private researchers accurately predicted the margins nearly across the board.
Among the most important differences between public and private methodologies are the sampling method and how likely voters are modeled. Most public pollsters, such as major news organizations, use random digit dial (RDD) samples, which reach a random sample of the population, and then narrow down who they consider to be likely voters based on a series of questions.
Private campaign researchers, including TargetPoint Consulting, usually work directly for a campaign or organization and rarely release their results to the public. Increasingly, campaign pollsters generate their samples from the voter file.
This results in survey samples that are composed entirely of registered voters and include a wealth of historical, observed voting behavior. With vote history and other voter data, we are able to build models grounded in concrete track records, rather than relying solely on often-misstated self-reports of turnout intention.
The Pew study found that “adding voter file records of past vote produced the greatest improvement in the forecasts.” This information is difficult to integrate when using RDD samples, so Pew suggests voter file sampling frames as a strong potential solution.
Pew also evaluated the use of modeled vote probabilities to inform polling. Whether built on a cutoff, including in the voter screen everyone at least X percent likely to vote, or weighted by each respondent's actual probability of voting, voter models proved very helpful in increasing poll accuracy. In particular, Pew found value in accounting for every voter by probability rather than imposing a hard cutoff. TargetPoint Consulting routinely uses this approach on projects that include modeling, predicting elections with poll-driven models that perform far better than polling alone.
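The difference between a hard cutoff screen and probability weighting can be illustrated with a minimal sketch. Everything below is hypothetical for illustration, not TargetPoint's actual models or data:

```python
# Illustrative comparison of two likely-voter approaches.
# All names, probabilities, and figures here are invented.

def cutoff_estimate(respondents, cutoff=0.5):
    """Candidate support among respondents whose modeled turnout
    probability meets the cutoff; each screened-in voter counts equally."""
    screened = [r for r in respondents if r["turnout_prob"] >= cutoff]
    if not screened:
        return None
    return sum(r["supports_candidate"] for r in screened) / len(screened)

def weighted_estimate(respondents):
    """Candidate support counting every respondent, weighted by modeled
    turnout probability, so low-propensity voters count, but less."""
    total_weight = sum(r["turnout_prob"] for r in respondents)
    return sum(r["turnout_prob"] * r["supports_candidate"]
               for r in respondents) / total_weight

# Hypothetical survey: low-propensity respondents lean differently
# from high-propensity ones, so the two estimates diverge.
sample = [
    {"turnout_prob": 0.9, "supports_candidate": 1},
    {"turnout_prob": 0.8, "supports_candidate": 0},
    {"turnout_prob": 0.3, "supports_candidate": 1},
    {"turnout_prob": 0.2, "supports_candidate": 1},
]

print(round(cutoff_estimate(sample), 3))    # 0.5 -- only the two high-propensity voters count
print(round(weighted_estimate(sample), 3))  # 0.636 -- everyone counts, discounted by probability
```

A hard cutoff discards the information that some respondents are, say, 30 percent likely to vote; weighting keeps that information, which is one reason Pew found the probability-based approach more accurate.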
Additionally, TargetPoint Consulting can use vote likelihood in producing polling samples and quotas, thereby helping to ensure representativeness and to distribute polling samples across contact modes.
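One way likelihood-informed quotas could work is to bucket the voter-file frame into propensity tiers and draw a fixed number of records from each. This is a hypothetical sketch with invented field names, tiers, and quota targets, not TargetPoint's actual sampling procedure:

```python
# Hypothetical sketch: drawing a poll sample from a voter-file frame
# with quotas set by modeled vote-likelihood tier.
import random

random.seed(7)  # fixed seed so the illustration is reproducible

# Toy voter file: each record carries a modeled turnout probability.
voter_file = [{"id": i, "turnout_prob": random.random()} for i in range(1000)]

def tier(prob):
    """Bucket voters into likelihood tiers used for quota targets."""
    if prob >= 0.7:
        return "high"
    if prob >= 0.4:
        return "mid"
    return "low"

# Quotas roughly proportional to each tier's expected share of the
# actual electorate (invented numbers).
quotas = {"high": 60, "mid": 30, "low": 10}

sample = []
for t, n in quotas.items():
    pool = [v for v in voter_file if tier(v["turnout_prob"]) == t]
    sample.extend(random.sample(pool, n))

print(len(sample))  # 100 completed targets across the three tiers
```

In practice the same tier targets could be tracked separately per contact mode (live phone, text, online panel), which is what makes this style of frame useful for multi-mode fielding.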
As Pew found, good research goes hand in hand with good data, good research design, and good sample management. As people become harder and harder to reach, truly random methods and single-mode contacts grow increasingly difficult. By utilizing the voter file and voter models to understand the actual electorate, we can determine whom we actually need to poll, when our sample is properly representative, and how to turn the data into accurate predictions. Such polling can be more expensive than traditional RDD polling, and it requires that researchers have ready access to voter files and a proper analytics infrastructure, but it is increasingly clear that these steps are necessary.
We urge anybody producing or consuming survey research to read Pew's report and strongly consider adopting similar methodology.