When I criticize the way social scientists study politics using polls and other surveys, people tell me I don’t know what academics really do and that, in fact, academics are doing exactly what I say they aren’t.
What about when an academic says it?
When the people have spoken, it is critical to understand what it is that they have said. Nonetheless, even in high-profile American presidential elections, this important task typically is left to journalists and pundits who are unlikely to have the ideal tools or adequate data to address this question. Because elections are not amenable to experimentation, it is difficult for scholars to make strong causal claims. As a result, most interpretations of election outcomes either rely on cross-sectional associations in survey data or are inferred from aggregate data on voting patterns by geographic area. Neither approach is the best that can be done.
In observational settings, panel data are widely acknowledged as the ideal basis for causal conclusions (1). When analyzed appropriately, they can eliminate most potentially spurious associations. Surprisingly, few panels are available for examining most election outcomes; the major data collections pertaining to elections are cross-sectional in design. However, for the two most recent presidential elections, a large, representative probability sample of the American public was interviewed in October 2012 and again in October 2016, shortly before Donald J. Trump’s victory. This panel provides an unprecedented opportunity to examine the basis of mass support for the winning candidate, support that ultimately elected him. What changed during this 4-year period to facilitate his support?