BLOG@CACM
Computing Profession

What Happened?


(Updated December 27, 2016)

Many people continue to be shocked by the election of Donald J. Trump as the 45th President of the United States. In this age of data analytics, professional pollsters and data scientists did not have their finest moment. While the past several elections have been more favorable to attempts to forecast outcomes ahead of time, that trend failed to hold in 2016. As forecasters attempt to understand exactly what happened, the data itself may hold vital clues.

Numerous factors contributed to the uncertainty in forecasting this election. First, there were two third-party candidates who, as recently as one month before Election Day, were collectively polling in the high single digits (approximately 9-10% in some polls). However, the final share of the national vote for all third-party candidates was 6.01%, with Gary Johnson and Jill Stein accounting for 4.34%, or 72% of this total [Leip, 2016, accessed 12/27/2016]. What happened to their vote? Those who supported a third-party candidate in the pre-election polls overwhelmingly voted for Trump in many key battleground states; polling averages showed that Hillary Clinton’s popular vote totals were in line with her polling numbers [The New York Times, 2016]. It was Trump’s numbers that were significantly underestimated by pollsters. Were the pollsters wrong? They were reporting the information that was being communicated by likely voters. What received less attention was the possibility of silent Trump supporters who either indicated a preference for one of the third-party candidates or remained undecided, when in fact they were leaning toward or intending to vote for him. The share of undecided voters in several state polls remained in the mid-to-high single digits leading up to Election Day, and many of these voters were likely silent Trump supporters.

To illustrate this point, the seven key battleground states are analyzed (Florida, Michigan, Nevada, New Hampshire, North Carolina, Pennsylvania, and Wisconsin), of which Trump won five (Florida, Michigan, North Carolina, Pennsylvania, and Wisconsin) and Clinton won two (Nevada and New Hampshire). Minnesota is not included in this list, since very few polls were reported there, with the last dated October 25, a full two weeks before Election Day. This suggests that few people felt Minnesota was in play for Trump, even though he visited Minneapolis several days before the election.

Table 1 reports the polling averages for these seven states [Real Clear Politics, 2016], based on polls reported from October 31 to November 7, along with the actual vote percentages. The polling averages were computed by weighting each poll’s reported percentages by its sample size to obtain an overall average. Any remaining weight that was not reported was attributed to undecided voters.

Table 1: Polling and Actual Vote Percentages for Battleground States

State                     Trump     Clinton   Johnson, Stein,   Undecided
                                              and Others
Florida (Polls)           46.74%    46.40%    3.44%             3.42%
Florida (Vote)            48.60%    47.41%    3.99%               —
Michigan (Polls)          43.02%    46.50%    6.02%             4.46%
Michigan (Vote)           47.26%    47.04%    5.70%               —
Nevada (Polls)            45.66%    44.92%    3.80%             5.62%
Nevada (Vote)             45.50%    47.92%    6.58%               —
New Hampshire (Polls)     42.83%    43.49%    7.96%             5.72%
New Hampshire (Vote)      46.46%    46.83%    6.71%               —
North Carolina (Polls)    46.34%    45.30%    3.22%             5.14%
North Carolina (Vote)     49.83%    46.17%    4.00%               —
Pennsylvania (Polls)      45.14%    46.85%    4.68%             3.33%
Pennsylvania (Vote)       48.20%    47.48%    4.32%               —
Wisconsin (Polls)         40.39%    47.61%    4.78%             7.22%
Wisconsin (Vote)          47.22%    46.25%    6.53%               —
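
As a concrete illustration of the averaging method just described, here is a minimal Python sketch that weights each poll’s reported percentages by its sample size and attributes the leftover share to undecided voters. The poll figures in it are hypothetical, not the actual polls behind Table 1.

```python
# A minimal sketch of the polling-average method described above: each poll's
# candidate shares are weighted by its sample size, and whatever share of the
# weighted average remains unassigned is attributed to undecided voters.
def polling_average(polls):
    """polls: list of (sample_size, {candidate: percent}) tuples."""
    total_n = sum(n for n, _ in polls)
    avg = {}
    for n, shares in polls:
        for candidate, pct in shares.items():
            avg[candidate] = avg.get(candidate, 0.0) + pct * n / total_n
    avg["Undecided"] = 100.0 - sum(avg.values())
    return avg

# Hypothetical example: two polls of different sizes.
polls = [
    (900, {"Trump": 46.0, "Clinton": 47.0, "Third party": 4.0}),
    (600, {"Trump": 44.0, "Clinton": 46.0, "Third party": 6.0}),
]
print(polling_average(polls))
# -> Trump ~45.2, Clinton ~46.6, Third party ~4.8, Undecided ~3.4
```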

In all seven states, the polling gap between Trump and Clinton was covered by the undecided voters. The most extreme case occurred in Wisconsin, where the undecided percentage exactly equaled the gap. This suggests that the undecided voters were sufficient to give either Trump or Clinton a victory in any of these states. Trump outperformed his polls in all of the states except Nevada (which he lost). Clinton outperformed her polls in all of the states except Wisconsin (which she lost). In four of the seven states, the actual percentage of votes for Johnson, Stein, and others was below the national third-party share of 6.01%. These data suggest that although Clinton was in a good position based on the polling averages in these seven states, the percentage of undecided voters created a small opening for Trump to defy forecasters’ expectations and take the lead. In other states that leaned strongly toward either the Democratic or Republican candidate, this level of uncertainty did not play a role in changing the outcomes.
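
The claim that the undecided share covered the polling gap in every state can be checked mechanically against the Table 1 averages. A short sketch:

```python
# Verify, from the Table 1 polling averages, that in every state the
# undecided share is at least as large as the Clinton-Trump polling gap.
table1 = {
    # state: (Trump, Clinton, third_party, undecided), all in percent
    "FL": (46.74, 46.40, 3.44, 3.42),
    "MI": (43.02, 46.50, 6.02, 4.46),
    "NV": (45.66, 44.92, 3.80, 5.62),
    "NH": (42.83, 43.49, 7.96, 5.72),
    "NC": (46.34, 45.30, 3.22, 5.14),
    "PA": (45.14, 46.85, 4.68, 3.33),
    "WI": (40.39, 47.61, 4.78, 7.22),
}
for state, (t, c, third, und) in table1.items():
    gap = abs(c - t)
    print(f"{state}: gap={gap:.2f}, undecided={und:.2f}, covered={und >= gap}")
```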

Figure 1 presents the results of Table 1 in graphical form. For each state, the arrow shows how the expected outcome from the polls transformed into the actual outcome once the votes were tabulated. The green lines on the plots indicate the possible splits of the undecided voters (based on the proportion of undecided voters given by the polls). Note that in some states (e.g., NH) the arrows extend beyond the green line, showing that the third-party candidates received fewer votes than the polling data suggested.

A closer analysis of the Election Day outcome yields some insight into the impact of the third-party candidates. Table 2 gives the vote totals for each of the major candidates (Trump, Clinton, Johnson, Stein) plus votes for others. In FL, PA, MI, WI, NV, and NH, the gap between Trump and Clinton was 40% or less of the total votes for third-party and other candidates. In MI, WI, NV, and NH, the gap was 20% or less of that total. In all these states, Gary Johnson’s vote count was larger than the gap between Trump and Clinton. Moreover, in MI, WI, and NH, Jill Stein’s vote count alone was larger than the gap. This suggests that there might have been a different outcome on Election Day had one or more of these third-party candidates not been in the race.

Figure 1: Plots of the expected outcomes from polling data (arrow tails) and the actual outcomes (arrow heads) in each of the seven battleground states presented in Table 1. The green dots on the plots indicate the expected outcomes based on the seven swing scenarios provided on the Election Analytics website. Table 3 shows the breakdown of these scenarios.

Table 2: Actual share of votes for each of the major candidates. The “Total” column reports vote totals for third-party candidates and others, the “ΔT,C” column reports the difference in votes between Trump and Clinton, and the last column reports ΔT,C as a percentage of the total vote for third-party and other candidates.

What happened in Pennsylvania may explain what happened in the upper Midwest, commonly referred to as the “rust belt.” A closer look at the county-level data [State of Pennsylvania, 2016] may yield some insight. Figure 2 shows the numbers of registrations by county for Democrats (left) and Republicans (right) from 2006 through 2016. Note that the vertical axis is logarithmic. Across almost every county, including Philadelphia County (the most populous county in Pennsylvania), there is an uptick in the number of registered Republican voters. The Democratic registrations, shown on the left, indicate an increase in some counties, but most counties are flat or nearly flat. This increase in Republican voters is clearly discernible at the far right of Figure 2. From 2015 to 2016, the number of registered Republicans increased by 10.8%, whereas over the same time period the number of registered Democrats increased by just 6.1%. The number of registered Republicans increased in all 67 counties from 2015 to 2016; the number of registered Democrats increased in just 45 of the 67 counties. In closely contested states, such small effects can be decisive. In 2012, Obama defeated Romney in Pennsylvania by 5.4 percentage points (52.0% to 46.6%), and in 2008, Obama defeated McCain by over 10 percentage points (54.5% to 44.2%).
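
For readers who want to reproduce this kind of registration analysis, the following is a minimal sketch assuming a hypothetical CSV (pa_registrations.csv, with columns county, year, dem_registered, rep_registered); the actual state files are formatted differently.

```python
# Sketch of the 2015-to-2016 registration-growth comparison described above,
# under the assumed (hypothetical) CSV layout noted in the text.
import pandas as pd

df = pd.read_csv("pa_registrations.csv")  # hypothetical file name
wide = df.pivot(index="county", columns="year")  # (party, year) columns

for party in ("dem_registered", "rep_registered"):
    r2015, r2016 = wide[party][2015], wide[party][2016]
    growth = 100.0 * (r2016.sum() - r2015.sum()) / r2015.sum()
    counties_up = int((r2016 > r2015).sum())
    print(f"{party}: statewide growth {growth:.1f}%, "
          f"counties with an increase: {counties_up} of {len(wide)}")
```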

Figure 2: Registered voters in Pennsylvania counties from 2006 to 2016, with Democrats on the left and Republicans on the right. There is a clear uptick in Republican registrations in the year just before the election, and this occurs in nearly every county.

A second explanation is that turnout was generally higher in “red” counties, that is, counties where the proportion of registered Republican voters was higher. Figure 3 shows the turnout (percentage of registered voters who voted) for each county in Pennsylvania for 2016 (top) and 2012 (bottom). Red dots indicate counties that went for Trump or Romney, and blue dots indicate counties that went for Clinton or Obama. Several features can be gleaned from this figure. First, there seems to be a positive association between turnout and “redness,” if we measure redness by the proportion of registered voters who are registered as Republicans. Second, the distribution of percentages of Republican voters across the counties shifted to the right from 2012 to 2016. Third, turnout was generally higher and less variable in 2016 than in 2012. The state’s most populous county is Philadelphia County, which includes the city of Philadelphia and has by far the highest proportion of registered Democrats. This county had a higher turnout in 2016 than in 2012, but Trump received a higher percentage of its votes (15.5%) than Romney did (14.1%) in the previous election.
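
A rough way to quantify the association between “redness” and turnout is a simple correlation. The sketch below uses placeholder arrays, since the real inputs would come from the county-level data behind Figure 3.

```python
# Correlate each county's share of Republican registrations ("redness") with
# its turnout. The arrays here are hypothetical placeholders.
import numpy as np

pct_republican = np.array([35.2, 41.0, 48.7, 52.3, 58.9, 61.4])  # hypothetical
turnout        = np.array([61.0, 63.5, 66.2, 68.0, 70.1, 71.3])  # hypothetical

r = np.corrcoef(pct_republican, turnout)[0, 1]
print(f"Pearson correlation between redness and turnout: {r:.2f}")
```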

Figure 4 shows the change from 2012 to 2016 in the percentage of registered Republicans and the turnout. Nearly all counties moved upwards, indicating a higher turnout generally. Most counties also moved rightward, indicating a higher measure of “redness.”

Forecasters are not predictors. The former provide probabilistic statements about potential future outcomes, while the latter provide definitive ones. A weatherperson is perhaps the canonical example of a forecaster. They use the available data and match it to historical patterns to provide a set of scenarios of what one can expect to observe. If they were asked to predict (i.e., make a definitive statement about) the weather, the tools available to them would be woefully inadequate to provide that level of specificity and precision. That is why the paths of hurricanes in the Atlantic Ocean are tracked using multiple models, with often conflicting forecasts and uncertain paths. In the end, one model is likely to be correct. Does that mean the incorrect models should be discarded? Of course not. Each model will demonstrate its utility in time; it is just not known beforehand when that will occur. Another example of how difficult it is to predict future events is the NCAA men’s basketball tournament (aka March Madness). The amount of data available about the tournament teams and their performance dwarfs what pollsters had available to them in this year’s election. Yet when ESPN posts the winning bracket in its bracket competition, it is rare that the best bracket (out of 10M+) contains fewer than six incorrect games out of the 63 games played in the main bracket draw. Does this mean that sports analytics is flawed? Similar examples can be found in how we make investments, buying index funds rather than individual equities. Managed mutual funds typically under-perform index funds, indicating that even the most skilled money managers fall short against a basket of diversified products. All this suggests that there are limitations in the level of precision that can be achieved when forecasting something that has yet to happen and whose outcome is fraught with uncertainty. Indeed, everyone expects to miss some games in their bracket, yet with a winner-take-all election forecast, the expectation of precision makes even the best-equipped data scientists subject to errors. In essence, the only way to achieve forecasting perfection is to make no forecasts at all.

Figure 3: Percentage of registered Republicans (a measure of “redness” of the county) versus turnout, for 2016 (top) and 2012 (bottom).

Figure 4: Change from 2012 to 2016 in the percentage of registered Republicans and in turnout, by county.

Several popular forecasting websites (FiveThirtyEight, the Princeton Electoral Consortium, the New York Times Upshot, and Election Analytics) all had Clinton winning, with a 75% to 99% chance of victory, prior to the election. All of these forecasts were wrong based on the outcome of the election, but they were consistent with the available polling data. For example, in Wisconsin, every single poll available from Real Clear Politics had Clinton ahead, with an average lead of 6.5 points (as computed by Real Clear Politics). Given this data, it would have been hard to justify a forecast with Trump winning Wisconsin.

At Election Analytics, there were 21 possible forecast scenarios, determined by whether the race was treated as two-person (Trump versus Clinton), three-person (Trump versus Clinton versus Johnson), or four-person (Trump versus Clinton versus Johnson versus Stein), crossed with seven possible ways to swing the undecided voters (ranging from Very Strong Democrat to Very Strong Republican).
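
To make the swing scenarios concrete, here is a minimal sketch of how an undecided-swing scenario might reallocate poll shares in a two-person race. The split fractions are illustrative stand-ins, not the actual per-scenario breakdowns from Table 3.

```python
# Sketch of applying an undecided-voter swing in a two-person race, ignoring
# the third-party share. The fractions below are illustrative stand-ins for
# the actual per-scenario splits given in Table 3.
SWINGS = {
    "Very Strong Democrat":   0.25,  # fraction of undecideds going to Trump
    "Neutral":                0.50,
    "Very Strong Republican": 0.75,
}

def apply_swing(trump, clinton, undecided, rep_fraction):
    """Reallocate the undecided share between the two candidates."""
    return (trump + rep_fraction * undecided,
            clinton + (1.0 - rep_fraction) * undecided)

# Wisconsin's Table 1 polling averages: even a strong Republican swing of the
# undecideds does not put Trump ahead, which is why a Trump win in Wisconsin
# was so hard to justify from the polls.
t, c = apply_swing(40.39, 47.61, 7.22, SWINGS["Very Strong Republican"])
print(f"Trump {t:.2f}% vs. Clinton {c:.2f}%")  # roughly 45.8% vs. 49.4%
```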

Table 3 provides the breakdown for undecided voters based on these seven scenarios.

The four-person, neutral-undecided scenario yielded a 99.20% chance of a Clinton victory. Of the 21 scenarios, only one (two-person, with a Very Strong Republican undecided swing) had Trump winning, with a 77.66% chance and an expected 274.63 to 263.37 Electoral College win, though through a different set of states than the actual outcome.
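
Win probabilities like these are typically produced by simulation. The sketch below shows the general idea, not the Election Analytics model itself: treat each state as an independent draw with some win probability, sum electoral votes, and count how often a candidate reaches 270. All inputs are illustrative.

```python
# Monte Carlo sketch of an Electoral College forecast. The state win
# probabilities and the safe-vote total are hypothetical, chosen only to
# illustrate how a headline number like "99.20% chance" can arise.
import random

# state: (electoral votes, hypothetical P(Clinton wins state))
STATES = {"FL": (29, 0.55), "MI": (16, 0.79), "NC": (15, 0.45),
          "NH": (4, 0.70),  "NV": (6, 0.58),  "PA": (20, 0.77),
          "WI": (10, 0.84)}
SAFE_CLINTON = 203  # electoral votes assumed not in play (illustrative)

def clinton_win_probability(n_trials=100_000):
    wins = 0
    for _ in range(n_trials):
        ev = SAFE_CLINTON + sum(votes for votes, p in STATES.values()
                                if random.random() < p)
        wins += ev >= 270
    return wins / n_trials

random.seed(2016)
print(f"Estimated P(Clinton wins): {clinton_win_probability():.4f}")
```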

Trump versus Clinton is arguably the biggest upset since Truman versus Dewey in 1948. The Chicago Tribune’s headline declaring that Dewey defeated Truman is in some ways analogous to data scientists like FiveThirtyEight and Election Analytics forecasting a likely win for Clinton. Once again, when one least expects it, a major political upset occurs. A comparison of the voting data illuminates which of these upsets was larger. Truman won 303 Electoral College votes to Dewey’s 189 (Strom Thurmond won 39 as a third-party candidate). Trump won 306 to Clinton’s 232. From a popular vote perspective, Truman won 49.55% to Dewey’s 45.07% [Leip, 2016], while Clinton won 48.04% to Trump’s 44.95%. Using this metric, Trump’s victory was more of an upset than Truman’s. A closer inspection of the battleground states can shed more light on the matter. Truman had three states in which he won by under 1%: Illinois (0.84%), California (0.44%), and Ohio (0.24%). If Dewey had won all three states, he would have won the election. The total popular vote difference for these three states was just under 59,000 out of nearly 11,000,000 votes, or 0.54% of the votes cast in these three states, or 0.14% of the national popular vote. Trump had three states in which he won by under 1.25%: Wisconsin (0.77%), Pennsylvania (0.72%), and Michigan (0.27%). If Clinton had won all three states, she would have won the election. The total popular vote difference for these three states was just under 78,000 out of over 13,000,000 votes, or 0.59% of the votes cast in these three states. From this state-level data, the Truman and Trump victories are upsets of comparable magnitude, arguably the two most surprising presidential victories in American history.
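
The tipping-point arithmetic above is easy to verify; the rounded inputs below reproduce the text’s figures to within rounding.

```python
# Quick check of the tipping-point arithmetic, using the rounded figures from
# the text (exact official counts differ slightly in the last digit).
truman_margin, truman_votes = 59_000, 11_000_000   # IL + CA + OH, 1948
trump_margin,  trump_votes  = 78_000, 13_000_000   # WI + PA + MI, 2016
print(f"1948: {100 * truman_margin / truman_votes:.2f}% of the three-state vote")
print(f"2016: {100 * trump_margin / trump_votes:.2f}% of the three-state vote")
# ~0.54% and ~0.60% (the text's 0.59% reflects the exact counts)
```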

In the aftermath of this year’s election, data analytics has lost some of its luster. In time, however, it will regain its appropriate place in the forecasting world: as a tool for understanding the possibilities of the future, not for delivering certainty, even when the numbers seem to point one way. Yogi Berra had it right when he said that the future is hard to predict because it has not happened yet. No truer words have been spoken.

References

D. Leip. Dave Leip’s Atlas of U.S. Presidential Elections. http://uselectionatlas.org/RESULTS/index.html, 2016. Accessed November 16, 2016.

The New York Times. Latest election polls 2016. https://www.nytimes.com/interactive/2016/us/elections/polls.html, 2016. Accessed November 16, 2016.

Real Clear Politics. http://www.realclearpolitics.com, 2016. Accessed November 16, 2016.

State of Pennsylvania. Voter registration statistics. http://www.dos.pa.gov/VotingElections/OtherServicesEvents/VotingElectionStatistics/Pages/Voter-Registration-Statistics-Archives.aspx, 2016. Accessed November 20, 2016.

Sheldon H. Jacobson is a professor of computer science in the Department of Computer Science in the College of Engineering of the University of Illinois at Urbana-Champaign. Jason J. Sauppe is an assistant professor in the Department of Computer Science of the University of Wisconsin-La Crosse. Steven E. Rigdon is a professor in the Department of Biostatistics of the College for Public Health & Social Justice of Saint Louis University.
