Friday, January 6, 2017

Assessing democracy in the aftermath of Trump's victory, Part XVI: The mechanical nature of elections

Once we strip away the bullshit, empirical political science basically got 2016 right.  The fundamental circumstances of 2016 predicted a close election, leaning Republican.  Alan Abramowitz's "Time for a Change" model-- the model I have always favored (but admittedly lost faith in this year)-- is based on GDP growth in the second quarter of the election year, the incumbent president's popularity, and a penalty that kicks in when one party has won two elections in a row.  Abramowitz's model predicted a Republican victory because GDP growth wasn't high enough to overcome the two-term penalty the Democrats were facing.  Everything else just... canceled out.
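To make the mechanics concrete, here is a rough sketch of a "Time for a Change"-style forecast.  The linear, three-input structure follows the description above; the coefficient values are illustrative placeholders I made up, not Abramowitz's published estimates.

```python
# Hedged sketch of a "Time for a Change"-style forecast.  The three
# inputs match the model described above; the coefficients below are
# ILLUSTRATIVE placeholders, not Abramowitz's published estimates.
def forecast_incumbent_vote(net_approval, q2_gdp_growth, first_term,
                            intercept=47.0, a=0.1, b=0.6, bonus=4.0):
    """Predicted two-party vote share for the incumbent party.

    net_approval:   president's approval minus disapproval, in points
    q2_gdp_growth:  annualized real GDP growth in Q2 of the election year
    first_term:     True if the incumbent party has held the White House
                    for only one term; the "time for a change" penalty is
                    just the absence of this first-term bonus
    """
    vote = intercept + a * net_approval + b * q2_gdp_growth
    if first_term:
        vote += bonus
    return vote

# Rough 2016-style inputs: modest approval, modest growth, and no
# first-term bonus for the Democrats after two Obama terms.
print(forecast_incumbent_vote(net_approval=5, q2_gdp_growth=1.2,
                              first_term=False))  # below 50: leans Republican
```

Flip `first_term` to `True` and the same inputs predict an incumbent-party win, which is the whole "two-term penalty" story in miniature.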

The point of the last bunch of posts in the series was that it shouldn't have done so, in rational choice terms.  The spatial models that political scientists have been using since Anthony Downs's An Economic Theory of Democracy put candidates on a left-right spectrum, and the candidate closest to the median voter is supposed to win, unless something is fucked up.  While Trump has no fixed location on that spectrum, he will rubber-stamp Paul Ryan's legislation, and since Ryan is more extreme than Hillary Clinton, that makes Trump effectively more extreme than Clinton.  So, in Downsian terms, we've got a rational choice failure.
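The Downsian logic is easy to sketch.  In the minimal one-dimensional version, every voter votes sincerely for the nearer candidate, so whoever sits closer to the median voter wins.  The positions below are invented purely for illustration.

```python
# Minimal Downsian median-voter sketch: one-dimensional policy space,
# sincere voting for the nearest candidate.
def downsian_winner(voters, cand_a, cand_b):
    votes_a = sum(1 for v in voters if abs(v - cand_a) < abs(v - cand_b))
    votes_b = sum(1 for v in voters if abs(v - cand_b) < abs(v - cand_a))
    if votes_a > votes_b:
        return "A"
    if votes_b > votes_a:
        return "B"
    return "tie"

voters = [-0.8, -0.3, 0.0, 0.2, 0.9]  # median voter at 0.0
# Candidate A sits near the median; candidate B is off to the right.
print(downsian_winner(voters, cand_a=0.1, cand_b=0.7))  # A
```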

Then, we've got "valence."  Donald Stokes introduced the concept of valence "issues"-- the issues on which voters agree about outcomes, like a strong economy, but disagree about who can deliver those outcomes.  Rational choice theorists have spent years including valence "dimensions" in their models to capture personal characteristics that voters just intrinsically want, like competence and honesty, because voters might accept an extremist if the centrist scores low enough on the valence dimension.
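One standard way to formalize that is quadratic policy loss plus an additive valence term: a voter's utility for a candidate is closeness in policy plus the candidate's valence score, so a large enough valence gap lets an extremist beat a centrist.  The numbers below are made up to illustrate the trade-off.

```python
# Spatial utility with a valence term: policy loss is quadratic in
# distance, and valence enters additively.
def utility(voter_pos, cand_pos, valence):
    return -(voter_pos - cand_pos) ** 2 + valence

voter = 0.0                             # a centrist voter
u_centrist = utility(voter, 0.1, 0.0)   # nearby candidate, low valence
u_extremist = utility(voter, 0.8, 0.7)  # distant candidate, high valence
print(u_extremist > u_centrist)  # True: the valence gap outweighs policy distance
```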

Problem:  Trump objectively lies more than Clinton, has no relevant experience, and knows nothing about public policy.  He scores lower on the "competence and honesty" valence traits than Clinton, so Clinton has a policy and valence advantage.

And yet Abramowitz wins.  DDRRDDRRDRRRDDRRDD

How many elections would I have to flip for that to have been the perfect pattern from 1944 through 2012?  One.  1980.  If that had been a D, it would have been DDRR the whole way through.
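That flip count is easy to check mechanically.  The party labels below are the actual winners from 1944 through 2012, compared against a strict DDRR alternation.

```python
# Actual presidential election winners, 1944-2012.
actual = {
    1944: "D", 1948: "D", 1952: "R", 1956: "R",
    1960: "D", 1964: "D", 1968: "R", 1972: "R",
    1976: "D", 1980: "R", 1984: "R", 1988: "R",
    1992: "D", 1996: "D", 2000: "R", 2004: "R",
    2008: "D", 2012: "D",
}
# A perfect DDRR cycle starting in 1944.
ideal = {year: "DDRR"[((year - 1944) // 4) % 4] for year in actual}
mismatches = [year for year in actual if actual[year] != ideal[year]]
print(mismatches)  # [1980] -- flip that one election and the cycle is perfect
```

And note where the cycle lands next: after D years in 2008 and 2012, the next entry is R.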

And there is no policy justification for that pattern, in the way that there is for the Downsian spatial model.  There is no theoretical justification in the way that there is for valence characteristics.  Skeptics of the Abramowitz predictive model always bring up the point that it is atheoretical.  There is no reason to think that it should be true.

And they are right!  There's also no theoretical reason to think that quantum mechanics should work.  But, those equations sure seem to predict stuff!

2008 and 2012 were D years.  R came next.  R won.  Should R have won?  In a Downsian sense, no.  Trump will rubber-stamp Paul Ryan, who is far more extreme than Clinton.  In valence terms, Trump certainly shouldn't have won.

And voters knew that.  One of the ironies of 2016 is that most voters knew Trump wasn't qualified to be president.

What was there to over-ride that concern?

DD

R came next.  It was mechanical.  The mechanical nature of the process gave the presidency to a man that most voters knew wasn't qualified to be president.

That poses some important questions for us to ask about democracy!
