My co-authors and I finally have a working paper based on our 2008 survey of the general public and political scientists.  Many thanks to all the EconLog readers who helped along the way.

Here’s the basic idea from the intro.  Corrections and suggestions in the comments will be much appreciated.


Voters are not merely ignorant; their beliefs about policy-relevant subjects are often systematically biased.  Voters systematically overestimate the fraction of the federal budget spent on foreign aid and welfare, and underestimate the fraction spent on Social Security and health. (Kaiser Family Foundation and Harvard University 1995)  Less-informed voters favor systematically different policies than otherwise identical more-informed voters. (Althaus 2003, 1998, 1996)  Laymen’s beliefs about economics, the causes of cancer, and toxicology systematically diverge from the beliefs of experts, even when matched on traits like income, employment sector, job security, demographics, party identification, and ideology. (Caplan and Miller 2010; Caplan 2007, 2002; Lichter and Rothman 1999; Kraus, Malmfors, and Slovic 1992)  Voters also tend to discount evidence in conflict with their pre-existing beliefs. (Taber and Lodge 2006; Bullock 2006; Nyhan and Reifler 2010)  Taken together, the evidence raises a troubling question: If politicians cater to the policy preferences of the median voter, won’t inefficient and counter-productive policies win by popular demand? 

The strongest reply to this concern is that citizens vote for results, not policies…  One simple heuristic – reward success, punish failure – seems to allow voters with little, zero, or even negative knowledge about policy to extract socially desirable behavior from their leaders.

Unfortunately for democracy, this heuristic is not as foolproof as it seems. In order to reward success and punish failure, voters need to know which government actors – if any – are able to influence the various outcomes voters care about. (Arceneaux 2006; Anderson 2006; Cutler 2008, 2004; Rudolph and Grant 2002; Somin 1998; Lewis-Beck 1997; Leyden and Borrelli 1995; Kerr 1975)

[…]

The real danger to democracy comes from systematically biased beliefs about political influence. (Caplan 2007; Rabin 1998; Thaler 1992; Gilovich 1991)  Just as the market for automobile repair will work poorly if the average customer blames his grocer for engine trouble, local elections will work poorly if the average voter blames the president for the quality of public schools. 

To test the American public’s beliefs about political influence for systematic bias, we designed a new survey and administered it to two distinct groups: (1) a nationally representative sample of Americans, and (2) members of the American Political Science Association who specialize in American politics.  One of the main ways that scholars have tested for the presence of systematic bias on other topics is to see whether the average beliefs of laymen and experts diverge. (Caplan 2007; Lichter and Rothman 1999; Kraus, Malmfors, and Slovic 1992)…  If laymen’s and experts’ average beliefs differ, our defeasible presumption is that experts are right and laymen are wrong.
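For readers who like to see the comparison made concrete, here is a minimal sketch of the lay–expert test for a single survey item: compute the gap in mean beliefs between the two samples and check whether it is statistically significant.  The data, variable names, and choice of a Welch t-test below are purely illustrative assumptions on my part, not the paper’s actual code or data.

```python
# Minimal sketch (hypothetical data): compare lay vs. expert mean beliefs
# on one survey item and test whether the raw gap is statistically significant.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Responses on a 0-4 "how much influence?" scale (hypothetical draws).
lay_beliefs = rng.integers(0, 5, size=1000).astype(float)     # general public
expert_beliefs = rng.integers(0, 5, size=100).astype(float)   # APSA specialists

gap = lay_beliefs.mean() - expert_beliefs.mean()               # raw belief gap
t_stat, p_value = stats.ttest_ind(lay_beliefs, expert_beliefs,
                                  equal_var=False)             # Welch's t-test

print(f"lay mean = {lay_beliefs.mean():.2f}, "
      f"expert mean = {expert_beliefs.mean():.2f}, "
      f"gap = {gap:.2f}, p = {p_value:.3f}")
```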

Systematic biases in attributional beliefs turn out to be common and large.  Fully 14 out of 16 survey questions exhibit statistically significant biases.  Compared to experts in American politics, the public greatly overestimates the influence of state and local governments on the economy, the president and Congress on the quality of public education, the Federal Reserve on the budget, Congress on the Iraq War, and the Supreme Court on crime rates.  The public also moderately underestimates the influence of the Federal Reserve on the economy, state and local governments on public education, and the president and Congress on the budget.  While we are open to the possibility that non-cognitive factors explain observed belief gaps, controlling for demographics and various measures of self-serving and ideological bias does little to alter our results.  A full set of controls reduces the absolute magnitude of the raw belief gaps by less than 13% – and leaves the number of statistically significant lay-expert differences unchanged.
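To illustrate what “controls barely matter” means in practice, one could regress a belief item on an expert indicator with and without demographic and ideology controls, and compare the estimated lay–expert gap across the two specifications.  The sketch below is my own illustration with made-up data and variable names; it is not the authors’ specification.

```python
# Rough sketch (hypothetical data): estimate the lay-expert gap on one belief
# item with and without controls, and see how much the gap shrinks.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1100
df = pd.DataFrame({
    "expert": np.r_[np.zeros(1000), np.ones(100)],   # 0 = layperson, 1 = expert
    "age": rng.integers(18, 80, n),
    "female": rng.integers(0, 2, n),
    "ideology": rng.integers(1, 8, n),                # 1-7 liberal-conservative scale
})
# Hypothetical belief item: experts report less influence, plus noise.
df["belief"] = 3.0 - 1.2 * df["expert"] + rng.normal(0, 1, n)

raw = smf.ols("belief ~ expert", data=df).fit()
controlled = smf.ols("belief ~ expert + age + female + ideology", data=df).fit()

shrinkage = 1 - abs(controlled.params["expert"]) / abs(raw.params["expert"])
print(f"raw gap = {raw.params['expert']:.2f}, "
      f"controlled gap = {controlled.params['expert']:.2f}, "
      f"shrinkage = {shrinkage:.1%}")
```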