Well, they’re very good, anyway. The first is a 1999 gem by Philip Tetlock: “Theory-Driven Reasoning About Plausible Pasts and Probable Futures in World Politics: Are We Prisoners of Our Preconceptions?” (American Journal of Political Science 43(2): 335-366). The second is a 2005 piece by Erik Hoelzl and Aldo Rustichini: “Overconfident: Do You Put Your Money on It?” (Economic Journal 115: 305-318).

Tetlock’s piece explores the overconfidence of foreign policy experts on both historical “what-ifs” (“Would the Bolshevik takeover have been averted if World War I had not happened?”) and actual predictions (“The Soviet Union will collapse by 1993.”). The highlights:

  • Liberals believe that relatively minor events could have made the Soviet Union a lot better; conservatives believe that relatively minor events could have made South Africa a lot better.
  • Tetlock asked experts how they would react if a research team announced the discovery of new evidence. He randomly varied the slant of the evidence. He found a “pervasiveness of double standards: experts switched on the high-intensity search light of skepticism only for dissonant results.”
  • Tetlock began collecting data on foreign policy experts’ predictions back in the 1980s. For example, in 1988 he asked Sovietologists whether the USSR would still be around in 1993. Overall, experts who said they were 80% or more certain were in fact right only 45% of the time (a minimal calibration check along these lines is sketched just after this list).
  • How did experts cope with their failed predictions? “[F]orecasters who had greater reason to be surprised by subsequent events managed to retain nearly as much confidence in the fundamental soundness of their judgments of political causality as forecasters who had less reason to be surprised.” The experts who made mistakes often announced that it didn’t matter because prediction is pretty much impossible anyway (but then why did they assign high probabilities in the first place?!). The mistaken experts also often said they were “almost right” (e.g. the coup against Gorbachev could have saved Communism), but correct experts very rarely conceded that they were “almost wrong” for similar reasons.
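
To make the calibration arithmetic concrete, here is a minimal Python sketch (my own illustration, with made-up numbers, not anything from Tetlock’s paper): pick out the forecasts made with a stated confidence of 80% or more and compute the fraction that actually came true.

```python
# Minimal calibration check (illustrative only; the data below are hypothetical,
# not Tetlock's): among forecasts stated with >= 80% confidence, what fraction
# actually came true?

def hit_rate(forecasts, threshold=0.80):
    """forecasts: list of (stated_confidence, came_true) pairs."""
    confident = [(c, t) for c, t in forecasts if c >= threshold]
    if not confident:
        return None
    return sum(t for _, t in confident) / len(confident)

# Hypothetical record: five "80%+ certain" predictions, only two of which came true.
sample = [(0.85, True), (0.90, False), (0.80, False), (0.95, True), (0.80, False)]
print(hit_rate(sample))  # 0.4 -- far below the 80%+ confidence that was claimed
```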

One thing Tetlock didn’t do was make his experts put their money where their mouth is. Hoelzl and Rustichini’s paper strongly suggests that he should have. H-R re-ran a fairly standard experiment on overconfidence. Ordinary subjects took vocabulary tests (which could be easy or hard). They then got to vote for one of two options:

Option #1: You win if you are in the top 50% of the distribution.

Option #2: You win with 50% probability regardless of your performance.

Notice: Option #1 must be worse for half the subjects!
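
To see why, note that exactly half of any group is, by definition, below the group median; for those subjects Option #1 pays nothing, while Option #2 still pays half the time. The small Python simulation below is my own illustrative sketch of that point (made-up scores, not code from Hoelzl and Rustichini).

```python
import random

# Illustrative simulation (not from the paper): compare the two options
# for a hypothetical group of subjects with random test scores.
random.seed(0)
n = 10_000
scores = [random.random() for _ in range(n)]
median = sorted(scores)[n // 2]

# Option #1: win only if your score is in the top 50% of the group.
option1_wins = [s >= median for s in scores]
# Option #2: win with 50% probability regardless of performance.
option2_wins = [random.random() < 0.5 for _ in range(n)]

print(sum(option1_wins) / n)   # 0.5 -- by construction, only half the group can win
print(sum(option2_wins) / n)   # about 0.5 -- same overall win rate...

# ...but the bottom half of scorers win with probability 0 under Option #1
# and 0.5 under Option #2, so a group in which most members pick Option #1
# is collectively overconfident about its relative standing.
bottom_half = [s < median for s in scores]
print(sum(w for w, b in zip(option1_wins, bottom_half) if b))  # 0 wins in the bottom half
```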

In one version of the experiment, subjects were asked to imagine there was a $10 reward. In the other version, there actually was a $10 reward.

The big result: “Choice behaviour changes from overconfidence to underconfidence when the task changes from easy and familiar to non-familiar. This effect is significant when monetary payments are at stake and weak when they are not.” Percentage of subjects who voted for Option #1:

              No Money   Money
  Easy test     63%       64%
  Hard test     56%       39%

Richard Thaler and other behavioral economists have argued quite aggressively that stronger incentives do not make people back away from their irrational beliefs. Hoelzl and Rustichini is a nice counterexample. In fact, they seem to show that people overshoot! And at the risk of sounding overconfident, if Tetlock’s experts had to bet real money on their predictions, I’m sure they too would have moderated their positions.

Oh wait, you want me to bet actual money on that? Then I’ll give it 75%.