Bryan Caplan  

Two Flawless Articles on Overconfidence


Well, they're very good, anyway. The first is a 1999 gem by Philip Tetlock: "Theory-Driven Reasoning About Plausible Pasts and Probable Futures in World Politics: Are We Prisoners of Our Preconceptions?" (American Journal of Political Science 43(2): 335-66). The second is a 2005 piece by Erik Hoelzl and Aldo Rustichini: "Overconfident: Do You Put Your Money on It?" (Economic Journal 115: 305-318).

Tetlock's piece explores the overconfidence of foreign policy experts on both historical "what-ifs" ("Would the Bolshevik takeover have been averted if World War I had not happened?") and actual predictions ("The Soviet Union will collapse by 1993"). The highlights:

  • Liberals believe that relatively minor events could have made the Soviet Union a lot better; conservatives believe that relatively minor events could have made South Africa a lot better.

  • Tetlock asked experts how they would react if a research team announced the discovery of new evidence. He randomly varied the slant of the evidence. He found a "pervasiveness of double standards: experts switched on the high-intensity search light of skepticism only for dissonant results."

  • Tetlock began collecting data on foreign policy experts' predictions back in the 1980s. For example, in 1988 he asked Sovietologists whether the USSR would still be around in 1993. Overall, experts who said they were 80% or more certain were in fact right only 45% of the time. (A sketch of this calibration check follows the list.)

  • How did experts cope with their failed predictions? "[F]orecasters who had greater reason to be surprised by subsequent events managed to retain nearly as much confidence in the fundamental soundness of their judgments of political causality as forecasters who had less reason to be surprised." The experts who made mistakes often announced that it didn't matter, because prediction is pretty much impossible anyway. (But then why did they assign high probabilities in the first place?!) The mistaken experts also often said they were "almost right" (e.g., the coup against Gorbachev could have saved Communism), but correct experts very rarely conceded that they were "almost wrong" for similar reasons.
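
For readers who want the mechanics behind that 80%-vs-45% figure, here is a minimal sketch of the calibration check. The prediction records below are hypothetical, invented purely for illustration; the real data are in Tetlock's paper.

    # Hypothetical forecast records: (stated probability, did it happen?).
    # Invented for illustration only; Tetlock's actual data are in the paper.
    predictions = [
        (0.90, False), (0.85, True), (0.80, False), (0.95, False),
        (0.80, True), (0.90, False), (0.60, True), (0.70, False),
    ]

    # Calibration check: among forecasts made with >= 80% stated
    # confidence, how often was the forecaster actually right?
    confident = [correct for prob, correct in predictions if prob >= 0.80]
    hit_rate = sum(confident) / len(confident)
    print(f"stated >= 80%, actual hit rate: {hit_rate:.0%}")

    # A well-calibrated forecaster would print >= 80% here;
    # Tetlock's experts came in around 45%.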

One thing Tetlock didn't do was make his experts put their money where their mouth is. Hoelzl and Rustichini's paper strongly suggests that he should have. H-R re-ran a fairly standard experiment on overconfidence. Ordinary subjects took vocabulary tests (which could be easy or hard). They then got to vote for one of two options:

    Option #1: You win if you are in the top 50% of the distribution.

    Option #2: You win with 50% probability regardless of your performance.

Notice: Option #1 must be worse for half the subjects!
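
Why must it be worse? Exactly half of any group is in the top 50%, so Option #1 wins with certainty for the top half and never for the bottom half, while Option #2 wins with probability 0.5 for everyone; group-wide, the two options pay off equally often, but the bottom half does strictly worse under Option #1. A quick simulation on made-up scores (my own illustration, not H-R's data or procedure) spells out the accounting:

    import random

    # Made-up test scores for 1,000 subjects; illustration only.
    random.seed(0)
    scores = [random.gauss(0, 1) for _ in range(1000)]
    median = sorted(scores)[len(scores) // 2]  # top-50% cutoff

    # Option #1 pays only the above-median half; Option #2 pays anyone
    # with probability 0.5. Group-wide, the win rates are the same:
    print(sum(s > median for s in scores) / len(scores))  # ~0.50

    # But conditional on being below the median, Option #1's win
    # probability is 0 versus Option #2's 0.5, so Option #1 is
    # strictly worse for exactly half the subjects.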

In one version of the experiment, subjects were asked to imagine there was a $10 reward. In the other version, there actually was a $10 reward.

The big result: "Choice behaviour changes from overconfidence to underconfidence when the task changes from easy and familiar to non-familiar. This effect is significant when monetary payments are at stake and weak when they are not." Percentage of subjects who voted for Option #1:

                 No money   Money
    Easy test       63%      64%
    Hard test       56%      39%
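
Read against the 50% benchmark that Option #2 guarantees, those cells translate directly into over- and under-confidence margins. A trivial computation on the reported percentages (nothing assumed beyond the table above):

    # Percentages of subjects choosing Option #1, from the table above.
    votes = {
        ("easy test", "no money"): 63, ("easy test", "money"): 64,
        ("hard test", "no money"): 56, ("hard test", "money"): 39,
    }
    for cell, pct in votes.items():
        print(cell, f"{pct - 50:+d} points vs. the 50% benchmark")

    # Money barely moves the easy-test margin (+13 -> +14) but flips
    # the hard-test margin from +6 (overconfident) to -11
    # (underconfident): subjects over-shoot past rational calibration.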

Richard Thaler and other behavioral economists have argued quite aggressively that stronger incentives do not make people back away from their irrational beliefs. Hoelzl and Rustichini's paper is a nice counterexample. In fact, they seem to show that people over-shoot! And at the risk of sounding overconfident: if Tetlock's experts had to bet real money on their predictions, I'm sure they too would have moderated their positions.

Oh wait, you want me to bet actual money on that? Then I'll give it 75%.


    COMMENTS (7 to date)
    Mike Linksvayer writes:

    It appears that participants in the second paper more accurately assessed their relative performance with no money.

    Which of Thaler's or others' papers is the best place to get the "stronger incentives do not make people back away from their irrational beliefs" argument?

    What are the implications of either of these papers for idea futures/prediction markets?

    jaimito writes:

    If the prophets had to pay for being wrong, they would be less overconfident. According to the stories, in Oriental courts the Sultan or Chinese emperor used to behead the failed vizier. I wonder what that did for accuracy. It may have been a nice incentive.

    spencer writes:

    This reminds me of the old Wall Street comment that every time a stock is traded, someone has made a mistake.

    Roger McKinney writes:

    These articles remind me of one that Fortune or Forbes carried back in the mid-'90s, which said arrogance was a main cause of the failure of large established companies like Sears. My personal experience tells me that most managers are overconfident even when large sums of money or the success of their business is at stake. One reason for the overconfidence is that managers surround themselves with yes-men and brown-nosers.

    aaron writes:

    You seem awfully confident in these articles.

    aaron writes:

    Now, what if the overall performance of participants determines the size of the prize?

    deb writes:

    Gerd Gigerenzer, a psychologist, was the first to demonstrate the easy-hard reversal in overconfidence judgments. Hoelzl and Rustichini don't cite his research. This is either outright plagiarism or extreme citational incompetence.

    If you read Gigerenzer, you'll see that the easy-hard reversal occurs even in the absence of monetary incentives. It might be stronger with monetary incentives, but your claim that it depends on incentives is false.
