Bryan Caplan  

Tackling Tetlock


Philip Tetlock, one of my favorite social scientists, is making waves with his new book, Expert Political Judgment. Tetlock spent two decades asking hundreds of political experts to make predictions about hundreds of issues. With all this data under his belt, he then asks and tries to answer a bunch of Big Questions, including "Do experts on average have a greater-than-chance ability to predict the future?," and "What kinds of experts have the greatest forecasting ability?" This book is literally awesome - to understand Tetlock's project and see how well he follows through fills me with awe.

And that's tough for me to admit, because it would be easy to interpret Tetlock's work as a great refutation of my own. Most of my research highlights the systematic belief differences between economists and the general public, and defends the simple "The experts are right, the public is wrong" interpretation of the facts. But Tetlock finds that the average expert is an embarrassingly bad forecaster. In fact, experts barely beat what Tetlock calls the "chimp" strategy of random guessing.

Is my confidence in experts completely misplaced? I think not. Tetlock's sample suffers from severe selection bias. He deliberately asked relatively difficult and controversial questions. As his methodological appendix explains, questions had to "Pass the 'don't bother me too often with dumb questions' test." Dumb according to whom? The implicit answer is "Dumb according to the typical expert in the field." What Tetlock really shows is that experts are overconfident if you exclude the questions where they have reached a solid consensus.

This is still an important finding. Experts really do make overconfident predictions about controversial questions. We have to stop doing that! However, this does not show that experts are overconfident about their core findings.

It's particularly important to make this distinction because Tetlock's work is so good that a lot of crackpots will want to hijack it: "Experts are scarcely better than chimps, so why not give intelligent design and protectionism equal time?" But what Tetlock really shows is that experts can raise their credibility if they stop overreaching.


TRACKBACKS (2 to date)
The author at Muck and Mystery in a related article titled Overconfidence writes:
    I've been wondering how those who were confident of their expertise would respond to the recent discussion of Philip Tetlock's new book - Expert Political Judgment: How Good Is It? How Can We Know? - mentioned in Science Class. Bryan Caplan's response... [Tracked on December 27, 2005 2:59 AM]
The author at Muck and Mystery in a related article titled Credulous Foxes writes:
    The post Overconfidence discussed and disputed Bryan Caplan's reactions to Philip Tetlock's new book - Expert Political Judgment: How Good Is It? How Can We Know?, and drew heavily on J.D. Trout & Michael Bishop, especially their essay 50 Years of Succ... [Tracked on January 15, 2006 4:48 PM]
COMMENTS (20 to date)
Steve Sailer writes:

Right, the questions people are interested in hearing forecasts about are often ones that are contrived precisely to be hard to predict. For example, not whether the Super Bowl champ Patriots would beat the lowly Jets last night, but whether they'd beat them by more than the point spread.

Steve Sailer writes:

By the way, a fun trick to play on economists is to say, "Free trade has been backed by all hard-headed, highly successful, practical-minded men like Bismarck and Alexander Hamilton," and watch how all the economists' heads nod sagely in agreement, until one spoilsport says, "Hey, wait a minute, Bismarck and Hamilton were protectionists!"

Another fun trick is to get economists who drive Hondas to cite Ronald Reagan as a free trader and then ask them why their Honda was built on this side of the Pacific.

spencer writes:

The point spread is a price that brings supply and demand into balance so that the people running the book are not exposed to large risks.
It reflects the wisdom of the crowd, and if it works as it is supposed to, an equal number of bettors will be on each side of the bet.

So you should not use betting against the point spread as an example of making predictions.
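spencer's mechanism can be sketched in code. The following is a hypothetical toy model (the bettor behavior, belief numbers, and bisection search are all my illustrative assumptions, not anything from the thread): each bettor holds a belief about the favorite's margin of victory, and the book moves the spread until the two sides balance, which lands the spread at the median belief rather than at anyone's "best forecast."

```python
# Toy model: a bookmaker adjusts the point spread until bets on both
# sides roughly balance, so the book's risk is small either way.
# All numbers here are hypothetical.

def bettor_side(margin_belief, spread):
    """A bettor backs the favorite iff they think it beats the spread."""
    return "favorite" if margin_belief > spread else "underdog"

def clearing_spread(beliefs, lo=-50.0, hi=50.0, iters=60):
    """Bisect for the spread where at most half the bettors take the favorite."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        favs = sum(1 for b in beliefs if bettor_side(b, mid) == "favorite")
        if favs > len(beliefs) / 2:
            lo = mid  # too much money on the favorite: raise the spread
        else:
            hi = mid  # too much on the underdog: lower it
    return (lo + hi) / 2

# Hypothetical margin forecasts from eight bettors.
beliefs = [3, 5, 7, 7, 8, 10, 12, 14]
s = clearing_spread(beliefs)
```

With these beliefs the search settles near 7, the median forecast: the clearing spread aggregates the crowd, which is why beating it consistently is so much harder than predicting the game's winner.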

Robin Hanson writes:

This is a fine hypothesis, Bryan. I hope that someone can test it someday. The biggest lesson I take from Tetlock is that we need to start measuring more about opinion accuracy, to test all these theories we so easily generate, and so rarely test.

Roger M writes:

Bryan believes that experts have "reached a solid consensus" on core issues. My own experience has been that you can find experts on any side of any issue. Usually a majority of "experts" lines up on one side, but often the minority opinion turns out to be the correct one. This is particularly true of innovative ideas, which often take a long time for the establishment to accept.

Remember President Clinton having 500 economists approve of his tax increases? And global warming/cooling? Keynesian economics comes to mind, too.

Bryan loves to take shots at intelligent design, but as Dan Peterson shows in the June issue of The American Spectator, and in an online article, there are a lot of experts on the side of intelligent design.

My guess is that Bryan's research suffers from selection bias, while Tetlock's covers experts on both sides of any issue. The experts on one side of an issue may be right more often than those on the other, but combining them in a single statistical analysis will make all of them look bad. It's the fallacy of the average.

daveg writes:

So you should not use betting against the point spread as an example of making predictions.

The example contrasts two levels of difficulty in making predictions. It is more difficult to make a prediction against the spread than on the absolute outcome of the game.

That is, the spread is a reasonable predictor of the outcome of the game.

It is also true that prediction is not the main purpose of the spread, but I don't think that invalidates the point being made.

daveg writes:

Could the experts be wrong about illegal immigration?

The cost of illegal immigration

Influx of unauthorized migrants contributes to existing problems


Brady McCombs
December 27, 2005

GREELEY - Dr. Daphne Madden faces an agonizing dilemma at least once a month at the Sunrise Community Health Center in Greeley.

A young pregnant woman arrives at the clinic without health insurance or a Social Security number. The woman is an illegal immigrant and doesn't qualify for prenatal Medicaid assistance.

When she arrives at the hospital in labor, emergency Medicaid will cover her expenses, but before that, she's on her own. Most women who arrive in this situation find a way to pay for the prenatal care but some simply can't afford it.

The situation forces family physicians like Madden to choose between giving free medical care or putting a woman and her baby (who will become a U.S. citizen at birth) at risk.

"We can't give prenatal care for free because we have our own costs, which are some five to eight times more than that, particularly if they begin having complications," said Dr. Madden, who has worked at the Sunrise clinic since 1995.

Doctors aren't the only ones who feel the strain of the estimated 200,000 to 250,000 illegal immigrants living in Colorado.

Educators face the challenge of educating children who enter kindergarten unable to speak or understand English. Sheriffs and district attorneys struggle to shoulder the extra load of a growing population that - like any other - has members who commit crimes and crowd jails and court dockets.

Some argue that the illegal immigrants lower wages in agriculture, construction and service jobs while others say their presence creates negative stereotyping of all Latinos.


RogerM writes:

I think in your last post you're making the same error as Bryan in assuming that Tetlock said the experts were wrong. What Tetlock actually wrote was that experts and common people make equally bad forecasts, if they're both given the same information. Experts obviously know more than non-experts, but their egos get in the way of their making good forecasts. Check out the Muck and Mystery link below. It contains a good article on how experts could improve forecasts by relying more upon statistical models.

I was impressed by the fact that some of the studies of the fallibility of experts compare them with the results of statistical analyses. Maybe the lesson is that experts who use statistical analyses are more accurate than experts who don't.

At least, that's what experts using statistical analyses say.

Roger M writes:

The J.D. Trout & Michael Bishop essay "50 Years of Successful Predictive Modeling Should be Enough: Lessons for Philosophy of Science," quoted in the Muck & Mystery blog, provides evidence that experts using just their judgment are far worse predictors of the future than simple linear regression models, even models in which all of the coefficients are ones.

I've run into something similar with people investing in the stock market. Experts will spend thousands of dollars on models to guide them, then ignore the model results. Maybe that's why so few mutual funds do as well as stock indexes.
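The unit-weight claim above (an "improper linear model" with every coefficient set to one) is easy to sketch. The data below are simulated, not Trout & Bishop's: standardize each predictor, add the z-scores with equal weights of one, and the resulting score can still track an outcome that genuinely depends on those predictors.

```python
# Minimal sketch of a unit-weight ("improper") linear model on
# simulated data: all coefficients are one, no fitting is done.
import random
import statistics

random.seed(0)

def standardize(xs):
    """Convert raw scores to z-scores."""
    mu, sd = statistics.mean(xs), statistics.pstdev(xs)
    return [(x - mu) / sd for x in xs]

def unit_weight_predict(cue_columns):
    """Improper linear model: sum of standardized cues, weights all = 1."""
    zs = [standardize(col) for col in cue_columns]
    return [sum(row) for row in zip(*zs)]

def correlation(xs, ys):
    """Pearson correlation via mean product of z-scores."""
    zx, zy = standardize(xs), standardize(ys)
    return sum(a * b for a, b in zip(zx, zy)) / len(xs)

# Simulated outcome driven by three noisy cues.
n = 500
cues = [[random.gauss(0, 1) for _ in range(n)] for _ in range(3)]
outcome = [sum(col[i] for col in cues) + random.gauss(0, 1)
           for i in range(n)]

r = correlation(unit_weight_predict(cues), outcome)
```

With this setup the theoretical correlation is about 0.87, which is the point: as long as you know which cues matter and which direction they point, crude equal weighting captures most of what an optimally fitted model would, and it never suffers from an expert's wavering intuition.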

Simon writes:

I think that mahalanobis summarizes the problem very well.

"Oil Analysts, Wrong Since 2001, Predict Prices Will Rise in First Quarter"
Actual headline on my Bloomberg terminal today (no web story, unfortunately). This is a nice encapsulation of the "market expert" problem--if they were right more than chance, they would not give their opinions away for free to journalists.

Bryan, experts fail for simple reasons:
- One, they seldom really use their models
- Two, their models seldom include more than a few variables
- Three, they are closed-system logics built to forecast in an open-systems world
- Four, the models are only marginal in terms of statistical significance. They seldom explain most of the variance. This is particularly true of Behavioral Decision Theory. One of the great WEAKNESSES associated with BDT is the small amount of variance actually explained by these theories.

Dewey Munson writes:

Then again, life's problems are centered on the unpredictable and the ensuing solution efforts will distort analysis of prior predictions.

Roger M writes:

Trout and Bishop argue in their paper that experts fail because they don't use statistical models at all, but rely on intuition and expertise instead. Had they used any statistical model at all, even a bad one, they would have made better forecasts.

Phil writes:

Isn't this somewhat of a tautology? In any field, no matter how advanced, there are things known and things unknown. So it shouldn't be a surprise to find that for things that are unknown to experts, the experts don't know them!

This is a bit of an oversimplification, of course, but still ...

Roger M writes:

I believe Tetlock was referring to predicting the future. The issue isn't what an expert knows or doesn't know, but how well experts can use what they know to predict the future. One would think that experts could make better predictions than nonexperts. The evidence says no.

Roger M writes:

Trout and Bishop use medical doctors as an example. They show that statistical techniques perform much better than physicians in diagnosing illnesses. But doctors won't use them.

Will Wilkinson writes:

Let me second Roger's promo of Trout & Bishop.

Epistemology & the Psychology of Human Judgment is a must-read for Bryan, Robin, and the people who love them.

Maybe a good topic for discussion -- maybe an urgent one, come to think of it -- in many professional fields would be, "How to acquire expertise without the usual accompanying ego-bloat." Why do I suspect the lecturer on the topic would be a pompous, full-of-himself ass?

And -- given that the experts predict no more successfully than Joe/Jill Sixpack does -- maybe another good topic would be, Why do some people get to be professional pundits while the rest of us don't? It apparently doesn't have to do with the, er, objective worth of their opinions and predictions. What then does it have to do with?

Anna Haynes writes:

Don't dispense with the experts just yet. From Carl Bialik on Tetlock in the Jan. 6 WSJ:

The New Yorker's review of [Tetlock's] book surveyed the grim state of expert political predictions and concluded by advising readers, 'Think for yourself.' Prof. Tetlock isn't sure he agrees with that advice. He pointed out an exercise he conducted in the course of his research, in which he gave Berkeley undergraduates brief reports from Facts on File about political hot spots, then asked them to make forecasts. Their predictions -- based on far less background knowledge than his pundits called upon -- were the worst he encountered, even less accurate than the worst hedgehogs. 'Unassisted human intuition is a bomb here,' Prof. Tetlock told me.

Comments for this entry have been closed