Bryan Caplan  

Viscusi on Risk Analysis of Terrorism: A Strange Agnosticism


Kip Viscusi is probably academia's most famous risk analyst. His decades of research document democracy's pervasive tendency to adopt regulations with absurdly high cost-benefit ratios, spending billions to fight problems that barely exist. But in a recent interview with the Richmond Fed, he refuses to embrace a perfect example of the kind of thing he's spent his career studying: the War on Terror.

Richmond Fed: Setting aside the issue of the wars in Iraq and Afghanistan, how would you assess the public policy response to the threat of domestic terrorism post-Sept. 11? Are we thinking about the issues in a way that is roughly correct and weighing the costs and benefits in a generally rational way?

Viscusi: ...The reason this is tricky is we don’t have very good numbers on what these risks are. We just don’t have a lot of data — unlike, say, the risk of being in an automobile accident. We know the probability of that with relative precision. But the estimates of the probability of a terrorist attack or the number of people who are going to die in the coming year are all over the map. So if you can’t assess the likelihood of a terrorist attack or how deadly it is going to be, it is really hard to say how much you should spend to try to prevent it.

I'm frankly puzzled. As John Mueller shows in Overblown, a hundred years of experience with terrorism have shown it to be an extremely small problem in the broad scheme of things. How much longer does Viscusi want to wait before he'll conclude that the risk is very low?

The disappointment continues. In the next question, Viscusi largely dismisses the mechanism most likely to resolve remaining uncertainty:

RF: What do you think of the proposal to establish a prediction market to help assess the likelihood of a terrorist attack?

Viscusi: One problem with that proposal is that people can affect the probability of that outcome. If you can bet on it and make a lot of money, then people may have an incentive to launch a terrorist attack so they can collect on their bet.

It's unlike Viscusi to worry about risks without looking at the empirical evidence, but that's just what he's doing. It's a lot easier to profit from terrorism in regular financial markets, but the 9/11 Commission found no evidence of this. Furthermore, a growing academic literature finds that attempted manipulation actually makes market prices more accurate by creating arbitrage opportunities.

Viscusi's doubts about prediction markets continue:

Also, I’m not sure the information you would get would be refined enough to help you devise a defense strategy. It wouldn’t help you much to know that the probability of an attack has gone up if you don’t know the target. So the markets would need to be very specific, such as the probability of the Holland Tunnel being blown up in the next month.

This answer makes little sense. If your goal is to compare overall costs and overall benefits, an overall terrorism risk market is precisely what you need to consult. In fact, the Holland Tunnel market would be fairly useless for policy purposes, because it doesn't tell you whether beefing up Holland Tunnel security would reduce terrorism or simply redirect it.

Viscusi has done more than anyone to raise the level of debate on the subject of risk. If we'd used a Viscusian lens to think about terrorism for the last six years, we would have saved a lot of money, and might even be safer, too. Unfortunately, terrorism is so emotional an issue that Kip Viscusi himself seems too nervous to speak out.


COMMENTS (15 to date)
Dan Weber writes:

Kip Viscusi is probably academia's most famous risk analysis.

Bryan Caplan is effective communication.

Mr. Econotarian writes:

I do agree that the true risks of terrorism are very difficult to predict. 9/11 killed ~5,000 people, but it very well could have killed 10,000 or even 20,000 had timing and aiming been slightly different.

The risk of "banal" forms of terrorism (a guy with a gun or a car bomb) is fairly well understood from the "hundred year history".

However, "sneaky/complex" terrorism (training pilots and flying planes into buildings, releasing poison gas from tank cars or chemical plants near urban areas, or, god forbid, a nuclear weapon from a former USSR republic) has costs that are much less predictable, and the form of the events themselves is much harder to predict and protect against ahead of time.

The two forms should be separated in terms of protection strategy.

Alex J. writes:

Betting on a terrorist act and committing it to cash in is not like betting on Hillary Clinton in order to "raise the odds" of her winning. If I put money on Hillary, I'm giving money to sharp investors who see that the new odds are erroneous. If I put money on a terrorist event that I know is likely, while most people don't know this, I am making the market more accurate, not less. If I were trying to manipulate the prices, it would be profitable to bet against me. But if I knew the terrorist act was going to occur, it would be a losing proposition to bet against me.

conchis writes:

"Furthermore, a growing academic literature finds that attempted manipulation actually makes market prices more accurate by creating arbitrage opportunities."

This seems to me to miss the point. The argument here isn't that the incentive to launch attacks would make markets less accurate; it's that accuracy would be bought at the cost of increasing the likelihood of terrorist attacks.

(Of course, whether that's the case is, as you note, more debatable. There's at least a reasonable case that terrorists aren't likely to be particularly interested in placing bets here - particularly if it could jeopardise their ultimate goal by drawing the attention of the authorities.)

John Thacker writes:

a hundred years of experience with terrorism have shown it to be an extremely small problem in the broad scheme of things.

A hundred years of experience with modern mass-destruction weaponry? I think not. And let's not forget that that hundred years of experience includes a hundred years of democracies (and other governments) imposing quite severe civil liberties restrictions (much more severe than now, in many ways) to try to fight that terrorism. Of course, that doesn't prove that those tactics were effective.

It's not as though this argument is going to proceed much, thanks to everyone's priors. We're spending a lot on terrorism prevention; any lack of terrorism is taken as evidence that the spending is unnecessary by one, whereas it's taken as justification for the spending by another.

I do suspect that, e.g., massive screening of domestic airline passengers is pretty wasteful.

Dan Weber writes:

I could see terrorists shorting the acts they're going to commit, purposefully losing money so that they'll be more likely to succeed.

For example, if there was a market about someone blowing up the SuperBowl™, and I wanted to do that, I could massively bet against it. The government would see that it was unlikely to happen and reduce security. I then blow it up. I lose my money, but I just consider that money spent towards my goal.

How much money would it take to influence public policy like this, though? Probably a lot. Odds of even a general attack are, say, 1:100. Trying to increase that "100" value would take a lot more leverage than raising that "1" value. We'd need these markets to be very big.
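Dan Weber's leverage intuition can be made concrete. As a sketch only, suppose the market maker is Hanson's logarithmic market scoring rule (LMSR) — an assumption of mine, not anything the post or interview specifies; its liquidity parameter b (here a hypothetical $10,000) determines how much cash it takes to push the price around:

```python
import math

def lmsr_cost(b, q_yes, q_no):
    """Hanson's LMSR cost function: C(q) = b * ln(e^(q_yes/b) + e^(q_no/b))."""
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

def cost_to_move_price(b, p_from, p_to):
    """Cash needed to move the 'yes' price of a binary LMSR market from
    p_from to p_to (buying 'yes' to raise it, 'no' to lower it).
    Follows from integrating the instantaneous price along the cost curve."""
    if p_to > p_from:
        return b * math.log((1 - p_from) / (1 - p_to))
    return b * math.log(p_from / p_to)

# With a hypothetical $10,000 liquidity parameter, driving a 1% probability
# down to 0.1% costs only b * ln(10):
print(round(cost_to_move_price(10_000, 0.01, 0.001)))  # → 23026
```

The logarithmic cost is the point: each further factor-of-ten suppression of an already-small probability costs the same b·ln(10), so a thinly subsidized market is cheap for Dan Weber's bomber to push around, and a manipulation-resistant one needs a very large b — which is another way of saying the markets "would need to be very big."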

Stuart Buck writes:

As John Mueller shows in Overblown, a hundred years of experience with terrorism have shown it to be an extremely small problem in the broad scheme of things.

This is a very odd way of attempting to predict the unpredictable. As Nassim Taleb would say, your reasoning here is akin to the turkey who, in the week before Thanksgiving, reasons that he's had a few hundred days of healthy living, and based on that unbroken record of experience, projects a long life for himself. The point is, we're not dealing with a law of nature here, where if you observe a ball dropping 1,000 times, you can start to infer the force of gravity. The future risk of terrorism has virtually nothing to do with how prevalent terrorism was in the past 100 years. The risk over the next 5 years could be very small; it could be very large if it turns out that, right now, someone is figuring out how to weaponize an infectious disease that would cause a pandemic.

Buzzcut writes:

Here's what I think about your "hundred year experience with terrorism": one word: Black Swan.

Steve Sailer writes:

The terrorist attack on the Archduke Ferdinand in 1914 set off World War I, which might be the worst thing that ever happened.

Roy Haddad writes:

What a perfect example. Granting your claim (I simply wouldn't know), the worst terrorist attack ever was the worst ever precisely because of the insane overreaction of the rest of the world.

Paul N writes:

Hear, hear, Bryan. Great post.

And I agree with Haddad in response to Sailer's comment.

Tom writes:

Bryan, I do not think you are considering all the costs of terrorism. Look at the billions taken off the top of the economy by 9/11. Even that would have been much worse if everyone hadn't expected we would retaliate.

The war in Iraq is a different question. It's more of an example of how the cost of a problem rises sharply the longer it's ignored.

Two clunker posts in a row. (re Cheney facts post)

Stephen W. Stanton writes:

It's been said before... But Viscusi has a point. Two, actually.

1. Terrorism prediction markets can be gamed. The incentives are too strong, and the outcomes are too easy to alter. We're not betting on the weather or a particular stock price here. We're betting on the likelihood of a lone nutter blowing up one gas truck in the right spot. Give me a million bucks, I'll find you that nutter.

2. Terrorism risk has increased because of DIS-continuity of history. TV, the Internet, and mobile telephony have finally removed barriers between cultures that had been easy to maintain. Modern and fundamentalist cultures are reacting together like lithium and water... Which are stable on their own.

Add to that the increasing capability of very small numbers of people to do very big amounts of damage... Via nukes, bio agents, and even conventional combustibles in very dense areas... You have a situation that simply never existed before.

Quick analogy: Your car may have been safe for 120,000 miles... But the moment your brakes fail, that's when your predicted level of safety detaches from the historical trendline.

Bill S writes:

The conclusion that "terrorism is so emotional an issue that Kip Viscusi himself seems too nervous to speak out" seems unwarranted.

To the extent Viscusi has made a career of assessing the costs and benefits of preventing fairly predictable types of risks, e.g. industrial accidents or auto accidents, that does not imply that he has all the answers -- or even the right analytical framework -- for assessing the costs and benefits of wildly unpredictable and potentially high impact risks like terrorist events.

I think Viscusi's reluctance to speak out on the topic of terrorism may have nothing to do with nervousness or the emotional nature of the issue. Instead, it may reflect his understanding that he is being asked to comment on "Black Swan" risks that are beyond his ken -- and where it is fair to say that "nobody knows anything."

Nassim Taleb's latest book has some interesting rants against financial economists misapplying Gaussian models to the wild uncertainty of financial markets. The same type of rant could well apply to economists naively using trailing data to make assessments about the "mean costs per life saved" from anti-terrorist efforts. In situations of wild uncertainty, over-reliance on trailing data can be highly misleading, as some previous comments indicate. The earlier comment on Taleb's example of a turkey's experience before and after Thanksgiving Day illustrates the point.

Oddly enough, the best book I know of regarding the appraisal of wild uncertainty is about the movie industry. It is called "Hollywood Economics" by Art DeVany (who was also mentioned in Taleb's book, The Black Swan).

DeVany explains why situations characterized by wild uncertainty should not be modeled with typical Gaussian risk distributions. Instead, he suggests using stable Paretian distributions. These distributions are anything but stable, in the conventional sense of the word, since their variance is infinite.

If terror risks follow stable Paretian distributions, it's hard to know in advance whether anti-terror efforts might save zero, fifty, fifty-thousand or a million lives. It also means that terror-obsessed screenwriters for a TV show like "24" could turn out to have a better handle on evaluating terror risks than, say, learned professors at GMU.
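Bill S's infinite-variance point can be illustrated with a small simulation (the tail exponent of 1.5 is a hypothetical choice of mine, not anything from the comment): for a Pareto distribution with exponent alpha below 2, the sample variance never converges as more data accumulates, which is exactly why a "trailing average" of past damage is a poor guide to the next draw.

```python
import random

def pareto_sample(alpha, n, seed=0):
    """Draw n samples from a Pareto(alpha) distribution (minimum value 1)
    by inverting the CDF F(x) = 1 - x**(-alpha)."""
    rng = random.Random(seed)
    return [(1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(n)]

def sample_variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# For alpha < 2 the theoretical variance is infinite: as n grows, the
# sample variance does not settle toward any limit -- occasional huge
# draws keep yanking it upward, unlike a Gaussian's.
for n in (1_000, 10_000, 100_000):
    print(n, sample_variance(pareto_sample(1.5, n)))
```

Run with different seeds, the printed variances jump around by orders of magnitude instead of stabilizing, in contrast to an alpha above 2, where they would settle near the finite theoretical value.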

Conrad H. Roth writes:

"As Nassim Taleb would say, your reasoning here is akin to the turkey..."

Er, wasn't that Bertrand Russell?
