Arnold Kling  

Thoughts on Probability and Uncertainty


Eric Falkenstein watched the YouTube video of the Caplan-Boettke debate on Austrian economics. Falkenstein concludes,

What is needed is something constructive, something the Austrians, Post-Keynesians, or Taleb, have failed to do.

Let me place this in the context of my introduction to the philosophy of probability. I say that probability can be axiomatic, empirical, or subjective.

When I say, "There is a probability of 0.5 that the flipped coin will come up heads," I could be saying:

1. That is the definition of a fair coin.
2. I have observed lots of coin flips, and the empirical frequency of heads is 0.5.
3. My personal opinion is that there is a probability of heads of 0.5.

I think these are three different uses of the term "probability," and we make philosophical errors by confusing them. The statement about the coin fits best with an axiomatic approach. A statement about the probability of selecting an American male at random and having him be 7 feet tall or higher rests on an empirical notion of probability. And the probability that inflation will average more than 6 percent over the next five years is largely subjective.

Subjective probability is particularly important with regard to non-repeatable events. If you believe that macroeconomic history generates sufficient repetition, then you would move the inflation forecast over into the empirical category. I would not do so.

One way to make the term "radical uncertainty" operational is to say that we are talking about a forecast for a non-repeatable event, where in order to forecast you cannot simply look at empirical data (such as the proportion of men over 7 feet tall in a large sample) but must do extensive interpretation of the meaning of historical data. That is, whenever the most applicable definition of "probability" is subjective probability, we are talking about radical uncertainty. That may or may not conform to what the Austrians think of as radical uncertainty.

For axiomatic probability and for empirical probability, there is reason to hope that different people will come to agreement. We can all agree that the probability of rolling two dice with a sum of 8 ought to be 5/36. We can all agree that if, say, 0.004 percent of men are over 7 feet tall (I have no idea what the true number is), then choosing a man at random gives us a 0.004 percent chance of choosing a man over 7 feet tall.
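The dice claim is purely axiomatic, so anyone can verify it by brute enumeration of the 36 equally likely outcomes. A minimal Python sketch:

```python
from fractions import Fraction
from itertools import product

# Enumerate all 36 equally likely outcomes of rolling two fair dice
# and count the ways to get a sum of 8: (2,6), (3,5), (4,4), (5,3), (6,2).
outcomes = list(product(range(1, 7), repeat=2))
favorable = sum(1 for a, b in outcomes if a + b == 8)
prob = Fraction(favorable, len(outcomes))

print(favorable, len(outcomes), prob)  # 5 36 5/36
```

The agreement comes from the shared axiom (each ordered pair is equally likely), not from any data.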

But we need not come to agreement over the probability that inflation will exceed 6 percent on average over the next five years. If Robin Hanson sets up a betting market, then those of us who put our money where our mouths are can produce a market-based estimate of the odds. But that market-based estimate is just, in the end, another version of subjective probability. A rational person may disagree with the market forecast, and yet be unwilling or unable to place a large enough bet to move that forecast.

For me, radical uncertainty refers to non-repeatable events. (By the way, the line between a repeatable event and a non-repeatable event is not necessarily simple to draw. Is A-Rod's next at-bat in the postseason a repeatable event? Or not? In this case, I would treat it as a repeatable event.) It means that the applicable definition of probability is subjective probability. And it means that reasonable people do not necessarily have to come to agreement on the probabilities before the uncertainty is resolved. This latter point may cause smoke to pour out of Robin Hanson's ears.

Returning to Falkenstein's complaint, I think that there is a basic conundrum. Perhaps his idea of "something constructive" is something that works more like axiomatic probability or empirical probability. However, those definitions do not apply for non-repeatable events.


TRACKBACKS (1 to date)
The author at Interfluidity in a related article titled Information is stimulus writes:

    Suppose that the Federal government were to offer sizable loan guarantees for any and all "green energy" companies. Any firm, including new entrants, would be eligible. The government would do some cursory due diligence, only to establish that the c...

    [Tracked on October 12, 2009 4:59 PM]
COMMENTS (14 to date)
ThomasL writes:

I have taken "radical uncertainty" to apply not, as Bryan uses it in the debate, to the case of knowing literally nothing (which is, as he asserts, difficult to imagine), but to a case where what one does know informs the situation so little that it cannot be used to make decisions.

Or, I suppose in Bryan-speak, a case where the probability is incalculable, p = ?

I don't think a 'p=?' case is impossible, particularly when one is relying on myriads of other actors with private knowledge and private motives. I may be able to guess that they know something I don't, and I can guess at their motives insofar as I have general knowledge of possible motives, but overall only with a small degree of probability of being correct in any individual case. When multiplied by thousands or millions of participants, I might arrive at a uselessly small 'p' in short order, even in a universe where it were somehow possible to prove that my initial probability values were accurate.

ThomasL writes:

To clarify, as I wrote that rather badly: it isn't, of course, the 'p' that would get small, since that would imply I knew something to be unlikely. I meant that the _range_ would become so broad as to be practically infinite, leaving me with a fat "?" as an answer. This is in contrast to the Bryan-esque (at least in the debate) view that I will somehow always have sufficient knowledge to be able to put it to use churning out probabilities that I can apply to any given situation.

Charley Hooper writes:

I agree. This is what David Henderson and I said in our book, Making Great Decisions in Business and Life:

If I pull a foreign-looking coin out of my pocket, what probability would you assign to my flipping a head? The answer is fifty percent if the coin is fair. What if the coin isn’t exactly clean, balanced, unblemished, and symmetrical? You have to believe that the coin is fair to assert 50 percent, and beliefs are based on information. If we flipped this particular coin 10,000 times and counted the number of heads and tails, then you might have more reason to assert a 50-percent probability, but you also, at that point, would have much better information. Probabilities move from subjectivity towards objectivity as our information improves. But it is only the trivial cases that have perfect information and can be called objective. Not only the majority of cases, but also the most interesting cases must be called subjective because they’re based on our personal assessments of future events.
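The convergence Hooper describes can be simulated. In the sketch below the coin bias and seed are arbitrary choices for illustration; the point is that the standard error of the empirical frequency shrinks like 1/sqrt(n), so 10,000 flips pin the bias down far more tightly than a glance at the coin:

```python
import random

random.seed(42)  # arbitrary seed, for reproducibility

def head_frequency(p_heads, n_flips):
    """Empirical frequency of heads after n_flips of a coin whose true bias is p_heads."""
    heads = sum(random.random() < p_heads for _ in range(n_flips))
    return heads / n_flips

# Standard error of the estimate is sqrt(p*(1-p)/n); for a fair coin and
# 10,000 flips that is about half a percentage point.
for n in (10, 100, 10_000):
    se = (0.5 * 0.5 / n) ** 0.5
    print(n, head_frequency(0.5, n), f"+/- {se:.3f}")
```

This is the sense in which the probability "moves towards objectivity": more data narrows the range a reasonable person can defend.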

Unit writes:

Ask a 5 year old if inflation is going to average more than 6% over the next 5 years. That illustrates radical uncertainty to me. I have no difficulty in imagining that I'm that 5 year old in many situations.

Bryan says that we always know something reasonable. But in his book he argues that oftentimes the median voter is like the 5 year old of the previous example. Pressed to assign a probability, the median voter will give a completely subjective answer based on factors that are not related to the given problem.

Zane Selvans writes:

Complaining about Taleb not being constructive seems to suggest that Falkenstein hasn't understood one of Taleb's main points: that simply being able to honestly admit ignorance is itself valuable. Complicated non-linear systems need not necessarily yield their underlying probability distributions to either empirical or axiomatic understanding, and in those cases, is it not better to admit you just don't know what's going on and act accordingly, than to come up with an apparently plausible (but in fact not meaningfully testable) subjective narrative about the system?

writes:

Popper's propensity interpretation of probability is the most useful. You should include that.

Tracy W writes:

Zane Selvans - but what does it mean to "admit you just don't know what's going on and act accordingly" if the decision you're facing is a long-term one like building a power station?

Mike Rulle writes:

Re: non-repeatable events.

It has always fascinated me that certain markets produce prices which, after the fact, prove to be (on average) the best guess. Of course, the efficient market hypothesis asserts this (by and large correctly I think) for the stock market. But betting markets also produce this. Try beating the spread consistently on NFL games if one doubts this.

Does this have anything to do with "probability"? I think so. Even in a world of radical uncertainty, sports betting markets pick the spread which has a "50-50" chance of being outguessed by any individual bettor. Economies are more complex than betting markets.

But, if the above is true, can we infer that certain political economy policies are less likely to be successful than others---even in a world of radical uncertainty?

JP Koning writes:

Just like value is always subjective and not intrinsic, aren't all human-made probabilities always subjective? Isn't probabilizing the same process as valuing?

It seems to me that you are saying that probability is subjective in only one of three cases, but that otherwise it is an objective & intrinsic quality of something.

Zane Selvans writes:

Not knowing the best course of action to take under uncertainty doesn't change the uncertainty. Sometimes, we just have to admit we don't know.

But I think in a lot of circumstances, even when things are radically uncertain, you can make rational decisions. Are the consequences of being wrong severe and negative? If so, can they be limited, and if not, maybe the prudent thing to do is walk away altogether. Or are the consequences of being wrong, or exploring naively, potentially enormous and positive?

Thomas Esmond Knox writes:

You say that the probability of the coin coming up heads is 50%.

What are the betting odds?

I would require much better odds than even money before I would make a wager.

Steven H. Noble writes:

It's worth noting that probability predictions for non-repeatable events can be analyzed like those for repeatable events, assuming you have some meaningful way to collect a bunch of them.

The simplest way is to take a bunch of p_i's for a series of upcoming events and sum them to get some number x. If more than x of those events occur, then in general you are under-estimating their likelihood; if fewer occur, you are in general over-estimating it.

You can then partition in various ways and see where estimations are furthest off, etc. Good confidence windows for the distance between x and the number of outcomes are a little tricky, but "good enough" solutions aren't so bad.

And of course, as with repeatable events, it's important to keep an eye out for implied assumptions of independence.

For example, this is a good way to rate weather predictions where the probabilities of the predictions vary. But you can look at predictions of interest rates and stock prices as well.
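Noble's summing check is easy to sketch in code. The forecast record below is made up purely for illustration:

```python
def calibration_gap(predictions):
    """predictions: list of (p, occurred) pairs, one per one-off forecast.
    Sums the stated probabilities (Noble's x) and compares with the number
    of events that actually occurred; a positive gap means the forecaster
    is, on the whole, over-estimating likelihoods."""
    expected = sum(p for p, _ in predictions)
    actual = sum(1 for _, occurred in predictions if occurred)
    return expected, actual, expected - actual

# A hypothetical forecast record: (stated probability, did the event occur?)
record = [(0.9, True), (0.7, True), (0.6, False), (0.3, False), (0.2, True)]
expected, actual, gap = calibration_gap(record)
print(expected, actual, gap)  # 3 of 5 events occurred vs. 2.7 expected
```

Partitioning the record (by topic, by stated probability, and so on) and running the same check on each slice is the "see where estimations are furthest off" step.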

dWj writes:

Building a bit on Steven H Noble: if you flip a coin to test whether it's a fair coin, how many times do you have to flip it before you think 50% is no longer a "subjective" probability? If you make a series of predictions of various events, assigning 60% odds to each event, and, after that many predictions, I find that about 20% of the events take place, can I assert that your subjective odds are "wrong"?
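dWj's first question has a conventional empirical answer: pick a margin you would accept before calling 50% "no longer subjective" and solve the normal-approximation confidence interval for n. The 95% z-value and the margins below are illustrative choices, not anything from the comment:

```python
import math

def flips_for_margin(margin, z=1.96):
    """Flips needed so a 95% confidence interval for a (roughly fair) coin's
    heads frequency has half-width `margin`, using the normal approximation
    margin = z * sqrt(0.25 / n), i.e. n = (z / (2 * margin))**2."""
    return math.ceil((z / (2 * margin)) ** 2)

for margin in (0.1, 0.01, 0.001):
    print(margin, flips_for_margin(margin))
```

The arithmetic is the interesting part: pinning the frequency to within one percentage point already takes nearly 10,000 flips, and each extra decimal place costs a factor of 100.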

Bekah S writes:

Subjective economics differs for everyone. When someone is involved in an economic decision, they will have some sort of subjective opinion. Ideally, someone outside the decision or prediction should put in his opinion as well, to bring in another point of view. But this also brings in normative economics. There is no such thing as a completely objective opinion; to say "opinion" is to say "subjective." Therefore, subjective probability always occurs when predicting an outcome.

Comments for this entry have been closed