Bryan Caplan  

The Trojan Horse Example


Austrian economists often attack the mainstream for ignoring something they call "radical uncertainty," "sheer ignorance," or sometimes "Knightian uncertainty." A common Austrian slogan is that "Neoclassical economists study only cases where people know that they don't know; we study cases where people don't know that they don't know."

All of this sounds plausible until you press the Austrian to do one of two things:

1. Explain his point using standard probability language. What probability does "don't know that you don't know" correspond to? Zero? But if people really assigned p=0 to an event, then the arrival of counter-evidence should make them think that they are delusional, not that a p=0 event has occurred.

2. Give a good concrete example. I've heard Israel Kirzner give examples involving library books, pay phones, and bumps on the head, but none of them make any sense to me. A better example, suggested by Richard Langlois, is the Trojan Horse. What I discovered a few years ago, and only remembered yesterday when my kids read a book about the Trojan War, is that in Virgil's canonical account, the Trojans specifically consider the possibility that the horse contains Greeks:

Laocoon, follow'd by a num'rous crowd,
Ran from the fort, and cried, from far, aloud:
'O wretched countrymen! what fury reigns?
What more than madness has possess'd your brains?
Think you the Grecians from your coasts are gone?
And are Ulysses' arts no better known?
This hollow fabric either must inclose,
Within its blind recess, our secret foes;
Or 't is an engine rais'd above the town,
T' o'erlook the walls, and then to batter down.
Somewhat is sure design'd, by fraud or force:
Trust not their presents, nor admit the horse.' (bold-face mine)

So why in the world does the Trojan horse ruse work? Because Poseidon punishes the leading skeptic by sending sea serpents to eat him and his children, and that ends the debate:
Two serpents, rank'd abreast, the seas divide,
And smoothly sweep along the swelling tide.
[...]
We fled amaz'd; their destin'd way they take,
And to Laocoon and his children make;
And first around the tender boys they wind,
Then with their sharpen'd fangs their limbs and bodies grind.
The wretched father, running to their aid
With pious haste, but vain, they next invade;
Twice round his waist their winding volumes roll'd;
And twice about his gasping throat they fold.
[...]
Amazement seizes all; the gen'ral cry
Proclaims Laocoon justly doom'd to die,
Whose hand the will of Pallas had withstood,
And dared to violate the sacred wood.

Of course, the fact that the Trojan horse is a bad example of radical ignorance doesn't prove that Austrians can't produce a decent example. But I've been pressing Austrians on this point for over a decade, and have yet to get a decent response.

Anyone want to give it a try? I have it on good authority that Poseidon's sea serpents will leave you alone even if you hit the nail on the head. :-)







TRACKBACKS (1 to date)
The author at Knowledge Problem in a related article titled Radical ignorance and Knightian uncertainty: Bryan's thinking too hard ... writes:
    Lynne Kiesling ... or he's being fatuous; I prefer to think he's trying too hard. Bryan Caplan's got a challenge to come up with an example of radical ignorance/Knightian uncertainty that's better than the Trojan horse example. Arnold Kling's got... [Tracked on June 18, 2008 7:39 AM]
COMMENTS (39 to date)
Matt writes:

I love a good puzzle.

It is not mass ignorance; it simply appears to be mass ignorance. What the Austrian sees is that suddenly there is profit in reorganization by many, many firms. At certain, almost preordained moments, we see sudden urges by industrial sectors to reorganize for efficiency. When the firms time it, like a herd of gazelle, it is the mass auction of labor and capital.

A writes:

Nassim Taleb's work is not quite deserving of all its high praise for the same reason, I think.

David writes:
"Neoclassical economists study only cases where people know that they don't know; we study cases where people don't know that they don't know."

Perhaps the reason they don't know is that they are letting their bias be their guide. I don't know if I get how this is different from rational irrationality.

Braden writes:

I found Tyler Cowen on radical uncertainty interesting. (Man, your link styling is almost invisible.)

Constant writes:

I thought this was a pretty familiar and accepted idea, or maybe I'm confusing it with something else. See some of the better discussion about Donald Rumsfeld's quote. Are the Austrians saying something different from this?

Blackadder writes:

I'm not sure what is being asked for here. An example where people "didn't know that they didn't know" about something? Presumably that is an everyday occurrence.

James writes:

Number 1 seems pretty toothless except in the case where the Austrian actually claims that sheer ignorance is supposed to correspond to a probability. No one makes that claim, to my knowledge.

Number 2 is better, but whether or not an example is persuasive to Bryan Caplan depends on both Bryan Caplan and the example given. Even if this radical uncertainty stuff is bunk, your test seems suspect if it doesn't generate a genuine false positive here and there.

greenish writes:

I have chosen a number between one and ten. What's the probability distribution?

Les writes:

Examples of people who don't know they don't know are easy to find. Here are some:

a) Supporters of Obama don't know that his economic proposals are ludicrous and dangerous;

b) Supporters of so-called "climate change" don't know that it has no scientific validity;

c) People who believe that the U.S. can be self-sufficient in energy don't know they don't know much about energy.

John writes:

I don't find this particularly perplexing. I view the argument about Knightian uncertainty as being about when you don't know what the underlying probability distribution is. Therefore, if you were to calculate something like the mean or expected value, you aren't calculating it properly because you can't see the entire distribution. The stock market is a perfect example. I don't know what the underlying distribution is, so it is hard for me to calculate its variance. Sure, in the past there were large tails to the distribution and it had a particular skewness, but should that distribution have more kurtosis or less, or more positive skewness or more negative? It's impossible to know. See Mises on case probability and class probability.

MNC writes:

What's the probability that God exists, Bryan? Why is your answer any more valid to someone who argues that it's 95 percent? Given the variance, isn't it simply an unknown?

Ten years before zippers existed, what was the probability that something like them would someday exist? Again, what about the variance of estimates?

Tell me what people will really demand thirty years from now. Why aren't you a billionaire given your superior prediction and estimation powers?

It's not much different from your book: Anyone who doesn't think like you is rationally irrational. It's not that they're rationally ignorant, it's that because it's not what Bryan believes they're dismissed as idiots. "no decent response"? According to whom?

Blackadder writes:

I'm still not sure exactly what Bryan is asking for here, but I'm going to attempt to answer him anyway (consider it a nerd's version of daredevilry).

First, if one had to give a probability to "don't know you don't know," then the probability I would pick is .5. If we have no idea what the probability of event x is, and no range of probabilities is, so far as we know, more likely than any other, then the thing to do would be to take the average of all the options (from 0 to 1), which would be .5.

Second, if the Trojan Horse would have been a good example of Knightian uncertainty absent the bit about Laocoon and the snakes, then just imagine the same scenario without the bit about Laocoon and the snakes, and you have your example.

Or, if that example seems too cute, consider a man who goes to a casino to play blackjack. Being statistically minded, he has a decent grasp on the probabilities of his winning given the cards he and the dealer have been dealt. What he doesn't realize, however, is that it's possible the game in question is rigged; nor, if he did think of this possibility, would there be any way for him to know just how likely or unlikely it is.
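Blackadder's averaging move can be checked numerically. The sketch below is a hypothetical illustration (the simulation is not from the comment): if every candidate probability in [0, 1] is treated as equally plausible, the average assignment comes out at .5.

```python
# If we have no idea what probability event x deserves, and no range of
# probabilities seems more likely than any other, one move is to average
# over all candidate values uniformly. That average is 0.5.
import random

random.seed(0)

# Draw many candidate probabilities uniformly from [0, 1] and average them.
candidates = [random.random() for _ in range(100_000)]
average = sum(candidates) / len(candidates)

print(round(average, 2))  # close to 0.5
```

The same answer falls out analytically: the mean of a uniform distribution on [0, 1] is exactly 1/2.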

Fabio Rojas writes:

There are lots of examples of being unaware that you don't know something:

- somebody in the near future will invent a new product that makes what you sell irrelevant and you fail to think about this possibility

- a disease that no one knows about yet

- Native Americans in 1491 probably didn't know that Europeans would arrive in the near future

In each case the person has actually assigned p=0 to the event, but they don't "know" what the event is in the same way that I "know" other events. At best, you might say that these exist in a residual category of "stuff that might happen but that I can't define."

I think the austrians have a simple point here. People in a given situation usually have some idea of what the situation is about. The conceptualization has bounds - nobody can possibly conceive of everything that can happen.

I would rephrase Bryan's point as this: such radical uncertainty must be exceedingly rare because most events bear some similarity to what has gone before. So while I posit that radical uncertainty exists and is interesting, it probably doesn't apply to most things.

James A. Donald writes:

Estimates of future supply and demand for oil failed to allow for the weakness and collapse of nation states. It is simply not mentioned in any of the projections that some nation states would be unable to expand oil production for security reasons, and that in others fights over oil between subnational groups would destroy oil extraction capability. When a nation goes down, rendering oil extraction difficult, people do think they are delusional. They think the nation is still there, but somehow behaving oddly.

They did not know that they were badly underestimating this form of political risk, and frequently still don't know even after it happens.

Unit writes:

I'd say someone who doesn't know he doesn't know gives probability 1. He's absolutely certain and does not realize the underlying uncertainty.

Dave writes:

I can imagine two contexts in which the notion of Knightian uncertainty relative to neoclassical "risk" might make sense, though I'm not at all certain that I really understand it, so this may not make sense.

The first is the inherent danger of extrapolating a model. I might interpret "risk" as the variability I see in the fitted data, whereas Knightian uncertainty reflects a separate source of uncertainty when I apply that model beyond its strict confines, as in forecasting, for example. The latter does not lend itself to any sense of objective probability assignment, for if it did, it would be data I've observed, in which case I could in principle reflect it in my original model.

The second interpretation might have to do with dimensionality. With any representation of a process, I am necessarily doing data reduction, which means that there is at least one dimension that I don't include in my representation, or a dimension whose value I've recorded with limited precision, or both. Hopefully, I've done this such that the data reduction does not lose valuable information, but whether or not I accomplish that goal depends on how I use the model. This is equivalent to saying that for every representation (i.e., model), there is a set of assumptions required for the model to strictly hold. Thus I could interpret Knightian uncertainty to be the uncertainty introduced when those assumptions do not hold, which they almost never strictly do.

Ultimately, and in any case, it is necessarily true that it is impossible to assign a probability to all potential events that may affect one's belief in the future. Otherwise, we are describing a process in every possible dimension, in which case we have no data reduction, thus have no model, and cannot generalize or make predictions.

Of course, it's equally likely that I've seriously misunderstood the issue....

Greg Ransom writes:

Try this. Read some Popper.

Kuhn and Hayek also talk about advancing into the unknown and the previously unimagined in the sciences.

It's a great analogy and will help you think about this stuff.

Greg Ransom writes:

The Hayek/Popper/Kuhn point:

Knowledge advances into the unknown and often into the realm of the yet to be conceived. It's a place where you run into new conceptual entities and conceptual networks.

Entrepreneurs run into the same thing all the time -- especially with technological change. Opportunities can be nothing more than imagining things as you never imagined them before (think Apple computer in the early days).

Do I need to teach you the logic of the probability calculus to explain why this calculus doesn't apply to a domain without prior givens?

Minh Ha Duong writes:

p({}) of course.

By definition, the empty set describes events that don't belong to the frame of reference. A way to deal with unknown unknowns in standard probability language is to reject the axiom according to which the probability of the empty set is zero.

An overwhelming majority of probability users are not aware that p({})=0 is an axiom, and would hold it as self-evident if asked. But if you've played Monopoly, surely you realize that sometimes you have to throw the dice again. The result is not always a number between 1 and 6.

That said, the only probability-related theory I know of that allows allocating some belief mass to the empty set is a variant of Dempster-Shafer known as the Transferable Belief Model.
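The mass-on-the-empty-set idea can be sketched with a toy basic belief assignment in the spirit of the Transferable Belief Model. The frame, the numbers, and the outcome names below are all illustrative assumptions, not from the comment or from Smets's theory itself:

```python
# Under an open-world assumption, some belief mass may sit on the empty
# set, representing "none of the hypotheses I bothered to frame."
frame = frozenset({"heads", "tails"})

m = {
    frozenset(): 0.1,             # mass on the empty set: unframed outcomes
    frozenset({"heads"}): 0.4,
    frozenset({"tails"}): 0.3,
    frame: 0.2,                   # mass on the whole frame: pure ignorance
}

def belief(a):
    """Belief in a: total mass committed to nonempty subsets of a."""
    return sum(v for s, v in m.items() if s and s <= a)

print(belief(frozenset({"heads"})))  # 0.4
print(belief(frame))                 # 0.4 + 0.3 + 0.2, excluding the mass on the empty set
```

Note that even belief in the whole frame falls short of 1: the 0.1 assigned to the empty set is exactly the "dice off the table" residue the comment describes.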

Daniel writes:

I posted this in response to Arnold's question, but it's primarily relevant here, so I'll post it here. Basically, my suggestion is that you check out this paper for a good summary of what a lot of people think about this question (section 6 on specificity is the relevant one, but it probably wouldn't be helpful without having read the earlier stuff). I'll summarize it, but here's the link:

Joyce, J. 2005. ‘How Probabilities Reflect Evidence’ Philosophical Perspectives 19: 153-178.
http://www.blackwell-synergy.com/doi/abs/10.1111/j.1520-8583.2005.00058.x

The idea is that in order to capture the distinction you're interested in, we need a generalization of the standard Bayesian framework. In the standard framework, we represent an agent's opinions by a probability function, and we represent an agent's opinion about some proposition p by the value that his probability function assigns to p. In the generalized framework, we represent an agent's opinions by a set of probability functions--which we call his representor--and we represent the agent's opinion about a particular proposition p by the set of real numbers n such that for some probability function f in the agent's representor, f(p) = n.

Now, we think that often an agent's evidence singles out a unique value that he ought to assign to p. In a case where he's got excellent evidence that a coin is fair, say, the value he assigns to the proposition that the next toss will land heads should be 0.5. In this framework, we'd capture that by saying that every function in his representor should assign 0.5 to the proposition that the next toss lands heads.

But other propositions aren't like that. If the agent is presented with an urn, and is told that there are black balls and white balls in the urn, but isn't told the proportions, we might think that the proposition that the next ball drawn will be black is an unknown unknown, in the sense that there isn't some particular probability value the agent should assign to it. We'd represent this by saying that the agent's representor should include functions that assign various different values to p--maybe the set of values assigned to p by functions in the agent's representor should be the entire interval [0,1].

Anyway, all this is only the bare bones of a start, in part because I still haven't said anything about what this means for stuff like betting (what bets on p should you take if your representor assigns p the entire unit interval?) But it's a start at how you might go about trying to model these things, and much more has been said (including stuff about how this stuff connects with betting). I also don't think the framework is obviously the right one, but it's certainly worth giving some thought if you're interested in taking seriously the distinction Bryan's talking about, and in trying to model it formally.
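The representor idea can be sketched in a few lines. Everything here is an illustrative assumption (the propositions, the four functions, and the dict encoding are not from the Joyce paper, which works with full probability functions):

```python
# A representor is a set of probability functions; the agent's opinion
# about a proposition is the set of values those functions assign to it.
# Here each "probability function" is just a dict from propositions to numbers.

def opinion(representor, proposition):
    """The set of values assigned to `proposition` across the representor."""
    return {f[proposition] for f in representor}

# Sharp evidence: every function agrees the fair coin lands heads at 0.5.
# The unknown urn: the functions disagree, spanning values across [0, 1].
representor = [
    {"coin_heads": 0.5, "urn_black": 0.0},
    {"coin_heads": 0.5, "urn_black": 0.25},
    {"coin_heads": 0.5, "urn_black": 0.5},
    {"coin_heads": 0.5, "urn_black": 1.0},
]

print(opinion(representor, "coin_heads"))  # {0.5}: a sharp opinion
print(opinion(representor, "urn_black"))   # {0.0, 0.25, 0.5, 1.0}: imprecise
```

A singleton opinion plays the role of an ordinary credence; a spread-out opinion is the formal stand-in for "don't know that you don't know."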

JeffHallman writes:

Dave is almost right. Say you fit a model to some economic data, and the usual battery of statistical tests provide no compelling evidence that the model is wrong. Along with all the parameter estimates, you also have estimates of the standard error of the model and the variances of the parameter estimates themselves. All of these give you some information about risk, assuming the model is correctly specified.

But in fact there is no way to know how much risk is coming from the fact that your model is actually wrong. The data were not generated by someone simulating your model. Knightian uncertainty is the risk you didn't model, because there isn't any way to. The real world is always more complicated than the model, and LTCM and the subprime mess show what happens when you naively think the future will be like the past just because that's all you can account for with an empirical model.

Adam writes:

Easy. Many writers before the PC did not know that they did not know that word processing would make the typewriter obsolete. In fact, you could use any of the dramatic innovations in technology in the last twenty years and simply state that up until they hit, people "did not know that they did not know" the sorts of changes they were in for.

Adam writes:

I've got a more in depth response, if you're interested.

Bob Murphy writes:

Bryan,

The Trojan Horse stuff was cool; I hadn't remembered people warning about the horse, either.

But as to your basic question, I really don't understand what your problem is. Since I've heard the matter framed by Kirzner a bunch of times, I'll just paraphrase his position below:

===========

In the mainstream approach, they dealt with the objection to perfect knowledge by building random variables into their models. Then the agents in the model redo the optimization, this time not knowing future prices, production functions, etc. with certainty, but knowing *with certainty* the relevant distributions on all future worlds.

This approach still leaves no room for pure entrepreneurial discovery. No mistakes can ever occur, and there are thus no pure profit opportunities.

===========

Now to your question, give you an example? OK, when Henry Ford adopted an assembly line approach. It's not that the people before him suddenly thought, "Whoa, we had assigned probability zero to that technique being a good idea, but now we realize we were delusional!"

No, they just slap their foreheads and say, "Why didn't WE think of that??"

I think your problem, Bryan, is that you are basically saying, "The Austrians say you can't use standard probability language in a world of uncertainty. But since they can't describe this world using standard probability language, I don't think they're making any sense."

Will Wilkinson writes:

Like most of the commenters, I just don't get this. A mind, being a small physical system, can represent only a finite number of propositions. But there are an infinite number of possible propositions. Only propositions actually represented in consciousness or memory (or entailed by ones represented) can be assigned a probability of truth by the subject. Therefore, there are infinite possible propositions to which subjects assign no probability. They are wholly outside the mind's representational system. That's different from saying that all those possible propositions are assigned a probability of 0. That infinite set is the "radically uncertain" set.

Isn't this point just a straightforward corollary of the rejection of the physically (as in physics!) impossible neoclassical assumption of unlimited memory and computational capacity?

Will Wilkinson writes:

A follow-up thought: If the whole point is that some propositions are unrepresented, and so can be assigned no probability, then it is just a sophistical trick to demand an example to be convinced. The act of providing an example requires a proposition brought to mind. "Give me an example of radical uncertainty" is like "You say there are some things that cannot be seen. Show me one!"

Daniel writes:

Will, I think this is the controversial part of what you said: "A mind, being a small physical system, can represent only a finite number of propositions. But there an infinite number of possible propositions. Only propositions actually represented in consciousness or memory (or entailed by ones represented), can be assigned a probability of truth by the subject."

This would certainly be attractive if you're thinking of beliefs in terms of some kind of language-of-thought hypothesis, where to believe a proposition requires having some sentence of mentalese that expresses the proposition somewhere in your head. But you might not think of beliefs that way. If you think of them as posits that we use to predict and explain people's behavior, then there isn't any obvious obstacle to interpreting finite creatures as having infinitely many beliefs, even when they don't all follow from some finite subset of those beliefs.

Suppose, for example, you used the following oversimplistic, operationalized notion of belief: someone believes p if they're happy to accept a bet on p at even money. On this definition, it's plausible that there are infinitely many things I believe that don't all follow from some finite subset of my beliefs. For example, I believe there's no leprechaun in front of me. I believe there's no unicorn in front of me. I believe there's no leprechaun/unicorn hybrid creature in front of me...It's possible for me, as a finite being, to be disposed to accept bets on infinitely many propositions.

More would have to be said, but my point is that if you think of the opinions of an agent as the states that you'd ascribe to it in order to predict its behavior, rather than as sentences it has explicitly represented in its head (together with those that follow from the explicitly represented ones), then the physical finitude of an agent doesn't present an immediate obstacle to its having infinitely many opinions.

mk writes:

I put this in a different thread, but to recapitulate:

An unknown unknown is presumably just something that happens that isn't in the model space. Sure, you could say it is a probability zero event, that's fine. But we don't think we're delusional if an unknown unknown happens, because we don't fully believe our models. Models are just guides.

If Bryan thinks this is a very bad thing (we must always model what we believe and believe what we model!!) then we can add a "fudge factor" event to a probability distribution, representing the union of all events that are not captured by the explicit event space.

A simple example is, I ask you who will be president, and you say "60% Obama, 40% McCain" or something. Do you really believe the probability that someone else will be president is zero? What about the probability that we will all annihilate ourselves, or the American government will collapse, before the next election?

If these things happened I might be really surprised, but just because I modeled them as having no probability mass, doesn't mean I would start questioning my sanity if they happened. We just don't fully believe our models, that's all. We recognize they're a little oversimplified and they exist to serve targeted purposes.
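mk's "fudge factor" move can be written down directly. The function name, the residual label, and the 1% figure below are illustrative assumptions, not from the comment:

```python
# Carve out a small residual event for everything outside the explicit
# event space, then scale the named outcomes down so mass still sums to 1.

def with_fudge_factor(dist, fudge=0.01):
    """Scale a distribution to leave `fudge` mass for unmodeled events."""
    scaled = {k: v * (1 - fudge) for k, v in dist.items()}
    scaled["something else entirely"] = fudge
    return scaled

election = with_fudge_factor({"Obama": 0.6, "McCain": 0.4})
print(election["something else entirely"])  # 0.01
```

The point survives the formalism: the residual event is a placeholder for "the union of everything I didn't model," not a claim to have enumerated those possibilities.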

Daniel Klein writes:

I like Will's comment. The Smith-Hayek appreciation of knowledge's richness is not something to be explained. If an Austrian says that Austrians "study cases where people don't know that they don't know," then they are wrong, and Bryan is correct to whack them for it.

Knowledge's richness is not explanandum, it is explanation. Specifically, an explanation of some of the by-and-large relative virtues of freer versus less free arrangements. (I've assayed the argument in two papers on discovery.)

Mises-Rothbard are weak on all this because their concern is to claim a scientific foundation for laissez faire economics. Bryan is correct that knowledge's richness doesn't advance that claim. In fact, in important respects it gets in the way of those pretensions.

Smith-Hayek criticizes the scientific pretensions of interventionist economics--a very different attitude. Here, appreciation of knowledge's richness is valuable. It helps us to understand the relative lameness of restrictive regimes, as well as the folly of the restrictionists.

Kirzner tries to bridge the two, but the bridge doesn't hold.

Peter Boettke writes:

Bryan,

Asking someone who doesn't think probability theory captures completely the situation to give a p value is sort of tilting the discussion -- no?

Anyway, I think Fabio raises the relevant point when discussing the reasonableness of not knowing that you don't know and his example of the introduction of a new product.

Whether or not we debate this on "Austrian essentialist" grounds is beside the point; let's just talk about the idea of our cognitive limitations as well as confusions. We could sum up our state of ignorance along at least 3 dimensions: (1) we know we don't know (rational ignorance); (2) what we think we know ain't so (rational stupidity); and (3) we don't know that we don't know (sheer or utter ignorance). I think we can admit that all three cognitive states exist, but which one matters for what argument is a function of empirical magnitudes.

The "Austrian" point is that (a) the concept of sheer or utter ignorance is important for understanding the entrepreneurial market process; and (b) the concept of radical uncertainty in the subjectivist rendering (e.g., Shackle and Lachmann) highlights the human dilemma of choice. (a) and (b) need not go hand in hand.

Let's stick with (a), since I didn't want to emphasize "Austrian essentialism". When I teach this to undergraduates I use the following examples: 1) I am rationally ignorant of country music; I know it exists but I don't care to know anything about it; 2) I am rationally stupid about the NY Yankees --- I cheer for them even when I "know" they are not there, and I insist to anyone who wants to argue with me that they are the best team in baseball, because the cost to me of being wrong is very low; but 3) I am utterly ignorant of the next great paradigm shift in economics, or the sort of new products that will appear on our shelves 20 years from now.

In the phraseology of the subjectivists, the future is unknowable but not unimaginable. There is a difference between imagination and knowing -- one that I think you fundamentally deny in these discussions.

Anyway, here is the example I give to differentiate between search and genuine discovery, and that is the creation of scientific knowledge. In conducting our research, we engage in active search of existing knowledge. But hopefully during that process we have a "eureka" moment and stumble upon an idea that we previously hadn't thought about (e.g., perhaps the idea of rational stupidity!), and as we work with this new idea we create new knowledge that then hopefully goes on the shelves for a new generation to read and think about --- and thus the circle of known knowledge expands and the quest for new knowledge continues.

Now to emphasize a point that I think aligns with your position --- the vast majority of scientific work is characterized by "search" -- normal science. But the "extraordinary science" of novel innovations to our thinking occurs with these discoveries of thoughts previously unthought and perspectives on questions previously not seen. When they are in the state prior to their coming into being, these ideas in a fundamental sense do not exist and thus do not fit into a probability distribution, because they are not known possibilities.

Does that position seem so unreasonable?

Pete

Greg Ransom writes:

Will Wilkinson writes:

>>Like most of the commenters, I just don't get this. A mind, being a small physical system, can represent only a finite number of propositions. But there an infinite number of possible propositions. Only propositions actually represented in consciousness or memory (or entailed by ones represented), can be assigned a probability of truth by the subject.

Greg Ransom writes:

The problem with Wilkinson's picture of mirroring a world of propositions with a world of propositions in the head is that no one has ever been able to make sense of this picture, and there are strong reasons from Darwin, Wittgenstein, Hayek, Edelman and others to think that it is grossly wrong. So the argument against a human being in a social community, using a natural language, applying "propositions" and a probability calculus to capture an as yet unimagined world is stronger than Wilkinson suggests.

Greg Ransom writes:

Daniel writes:

"Mises-Rothbard are weak on all this because their concern is to claim a scientific foundation for laissez faire economics."

Almost everyone now agrees that this model of "scientific foundations" has little to do with science, and has everything to do with a mistaken picture of knowledge inherited from the philosophical tradition.

Economics is science, even though it doesn't fit the model of "scientific foundations" popular 60 years ago -- much as Darwinian natural selection is science, though it also fails to fit the old model.

mk writes:

I'm glad someone brought up Wittgenstein, because I'd say the best response to Bryan's worry is somewhat Wittgensteinian in spirit.

A model is not a perfect representation of reality. It does not even try to perfectly represent everything we think about reality.
In particular, it is not generally the job of a modeller to spend days and days imagining all sorts of outlandish scenarios to add to the event space, so that we may assign vanishingly small probabilities to them.
Instead, we use a model as a pragmatic tool to help us solve a problem. It does not necessarily matter if the model assigns zero probability to events which actually might happen.

So when events happen which the model said were zero-probability, we don't go crazy because we have a pragmatic notion, not an idealistic notion, of a model's utility.
All that said, sometimes of course it is useful to spend time imagining unusual things that might happen and assigning small probabilities to them.

Russell Nelson writes:

@greenish: "I have chosen a number between one and ten. What's the probability distribution?"
Either you're lying, or the probability that you have chosen a number is 100%. Since I don't know you, I'll assume you're telling the truth.

Russell Nelson writes:

I'll take a cut at #2: Stock picking. People who pick stocks (and this includes more professionals than not) don't know that the stocks they pick perform worse than the average stock. If they knew this, then they would put all their money into an exchange-traded fund like SPY.

Victor writes:

How about the iPhone back in the 17th century? The time span is longer, but the idea is the same. How do you assign a probability to something you have no idea could possibly happen? The problem is you assume that the space on which the probability distribution is defined is known. It clearly is not, and cannot be known beforehand.


Sean writes:

I think Caplan's argument is a perfect example of radical ignorance. He doesn't seem to comprehend the nature of consciousness or unconsciousness. A probability cannot be assigned to anything of which we do not even know the existence or possible existence. I can't assign a probability to any of the entities, properties, and events of which I am ignorant. If someone reaches into a bag for the first time he cannot know anything about the probabilities of sampling, other than that the probability of dinosaurs, ocean liners, and redwood trees being sampled is 0. But from that negative information no positive prediction can come. Only after he has created a population sample can he make future predictions. If he pulls out different colored marbles he will only then be able to start making predictions about future sampled marbles. The only content of his knowledge at the start is that if something is in the bag it will be something that can fit in that particular bag (which already implies a knowledge about the physical world that could theoretically be absent). If some people have knowledge of what is in the bag and some don't, there will be advantages to some in making predictions about what will be drawn. But further, if we make the realistic assumption that no human has complete awareness of anything, there will always be an element of radical uncertainty in any prediction and, therefore, any action.

Comments for this entry have been closed