Heh. That _is_ the same Eliezer who's writing Harry Potter and the Methods of Rationality, which is quite possibly my single favorite fanfic ever.
The presence of large numbers of alien cultists is a reason to be more skeptical of reports of alien sightings. The witnesses are probably just cultists.
"All men are so despicable in my eyes,
I should be sorry if they thought me wise."
Here's the original post.
@William Barghest
I think the argument is that the presence and nuttiness of alien cultists is not a good reason to disbelieve in aliens in general, not that it's not a good reason to disbelieve evidence the cultists provide. For example, the presence of UFO cultists is not a good reason to disbelieve that there are aliens in the Tau Ceti system who have never visited Earth. If an astronomer said they'd discovered a planet in orbit around Tau Ceti, and that spectrographic analysis indicated the presence of life, "UFO cultists are nuts" would obviously not be a good argument against their position.

I remember walking up to the professor at the end of a Game Theory class and telling him that I perfectly understood the application of Bayes' theorem, could do the math, could apply the concept, etc., but that it still felt intuitively wrong. So we had this brief exchange:
Him: I'm thinking of a number between 1 and 1000, guess it.
Me: 432?
Him: And the probability you are right is?
Me: 1 in 1000.
Him: OK, my number is 432 or 629. Care to take another guess?
Me: 629
Him: And the probability of you being right is?
Me: 999 in 1000
And just like that, it suddenly made perfect sense. Maybe it will help someone else.
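The professor's game can be checked with a quick simulation. This is my own illustrative sketch, not part of the original exchange: it assumes, as the anecdote implies, that the second candidate the professor names is always the true number whenever the first guess was wrong.

```python
import random

def simulate(trials=100_000, n=1000):
    """Play the professor's game: after a uniform first guess, he names
    two candidates, one of which is always his true number. Return the
    fraction of games won by switching to the other candidate."""
    switch_wins = 0
    for _ in range(trials):
        secret = random.randint(1, n)
        guess = random.randint(1, n)
        if secret != guess:
            other = secret            # he must name his true number
        else:
            other = guess             # first guess was right; any decoy
            while other == guess:
                other = random.randint(1, n)
        switch_wins += (other == secret)
    return switch_wins / trials

print(simulate())  # close to 999/1000 = 0.999
```

Switching wins exactly when the first guess was wrong, which happens 999 times in 1000, matching the intuition above.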
Some years ago I was employed as a statistician at a university research institute. One of our clients brought in a problem involving testing of materials that went into one of their products. Basically they took a sample and stressed it until it broke, counting the number of stress cycles. The issue was how many tests did they need to carry out (at a substantial cost per test) to state, with a desired confidence level, the useful life of the material (i.e., something like, with 95% confidence, how many cycles would the material survive in operation?).
Standard statistical testing, assuming normality and estimating the standard deviation from the sample itself, gave the answer 30 tests. Could they do with fewer tests?
It was clear to me that the standard method assumed they'd never seen the material before, and each test was done in complete ignorance of past data. Yet, they had records of thousands of tests, with estimates of standard deviations from each batch they ever tested. (All statisticians should be blessed with so much data.)
Bayes' Theorem was the obvious approach. Given the background data from previous tests, including the distribution of standard deviations, the question became: how many tests on a new batch of the material (i.e., updated information) would be needed to reach the desired confidence level? It turned out that only 16 tests were needed. This was a significant saving for the client. Getting the answer involved a lot of numerical integration, but otherwise it was straightforward. Bayes to the rescue!
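The mechanism behind this saving can be sketched with a toy conjugate-normal model: when historical batches supply an informative prior, precisions add, so fewer new tests are needed to reach the same interval width. Everything numeric here is hypothetical (`sigma`, `prior_sd`, and the 95% z-value are made-up illustrations); the actual analysis used the full empirical distribution of batch standard deviations and numerical integration, not this closed form.

```python
import math

def posterior_interval_width(n, sigma, prior_sd=None, z=1.96):
    """Width of a 95% interval for the mean after n tests with per-test
    scatter sigma. With prior_sd=None this is the classical interval;
    an informative normal prior of standard deviation prior_sd
    (e.g. built from thousands of historical batches) tightens it,
    because precisions (inverse variances) add."""
    if prior_sd is None:
        post_var = sigma**2 / n
    else:
        post_var = 1 / (1 / prior_sd**2 + n / sigma**2)
    return 2 * z * math.sqrt(post_var)

sigma = 100.0  # hypothetical per-test scatter in stress cycles
target = posterior_interval_width(30, sigma)  # what 30 tests alone buy
# With an informative prior, fewer tests reach the same width:
for n in range(1, 31):
    if posterior_interval_width(n, sigma, prior_sd=26.0) <= target:
        print(n)  # prints 16 with these illustrative numbers
        break
```

The particular prior here was chosen so the toy happens to land on 16; the point is only the shape of the trade-off, not the client's actual numbers.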
PrometheeFeu,
And yet, why is it less intuitive if the exchange had been to guess a number between 1 and 3? I remember trying to explain the logic in the Monty Hall problem to a close relative once, and with 3 boxes, he just couldn't grasp it, but when I expanded it to 100 boxes with only one holding the prize, he got it instantly.
Taking things to extremes helps to ferret out lots of root issues. Not always, but it's good for figuring out how and why you feel about something.
That sentence was confusing. Can I rewrite it by eliminating all the negatives?
There are large numbers of embarrassing people who believe in flying saucers. If you believe that aliens would suppress flying-saucer cults, then this is Bayesian evidence against the presence of aliens, because we are less likely to see flying-saucer cults if aliens exist. Otherwise, it is not.
@Yancey Ward:
I think it comes down to the shift in probability. In the classic 3-door Monty Hall problem you go from a 1/3 to a 2/3 probability. A significant shift for sure, but extremely small compared to the 100-door version, where you go from a 1/100 to a 99/100 chance.
My other intuition would be the symmetry of the 3-door problem. You pick one door, one door is opened, and there is one door left you have not checked. You start at 1/3 probability and add 1/3 probability. That feels like it could be accounted for in any number of ways. On the other hand, the 100-door version is very asymmetrical because you open 98 doors. It forces you out of the idea that opening the 98 doors could be irrelevant. After all, you started out with 100 doors; how can removing 98 of them be irrelevant?
But those are just guesses.
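The two intuitions above can be checked numerically. Here is a minimal sketch (my own code, not from the thread) of the standard game with a host who knows where the prize is and opens every non-chosen, non-prize door but one; switching then wins exactly when the first pick was wrong.

```python
import random

def monty_hall(doors=3, trials=100_000, switch=True):
    """Simulate Monty Hall with a knowing host. After the host opens
    doors-2 empty doors, switching wins iff the first pick was wrong."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(doors)
        pick = random.randrange(doors)
        wins += (pick != prize) if switch else (pick == prize)
    return wins / trials

print(round(monty_hall(doors=3), 2))    # ~0.67
print(round(monty_hall(doors=100), 2))  # ~0.99
```

The jump from roughly 2/3 to roughly 99/100 is the "shift in probability" point made above.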
@PrometheeFeu and @Yancey Ward:
Another reason the Monty Hall problem is so much harder is that it is _not_ the same as the "pick 1 in a 1000" problem. Imagine if the conversation went like this...
Him: I'm thinking of a number between 1 and 1000, guess it.
Me: 432?
Him: And the probability you are right is?
Me: 1 in 1000.
Him: OK, my number is 432 or it _might be_ 629. Care to take another guess?
Me: 629
Him: And the probability of you being right is?
This is the Monty Hall problem and it doesn't seem nearly as obvious to me.
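The distinction drawn here (a host who deliberately reveals information versus one who only "might") is exactly what changes the numbers. As a hedged sketch, one common way to model the ignorant host is to have him open a random unchosen door and discard the rounds where he accidentally reveals the prize; that modeling choice is mine, not the commenter's.

```python
import random

def switch_win_rate(host_knows, trials=200_000):
    """Fraction of games won by switching, for a deliberate host
    (always opens an empty door) vs. an ignorant one (opens a random
    unchosen door; rounds revealing the prize are discarded)."""
    wins = kept = 0
    for _ in range(trials):
        prize = random.randrange(3)
        pick = random.randrange(3)
        others = [d for d in range(3) if d != pick]
        if host_knows:
            opened = others[0] if others[0] != prize else others[1]
        else:
            opened = random.choice(others)
            if opened == prize:
                continue  # host accidentally revealed the prize
        kept += 1
        remaining = next(d for d in others if d != opened)
        wins += (remaining == prize)
    return wins / kept

print(round(switch_win_rate(True), 2))   # ~0.67
print(round(switch_win_rate(False), 2))  # ~0.5
```

With the ignorant host, conditioning on an empty door being opened leaves the two remaining doors at 50/50, which is why the "might be 629" version feels, and is, different.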
I did read it 5 times before my head nodded, then wondered whether this was a question of logic or of aliens. While the logic is good, the example is poor: the premise seems like a straw-man argument for disbelieving in aliens. My view is that "embarrassing people's beliefs" (what "embarrassing" means is undefined, which appears to be a meaningful problem for this particular example, but let's accept it as defined), one way or the other, are self-evidently unrelated to the likelihood of aliens' existence. This would be similar to saying thunderstorms or the outcomes of sports games are evidence of their nonexistence.
On a similar note, since we are getting all Monty Hall-ish, which is fun: one of my reasons for having always disbelieved in the seriousness of the global-warming hypothesis is that its proponents all seemed like kooks (equally undefined, admittedly, as "embarrassing"). Does these kooks' existence, in my Bayesian world, imply that their views are uncorrelated with the likelihood of global warming's existence? Yes. Hence easily ignored.
However, when combined with my perception that those holding these views were also strongly linked politically to income-redistribution schemes, tax-subsidized research, and fairly unusual, large-scale shout-downs of those who disagreed, my view became that their research was inversely correlated with the likelihood of global warming's existence (in the Gore/Hansen view of it, at least) and also potentially dangerous. I then began paying more attention.