Arnold Kling  

Tyler Cowen Explains the "Flash Crash"


He writes,


computers still are not meta-rational. They do not understand what they do not understand very well

But could one argue that Watson is meta-rational, in that it assigns a probability to having the correct answer? Perhaps this feature should be added to other computer tasks, such as chess or trading stocks.
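Kling's suggestion can be sketched as a simple confidence-gated decision rule. This is a hypothetical illustration, not a description of how Watson actually works; the function name and threshold are my own:

```python
# Hypothetical sketch of a "meta-rational" decision rule: act only when
# the system's own confidence estimate clears a threshold, else abstain.

def meta_rational_decide(candidates, threshold=0.5):
    """candidates: list of (answer, estimated probability of being correct)."""
    best_answer, confidence = max(candidates, key=lambda c: c[1])
    if confidence >= threshold:
        return best_answer
    return None  # "I don't know" -- decline to answer rather than guess

# The top candidate is only 30% likely to be right, so the system abstains.
print(meta_rational_decide([("Toronto", 0.30), ("Chicago", 0.25)]))  # None
print(meta_rational_decide([("Chicago", 0.90), ("Toronto", 0.05)]))  # Chicago
```

The point of the sketch is that the dangerous failure mode is not a low-confidence answer but an unwarranted high-confidence one, which a rule like this cannot catch on its own.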

Note that Kasparov's evaluation of Watson, to which Tyler links, explicitly says that Watson still was not capable of knowing when it was about to make a blunder.

I agree that this is a very important issue. Any decision-maker, human or computer, that does not realize when it is not well positioned to make a good judgment is dangerous. One could say that it is this lack of meta-rationality that I find most frightening about the Obama Administration's policies. The policy makers assume that they know much more than they really do.

By the way, the flash crash happened on my birthday last year, in case you had forgotten about it (either the episode or my birthday).







COMMENTS (8 to date)
Simon writes:

MR link is missing an h at the start of http

Guy in the Veal Calf Office writes:

Metrodorus complained about this, more elegantly too, 2400 years ago: “None of us knows anything, not even this: whether we know or we do not know; nor do we know what ‘to not know’ or ‘to know’ are, nor on the whole, whether anything is or is not”

David R. Henderson writes:

Arnold,
Happy belated birthday!

Chris T writes:

This is why calls for sweeping change scare me. The odds that all consequences can be anticipated and prepared for are non-existent.

HFT Trader writes:

I don't think it's so much that computers aren't rational, but rather that the operators of those computers were not meta-rational about the rationality of those computers. Most HFT shops shut down their machines or triggered internal circuit breakers, and didn't bring the machines back up in the ~60 minutes of increasing volatility leading up to the actual 1000-point price swings.

The firms that did leave their machines on (including most of the divisions in my firm, except for one idiot execution trader who got scared and killed his desk's program) ended up making a fortune and pushing prices back to equilibrium.

It was super chaotic. The servers, which normally operate at ~10% CPU capacity, were thrashing at 100%. No one could reliably monitor their PNL until probably an hour or so later. But by and large, the algorithms, when they were allowed to run, performed beautifully. It was the operators that were dumb.
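The internal circuit breakers the commenter mentions might look roughly like this in outline. This is purely an illustrative sketch, assuming a simple rolling-window volatility check; real HFT risk controls are far more elaborate, and the class and thresholds here are invented:

```python
from collections import deque

# Hypothetical sketch of an internal circuit breaker: halt trading when
# the max price swing over a short rolling window exceeds a preset limit.

class CircuitBreaker:
    def __init__(self, window=5, max_swing=0.05):
        self.prices = deque(maxlen=window)
        self.max_swing = max_swing  # e.g. a 5% swing within the window
        self.halted = False

    def on_price(self, price):
        """Record a tick; return True if trading may continue."""
        self.prices.append(price)
        lo, hi = min(self.prices), max(self.prices)
        if lo > 0 and (hi - lo) / lo > self.max_swing:
            self.halted = True  # an operator must manually re-enable
        return not self.halted

breaker = CircuitBreaker()
for p in [100.0, 100.5, 99.8, 100.2]:
    assert breaker.on_price(p)      # normal conditions: keep trading
assert not breaker.on_price(93.0)   # ~7% swing inside the window: halt
```

Note that the halt is one-way by design: the machine stops itself, but only a human decision brings it back, which is exactly the operator judgment the commenter says was exercised badly.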

Daublin writes:

Meanwhile, the real Watson actually does estimate its confidence level for each response. If you watch the show, you can see its confidence in each answer. It uses these confidence levels both to know when to ring in and to know which of its reasoning units to use for each question.

For example, in its infamous choice of "Toronto" in the first Final Jeopardy, it put a lot of question marks on its answer, indicating that it knew it was taking a wild guess.

Really, Watson probably estimates confidence far more carefully than humans do. Watson doesn't have any social biases to worry about, and it has immense computational resources.

PrometheeFeu writes:

I once spent some time researching and working on a computerized trading system. The system we were working on attempted to predict movements, but also to predict the accuracy of its own predictions. I don't know that everyone does it, but from my research, it appears common for such systems to attempt to evaluate the certainty of their own predictions. Of course, the system can get that wrong. But so can people.

PS: My evidence here is anecdotal, showing only that SOME systems work that way.
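What PrometheeFeu describes, a predictor that also scores its own certainty, can be sketched in outline. This is a toy example of my own construction, not a description of any real trading system; the moving-average signal and the volatility-based confidence score are assumptions:

```python
import statistics

# Illustrative sketch: a toy predictor that outputs both a prediction
# (next-move direction from a moving average) and a self-assessed
# confidence that shrinks as recent volatility grows relative to the trend.

def predict_with_confidence(prices, window=5):
    recent = prices[-window:]
    trend = recent[-1] - statistics.mean(recent)
    direction = "up" if trend > 0 else "down"
    vol = statistics.pstdev(recent)
    confidence = abs(trend) / (abs(trend) + vol) if vol > 0 else 1.0
    return direction, confidence

# A steady uptrend yields "up"; choppier data would lower the confidence.
direction, conf = predict_with_confidence([100, 101, 102, 103, 104])
```

The second output is the meta-rational part: a downstream rule can refuse to trade when the confidence score is low, just as the system can get that score wrong.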

Various writes:

That is a very deep thought Arnold, and I like it. I think you are right about the Obama Administration. That is always a danger with Central Planners, especially actors with limited downside if things turn out badly. They do have a bit of an incentive to ignore the downside. As you opine, I think they probably overestimate the extent of their knowledge. There may be another factor also. These folks are politicians (i.e., salesmen) after all, so they are probably just as focused on their short- to medium-term chances of re-election as they are worried about the substantive effects of their policies.

Comments for this entry have been closed