Bryan Caplan  

Memory of Credibility

The more I learn, the more I'm amazed by people who claim to base their views on "the data alone."  It's a noble dream, but ponder these harsh realities:

1. Carefully studying data is enormously time consuming and question-specific.  If you really hold yourself to this standard, you won't be able to responsibly hold opinions on more than a handful of questions.

2. As a result of #1, people who claim to base their views on "the data alone" end up outsourcing most of their data work to others.  There's nothing fundamentally wrong with this, but note the bait-and-switch: People who claim to rely on "the data" are in fact relying on their own judgments about which sources are credible.

3. Data analysis and credibility assessment are radically different skills.  Yes, you could carefully audit an expert's data work on one issue, then broadly trust whoever performs well.  But that's only one approach.  We're at least as likely to evaluate credibility based on demeanor, word choice, and mood.  And this seems reasonable.  I delete spam because the senders radiate crazy, not because I carefully audit any of their statements.

4. On further thought, direct data analysis and credibility judgments rest on a more fundamental faculty: memory.  Even people who conduct their own data analysis don't re-do their work before speaking.  Instead, they rely on what they remember about their data analysis.  The same goes, of course, for credibility.  When we decide to trust someone, we rely almost exclusively on our memory of their trustworthiness.

5. The big lesson: Truth-seekers spend far too little time assessing the tools that underlie almost all of their judgments.  Data analysis is great, but knowing who's credible is even better - and measuring the reliability of your own memory is paramount.

6. Betting simplifies this seemingly Sisyphean process in two key ways. First, betting is a good, clean way to evaluate credibility.  Second, betting constrains selective memory: If you want to objectively judge a person's reliability - including your own reliability! - don't go with your gut.  Check out his life-long betting track record.  It's not the best possible measure, but it's probably the best feasible measure.
 



COMMENTS (8 to date)

Data requires theory which must remain debatable.
There is no such thing as data without a theory, if I fairly represent Kuhn (The Structure of Scientific Revolutions) and Uncle Freddy (The Sensory Order).

You can't report "I saw three black dots" without embracing prior theories of: "three", "black", and "dot".

So knowledge building is a kind of a bootstrapping process in which theories need to be built, increment by increment, as supporting data are recognized and codified.

Free trade better than betting
Bryan's betting may require, it seems to me, a sort of public space, a region in human relationships in which a considerable gap has grown in the views expressed. If free trade is not reducing this gap to distances inscrutable to outsiders who remain rationally ignorant of the trading, then something is blocking the force of entrepreneurship which would so reduce that gap, or so I theorize. My libertarian knee-jerk leads me to blame the state for such blocking force, but my occasional liberality allows other causes. My thinking down a path like this started with my paper The Power of Ostracism.

Edmund Nelson writes:

While betting is an extremely useful measure of someone's credibility, one issue comes with how you measure the bet. Do you weigh it by total $ won/average bet size? If so, that rewards people for getting lucky with long-odds bets. Do we use some form of Brier or log scoring rule? The issue with this is: how do you determine the exact probabilities of certain beliefs? My current measure is $ won/bet size, but wins only count if both parties are A: serious academics in the field or B: students of the field. It's also clear that measuring things through bets WON encourages a person to make a bunch of bets at mediocre odds and lose money in expectation. Bets like your 99:1 odds bet on Ron Paul becoming president. The other way works too: you made a bet about climate change which, if there is a 40% chance that you are right, would be a +EV bet, yet you would lose the bet more often than you win it. Admittedly you have a near-spotless record (with 1 bet highly probable to be going sour in the near future).

One other issue I actually ran into myself is that when you offer someone a bet, they will back off from their beliefs. Even slatestarcodex did this to some extent. I offered a 3:2 odds bet on something he argued had a 70% chance to occur (I think it has a 45% chance to occur) and he wanted to go down to even odds. I accepted, and we'll see come February who's right.

(You also seem to delete questions that aren't spam.)

[We do not delete comments that aren't spam unless the commenter has not validated his email address or the comment violates our longstanding comment civility policies.--Econlib Ed.]
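[The Brier and log scoring rules Edmund mentions can be sketched as follows. The predictions below are invented for illustration; they are not anyone's actual betting record.

```python
# Two standard ways to score a probabilistic track record.
import math

# Each entry: (stated probability that the event happens, did it happen?)
predictions = [
    (0.99, True),   # a near-certain call that came true
    (0.70, True),
    (0.60, False),
    (0.40, False),  # a "probably not" call that indeed didn't happen
]

def brier_score(preds):
    """Mean squared error between stated probability and outcome (0 = perfect)."""
    return sum((p - float(hit)) ** 2 for p, hit in preds) / len(preds)

def log_score(preds):
    """Mean log-likelihood of the outcomes (closer to 0 = better)."""
    return sum(math.log(p if hit else 1 - p) for p, hit in preds) / len(preds)

print(f"Brier score: {brier_score(predictions):.3f}")
print(f"Log score:   {log_score(predictions):.3f}")
```

Both rules are "proper": a bettor minimizes his expected penalty only by stating his true probability, which is exactly why they sidestep the lucky-long-shot problem Edmund raises with raw $ won.--ed.]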

Trevor H writes:

Cathy O'Neil, at mathbabe.org and in her new book Weapons of Math Destruction, also argues quite persuasively that data isn't really even objective. The way it's collected and structured is also influenced by cognitive biases.

The dream of basing views on the data is, I think, an attempt to purge cognitive bias, or at least to claim that it's purged. Your approach is superior: cognitive bias cannot be completely eliminated, but betting acts as a form of quality control to help you manage it.

Mike White writes:

You are on a roll or in the zone or whatever you want to call it! Always great, but really great lately. :-) Thanks.

Phil writes:

overall good post

"6. Betting simplifies this seemingly Sisyphean process in two keys ways. First, betting is a good, clean way to evaluate credibility. Second, betting constrains selective memory: If you want to objectively judge a person's reliability - including your own reliability! - don't go with your gut. Check out his life-long betting track record. It's not the best possible measure, but it's probably the best feasible measure."

should I weight track record or overall returns on investment?

seems like different answers to that get you different interesting results

if you're trusting track record, you're less likely to be getting information that's wrong

if you're trusting overall returns, you're more likely to get information that's actionably useful (you might be learning something that everyone else doesn't already know)

KenB writes:

I'm largely in agreement, but re your betting history, I don't think that's particularly informative -- you have a small number of bets and you mostly seem to bet only at a pretty high threshold of anticipated success. No way to know what probability you assigned to the event, only that you felt the other party's confidence was significantly too high.

Scott Alexander, among others, creates a yearly prediction list with percentage probabilities attached, then revisits after a year and gauges his success rate in comparison to his probabilities. That seems more helpful for gauging reliability.
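[The calibration exercise KenB describes can be sketched as bucketing predictions by stated probability and comparing each bucket's confidence to the fraction that came true. The data here is invented for illustration, not Scott Alexander's actual list.

```python
# Calibration check: group (probability, outcome) pairs by stated
# probability, then report the realized hit rate per bucket.
from collections import defaultdict

predictions = [
    (0.9, True), (0.9, True), (0.9, False),
    (0.7, True), (0.7, True), (0.7, False), (0.7, False),
    (0.5, True), (0.5, False),
]

buckets = defaultdict(list)
for p, hit in predictions:
    buckets[p].append(hit)

for p in sorted(buckets):
    hits = buckets[p]
    realized = sum(hits) / len(hits)
    print(f"stated {p:.0%}: {realized:.0%} came true ({len(hits)} predictions)")
```

A well-calibrated forecaster's 90% bucket should come true about 90% of the time; a persistent gap in either direction is exactly the over- or under-confidence KenB says a handful of high-threshold bets can't reveal.--ed.]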

James writes:

Are there any actual people that fit the criticisms here? Someone name names, please.

"Carefully studying data is enormously time consuming and question-specific. If you really hold yourself to this standard, you won't be able to responsibly hold opinions on more than a handful of questions."

I have never met anyone who claimed to rely only on the data in the sense that they personally did all of the data collection and analysis. This is a straw man.

"People who claim to rely on "the data" are in fact relying on their own judgments about which sources are credible."

Or they rely on data about the past predictive success of different data sources and analysts.

"On further thought, direct data analysis and credibility judgments rest on a more fundamental faculty: memory. Even people who conduct their own data analysis don't re-do their work before speaking. Instead, they rely on what they remember about their data analysis."

Or they rely on handwritten notes and powerpoints and spreadsheets, etc.

To be fair, when people claim to rely only on the data, this is often an aspiration rather than a frank self assessment but the issues raised here seem to miss any actual human being.

David Condon writes:

People haven't relied solely on their own memory since the invention of writing. Storing information for later use is certainly important, but you shouldn't rely on something as flimsy as memory to do it. To counter point 5: the key is to improve the organization of one's notes on problems, and then remember how to browse through those notes. It's not so important to measure the quality of the organization except to the extent that it's in doubt whether the organization of your stored information is improving. Usually you should know whether your ability to store information is improving or not. Measuring the reliability of the storage and organization systems of others is important, so that you know better whom to rely upon.
