Bryan Caplan  

How to Make Me Lose an Ideological Turing Test

I bet I would do relatively well on ideological Turing Tests.  But whenever I say the words "I bet," my mind starts to imagine losing scenarios.  If someone wanted to make me fail an ideological Turing Test, what kinds of questions would they ask?  Leading contenders:

1. Any question related to current events or personalities.  Unlike most people interested in politics, I don't follow the news.  So if you want to unmask me, ask me about something one step more obscure than Weiner's self-portrait or Schwarzenegger's love child.  Of course, it would be even easier to just ask me about sports.

2. Any question that solicits a political mood.  I may be good at crafting arguments for opposing views.  But I'm probably not very good at mimicking political emotions I can barely fathom in the first place - or telling a nuanced performance from Hestonian over-acting.  I'm especially bad at faking non-Caplanian moods like ambivalence, ennui, and angst.

3. Questions that explicitly solicit arguments.  I'm apt to get carried away, and forget that these implicitly test whether you understand what people take for granted.  Even if I keep this fact in mind, it's hard to strike a believably intermediate stance.

4. Questions that require a keen ear for contemporary sub-cultural word choice.  In some ways imitating a liberal from the 1930s would be easier for me than a liberal from today.  What's Obama-era speak for "unscrupulous money changers"?

The bright side: To win my proposed bet I merely have to be less tone deaf than the competition.

P.S. Here's Ilya Somin pretending to be liberal.

P.P.S. David Gordon critiques DeLong's attempt to pretend to be Nozickian.


COMMENTS (4 to date)
rpl writes:

It seems like the "Ideological Turing Test" is starting to experience a little mission creep. As I remember it, the test was originally proposed to explore the question of how well people can articulate the beliefs of their ideological opponents (vs. thinking and talking about them as caricatures). Giving a convincing performance of actually believing those things, of sharing common interests, or of having the same feelings is beyond the scope of the test as originally proposed.

Such things would have been in scope for Turing's test because the point of that test was to determine whether it was even possible for a machine to have feelings, interests, etc. But that wasn't what we were supposed to be after here. Rather, we were trying to get at the question of understanding.

Michael Keenan writes:

The link to David Gordon's critique mistakenly links to DeLong again. It should be

Bryan Caplan writes:

Thanks, Michael! Fixed it.

Aaron Brown writes:

Double check the David Gordon link; it's still not right ("%5C" sneaked onto the end).
