Bryan Caplan  

The Metaphorical Fallacy

Consider the following argument:

1. Cars are mechanical horses.

2. Horses are faster than walking.

3. Therefore, cars are faster than walking.

Pretty plausible, right?  Unfortunately, this argument is logically as awful as:

1. Cars are mechanical horses.

2. Horses eat oats.

3. Therefore, cars eat oats.

Both arguments are examples of what I call the Metaphorical Fallacy.  Its general form:

1. X is metaphorically Y.

2. Y is literally Z.

3. Therefore, X is literally Z.

Ludicrous, but oh so tempting - especially if you're part of a subculture that loves the metaphor in question. 

To take a not-so-random example, consider my dear friend and colleague, Robin Hanson.  He's a long-time member of the science fiction and AI subcultures.  People in these subcultures love the metaphor, "The mind is a computer."  The result: For all his brilliance, Robin says many crazy things about the mind.  Things like:

1. The human mind is a computer.

2. Computers' data can be uploaded to another computer.

3. Therefore, the human mind can be uploaded to a computer.

I say this argument is just as ridiculous as:

1. The human mind is a computer.

2. A computer will overheat if its fan breaks.

3. Therefore, the human mind will overheat if its fan breaks.

The last time I checked, human minds don't even have fans!

Maybe one day Robin's conclusion will be vindicated.  Maybe one day he'll upload his mind to a computer.  Though I seriously doubt it, I don't deny the possibility.  My objection isn't to Robin's conclusion, but to his argument.  Calling the mind a computer is just a metaphor - and using metaphors to infer literal truths about the world is a fallacy.

P.S. I was vain enough to hope that I had discovered the Metaphorical Fallacy, but at least two philosophers already beat me to the punch.


COMMENTS (23 to date)
drobviousso writes:

Your argument that the metaphor breaks down is based on a metaphor that breaks down. We can't upload a mind to a computer right now because we don't have a full mapping of the mind, not because we don't know what the elements of the mapping would be or how to represent those elements in a computer.

Also, the brain has thermoregulation functionality, and your brain breaks down if it isn't used. So there's that too.

John writes:

The mind is not literally a computer, but the mind is literally information--or at least the important bits are. And while it's impossible to upload, say, a chessboard into a computer, it's quite easy to upload the relevant information of the chessboard.

AS writes:

I suspect that the biggest appeal of mind uploading is simply wishful thinking. The actual basis for the belief? Simply that mind uploading and other aspects of transhumanism or science fiction sound really awesome, and it seems somewhat plausible. The metaphorical fallacy Bryan speaks of is simply an attempt at putting mind uploading on a rational basis. Whether or not it is actually sound is not the most important thing to the proponent of mind uploading. The most important thing is that it serves a psychological purpose: to perpetuate his faith in dubious ideas.

Kevin Dick writes:

It may be the case that Robin means the human mind is _literally_ a computer. Many people with a formal computer science background use the term "computer" differently from laypeople.

Sure, sometimes they mean the same type of physical object as laypeople. But often they mean something roughly equivalent to "information processor".

In this case, "The mind is a computer" is not a metaphor. Robin is asserting that the human mind obeys a set of information processing rules. All systems that obey such rules can provably be "uploaded" to a generic enough information processor. Therefore, human minds can be "uploaded".

Now, you can argue that the mind doesn't follow such rules or that there's some fundamental limitation against us building a generic enough information processor, but simply asserting that he's using a metaphor when in fact he isn't won't carry the argument.

drobviousso writes:

"I suspect that the biggest appeal to mind uploading is simply wishful thinking."

Probably. But if we could get it right, and endogenous growth theory is even in the right ball park about what causes growth, it would be a Big F'in Deal. Not as overtly cool as the dreams of the console cowboys, but it'll probably feed more people.

My rule of thumb is that I don't want to be the first person to test whether consciousness is portable, but I'd love to be able to spin off a few thousand free-thinking virtual machines and ask them all how we might solve some tough problem.

Andy Wood writes:

"2. A computer will overheat if its fan breaks."

This is only true for computers that use a fan for thermal regulation. Some computers use other means, such as a blood supply, sweating etc.

Paul Crowley writes:

I would like to better understand what causes highly intelligent people to come to believe that this forms some part of the argument for why uploading is plausible.

Paul Crowley writes:

In more detail: the argument for uploading doesn't depend on any special feature of the brain as compared to anything else in life or any other physical phenomenon. The assertion is that any physical phenomenon - a plant growing, a landslide, earthquakes on neutron stars, stalactite formation, whatever - is in principle amenable to simulation on a computer if we can gather enough information and throw enough computing power at it, and the brain is no different. This is broadly the Church-Turing thesis.

A friend simulates acoustics at the drilling point of oil wells. They scan the cavern the drill bit is in, reconstruct it in a computer, and simulate to predict how the acoustics will affect future drilling. This process is exactly analogous to how brain emulation might work, but no-one is asserting that the cavern is a computer.

Brandon Berg writes:

Not fans as such, but the human brain does have a convection-based cooling system, and the brain will break down if it fails.

Brandon Berg writes:

The metaphorical fallacy is just another name for a false analogy, isn't it?

Robert writes:

Kevin Dick nails it. To me the claim that human minds are computers is so obvious as to be banal. If minds are not computers then what are they? Do you think minds can do non-computable things? If so, what?

Here's another ridiculous argument:

1. A Turing machine is a computer.

2. A computer will overheat if its fan breaks.

3. Therefore, a Turing machine will overheat if its fan breaks.

The last time I checked, Turing machines don't even have fans!

F. Lynx Pardinus writes:

Step 1 should be stated something like "a mind is computationally equivalent to a computer and is thus able to be simulated by a computer." It's not a metaphor.

Paul Crowley writes:

Hanson responds:

No, I’m pretty sure that I’m saying that your mind is literally a signal processing system. Not just metaphorically; literally.

Sol writes:

I think Bryan's point is spot on. Even if you assume the human mind only performs computable operations (which is dangerously close to assuming the conclusion IMO), that doesn't mean it automatically can be uploaded and run on a digital computer.

Any digital computer can be simulated on a Turing machine. But the converse is not true, because Turing machines have an infinite amount of data storage. What if a brain is more like a Turing machine than a digital computer?
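To make the contrast concrete, here is a toy Turing machine sketch (my own illustration; the machine and its rules are invented). The tape is a dict, so it grows without bound as the head moves, unlike a physical computer's fixed memory:

```python
# Minimal Turing machine simulator. The dict-backed tape has no size
# limit, which is the feature that separates a Turing machine from any
# real, finite digital computer.

def run_tm(rules, tape, state="start", head=0, max_steps=1000):
    """rules: {(state, symbol): (new_state, new_symbol, move)}, move in {-1, +1}."""
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, 0)  # unwritten cells read as blank (0)
        state, tape[head], move = rules[(state, symbol)]
        head += move
    return tape

# A trivial machine that writes three 1s to the right and halts.
rules = {
    ("start", 0): ("one", 1, +1),
    ("one", 0): ("two", 1, +1),
    ("two", 0): ("halt", 1, +1),
}
tape = run_tm(rules, {})
print(sorted(tape.items()))  # [(0, 1), (1, 1), (2, 1)]
```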

Perhaps more practically, the amount of processing power needed to fully simulate a brain might well be prohibitive. The mind has 10^11 neurons with 10^14 analog connections between then, all running in parallel. Proponents tend to use Moore's Law to handwave that complexity away, but Moore's Law is not a law of nature, it's just a handy guide to what has happened in the past.
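A back-of-envelope calculation shows the scale involved. The synapse count is from the comment above; the firing rate and ops-per-event figures are rough assumptions of mine, not established numbers:

```python
# Rough estimate of compute needed for a synapse-level brain simulation.
# synapses comes from the comment above; firing_rate_hz and ops_per_event
# are assumed round numbers for illustration only.

synapses = 1e14        # ~10^14 connections (per the comment)
firing_rate_hz = 100   # assumed average spike rate per neuron
ops_per_event = 10     # assumed floating-point ops per synaptic event

flops = synapses * firing_rate_hz * ops_per_event
print(f"{flops:.0e} FLOP/s")  # prints '1e+17 FLOP/s'
```

Under these assumptions the simulation needs on the order of 10^17 operations per second, sustained, which is why the question of whether hardware trends continue matters so much.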

An analogy of my own: From 1900 to 1960, we went from horses to cars to planes to rockets. Everyone who thought about it figured it was obvious we'd have a base on the moon, space hotels, and flying cars by 2000. Yeah, not so much, eh?

I'm starting to suspect computing devices may be the same way. It's easy to look at the last forty years and assume there will be just as much progress in the next forty years. But we're already starting to hit the limits of current approaches.

Personally, in 1990, I just assumed we'd have strong AI by 2010. Today it seems much less likely to me that we'll have strong AI in 2032. And strong AI is more or less equivalent to brain uploading....

Alexis Gallagher writes:

Paul,

I sympathize. Similarly to you, I would like to understand what makes highly intelligent people DISbelieve uploading is possible. I suspect the answer is that topics in AI expose deep scientific/philosophical assumptions that usually go unspoken and that are re-shaped in quite counterintuitive ways by certain kinds of scientific education. For instance, I doubt it's a coincidence that most of those who see uploading as self-evidently possible have studied computer science or physics at some point.

To have a go at your question, I think you're asking: why should emulation-of-a-brain produce a genuine mind when emulation-of-a-drill does not produce genuine sound?

The reason this seems plausible to me is that, from an external perspective, the only mind-like thing that a brain produces is its informational input/output behaviors. And information is already an abstraction, not a physical thing, so there's no meaningful sense whereby "emulated information" is less real than non-emulated information. And it seems like the informational input/output behaviors of a brain can certainly be simulated with other physical substrata, since it seems like they can be modeled in purely computational terms, if necessary by emulating the actual physics of the brain.

Now from an internal perspective, I also have the feeling that a brain produces another mind-like thing besides informational input/output behaviors: it also produces the subjective experience of being a mind! I cannot account for that, and it's a deep puzzle. But I'm not convinced I need to account for it, as long as the external perspective is adequate for describing what's externally observable.

Back to the original question: why do intelligent people differ so much on this question? I suspect it's because people with backgrounds in hard science and computation have deeply absorbed the philosophical assumption that validates this external, information-theoretic perspective, while people with backgrounds in libertarian thought, in philosophy, or in quasi-philosophical disciplines like economics, are closer to the common sense folk view that the internal, subjective experience of mind is essential to intelligence.

Ted Levy writes:

"Ludicrous, but oh so tempting - especially if you're part of a subculture that loves the metaphor in question."

It's not that the subculture LOVES the metaphor, it's that the subculture LITERALIZES the metaphor, as Bryan knows from his appreciation of the work of Tom Szasz, and as is evident from the comments above...

Paul writes:

I've never understood the point of mind uploading, regardless of how possible it might be. Producing another running instance of my mental processes complete with memory might be interesting, but hardly counts as any sort of "transfer". No matter what the new running instance of 'me' might say, the 'old me' would likely have strong objections to being destroyed simply because a high-fidelity mind copy now exists somewhere else.

Other than attempting to discern how the brain works and the processes of a mind, scanning and uploading would offer few tangible benefits to this running instance of my mind.

Daublin writes:

I've never seen Robin make the argument that is attributed to him here. He could well be wrong, and in general, I appreciate Bryan's attempts to apply simple, common-sense arguments to simplify broad areas of study. In this case, though, the attack is against a complete straw man. It serves nothing.

In the interest of trying to raise the bar, let's take a look at something Robin actually says to defend brain uploads:

http://hanson.gmu.edu/uploads.html

"Imagine that before we figure out how to write human-level software, but after we have human-level hardware, our understanding of the brain progresses to the point where we have a reasonable model of local brain processes. That is, while still ignorant about larger brain organization, we learn to identify small brain units (such as synapses, brain cells, or clusters of cells) with limited interaction modes and internal states, and have a 'good enough' model of how the state of each unit changes as a function of its interactions. The finiteness and locality of ordinary physics and biochemistry, and the stability of brain states against small perturbations, should ensure that such a model exists, though it may be hard to find. [4]

"Imagine further that we learn how to take apart a real brain and to build a total model of that brain -- by identifying each unit, its internal state, and the connections between units. [5] A 'good enough' model for each unit should induce in the total brain model the same general high-level external behavior as in the real brain, even if it doesn't reproduce every detail. That is, if we implement this model in some computer, that computer will 'act' just like the original brain, responding to given brain inputs with the same sort of outputs."

This doesn't look remotely like Bryan's argument from metaphor.
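The local-unit scheme the quoted passage describes can be caricatured in a few lines of code. Everything here is a made-up illustration: the units, the wiring, and the update rule stand in for the empirically fitted "good enough" models Hanson imagines, and a real emulation would involve billions of units:

```python
# Toy sketch of the "local brain units" model from the quoted passage:
# each unit holds an internal state, and its next state is a fixed
# function of its own state and its neighbors' states. All numbers and
# the update rule are invented for illustration.

def step(states, connections, weights):
    """One synchronous update of every unit's state."""
    new_states = {}
    for unit, state in states.items():
        inputs = sum(weights[(src, unit)] * states[src]
                     for src in connections.get(unit, []))
        new_states[unit] = max(0.0, 0.5 * state + inputs)  # arbitrary local rule
    return new_states

# Three units wired in a loop: a -> b -> c -> a.
states = {"a": 1.0, "b": 0.0, "c": 0.0}
connections = {"b": ["a"], "c": ["b"], "a": ["c"]}
weights = {("a", "b"): 0.8, ("b", "c"): 0.8, ("c", "a"): 0.8}
for _ in range(3):
    states = step(states, connections, weights)
```

The point of the quoted argument is that nothing in this loop needs to know anything about the system's global organization; each unit's update depends only on local state and local inputs.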

bah writes:

"Calling the mind a computer" is not talking philosophy.

When I say the mind is a computer, and you make arguments about heating and fans... bah.

You are not worth it

AgentHunt writes:

What is consciousness?
-> Neurons firing, taking input from external systems and feeding it to complex neural networks.
Networks so complex that they can produce infinite outputs from a single input.
It is recursion: we take in information and give it to ourselves.
It is continuous recursion.
We call it consciousness.

Consciousness is no different from a stone rolling down a hill.
Consciousness, it is trivial.

it was bound to happen.

Assuming the laws of physics exist, and the universe has existed for (trillion x 10^999...) years:
*Just as a stone on a mountain slope has a good probability of rolling down, it is no surprise that complex exotic reactions (that formed elements from energy... see fusion) led to the formation of things like complex proteins, which overlapped and folded on each other to give DNA.

This would require a really long time.

A stone rolls over a slope because of the laws of physics: it has motion, it does something, it degenerates on the way down, it breaks some pieces off, it disturbs the earth, it breaks other stones, it causes vibrations in the air. Technically it is as conscious as we are.

Compared to what is there in the universe, consciousness is trivial.


hanmeng writes:

Are similes better than metaphors?

A Thermodynamic Limit on Brain Size

If our brains have to be cooled like computer chips, is there a limit on how big they can be?
[Jan Karbowski at the Sloan-Swartz Center for Theoretical Neurobiology at the California Institute of Technology] points out that brain cooling is not a classic problem of surface-area to volume. Instead, brain cooling is more closely comparable to that in a combustion heat engine where a liquid coolant removes heat... This implies that the thermodynamics of heat balance does not restrict the brain size. And this in turn suggests that brains could be heavier than 5 kg, says Karbowski.

Drewfus writes:

There are three levels of reality:

3. Virtual
2. Logical
1. Physical

"The human mind is a computer" is a level 2 claim, if not level 3.

"A computer will overheat if its fan breaks" is a level 1 claim.

Bryan, I claim you have made a category error.

The blood supply of the brain cools the brain's neurons - this is the equivalent of a computer's fan(s) cooling its transistors. Please remember to make physical/logical/virtual distinctions in your definitions and comparisons. That includes within microeconomics.
