Bryan Caplan  

Is It Really Conscious?

In The Age of Em, Robin Hanson tries to sidestep philosophical questions, especially, "Would your Artificial Intelligences (AIs) actually be conscious?"  But if you seek to evaluate the world of the future, everything revolves around this infamous philosophical challenge.  Suppose AIs perfectly simulate thoughts and feelings even though they experience neither.  If so, claims about their "welfare" are nonsense.  Giving up human welfare to "make AIs better-off" is nonsense on stilts.  And valuing a world with trillions of AIs and zero humans over the status quo could well be the greatest normative error ever made.

Faced with this hypothetical, most AI optimists reply, "Sure, if AIs were dead inside, you'd be right.  But that's a big if.  Why should we even take the idea seriously, much less believe it?"  Rather than rehash the p-zombie literature, let's start from first principles.

1. No one literally observes anyone's thoughts and feelings but his own.  I can see my own happiness via introspection - even with my eyes closed.  When I look at other people, I can see smiling faces, but never actually see their happiness.

2. This gives rise to the classic Problem of Other Minds.  How, if at all, can you justify your belief that anything besides yourself has thoughts and feelings?  Since you never observe others' thoughts and feelings directly, you can only know them by inference.

3. This is no trivial inference, because the world is full of things that suggest thoughts and feelings even though virtually everyone is virtually sure they don't have thoughts and feelings.  Take a diary.  It claims to have thoughts.  It claims to have feelings.  But that's not enough to convince us it has thoughts and feelings.  In fact, there's nothing a book could say that would convince us it's conscious.

4. The same goes for a long list of things: Movies, TV shows, tape recordings, audio files, runes carved on stone.  Whatever they show, whatever they say, you'll interpret it as mere appearance of consciousness, not the real deal.

5. The same applies to things that provide contingent output: Choose Your Own Adventure novels, single-player videogames, Ouija boards, thermostats.  When you increase the difficulty level in a Chess game, you don't wonder if the computer or program will start to think thoughts and feel emotions.

6. Many of the things we're virtually sure aren't conscious express more complex thoughts and feelings than the typical human.  Like War and Peace.  Many of the things we're virtually sure are conscious express far simpler thoughts and feelings than the typical human.  Like a mouse.

7. The vast majority of things that never express thoughts and feelings don't seem conscious to us.  Even things that look visibly human, like people in long-term comas.  If we learned that a monk had once taken a vow of silence, however, his subsequent failure to speak would not make us doubt his continued consciousness.  If he publicly took a vow of expressionlessness, similarly, his subsequent lack of facial expressions would barely change our minds about his inner states.

8. At this point, there are two ways to swiftly and permanently end contemplation of the Problem of Other Minds.  The first is solipsism - to say either, "I have no idea if anything other than myself is conscious" or even "I alone am conscious."  The second is materialism - to say either, "I have no idea if anything, including myself, is conscious" or even "Nothing is conscious."  Both views are so absurd there's little point arguing with convinced adherents.

9. Any sensible position on the Problem of Other Minds, then, must begin with a clear affirmation that (a) you are definitely conscious, combined with (b) some way of inferring that other things are conscious, even though (c) many unconscious things closely mimic conscious things.

Most of the AI enthusiasts I've encountered think the Problem of Other Minds is simple: If it quacks like a duck, it's a duck; if a machine acts like it has thoughts and feelings, it has thoughts and feelings.  But how can this simple solution be right when the world is already full of duck calls?

In the near future, I'll offer my solution to the Problem of Other Minds - a solution that strongly suggests AIs are no more conscious than Choose Your Own Adventure novels.  Maybe I'm wrong.  But I'm not wrong to claim that AI fans' impatient responses to the Problem of Other Minds need a lot more curiosity and a lot more work.

COMMENTS (18 to date)
Andrew Clough writes:

You seem to be contending that AI enthusiasts believe that books are conscious but that's so far from my observations that I think I must be misunderstanding you. My experience with AI enthusiasts is that they generally have tests in mind that would exclude books. If you don't think that AI enthusiasts literally believe that books are conscious then maybe a different line of argument would be more productive in refuting them?

Aaron Franklin writes:

I really don't understand why theory of mind philosophers totally disregard everything we know about how humans detect conscious intent.

Social skills require some form of empathy and/or skill at detecting manipulation. Evolution taught us how to detect consciousness.

We mainly do this by mirroring and other social tests in our day-to-day life (we even use dynamic displays as social tests - such as handshake grip timing, eye contact timing, and dancing). And we create new tests all the time (an Em would need to beat humans at our own evolutionary game, not just Chess or Go).

Basically, we are Bayesian thinkers. We are trying to infer intent and the "other's" possible intention to change their action (in a dynamic human environment). We look for a threshold amount of information.

War and Peace suggests a conscious mind was its author by how many statistical points it racks up. One point in its favor is that it can write down what many of us experience internally, with a new and original author concretely expressing how we feel about new things.

Bedarz Iliaci writes:

The Problem of Other Minds does not exist, no more than the Problem of External Reality or the Problem of Free Will.

If other minds do not exist, whom am I talking to? Whom am I arguing with? When I write a paper arguing for something, surely I presume other minds whom I seek to persuade.

Thus, any sane reasoning presumes that other minds exist and also external reality exists.

RAD writes:

I like this idea of contemplating consciousness in terms of movie monsters/machines:

1. zombies
2. werewolves
3. vampires
4. Commander Data (Star Trek: The Next Generation)
5. avatars

Zombies don't talk/think and their pain sensors don't seem to function. Werewolves don't talk/think like humans, that is, they don't have the inner voice that philosophers often consider the basis for "human consciousness". Vampires have all the physical pleasure/pain sensors hooked up and have a human inner voice, but their emotional response to social interactions follows different rules (as does their physiology). Commander Data doesn't seem to have physical pleasure/pain sensors and he "suffers" from lack of emotional response to social situations. Avatars are like split Ems with the ability to plug in higher-level human thought/voice remotely.

I don't think Hanson side-steps the philosophical consciousness question. His giant assumption that whole brain emulation will just work covers the bases. With whole brain emulation he can assume that his Ems will have inner voice and emotional responses to social situations.

Matt Skene writes:

RAD, I don't think Bryan will (or should) accept the claim that whole brain emulation allows Hanson to assume that his Ems will have an inner voice or emotions. You can emulate brain functions in all sorts of ways. Ned Block's Chinese Nation example illustrates quite well what Caplan has in mind. In his case, he does it with a robot sending radio signals to everyone in China, each of whom pushes buttons to send back signals to the appropriate receivers to copy synaptic activity. Block thinks it's weird to think you've just made a mind out of a billion isolated people pushing buttons. You can duplicate all of the activity of the brain without creating anything that seems even remotely likely to have emotions or an inner voice. Even if you perfectly duplicated brain activity in this way, there is nothing like a sensation anywhere in here. For that matter, there's nothing like a sensation anywhere in the brain, either, but that's a more substantive point than Bryan needs here.

Tom Davies writes:

I take issue with your statement c) "many unconscious things closely mimic conscious things." As you say yourself, *no-one* thinks books, CYOA games, video games or chess programs are conscious, and that's because they do not 'closely mimic' conscious things. To 'closely mimic' a conscious thing would require rich, varied responses to communication from other conscious things.

I think it is obvious that other human beings are conscious -- I know for a fact that they are constructed physically exactly as I am, so it would be perverse to say that they do not experience the same things I do.

Given that EMs will process information exactly as I do, I believe they will be conscious too.

If you perform the thought experiment where your brain is progressively replaced with an emulation one neurone at a time, at what point do you expect to 'lose consciousness'?

I believe that passing the Turing test is sufficient to classify something as conscious, so I can imagine that other AIs will also be conscious, but I think you can discount the Turing test and still have very good reasons to consider EMs conscious.

Javier writes:

I think it might be possible to replicate exactly the same brain. But then, in order to have consciousness, you need to explain whether you can have millions of identical bodies with the same consciousness (as far as I know, no human has consciousness in more than one body), and also whether you can have consciousness of bodies from the past or the future.

That is the same as asking, "Why do I happen to be me and not Bryan Caplan? Why do I have consciousness of exactly this body at this time and not somebody else's, or at another time?"

The only solution to that is that there must be some kind of space-time sensor in the neurons dedicated to consciousness. And every configuration has one. Still it is weird to have consciousness.

RAD writes:

Matt, if whole brain emulation does not support inner voice and emotional responses to social situations, then I don't think it is whole brain emulation; it's something else.

I like Caplan's enquiry and I'm highly skeptical of Hanson's faith in whole brain emulation. My only nitpick with Caplan is that whole brain emulation by definition should include "the stuff of thought," which lets Hanson off the hook in terms of explaining the consciousness of Ems. He doesn't have to explain how whole brain emulation will work; he just assumes that it will work when you hook it up the same way as the emulated brain.

DZ writes:

Is it in any way problematic that humans get to define the criteria for granting moral privileges to other potential forms of intelligent life? AI may surely never have consciousness the way a biological human does, but does that make our own moral criteria absolute? For example, we kill animals and eat them because, though they are alive and feel pain, (1) their cognitive abilities are inferior, (2) they can't participate in generative computation (creating new concepts out of learned rules), (3) they don't appear to have abstract thought, (4) they can't enter into legal or moral contracts...etc. We see these differences between humans and animals, and thus these are the criteria for granting some form of rights. Similarly, we look for any difference we can with AI...and those become the new criteria for granting rights. It seems presumptuous for humans to continue conveniently defining the criteria for rights as those characteristics which have been specifically identified as different from humans. Just my thoughts.

Matt Skene writes:

Whole brain emulation would only include the "stuff of thought" by definition if physicalism is true by definition. Physicalism isn't true at all, but even those who think it is don't think it's true by definition. You need to give some reason to believe that it would have these results, especially since it seems like there are cases of what people have in mind by brain emulation that don't seem to produce them.

Dave writes:

It might help to look at this in terms of Turing's imitation game. The Turing test is widely misunderstood; its essence is that the test subject can convince a skeptical human judge of its humanity.

The test subject needs a sophisticated ability to interact, respond appropriately to conversational gambits, follow complex threads of thought. Static objects (books, movies, CYOA) obviously do not qualify. Tom Davies said something similar earlier. I would also echo Andrew Clough's remark that this reading seems so preposterous that there must be an elementary miscommunication.

RAD writes:

Matt, I think we are mostly on the same page; your issues are exactly my issues with whole brain emulation. Maybe I'm oversimplifying or misunderstanding Hanson's views, but I believe his premise is that we don't need to understand AI from first principles; you only need to be able to emulate a neuron and its connections, and have a mechanism to measure the total state of the neurons/connections in a specific brain.

Implement the neural map in some kind of artificial neural matrix and voila, mind. You have to hook up sensors/actuators, etc., but you don't have to understand how neurons implement language; you just have to instantiate the neural map of a brain that already knows language.

I don't buy it, but if you grant Hanson the premise then I think you have to grant him language/emotion too.

Dan Fitch writes:

The final argument hinted at near the end of the post sounds an awful lot like Searle's Chinese Room thought experiment, which doesn't convince "AI optimists" very well at all. I'm interested to see how Bryan's argument differs.

Consciousness is context-dependent and I think one of the most amazing things about it is that we still have ongoing philosophical arguments about what it means to be conscious, like this one. We live in interesting times!

Ryan writes:

Using philosophy to argue whether AI is conscious is like using math to argue whether the sky is blue. Of course math is useful in answering that question, but only because we have a model for what light is, a theory for how it behaves when travelling through the atmosphere, and good empirical data on the contents of that atmosphere. Most importantly, we have a physical definition of what it means for something to be blue. Without that definition, it's a waste of time to argue--just look up and check if the sky seems blue.

Once we have useful theories of what consciousness is, we can predict which AIs will be conscious and which won't. Until then, if it seems like a duck, then as far as I can tell it might be a duck.

austrartsua writes:

The duck quack argument stands solid. Nothing in the world today "quacks" like a fully conscious adult human being - other than a fully conscious adult human being. Show me another example. Even the most advanced AI programs pale in comparison.

Mice are not a good example. I would not say that we know for sure they are conscious.

Ken P writes:

I have a problem with the book analogy. The "decider" is the reader, not the book.

In general, I think the biggest stumbling block to developing AI as well as people guessing what is possible is the tendency to picture the AI as following a decision tree.

That approach works well for specialized AI like Watson, but if you want general AI and especially for consciousness, I would expect the need for an emergent thought process - not a decision tree with an external locus of control.

I picture the human mind as a seemingly infinite set of agents making their own Bayesian predictions that on the whole are quite tentative. Strokes, brain surgeries, etc. reduce consciousness by creating a smaller subset of collaborating neurons.

I'm currently somewhat agnostic about the potential for machine consciousness, but more optimistic than most. I think it is possible and that most current pursuits are prone to failure by design. Eventually, I believe we will stop trying to get off the ground with bicycles.

Brian Holtz writes:

Strawman arguments, with zero hint that Caplan is familiar with


Tom West writes:

If we manage to get to a point where we can create and destroy hundreds of things each day that appear to be conscious, how quickly do we begin to think that there's nothing particularly sacred about the equivalent biological system?

I'm not certain the world would be improved by an increase in belief in that sort of hard-core materialism.
