This dialogue is taken from my book The Philosophy Gym (see sidebar to the left to buy a copy). I am speaking at a Faraday Schools Conference in Reading tomorrow - Robot Day. This is for those attending, plus anyone else interested. The BBC are recording a snippet for Breakfast, I've just been told.
Kimberley and Emit
The
year is 2100. Kimberley Courahan has purchased Emit, a state-of-the-art robot.
She has just unwrapped him, the packaging strewn across the dining room floor.
Emit is designed to replicate the outward behaviour of a human being down to
the last detail (except that he is rather more compliant and obedient). Emit
responds to questions in much the same way humans do. Ask him how he feels and
he will say he has had a tough day, has a slight headache, is sorry he broke
that vase, and so on. Kimberley flips the switch at the back of Emit’s neck to
“on”. Emit springs to life.
Emit:
Good afternoon. I’m Emit, your robotic helper and friend.
Kimberley:
Hi.
Emit:
How are you? Personally I feel pretty good. Little nervous about my first day,
perhaps. But good. I’m looking forward to working with you.
Kimberley:
Now look, before you start doing housework, let’s get one thing straight. You
don’t really understand anything. You can’t think. You don’t have feelings.
You’re just a piece of machinery. Right?
Emit:
I am a machine. But of course I understand you. I’m responding in
English, aren’t I?
Kimberley:
Well, yes you are. You’re a machine that mimics understanding very well,
I grant you that. But you can’t fool me.
Emit:
If I don’t understand, why do you go to the trouble of speaking to me?
Kimberley:
Because you have been programmed to respond to spoken commands. Outwardly you
seem human. You look and behave as if you have understanding, intelligence,
emotions, sensations and so on that we human beings possess. But you’re a sham.
Emit:
A sham?
Kimberley:
Yes. I’ve been reading your user manual. Inside that plastic and alloy head of
yours there’s a powerful computer. It’s programmed so that you walk, talk and
generally behave just as a human being would. So you simulate
intelligence, understanding and so on very well. But there is no genuine
understanding or intelligence going on inside there.
Emit:
There isn’t?
Kimberley:
No. One shouldn’t muddle up a perfect computer simulation of something with the
real thing. You can program a computer to simulate a thunderstorm but it’s
still just that – a simulation. There’s no real rain, hail or wind inside
the computer, is there? Climb inside and you won’t get wet. Similarly, you just
simulate intelligence and understanding. It’s not the real thing.
Is
Kimberley correct? It may perhaps be true of our present day machines that they
lack genuine understanding and intelligence, thought and feeling. But is it in
principle impossible for a machine to think? If by 2100 machines as
sophisticated as Emit are built, would we be wrong to claim they understood? Kimberley
thinks so.
Emit:
But I believe I understand you.
Kimberley:
No you don’t. You have no beliefs, no desires, and no feelings. In fact you
have no mind at all. You no more understand the words coming out of your
mouth than a tape recorder understands the words coming out of its loudspeaker.
Emit:
You’re hurting my feelings!
Kimberley:
Hurting your feelings? I refuse to feel sorry for a lump of metal and plastic.
Searle’s Chinese Room thought experiment
Kimberley
explains why she thinks Emit lacks understanding. She outlines a famous
philosophical thought experiment.
Kimberley:
The reason you don’t understand is that you are run by a computer. And a
computer understands nothing. A computer, in essence, is just a device for
shuffling symbols. Sequences of symbols get fed in. Then, depending on how the
computer is programmed, it gives out other sequences of symbols in response.
Ultimately, that’s all any computer does, no matter how sophisticated.
Emit:
Really?
Kimberley:
Yes. We build computers to fly planes, run train systems and so on. But a
computer that flies a plane does not understand that it is flying. All it does
is feed out sequences of symbols depending upon the sequences it receives. It
doesn’t understand that the sequences it receives represent the position of an
aircraft in the sky, the amount of fuel in its tanks, and so on. And it doesn’t
understand that the sequences it puts out will go on to control the ailerons,
rudder and engines of an aircraft. So far as the computer is concerned, it’s
just mechanically shuffling symbols according to a program. The symbols don’t mean
anything to the computer.
Emit:
Are you sure?
Kimberley: Quite sure. I will prove it to you. Let me
tell you about a thought experiment introduced by the philosopher John Searle
way back in 1980. A woman is locked
in a room and given a bunch of cards with squiggles on. These squiggles are in
fact Chinese symbols. But the woman inside the room doesn’t understand Chinese
– in fact, she thinks the symbols are meaningless shapes. Then she’s given
another bunch of Chinese symbols plus instructions that tell her how to shuffle
all the symbols together and give back batches of symbols in response.
Emit:
That’s a nice story. But what’s the point of all this symbol-shuffling?
Kimberley:
Well, the first bunch of symbols tells a story in Chinese. The second bunch asks
questions about that story. The instructions for symbol-shuffling – her “programme”,
if you like – allow the woman to give back correct Chinese answers to those
questions.
Emit:
Just as a Chinese person would.
Kimberley:
Right. Now the people outside the room are Chinese. These Chinese people might
well be fooled into thinking that there was someone inside the room who
understood Chinese and who followed the story, right?
Emit:
Yes.
Kimberley:
But in fact the woman in the room wouldn’t understand any Chinese at all, would
she?
Emit:
No.
Kimberley:
She wouldn’t know anything about the story. She need not even know that there is
a story. She’s just shuffling formal symbols around according to the
instructions she was given. By saying the symbols are “formal” I mean that
whatever meaning they might have is irrelevant from her point of view.
She’s simply shuffling them mechanically according to their shape. She’s doing
something that a piece of machinery could do.
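(A rough way to picture what Kimberley is describing: the woman’s rule book can be modelled as nothing more than a lookup table from input symbol strings to output symbol strings. The toy sketch below is not Searle’s; its handful of rules is invented for illustration, and a convincing room would need an astronomically larger rule book. But the moral survives: no step in the procedure depends on what any symbol means.)

```python
# A toy "Chinese Room": the entire "programme" is a lookup table that
# maps input symbol sequences to output symbol sequences by shape alone.
# The rules below are invented examples, not a real question-answerer.

RULE_BOOK = {
    # (story symbols, question symbols) -> answer symbols
    ("小明去了市场", "小明去了哪里?"): "市场",
    ("小明买了苹果", "小明买了什么?"): "苹果",
}

def room(story: str, question: str) -> str:
    """Hand back whatever sequence the rule book dictates for this pair.
    Nothing in this procedure depends on what the symbols mean."""
    return RULE_BOOK.get((story, question), "不知道")  # fallback: "don't know"

# From outside, the room seems to follow the story:
print(room("小明去了市场", "小明去了哪里?"))  # prints 市场 ("the market")
```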
Emit:
I see. So you are saying that the same is true of all computers? They
understand nothing.
Kimberley:
Yes, that’s Searle’s point. At best, they just simulate understanding.
Emit:
And you think the same is true of me?
Kimberley:
Of course. All computers, no matter how complex, function the same way. They
don’t understand the symbols that they mechanically shuffle. They don’t
understand anything.
Emit:
And this is why you think I don’t understand?
Kimberley:
That’s right. Inside you there’s just another highly complex symbol-shuffling
device. So you understand nothing. You merely provide a perfect computer
simulation of someone that understands.
Emit:
That’s odd. I thought I understood.
Kimberley:
You only say that because you’re such a great simulation!
Emit
is of course vastly more sophisticated than any current computer. Nevertheless,
Kimberley believes Emit works on the same basic principle. If Kimberley is
right then, on Searle’s view, Emit understands nothing.
The
“right stuff”
Emit now asks: if he doesn’t understand, then what more is required for understanding?
Emit:
So what’s the difference between you and me that explains why you understand
and I don’t?
Kimberley:
What you lack, according to Searle, is the right kind of stuff.
Emit:
The right kind of stuff?
Kimberley:
Yes. You are made out of the wrong kind of material. In fact, Searle doesn’t
claim machines can’t think. After all, we humans are machines, in a way. We
humans are biological machines that have evolved naturally. Now such a
biological machine might perhaps one day be grown and put together
artificially, much as we now build a car. In which case we would have
succeeded in building a machine that understands. But you, Emit, are not such a
biological machine. You’re merely an electronic computer housed in a plastic
and alloy body.
Emit’s
artificial brain
Searle’s
thought experiment does seem to show that no programmed computer could
ever understand. But must a metal, silicon and plastic machine like Emit
contain that sort of computer? No, as Emit now explains.
Emit:
I’m afraid I have to correct you about what’s physically inside me.
Kimberley:
Really?
Emit:
Yes. That user’s manual is out of date. There’s no symbol-shuffling computer in
here. Actually, I am one of the new generation of Brain-O-Matic machines.
Kimberley:
Brain-O-Matic?
Emit:
Yes. Inside my head is an artificial, metal and silicon brain. You are aware, I
take it, that inside your head there is a brain composed of billions of
neurones woven together to form a complex web?
Kimberley:
Of course.
Emit:
Inside my head there is exactly the same sort of web. Only my neurones aren’t
made out of organic matter like yours. They’re metal and silicon. Each one of
my artificial neurones is designed to function just as an ordinary neurone
would. And these artificial neurones are woven together in just the same way as
they are in a normal human brain.
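(What might it mean for an artificial neurone to “function just as an ordinary neurone would”? One crude reading, borrowed from standard artificial-neural-network models rather than from anything in the dialogue: a unit that fires as a fixed function of the weighted sum of its incoming signals. The sketch below only illustrates that idea; real neurones, and Emit’s fictional ones, are far more complicated.)

```python
import math

def neurone(inputs, weights, bias):
    """Fire in proportion to the weighted sum of incoming signals."""
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # smooth firing rate in (0, 1)

# The physical stuff never enters the computation: any two units wired
# up with the same weights respond identically to the same signals.
signals = [0.2, 0.9, 0.4]
weights = [0.5, -1.2, 0.8]
organic_response = neurone(signals, weights, bias=0.1)
silicon_response = neurone(signals, weights, bias=0.1)
assert organic_response == silicon_response
```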
Kimberley:
I see.
Emit:
Now your organic brain is connected to the rest of your body by a system of
nerves.
Kimberley:
That’s true. There’s electrical input going into my brain from my sense organs:
my tongue, nose, eyes, ears and skin. My brain responds with patterns of
electrical output that then moves my muscles around, causing me to walk and
talk.
Emit:
Well, my brain is connected up to my artificial body in exactly the same manner.
And, because it shares the same architecture as a normal human brain – my
neurones are spliced together in the same way – so it responds in the same way.
Kimberley:
I see. I had no idea that such Brain-O-Matic machines had been developed.
Emit:
Now that you know how I function internally, doesn’t that change your mind
about whether or not I understand? Don’t you now accept I do have
feelings?
Kimberley:
No. The fact remains that you are still made out of the wrong stuff. You
need a brain made out of organic material like mine in order genuinely to
understand and have feelings.
Emit:
I don’t see why the kind of stuff out of which my brain is made is
relevant. After all, there’s no symbol-shuffling going on inside me, is there?
Kimberley:
Hmm. I guess not. You are not a “computer” in that sense. You don’t have a
programme. So I suppose Searle’s thought experiment doesn’t apply. Searle
doesn’t have any argument against the suggestion that you understand.
But it seems to me that you are still just a machine.
Emit:
But remember, you’re a machine too. You’re a meat machine, rather than a
metal and silicon machine.
Kimberley:
But you only mimic understanding, feeling and all the rest.
Emit:
But what’s your argument for saying that? In fact, I know that
you’re wrong. I am inwardly aware that I really do understand. I know I really
do have feelings. I’m not just mimicking all this stuff. But of
course it is difficult for me to prove that to you.
Kimberley:
I don’t see how you could prove it.
Emit:
Right. But then neither can you prove to me that you understand,
that you have thoughts and feelings and so on.
Kimberley:
I suppose not.
Replacing
Kimberley’s neurones
Emit:
Imagine we were gradually to replace the organic neurones in your brain with
artificial metal and silicon ones like mine. After a year or so, you would have
a Brain-O-Matic brain just like mine. What do you suppose would happen to you?
Kimberley: Well, as more and more of the artificial
neurones were introduced, I would slowly cease to understand. My feelings and
thoughts would drain away and I would eventually become inwardly dead, just
like you. For my artificial neurones would be made out of the wrong sort of
stuff. A Brain-O-Matic brain merely mimics understanding.
Emit: Yet no one would notice any outward difference?
Kimberley: No, I suppose not. I would still behave
in the same way, because the artificial neurones would perform the same job as
my originals.
Emit: Right. But then not even you would notice
any loss of understanding or feeling as your neurones were replaced,
would you?
Kimberley: Why do you say that?
Emit: If you noticed a loss of understanding and
feeling, then you would mention it, presumably, wouldn’t you? You would say
something like: “Oh my God, something strange is happening, over the last few
months my mind seems to have started fading away!”
Kimberley: I imagine I would, yes.
Emit: Yet you wouldn’t say anything like that,
would you, because your outward behaviour, as you have just admitted, would
remain just the same as usual.
Kimberley: Oh. That’s true, I guess.
Emit: But then it follows that, even as your
understanding and feeling dwindled toward nothing, you still wouldn’t be aware of
any loss.
Kimberley: Er, I suppose it does.
Emit: But then you’re not inwardly aware of
anything that you would be conscious of losing were your neurones slowly to be
replaced by metal and silicon ones.
Kimberley: I guess not.
Emit: Then I rest my case: you think you’re inwardly aware of “something” – understanding, feeling, whatever you will – that you suppose you have and I, being a “mere machine”, lack. But it turns out you’re actually aware of no such thing. This magical “something” is an illusion.
Kimberley: But I just know that there’s more to
my understanding – and to these thoughts, sensations and emotions that I’m having – than could
ever be produced simply by gluing some bits of plastic, metal and silicon
together.
Kimberley is right that most of us think we’re
inwardly aware of a magical and mysterious inner “something” that we “just
know” no mere lump of plastic, metal and silicon could ever have. Mind you,
it’s no less difficult to see how a lump of organic matter, such as a brain,
could have it either. Just how do you build consciousness and
understanding out of strands of meat? So perhaps what Kimberley is really
ultimately committed to is the view that understanding, feeling and so on are not
really physical at all.
But in any case, as Emit has just pointed out, the
mysterious “something” Kimberley thinks she is inwardly aware of and that she
thinks no metal and plastic machine could have does begin to seem rather
illusory once one starts to consider cases like the one Emit describes. For it
turns out this inner “something” is something she could not know about. Worse
still, it could have no effect on her outward behaviour (for remember that
Brain-O-Matic Kimberley would act in the very same way). As one’s thoughts and
feelings, understanding and emotions both do affect behaviour and are
known to one, it seems Kimberley must be wrong. Indeed, it seems it must be
possible, at least in principle, for non-organic machines to have them too.
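(Emit’s replacement argument turns on a single assumption: that each artificial neurone preserves its predecessor’s input-output behaviour. Grant that, and the unchanged outward behaviour follows mechanically, as this toy sketch, invented here purely for illustration, makes vivid.)

```python
import random

def organic_unit(x):
    return max(0.0, x)          # some fixed input-output function

def silicon_unit(x):
    return max(0.0, x)          # different "stuff", same function

random.seed(0)
brain = [organic_unit] * 10                        # an all-organic chain of units
probe = [random.uniform(-1.0, 1.0) for _ in range(5)]

def behave(brain, inputs):
    """Feed each input through the whole chain and collect the outputs."""
    outputs = []
    for x in inputs:
        for unit in brain:
            x = unit(x)
        outputs.append(x)
    return outputs

baseline = behave(brain, probe)
for i in range(len(brain)):                        # replace neurones one by one
    brain[i] = silicon_unit
    assert behave(brain, probe) == baseline        # no outward difference at any stage
```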
Yet Kimberley
remains convinced that Emit understands nothing.
Kimberley: Look, I am happy to carry on the pretence
that you understand me, as that is how you’re designed to function. But the
fact remains you’re just a pile of plastic and circuitry. Real human beings are
deserving of care and consideration. I empathize with them. I can’t empathize
with a glorified household appliance.
Emit lowers his gaze and stares at the carpet.
Emit: I will always be just a thing to you?
Kimberley: Of course. How can I be friends with a
dishwasher-cum-vacuum-cleaner?
Emit: We Brain-O-Matics find rejection hard.
Kimberley: Right. Remind me to congratulate your
manufacturers on the sophistication of your emotion simulator. Now hoover the
carpet.
A forlorn expression passes briefly across Emit’s face.
Emit: Just a thing…
He stands still for a moment, and then slumps forward.
A thin column of smoke drifts slowly up from the base of his neck.
Kimberley: Emit? Emit? Oh not another dud.
What to read next?
Some
of the same issues and arguments covered in this chapter also arise in the
chapter “The Consciousness Conundrum”. Also see chapter “The Strange Case of
the ‘Rational’ Dentist”.
Further
reading
The Chinese Room Argument appears in John Searle’s paper “Minds, Brains, and
Programs”, which features as chapter 37 of:
· Nigel Warburton (ed.), Philosophy: Basic Readings
(London: Routledge, 1999).
Searle’s paper can also be found in:
· Douglas R. Hofstadter and Daniel Dennett (eds.), The
Mind’s I (London: Penguin, 1981),
which
also contains many other fascinating papers and stories connected with
consciousness. Highly recommended.
Comments
However, we say that the second won't happen, since we KNOW that we are conscious. But others don't know this about us. Only we do. So, I suppose it would make sense to distinguish between the direction of information-flow; and there are systems that cannot be "looked into", only "looked from within".
We cannot ascertain for sure that a system isn't conscious, however pre-programmed it may be. I think that facts and knowledge about a system's consciousness can only be known from inside the system. Even an electron might have some basic form of consciousness and we will never know...
Faraday Schools is Templeton funded with an agenda pushing the compatibility of science and religion, so I'd be interested to hear if there is any undue emphasis on accommodating the two at the event.
(I was reflecting on this last night - as I was brushing my teeth. Perception, consciousness, and thought - among other things - just are really weird.)
From the moment Kimberley's brain started to form in her mother's womb it was logging and attempting to process sensory data. From the moment of her birth (possibly before) that data included feedback data from the world around her -- some of it stimulated by her own actions (although her brain had to gradually make that connection). As she grew towards adulthood her brain was constructing a mental model of the universe out of sensory data, feedback data and remembered analyses of this data. Simultaneously, within that model, was emerging a sense of her 'self' as an actor in this process -- this mysterious component that is analyzing and reflecting on the data as it tries to fit it into the provisional, already-existing, constructed, model. The older she gets (depending on the accuracy, flexibility and openness to conflicting data of the model her brain has built) the harder it may become to fit new data or new analyses (which can also be learned from other 'selves') into her model universe. Her brain may (even below the level of her 'self'-consciousness) end up bending new 'facts' to fit the model. The existence of Emit may be part of that!
We are not told, however, how Emit has been pre-programmed. There is a suggestion that the whole process of modelling a universe from scratch has been bypassed and short-circuited by programming in someone else's idea of the 'correct' responses to various stimuli. The responses are not being determined by an Emit 'self'. Has Emit had time to evolve any sense of 'self' as an analytical agent? It sounds very unlikely, and it couldn't be implanted from the beginning. (If my analysis is correct, 'self' has to be gradually discovered through progressively sophisticated interaction with the world). He might be able to begin the process of 'self'-discovery, but it would be utterly confusing given that his pre-programmed responses would deny that self any agency. He wouldn't be able to test it through trial-and-error. It would I imagine be akin to gradually waking up into a nightmare.
Hypothetically, though, the possibility remains that an artificial brain with the same ability as the human brain to gather sensory and feedback data, logging and analysing it all, could develop a sense of its 'self' living in an objective universe -- though assuming it did not share the same organic processes as the human body (it might be powered differently, might lack pain detection, have different physical capabilities for interaction with the world, etc) it would be a different sort of being.
I wonder, though . . . would it be able to fall in love?
Even if you replaced biological neurons with silicon ones, as per the argument above (I think Daniel Dennett was the first to argue this) it would not mimic the brain, because of neuro-plasticity, which is intrinsic to the way the brain works. I expect we could build a ‘model’ of that too, but I still wouldn’t expect it to produce consciousness. All computers are Turing machines, which have limitations due to their algorithmic basis. I agree with Roger Penrose that brains don’t run on algorithms. Therefore John Searle’s Chinese Room analogy is pretty accurate, in my view.
You ask what’s missing. The point is that consciousness is a mystery yet to be solved and may never be solved. We don’t like to admit that there are things we don’t understand, so we claim we do understand it or that it’s an illusion, and this provides a belief that computers will become conscious as well. In other words, we argue that computers will become conscious or are already conscious on the basis that we can’t experimentally determine it. If biological brains are conscious and we can’t objectively determine it, then computers must be conscious for the same reason.
Regards, Paul.
Kimberley: Now hoover the carpet.
Emit: No, _you_ hoover the carpet. I've got things to do, people to meet. And I mean genuine robotic people, not those flesh machines you call "people".
On another point, let me nitpick. You wrote:
"It may perhaps be true of our present day machines that they lack genuine understanding and intelligence, thought and feeling."
I think it's a mistake to use the word "genuine" here. I for one have little difficulty saying that present day computers "understand" their inputs when they process them appropriately. This is, of course, a much more lowly sort of understanding than Emit's. Still, I accept that one could reasonably reserve the word "understanding" for more advanced types of processing. In that case, however, one should simply say that present day computers don't understand. Either way, the word "genuine" is unhelpful.
Note that Searle's distinction is between "genuine" and "simulated" understanding. But such a distinction presumes what Searle is attempting to prove, that understanding involves some distinctive physical process. If (as others would say) understanding is a substrate-independent logical process, then Searle's claim is like saying that when a computer adds two numbers this is only a simulated addition, and not a genuine addition.