Tuesday, December 27, 2011

79: Chinese Pen Pals

    With China on everyone's mind due to some annoying sporting event
that seems to be replacing all the good TV shows, I thought it might
be a good time to discuss what's known as the "Chinese Room Paradox",
a thought experiment by 20th-century philosopher John Searle to
challenge the concept of artificial intelligence.  Here's the basic
idea.
    Suppose we lock an intern who doesn't speak Chinese or follow
sports in a conference room and give him a very large, extremely
detailed book.  We tell him that every day we will give him a piece of
paper with a bunch of symbols on it, and his job is to open the book
and follow instructions based on those symbols.  The book is very
clearly written with instructions in plain English:  for example, it
might say that if the first symbol seen on the input paper matches the
first photo on page 1, turn to page 235 for further instructions,
otherwise turn to page 742.  The instructions may also involve jotting
down specific notes to refer to later, or writing new symbols on an
output paper.  Every day, the intern follows the instructions in the
book and hands back a new piece of paper with a different set of
symbols written on it.  Sounds pretty boring, but I've assigned worse
tasks to some of my interns. 
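    The intern's purely mechanical task can be sketched in a few lines
of Python.  Everything below is an invented placeholder of my own --
the symbols, page numbers, and replies stand in for a real rule book,
which would of course be enormously larger:

```python
# Toy sketch of the intern's rule book: a lookup table indexed by the
# symbol seen on the input paper.  Every symbol, page number, and reply
# here is an invented placeholder, not real Chinese.
RULE_BOOK = {
    "symbol_A": ("page 235", "reply_X"),
    "symbol_B": ("page 742", "reply_Y"),
}

def follow_instructions(input_symbols):
    """Mechanically match each symbol, jot a note, and copy out a reply."""
    scratch_notes, output = [], []
    for symbol in input_symbols:
        page, reply = RULE_BOOK.get(symbol, ("page 1", "reply_default"))
        scratch_notes.append("saw " + symbol + ", turned to " + page)
        output.append(reply)
    return output

# The intern never understands the symbols; he only matches and copies.
print(follow_instructions(["symbol_A", "symbol_B"]))  # ['reply_X', 'reply_Y']
```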
    Now, unknown to the intern, the large book describes an
artificial-intelligence computer program for carrying on a
conversation in Chinese.  The notes passed into the room are written,
and the output notes read, by a real Chinese speaker, who doesn't know
that the responses are being generated in this way; he thinks he is
carrying on an actual correspondence about the Olympics with a pen pal
in his native tongue.  It sounds bizarre, but if you
accept the premise that an artificial intelligence computer program
might one day be written, in other words a computer program that
essentially replicates a human mind, it should definitely be possible.
Any computer program can theoretically be converted to a set of
handwritten instructions; after all, aside from carrying out
instructions faster than we can, there's nothing a modern digital
computer does that's not possible for a human.  You may recall from an
earlier podcast that a theoretical construct called a "Turing
Machine", which just lets you read or write a symbol on a large piece
of tape and shift it to the left or right based on simple
instructions, is known to be able to imitate any known computer.  A
human would do it rather slowly, taking millions of years to finish
this experiment, but if we assume it's a government-funded study that
should be no problem.
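    To make the claim concrete, here is a minimal Turing machine
simulator of my own; the little bit-flipping machine it runs is an
illustrative invention, not anything from Turing's papers, but every
step is the kind of read-symbol, write-symbol, shift-tape rule the
intern could follow by hand:

```python
# A minimal Turing machine: read the symbol under the head, then look up
# (state, symbol) in the rule table to learn what to write, which way to
# shift, and which state to enter next.
def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
    tape = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Rules: (state, symbol read) -> (symbol to write, head move, next state).
# This machine flips every bit, then halts when it reaches a blank cell.
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(flip_bits, "0110"))  # prints 1001
```

A human with paper and pencil could execute exactly these rules, one
lookup at a time, which is the sense in which the book in the room is
just a very large rule table.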
    This situation seems to create a bizarre paradox.  Who is our
Chinese correspondent's pen pal?  Is there some entity in the
conference room that does actually have the mental state of a Chinese
speaker, thinking about the Olympics and replying to the comments?  It
seems like the intern is just following a mechanical task.  So where
is the intelligence?   Searle argued that this paradox proves that
even if a computer seemed to be having an intelligent conversation, no
true artificial intelligence would exist: while the machine might show
the appearance of intelligence, there is no entity inside it that is
thinking and replying intelligently to the questions.
    While nobody has yet managed the ultimate disproof of actually
creating an artificial intelligence, there are several convincing
replies to this argument.  My favorite is probably the "systems
reply":  while the intern himself does not understand Chinese, the
intern, the book of instructions, and his notes together comprise a
system that does.  Think of it this
way:  while I am creating this podcast, which I hope demonstrates some
degree of intelligence, my brain consists of many individual neurons.
If we just look at one of those neurons, it has no idea it is
participating in such a monumental task as the creation of a podcast:
it is just increasing or decreasing local electrical charges in
response to stimuli.  For a system to be intelligent, it wouldn't make
sense to require every component of that system to be intelligent: at
some level, the intelligence has to be an emergent property coming
from the combination of components.
    A more amusing line of argument that stems from this paradox is
the proposition that really, everyone else besides you is effectively
a 'Chinese room'.  Just as you can only judge the room system's
intelligence by reading its external notes, you only judge that other
people are intelligent by seeing how they act and behave.  For all you
know, everyone else in the universe, including this podcaster, could
be a mindless automaton, following a set of symbolic instructions to
mimic intelligence.  But don't worry, I'm not offended, since by the
same reasoning, I can see that I am the only intelligence in the
universe anyway, and you're just a silly biological robot. 
    And this has been your math mutation for today.

  • The Chinese Room at Wikipedia
  • Another page discussing the Chinese Room paradox