Friday, August 14, 2009

Why I don't buy the Chinese Room

The Chinese Room is a thought experiment proposed by John Searle, meant to evoke intuitions that disprove the computational theory of mind (roughly, that the mind is a symbol-manipulating, somewhat computer-like machine). Let me summarize the argument quickly.

Imagine you have a person inside a room. The room has an input slot, through which the person receives cards with Chinese symbols written on them. The person takes the card over to a book, which contains something like a dictionary of "if... then" commands. If the received card has symbol X printed on it, for example, the book will tell the person to output a card with symbol Y. The symbols that come out are correct Chinese responses to the input symbols. The person then writes symbol Y on a card and pushes it out of the room through the output slot. In this way, it would be hypothetically possible to hold an entire conversation with the room.
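
To make the setup concrete, here is a minimal sketch of the book as a fixed lookup table. It is purely illustrative: the symbols, pairings, and fallback reply are invented placeholders, and nothing in the argument hangs on these details.

```python
# A toy version of Searle's book: a fixed mapping from input symbols to
# output symbols. The person in the room just looks up the answer.
# All symbols and pairings here are invented placeholders.
BOOK = {
    "你好吗": "我很好",            # "How are you?" -> "I'm fine"
    "你叫什么名字": "我没有名字",    # "What is your name?" -> "I have no name"
}

def room(input_card: str) -> str:
    """Return the card the room pushes out through the output slot."""
    return BOOK.get(input_card, "请再说一遍")  # fallback: "Please say that again"

print(room("你好吗"))  # prints: 我很好
```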

The question is -- does the person in the room know Chinese? Searle's argument is that if the computational theory of mind is right, we would have to say yes -- after all, the room is doing what a good computer should. It has a vast memory with instructions on how to proceed (the book) and a processor to manipulate the symbols (the person). And yet it seems intuitive that neither the person nor the entire room understands Chinese. Imagine the person memorizing the book -- the intuition doesn't seem to change. No Chinese understanding going on. For intuitive simplicity, I will refer to the Chinese Person from now on, meaning the person who's memorized the contents of the book. I think this modification (which Searle himself suggested, so I'm not weakening his point any) serves to clarify the intuition just a little.

There have been countless attempts to argue against the Chinese Room, find a flaw in the logic, and save the computational theory. The reason for this post is that the problem I've long seen in the Chinese Room is one I haven't found mentioned in the criticisms of it. That's not to say it hasn't been pointed out -- I just haven't seen it. So maybe this will be new even to some people who already know, and know to hate, Searle's argument.

The problem as I see it is that the book has to be not only readable, but writeable. No proponent of the computational theory has ever suggested anything like a hard-coded, read-only set of commands. No one, not even Fodor, who is the most extreme nativist I know of, thinks that we don't learn anything throughout our lifetime. The computational idea is that the mind processes new information into, in some very unspecified sense, a set of symbols, which then modify its internal thought processes, which in turn modify the outputs it produces. Writeability ties into the other feature the book must have -- it must provide outputs in a generative way. No computational theorist would ever propose to model the mind as a stock series of canned responses. The book must, in fact, be constantly changing the output symbol it's ready to provide to any given input, based on previous inputs it received, the contents already in the book, performance limitations, etc.

I don't see a way Searle could include writeability and generativity in the Chinese Room argument without causing the intuition to fall apart. Imagine a third component of the system -- a set of instructions for how to rewrite the memory book. A metabook. A person memorizing this metabook, along with the original book, would seem to have what the computational theory requires, but perhaps still be unable to understand Chinese. But, of course, there'd have to be a set of instructions to rewrite the metabook. At some point, early or late, one of the metabooks would need to have instructions for rewriting itself.
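
To give a rough feel for what I mean, here is a toy sketch in which the book is just a table of rules stored as data, and a meta-rule rewrites that table based on the history of inputs. Everything in it (the rule names, the three-insult threshold) is an invented placeholder, not a claim about how the mind actually does this.

```python
# Toy sketch of a writable, generative book: the rule table is data, and
# meta-rules rewrite the table itself based on past inputs. In principle a
# meta-rule could rewrite the meta-rules too. All contents are placeholders.
rules = {
    "greeting": "reply_politely",
    "insult": "reply_politely",   # rewritten by a meta-rule after repeated insults
}

def update_insult_rule(table, history):
    """Meta-rule: after three insults, change how insults are answered."""
    if history.count("insult") >= 3:
        table["insult"] = "reply_curtly"

meta_rules = [update_insult_rule]
history = []

def respond(input_kind: str) -> str:
    history.append(input_kind)
    for meta in meta_rules:
        meta(rules, history)      # the book rewrites itself
    return rules.get(input_kind, "ask_for_clarification")

for kind in ["greeting", "insult", "insult", "insult", "insult"]:
    print(kind, "->", respond(kind))
# The third and fourth insults get "reply_curtly": the output now depends
# on the history of the conversation, not just on the current card.
```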

I don't know about you, but my intuition falls apart here. At some point, if we have a Chinese Person who has memorized the original book and all its companion metabooks, so that the Person can start legitimate conversations in Chinese, respond appropriately in a context-sensitive way, constantly adjust responses based not only on the Chinese words themselves but also on the speaker they came from, the context, the weather, and a myriad of other possible factors, and learn from interactions in a way that affects future conversations, I'd probably be willing to ascribe knowledge of Chinese to the whole system, er... person.

Unless understanding has to involve consciousness. And there, of course, Searle has me (and everyone else). Really, though, at that point the argument is identical to Chalmers' zombie argument. I have to concede that unless we figure out what makes us conscious, we're not going to build a human being. But that seems to me to be a different argument than the one Searle wants to run. If I understand him correctly, what he wants to show is something more like: no robot we build or computational explanation we give will ever be able to work the same way people work. And that doesn't seem to follow. It's entirely conceivable to me that we humans work just like the Chinese Person, with an added element of consciousness and qualia, for all we know about these. In the end, if Searle's argument comes down only to the point that a robot copy of a person may still be missing qualia, I'd be willing to agree with him. But I'm not sure that the computational theory of mind suffers any.

8 comments:

  1. Hey Roman!
    Thanks for inviting me to your blog. Some thoughts on the CR argument.

    I don't know if anyone has previously suggested a "writeability" argument, but my impression is that something similar was proposed by Searle himself when he suggested a walking CR room with eyes. Kristen has a huge poster of various Chinese Room arguments outside her office, so I'll go read it sometime to double-check.

    In any case, I think that the argument is important, but may actually benefit Searle himself. My impression is that he has always tried really hard to make the CR as close as possible to a realized computational mind system with all the necessary components, and that he did not leave anything out. As such, if the writeability is really necessary for the room to understand Chinese, then this is not a strike against Searle, but against a computational theory of mind!

    In other words, how can a computational system with inputs, outputs, and hidden processes have its components be "writable"? Clearly, if it is to be human-like, it must, and some form of learning must occur. But, as Fodor himself expertly argued, learning in a computational process is a tricky affair: either you admit that everything is innate (and, therefore, pre-written) and just needs instantiation (which the CR room could do), or else you have to explain how it learns.

    Of course, I disagree with Fodor (at least at the extremity he goes to), and learning of some sort must occur. But whether or not that can happen with a computational theory of mind of the 70s and 80s, I am not quite sure. So I think that your argument is great, but it may support Searle's point.

    I am also very curious about the qualia comment, and why your intuitions are that we need something "extra" to make machines conscious. I have always strongly disagreed with Searle on his "only biological systems can have qualia" argument. For me, if you assume physicalism, then you assume that qualia just fall out of the physical process. It is only a mystery insofar as we cannot figure out at the moment how that works (i.e., I share many of Dennett's intuitions). Qualia are only a problem, in my opinion, because of the way the human mind looks inward on itself and is, in reality, limited in what it can see. I think that one day we will figure out the mystery of qualia and ascribe them to a physical process, in the same way that we figured out the mystery of humans being so amazingly designed without a creator through the biological process of natural selection.

    If you disagree, and think that machines (or even the CR room itself) cannot experience consciousness or qualia, then, in my opinion, one has to admit to some sort of a dualistic process. Although I have no proof that things go one way or another, I feel more comfortable with a physicalist assumption.

    ReplyDelete
  2. The Chinese Room argument already assumes the program has space for data. Any self-modifying program can be simulated by a non-self-modifying program, so the writability point is irrelevant (see the sketch at the end of this comment).

    For a knock-down counter to Searle I think one needs the systems reply. Searle's rebuttal of this is flawed in that it assumes that mind is indivisible, or, put another way, that one brain cannot support two minds.
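
    Returning to the first point, here is a minimal sketch of that equivalence, with invented rules: the interpreter's own instructions never change, but the rules it follows live in ordinary writable data, so "self-modification" reduces to a fixed program operating on writable memory.

    ```python
    # Toy sketch: the interpreter itself is fixed (its code is never rewritten),
    # but the rules it follows are ordinary writable data. A "self-modifying
    # book" can always be recast this way. All rules here are invented.
    state = {
        "rules": {"hello": "hi"},   # writable data, not code
        "log": [],
    }

    def fixed_interpreter(symbol: str) -> str:
        """A non-self-modifying program: its own instructions never change."""
        state["log"].append(symbol)
        # "Learning" step: after a repeated unknown symbol, add a new rule as data.
        if state["log"].count(symbol) > 1 and symbol not in state["rules"]:
            state["rules"][symbol] = "I keep hearing '%s'" % symbol
        return state["rules"].get(symbol, "?")

    for s in ["hello", "rain", "rain"]:
        print(s, "->", fixed_interpreter(s))
    # hello -> hi
    # rain -> ?
    # rain -> I keep hearing 'rain'
    ```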

    ReplyDelete
  3. Hey - so I found out who the person behind the argument was (at least according to a giant CR poster in the JHU Cog Sci Department).

    It was Hanoch Ben-Yami (1993). You can check out the paper here: http://web.ceu.hu/phil/benyami/A%20Note%20on%20the%20Chinese%20Room.pdf

    Anyway, it is somewhat different, because he argues that the "fixed book" would not be able to answer questions like "what time is it now?", etc., but I think that it is a good read and support for your argument.

    ReplyDelete
  4. To Darko:

    Thanks for checking out the blog, and commenting! Sorry it's taken me a while to reply, things have been hectic with the move. I hope you stick around and comment on future posts, though.

    I don't have the room here to get into a full rebuttal on Fodor's learnability argument, but I think Margolis and Laurence do a relatively nice job with their 2003 paper titled Radical Concept Nativism. If you're interested in how Fodor's Puzzle can be solved within a perfectly computational framework, I'd look there first.

    The way that a hidden symbolic system can be writeable is something like what I describe in the post -- the book needs to have instructions on how to rewrite itself. That is, "given certain symbols, change certain other symbols". I don't see why this shouldn't be doable in a computational framework.

    To be clear, by referring to a computational framework, I don't mean any specific theory about exactly what properties the mind shares with a computer. All I mean is, very roughly and vaguely, the idea that there is something like hardware (the brain) and software (the mind), and that the software is describable in something like the same way symbol-manipulating computer programs and other algorithms are describable. It is this vague and broad version of computation that Searle is against, I think.

    About qualia -- I am completely agnostic re: dualism vs. physicalism/materialism. I think, since we don't have an answer, it is pointless to discuss whether the answer will take the shape of "one type of thing" or "two" (by whatever standard of distinction). Obviously, they can interact, and obviously also, thoughts have intentionality and rocks don't. Given these, and other bits of data, what we need is a specific solution about what qualia really are, and how they arise, rather than a discussion of whether they're "the same kind of thing" as matter or not. The answer to that last bit will fall out of whatever we find.

    However, it is reasonably clear to me that whatever kind of thing qualia are, they are not embodied by computations of increasing complexity. An enormously complicated program will not, by virtue of its complexity, be any closer to producing qualia than a simple one. This is what I take Searle's intuition to be. A book with a million pages has as much qualia as a book with one, and both are missing something (whether that something is physical or nonphysical, and how it works, is what we have to find out, but that is a separate point from Searle's).

    ReplyDelete
  5. To Barnaby,

    Sorry also for the delayed response.

    There are two senses in which writability could be equivalent or not, and it is my fault in the original post for not distinguishing between them. A writable and a non-writable program are computationally equivalent, and this is the sense I take you to mean. But they are not equivalent in that they evoke different intuitions from the reader of the example, and therein lies the crux.

    Searle's point in using the book is to evoke an intuition in the reader that, although the Chinese Room fulfills all of the requirements of the computational view of the mind, it is fundamentally not a mind. That intuition relies in part on the book not being writable.

    The point of my argument is to provide an outline of a different instantiation of the computational theory of mind -- one with writability included -- that does not evoke the same intuition. While computationally equivalent, it is not equivalent in the other sense, since it does not (according, at least, to my intuitions) serve as well for Searle's purposes. If one finds an instantiation of the computational theory that does not evoke the same intuition as what Searle wants, that goes a long way towards countering his argument.

    As to the systems reply, and the inadequacy of Searle's response to it, could you clarify how you think Searle's response is inadequate, and specifically how having two minds in one brain would solve the puzzle he poses?

    Darko -- Thanks for the link! I'll check it out, though I'm a bit swamped right now.

    ReplyDelete
  6. Searle's response to the Systems reply is that the person (call her Anne) in the room could, instead of using a book and scraps of paper, keep all the details of the program and the relevant data in her head.

    This would bring the whole Chinese Room into Anne's head. Searle then equates the system with Anne (as the whole system is in her brain). But this implicitly assumes that Anne's brain can only support one mind.

    Otherwise you can just state that by learning the program and memorizing its data, Anne has created a second consciousness in her head. It is this second consciousness that understands Chinese, but Anne's original consciousness still does not.

    This may seem unintuitive, but one can argue that this is because what we are being asked to imagine (Anne memorizing potentially millions of lines of code and trillions of bytes of information) already stretches our intuitions to breaking point.

    ReplyDelete
  7. Hey - no problem on the delay, thanks for answering in the first place!

    So, let's rephrase your thought experiment with the addition of a learnability mechanism that can defeat Fodor's argument (bootstrapping, some forms of Bayesian networks, etc.). Since a book cannot do these things, why don't we replace the code-book with a computer? The person picks up a sheet of paper, types the commands on the computer, the computer makes a reply, and the reply is printed and sent out the output hole in the room.

    The computer can learn. It has the capacity for (some) generative content. It has access to the internet, and can search for information as required. It can use a translator program to turn any information online into Chinese (like Google Translator can). It has a randomization procedure so that it mixes up answers and generates sentences that appear unique.

    Clearly, this sketch-up should not satisfy you. Either a) This sketch-up does not reflect your problem, or b) Your intuition still fails, and you think that something inside the room may understand Chinese (or the room as a whole).

    So I am very curious which one of these two it is. Because for me, the moment you construct it in this way, I still think that Searle has a point (but that a systems-reply eventually defeats it).

    ReplyDelete
  8. To Barnaby: I'm sure I'm missing something, but I don't see what difference the systems reply makes. Let's say Searle grants two consciousnesses. How does either of these consciousnesses go beyond symbol manipulation and into what could be called "understanding"?

    To Darko: Same as the reply to Barnaby, I don't understand what difference the systems reply is supposed to make.

    About your computer analogy, my intuition is twofold. On the one hand, I think the computer evokes intuitions that don't necessarily match the computational theory of mind. For example, a computer is serial, the mind is not... etc. Putting those sorts of issues aside, I agree with you that Searle still has a point, and I think it's exactly what I wrote in the original post.

    Searle has a point in that consciousness would still be missing, but what is unclear is whether this means the computational theory of mind is flat-out wrong or only incomplete. Searle seems to think the Chinese Room implies the former, whereas I think it could just as well be the latter. For all we know about consciousness, it could just be an extra layer that gets overlaid on top of the kinds of computations a computational theory of mind would want to posit.

    ReplyDelete