Sunday, August 30, 2009

A quick thought on euthanasia

This, to me, is a puzzle.

Why is euthanasia considered not only acceptable, but indeed the single most humane end-of-life procedure when it comes to animals, while evoking dread, hesitation, and often outright legal prohibition when it comes to humans?

I really have no idea. Thoughts?

Tuesday, August 25, 2009

Diachronic vs. synchronic explanations: Why psychology needs both

So I'm at Harvard now, and settling in fine. I don't have internet at home yet, so posting might be a little bit spotty until I do. Now, let's jump into it.

Steve Pinker's "How The Mind Works" is a pretty classic book at this point. New York Times bestseller, Pulitzer finalist, and one of the few real, undiluted and scientific psychology/cognitive science books to have made it into the public consciousness. Pinker wears somewhat different hats in each of his books, and in this one, he is a staunch evolutionary psychologist -- dismissing Lewontin and Gould's "spandrel" argument out of hand (and unconvincingly, in my view), and claiming that psychology must turn to evolution to answer its titular question: how the mind works.

The philosopher and cognitive scientist Jerry Fodor wrote a book titled "The Mind Doesn't Work That Way", both an obvious jab at Pinker's lofty title, and a serious critique of his ideas. One of Fodor's chief arguments is against evolutionary psychology as a whole. The argument can be summarized like this:

How something came to be is a "diachronic" (across time) explanation, and how something is now is a "synchronic" (at a single point in time) explanation. According to Fodor, diachronic and synchronic explanations are unrelated to each other, because they answer different questions. The former answers the question "How did X come to be?" or "Why is X the way it is, and not some other way?". The latter answers the question "How does X work?". Furthermore, Fodor argues that the twain shall never meet: diachronic investigations are, in principle, incapable of answering how something works, and synchronic explanations are, in principle, incapable of answering how something came to be.

An illustrative analogy is the study of flight and the building of airplanes. If you want to know how to build an airplane, you don't go to the museum and study the history of flight, tracing its progress and evolution from the Wright brothers on. You ask an engineer and get the blueprints for a real, functioning, modern airplane. If you don't have access to that sort of thing, and we don't when it comes to the mind, you try to reverse-engineer a modern airplane. The history of flight, in short, will not help you.

What this all means is that if we want to know how the mind works, evolutionary explanations (which are diachronic) are unhelpful. If we want to know why the mind is the way it is, and not some other plausible way -- for example, when we've built a model that mimics a particular function, but doesn't seem to work the way the mind does, and we want to know why -- we can look into evolution. But if we want to know how the mind works in its present form, we need to study it directly. Good old psychology.

When I first heard about this argument, I wasn't just stumped, I was distraught. I'm not the biggest fan of evolutionary psychology, mainly because I think it has some serious problems in its execution and the methodology available to it. But neither am I willing to throw the whole enterprise out or concede that it's irrelevant. Before I had any good reason, I had a strong gut feeling that Fodor was wrong. The good reason took a couple of days.

Fodor's mistake is ignoring the real reason for the existence of evolutionary psychology: the way that something came to be constrains the range of possibilities for how it can be now. We may have the capacity to completely redesign an airplane if we discover a better way to build it, but evolution is famous for its inertia. As Pinker points out, the reason the male seminal ducts don't run straight down, but instead hook around the ureter, is that our reptilian ancestors' testicles were inside their bodies. When our bodies became too warm to produce sperm and the testicles had to descend into the external sac where they now reside, evolution could not redesign the plumbing, and the seminal ducts got caught on the ureter "like a gardener who snags a hose around a tree".

Of course, in this case, we can simply look inside a human male and see the design of the seminal ducts. The synchronic explanation is available to us. But imagine if it were not -- for example, if we lived in some universe where evolution had been discovered, but where, as in the middle ages, dissection of cadavers was forbidden and imaging technology did not exist. Wouldn't it help to know that our reptilian ancestors' testicles were inside their bodies, and that the ducts wound around the ureter? Wouldn't it help even more to know that non-human mammals, especially those with whom we share a close evolutionary ancestry, and whose testicles did descend into an external sac, have their ducts snagged on the ureter? Would it really remain unconvincing if we found this design in all mammals with descended testicles? Or would we then think it pretty likely that our bodies have the same design?

The issue is that, while synchronic explanations provide evidence through deductive reasoning, diachronic explanations are necessarily inductive. Barring extreme philosophical skepticism that would claim every bit of reasoning based on observation is inductive, if a dissection or an X-ray-type body image shows seminal ducts going around the ureter, then that is the conclusion we deductively draw from that information, coupled with our knowledge about the reliability of the technology used, its functioning, and the trustworthiness of our own senses. On the other hand, there is nothing as compelling about finding the snagged seminal duct design in our closest ape relatives. It could be that some beneficial mutation took place between them and us, so that the design of our male reproductive system is different. But the closer the relatives in which we find the same design, and the more the findings in other animals support the evolutionary hypothesis that predicts which animals will have the same design (that is, all of the ones with descended testicles that are known to have evolved from reptiles), the more confident we can be.

And in studying the mind, this is the kind of thing we have to do, because synchronic explanations are often unavailable. Medieval sensibilities aside, we simply don't have the tools to directly divine the exact design of our cognitive machinery (and that is still true, despite all recent advances in neuroimaging). Evolutionary psychology, if we are careful with it and remain mindful of its shortcomings, can provide inductive evidence about the design of some particular cognitive system, if we can more directly observe the antecedents of this design in creatures closely related to our evolutionary ancestors of different times, and chart its evolutionary trajectory (what it looked like in fish, in reptiles, in mammals, in monkeys, in apes...). Evolutionary psychology isn't dead or useless. It might, however, be humbled. And given some of its more blatant trespasses in recent years, that's not a bad thing.

Monday, August 17, 2009

A short pause

I'm currently in the process of getting ready to move to Boston, so there are likely not going to be any new posts for about a week, because I just won't have time to sit and write anything thoughtful. I'll resume posting soon (as well as responding to the great comments on some of the previous posts), so check back in a bit!

Friday, August 14, 2009

Why I don't buy the Chinese Room

The Chinese Room is a thought experiment proposed by John Searle, meant to evoke intuitions that disprove the computational theory of mind (roughly, that the mind is a symbol-manipulating, somewhat computer-like machine). Let me summarize the argument quickly.

Imagine you have a person inside a room. The room has an input slot, through which the person receives cards with Chinese symbols written on them. The person takes the card over to a book, which contains something like a dictionary of "if... then" commands. If the card received has symbol X printed on it, for example, the book will tell the person to output a card with symbol Y. The output symbols are precisely the correct responses to the input symbols, also in Chinese. The person then writes symbol Y on a card and pushes it out of the room through the output slot. In this way, it would be hypothetically possible to hold an entire conversation with the room.
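To make the setup concrete, here is a minimal sketch of the room as a static lookup table mapping input symbols to output symbols. The symbol pairs are invented for illustration; the table stands in for Searle's book.

    # The "book": a fixed table of if-then rules. The entries here are
    # invented examples; the real book would cover every possible input.
    RULE_BOOK = {
        "你好": "你好！",            # "Hello" -> "Hello!"
        "你会说中文吗？": "会。",    # "Do you speak Chinese?" -> "Yes."
    }

    def room(input_card):
        # The person in the room: find the card in the book and copy
        # out the prescribed answer, understanding none of it.
        return RULE_BOOK.get(input_card, "？")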

The question is -- does the person in the room know Chinese? Searle's argument is that if the computational theory of mind is right, we would have to say yes -- after all, the room is doing what a good computer should. It has a vast memory with instructions on how to proceed (the book) and a processor to manipulate the symbols (the person). But it seems intuitive that neither the person nor the entire room understands Chinese. Imagine the person memorizing the book -- the intuition doesn't seem to change. No understanding of Chinese going on. For intuitive simplicity, I will refer to the Chinese Person from now on, meaning the person who's memorized the contents of the book. I think this modification (which Searle himself suggested, so I'm not weakening his point any) serves to clarify the intuition just a little.

There have been countless attempts to argue against the Chinese room, find a flaw in the logic, and save the computational theory. The reason for this post is that the problem I've seen in the Chinese room for a while is one I haven't seen mentioned in the criticisms of it. That's not to say it hasn't been pointed out -- I just haven't seen it. So maybe this will be new to some people who already know, and even those who know to hate, Searle's argument.

The problem as I see it is that the book has to be not only readable, but writeable. No proponent of the computational theory has ever suggested anything like a hard-coded, read-only set of commands. No one, not even Fodor, who is the most extreme nativist I know of, thinks that we don't learn anything throughout our lifetime. The computational idea is that the mind processes new information into, in some very unspecified sense, a set of symbols, which then modify its internal thought processes, which in turn modify the outputs it produces. Writeability ties into the other feature that the book must have -- it must provide outputs in a generative way. No computational theorist would ever propose to model the mind as a stock series of canned responses. The book must, in fact, be constantly changing the output symbol it's ready to provide to any given input, based on previous inputs it received, the contents already in the book, performance limitations, etc.

I don't see a way Searle could build writeability and generativity into the Chinese room argument without causing the intuition to fall apart. Imagine a third component of the system -- a set of instructions for how to rewrite the memory book. A metabook. A person memorizing this and the original book would seem to have what the computational theory requires, but perhaps still be unable to understand Chinese. But, of course, there'd have to be a set of instructions to rewrite the metabook. At some point, early or late, one of the metabooks would need to have instructions for rewriting itself.
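A toy sketch of the structural difference, under the same invented-symbols assumption as before: the book is now state that every exchange can rewrite, and replies are generated from that state and the conversation's history rather than looked up verbatim. This is a stand-in for the shape of the requirement, not a model of understanding Chinese.

    class WriteableRoom:
        def __init__(self):
            self.rules = {}      # the book: input -> current reply
            self.history = []    # prior inputs; context for replies

        def respond(self, card):
            self.history.append(card)
            # Look up the current rule, or generate a first reply
            # from context for an unseen input.
            answer = self.rules.get(card, "[first reply to %s]" % card)
            # The metabook step: the exchange rewrites the book, so the
            # same card triggers an updated rule on its next appearance.
            self.rules[card] = "[reply %d to %s]" % (len(self.history), card)
            return answer

Ask it the same thing twice and the reply changes. Multiply that by everything a conversation can depend on, and that is what the tower of books and metabooks has to encode.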

I don't know about you, but my intuition falls apart here. At some point, if we have a Chinese Person who has memorized the original book and all its companion metabooks, so that the Person can start legitimate conversations in Chinese, respond appropriately in a context-sensitive way, constantly adjust responses based not only on the Chinese words themselves, but also on the speaker they came from, the context, the weather, and a myriad of other possible factors, and learn from interactions in a way that affects future conversations, I'd probably be willing to ascribe knowledge of Chinese to the whole system, er... person.

Unless understanding has to involve consciousness. And there, of course, Searle has me (and everyone else). Really, though, at that point, the argument is identical to Chalmers' zombie argument. I have to concede that unless we figure out what makes us conscious, we're not going to build a human being. But that seems to me to be a different argument than the one Searle wants to run. If I understand him correctly, what he wants to show is something more like: no robot we build or computational explanation we give will ever be able to work the same way people work. And that doesn't seem to follow. It's entirely conceivable to me that we humans work just like the Chinese Person, with an added element of consciousness and qualia, for all we know about these. In the end, if Searle's argument comes down only to the point that a robot copy of a person may still be missing qualia, I'd be willing to agree with him. But I'm not sure that the computational theory of mind suffers any.

Monday, August 10, 2009

Saying things you don't believe: A scientific virtue

In the course of scientific debates and discussions, I often say and claim things I don't actually believe. I may think these things have a good chance of being true, or maybe I just can't immediately think of why they're false, but I don't actually believe them the way that I believe that my lunch was tasty today or that my adviser is currently in Mexico. I may even think that what I'm saying definitely can't be right for a number of reasons that I'm already aware of, but that there is a nugget of truth in it, which can be stripped and pulled out of the whole, clarified and put forward on its own.

I think this is an activity that, outside of scientific discussions, seems pretty incomprehensible. After all, why would any sane person voluntarily debate and defend a position they're really not that sure of (or even blatantly know is wrong) just for fun? Is that person just being contrary? Arrogant?

But I think that when it comes to the deeper questions -- the questions none of us really know the answers to -- saying things we don't believe is really the only way to proceed, because it's the only way to say anything at all. Scientists would be a lot less productive if they waited until they had fully worked-out theories before proposing them to anyone, because there are enormous benefits in the discussion process itself. In many cases, saying things we don't believe is a sort of test -- if I can't think of any reason why what I'm saying is wrong, and you (a thousand yous) can't either, then maybe it's actually right. But of course, at the time I'm saying it, I'm far from sure it's right. I may even know it's wrong, but not know exactly why, or not know how to get at the one bit that seems right. Stating an idea and letting others tear it down is one way -- in my opinion, a very good one -- of getting at the truth. And that makes it, I think, a good scientific skill to have.

It's something I wish were more understood and appreciated in other domains -- in ordinary non-academic discussions of politics, religion and morality. It's especially something I wish were acknowledged by scientists themselves. Too often, I think, scientists feel the need to stand behind their claims, staking their reputations and careers not so much on the interestingness or eventual potential of their idea as on its truth in its current form. This necessarily causes ego to get involved and emotions to run high when theoretical matters are debated. It is as if the person feels that all of the rest of their ideas would cease to have value if the one at hand didn't turn out to be true, while in reality, just about everyone who has ever made an amazing discovery has also had an incredible number of false (not to mention just plain bad) ideas. Take, for example, Newton's weird theological and occult theories. Of course, there are many different reasons egos get involved in science, and a full discussion isn't necessary here (which is lucky for me, to paraphrase Jerry Fodor, because I don't have one). I think, though, that this unfortunate tendency would be curbed somewhat if scientists became more comfortable expressing -- and graduate students were explicitly encouraged to express -- thoughtful, well-reasoned and interesting ideas that are acknowledged to be quite probably false.

Thursday, August 6, 2009

All insurance should be public (and no, I'm not a socialist)

The idea of a public health insurance option is getting kicked around Congress right now. I'm strongly in favor of a public option, but I think both sides of the current debate may be missing a crucial point:

Public health insurance must be cheaper than private health insurance, assuming no shenanigans on either side. Of course, any system can be made artificially more expensive than any other, but let's consider the general case where both are run fairly.

The reason for this is really pretty clear. If, on average, $X is needed to treat everyone who needs medical care, an insurance plan that does not seek to make a profit will charge its members a collective sum of $X + $Y, where Y covers the necessary operating costs such as employee salaries, etc. A private insurance company, on the other hand, needs to make a profit. It will necessarily charge $X + $Y + $Z, where Z is the profit that goes to shareholders, executives, etc. Of course, so far in the story, this is no different from any other capitalist enterprise. When buying any product, we pay more than its strict manufacturing cost because whoever is selling it to us is making a profit.
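The arithmetic, with invented numbers just to fix ideas:

    # Invented per-member yearly figures, for illustration only.
    X = 5000   # average medical costs per member
    Y = 300    # operating overhead per member
    Z = 400    # profit margin per member (private plans only)

    public_premium = X + Y       # 5300
    private_premium = X + Y + Z  # 5700, dearer by exactly Z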

But there's a reason we like capitalism for most products -- for that Z margin by which we overpay compared to a socialized system, we get things in return. We get research and development of better products and services. We get competition between companies, which further leads to better products. We get, essentially, all of the much-touted benefits of the "profit motive" of capitalism.

And here lies the problem -- we do not get any of that with the private insurance industry, for one very simple reason. Insurance can't get any better because it really only does one thing. It is basically a collective money pool that works to average out risk between individual participants. I may not need an expensive operation at all, but if I do, it would bankrupt me, so I choose to pay a regular, smaller sum of money that won't. In a well-calculated insurance plan, the expected value of what I pay should be equal to the expected value of my costs with no insurance at all (not counting the Y or Z costs, above). The reason I, and you too, need insurance is to eliminate the variance -- the unlucky outcomes that would drive me bankrupt.
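A quick simulation of that point, with invented odds and prices: the pool's fair premium matches the uninsured expected cost, but insurance caps the catastrophic outcome.

    import random

    def yearly_cost():
        # Invented risk profile: a 1% chance of a $200,000 operation,
        # otherwise $1,000 of routine care.
        return 200_000 if random.random() < 0.01 else 1_000

    N = 100_000
    costs = [yearly_cost() for _ in range(N)]

    # The pool's break-even premium: close to the analytic expected
    # value, 0.01 * 200,000 + 0.99 * 1,000 = $2,990.
    fair_premium = sum(costs) / N

    # Uninsured, the unlucky pay the full $200,000; insured, everyone
    # pays roughly the premium. Same mean, drastically less variance.
    print(fair_premium, max(costs))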

But the point is, there is no research and development. There is no better product. The only product possible in the insurance industry is this collective pool of money. Even the actuarial methods used to calculate risk are, as far as I know, basically worked out. They're not getting any more accurate, and there's no competition between companies over who has the most accurate expected value calculations -- the kind of accuracy that would result in cost savings, which a competitive free market could pass on to consumers, thereby working in a healthy capitalistic way. The only people who do not need insurance are people who are so rich that they could pay any unforeseen medical expense out of their own pocket. (Bill Gates, say, should really not have any insurance, either public or private, if he wants to save money. A public plan would cost him $X + $Y, and a private plan $X + $Y + $Z, but by paying out of pocket, his expected expense would only be $X.)

This is where opponents of public health care will bring up the issue of choice. First, we must distinguish between two kinds of choice. There is the kind politicians often rail about on TV -- choice of doctor or treatment. This is, obviously, a completely independent issue from anything discussed above. There is no reason why a public plan should provide any less choice than a private one. In Canada, where I'm from, I can switch doctors and ask for second opinions as much as any American, and if this is an added cost on the system, a public system should still be able to handle it more cheaply than a private one. This is again the $X + $Y versus $X + $Y + $Z argument from above, only with the extra cost of doctor choice included in X for both systems.

The more relevant type of choice is choice of coverage. This is the only domain, as far as I can see, where private options may have any advantage over public ones. One person may think that the potential cost of operation A is manageable, and so may not want to pay $X + $Y for insurance that covers it, when they could skip that coverage and pay for A out of pocket should they ever need it. However, that person may still want coverage for operations B, C and D, while other people may think A is not out-of-pocket affordable for them, and want it covered too.

This is tricky territory. I admit that the profit motive could be beneficial in providing people with a variety of coverage options. But I still think a public system could offer the same kinds of choices. Just as well as competing companies, a government office could look at which kinds of coverage are most in demand, and create different options with different costs tailored to that purpose.

We can and would want to modify the system further if we get into the issue of social justice -- who should have health insurance -- and away from the preceding strictly economic considerations. For health insurance to be meaningfully universal, there would still have to be a basic level of broad coverage out of which members cannot opt at all. This is necessary to make the system at all progressive: to have the prototypical Bill Gates shoulder some of the costs of healthcare for the poor, who cannot afford to be taxed enough to cover their share. I don't mean to propose a specific system, and I don't know which system is best, but the main point is that I don't see any inherent reason why public insurance should not work best. Perhaps it can coexist with private options, which can provide extra choice at a somewhat greater cost due to the added $Z profit factor, as above. If people think a narrower set of covered procedures would save them money, they may be willing to pay a private plan's $X' + $Y + $Z, where the smaller X' (covering fewer procedures) more than makes up for the Z -- that is, whenever the public X exceeds X' by more than Z. Alternatively, if people want more coverage than the basic plan offers, a coexisting private plan may provide it, if they are willing to pay the extra $Z profit cost.

Of course, everything I've said up until now applies not only to health insurance, but to all other forms of insurance as well. And I think the logic extends naturally there. For exactly the same reasons as the arguments above, car insurance should probably be public, or at least have a public option -- an idea I haven't seen proposed by anyone, really.

The key in all of this, I think, is to get away from the idea that a capitalistic, profit-driven system is automatically best. Ethical and social concerns are crucial, especially in domains like healthcare. But even when these are put aside, there may be strictly economic reasons why a more "socialized" system is better, depending on the exact nature of the product or service at hand. Capitalism should be one of several possible options, whose merits and flaws ought to be compared against the alternatives before the best is chosen. In the realm of insurance, I do not see what a capitalistic structure buys any of us.