Friday, November 20, 2009

One reason I like Stoicism

More posts are on the way. But, for now, a quick one.

I was thinking about one of my favorite quotes from the Stoic philosopher and Roman emperor Marcus Aurelius, when something occurred to me. I wonder if it'll occur to you too.

Marcus says:

"Say to yourself every morning, I shall meet with meddling, ungrateful, arrogant, deceitful, envious, churlish men. All these things happen to them because they are ignorant of good and evil. But I, who have seen that the nature of the good is beautiful, and of the bad is ugly, and that the nature of him who does wrong is akin to me, not only because we are of the same blood or seed, but because we participate in the same intelligence and the same portion of the divinity, I can neither be injured by any of them, for no one can fix on me what is ugly, nor can I be angry with my kinsmen, nor hate them. For we are made for co-operation, like feet, like hands, like eyelids, like the rows of the upper and lower teeth. To act against one another then is contrary to nature; and acting against one another is to be vexed and to turn away."


Here are some very minor modifications to that quote that occurred to me. There is nothing particularly systematic or rigorously philosophical here -- just a fanciful play with the quote, but one that conveys a very different religious and philosophical tone.

"Say to yourself every morning, I am a meddling, ungrateful, arrogant, deceitful, envious, churlish man. All these things happen to me because I am ignorant of good and evil. But you, who has seen that the nature of the good is beautiful, and of the bad is ugly, and that the nature of him who does wrong is akin to you, not only because you are of the same blood or seed, but because you participate in the same intelligence and the same portion of the divinity, you can neither be injured by me, for no one can fix on you what is ugly, nor can you be angry with me or my kinsmen, nor hate us. For we are made for co-operation, like feet, like hands, like eyelids, like the rows of the upper and lower teeth. To act against you then is contrary to nature; and acting against you is to be vexed and to turn away."

Now, when I considered the quote after doing this pronoun replacement, its Christianity completely caught me off-guard. It sounds to me like it could be a very profound and orthodox Christian prayer. Maybe this little play is completely silly and pointless, but it struck me also that this is precisely why I prefer Stoicism as a way of life to a religion like Christianity. Christianity would acknowledge all of the same power and ability, but consider it as belonging to someone else -- to someone one must prostrate oneself before and, in some sense, beg from. Stoicism, in contrast, seems directly and enormously empowering -- you have all of these wonderful faculties, and you can and must choose daily to exercise them. I should note that I intend no specific anti-Christian bias here. Judaism, for example, seems not too far from Christianity in advocating this sort of humble, prostrating, enervating approach to daily life.

Thursday, September 10, 2009

Does your brain balance prediction and observation?

Sorry for the slight infrequency in my posts. Things are hectic in graduate school, as can be expected. I do still plan to update this blog with thoughts as frequently as I can find time.

So I just watched a talk by Moshe Bar of Harvard Medical School. The thrust of his research program is that the mind (and the neural substrates thereof) does not passively respond to environmental stimuli, but constantly attempts to predict what is about to happen -- what the eyes are about to see next, for example. Or, when seeing a blurry outline of an object, inferring what that object is from contextual and shape cues. Bar presented very neat MEG evidence that the time course for activation of the (prefrontal) brain area supposed to do the prediction is about right for his hypothesis. In other words, it activates before the area associated more directly with conscious recognition of an object does. Curiously, this means that the prefrontal cortex is involved directly in fairly low-level vision. This, in and of itself, is interesting data, and Bar's hypothesis seems plausible. But to my mind, it says too little about the cognitive mechanisms involved. Here is a proposal about what could be going on, from a computational perspective.

This all struck me as very similar to the AI mechanism known as a Kalman filter. Without getting into the math, the basic idea behind a Kalman filter is that it adjusts the balance between prediction and observation in the model of the world that the organism dynamically builds. So, for example, a Kalman-filter-equipped robot navigating a ship could rely on observations of the nearby shoreline, combined with the speed reported by its engines, to estimate its future position. Alternatively, it could rely on "dead reckoning" -- knowing that it left harbor in a particular location and headed in a particular direction at a particular speed. Which one the robot should rely on depends on how noisy each set of information is. If, for example, the robot is in a deep fog where the shoreline is hard to make out, and the engine speed-reporting device is malfunctioning, relying on dead reckoning may be a good idea. If, on the other hand, there is a strong but unpredictable current in the water (say, the ship passed through some whirlpool and came out facing a slightly different direction), then the robot probably wants to rely much more on the shoreline and engine speed readings.

The Kalman filter plays a role in all this by calculating the (mathematically provably) optimal balance between the two sets of information (prediction and observation), depending on the noise of each, such that the resulting model is maximally accurate.
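To make this concrete, here is a minimal one-dimensional sketch -- my own toy illustration with made-up numbers, not Bar's model or any claim about neural implementation. The "Kalman gain" is the quantity that sets the balance: it slides toward 0 (trust the prediction) when the measurement is noisy, and toward 1 (trust the observation) when the measurement is precise.

```python
def kalman_step(est, est_var, predicted_change, process_var,
                measurement, meas_var):
    """One predict/update cycle of a 1-D Kalman filter."""
    # Predict: advance the estimate and grow its uncertainty.
    pred = est + predicted_change
    pred_var = est_var + process_var

    # Update: the gain balances prediction vs. observation,
    # weighted by how noisy each is.
    gain = pred_var / (pred_var + meas_var)
    new_est = pred + gain * (measurement - pred)
    new_var = (1.0 - gain) * pred_var
    return new_est, new_var

# Dead-reckoned prediction says we moved 1.0 unit; a sensor says 5.0.
# Very noisy sensor: the estimate stays close to the prediction.
est_foggy, _ = kalman_step(0.0, 1.0, 1.0, 0.1, 5.0, meas_var=100.0)
# Very precise sensor: the estimate jumps close to the measurement.
est_clear, _ = kalman_step(0.0, 1.0, 1.0, 0.1, 5.0, meas_var=0.01)
```

On this picture, Bar's "prediction signal" would correspond to the predict step, and a clearer or noisier environment would correspond to a smaller or larger measurement variance.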

The point of all of this is -- could it be that the neural architecture Bar provides evidence for is actually a neural instantiation of the Kalman filter? One way to test for this might be to see if activation of the prefrontal "predictive" area Bar identifies is lessened when the environmental input is clearer, and strengthened when it is noisier. Of course, even if the neural system is some sort of instantiation of a Kalman-filter-like device, it would not have to behave in this way. Perhaps the prefrontal area Bar identified is just the prediction element of the Kalman filter design, with a further "selective" element being present, which performs the actual computation of determining how the organism should balance relying on prediction versus observation.

Making specific predictions is complicated in this case, but it might also be worthwhile. The idea of a computational mechanism originally proposed in AI being instantiated in neural architecture unites the two fields in a pretty exciting way, shows exactly the kind of thing AI has to contribute to the study of the mind, and might even suggest that we instantiate a computationally, mathematically, theoretically optimal (!!!) mechanism.

Friday, September 4, 2009

The really depressing thing about the Virginia governor race

If you don't know the story, it's summarized here. Essentially, Republican gubernatorial candidate Bob McDonnell is taking some political flak for a thesis he wrote 20 years ago during his master's program. The thesis contained many pieces of a pretty old-school right-wing agenda. For example, McDonnell wrote that working women and feminists are "detrimental" to the family and that government policy should favor married couples over "cohabitators, homosexuals or fornicators."

I won't talk much more about McDonnell -- you can find plenty of bloggers and news stories covering his thesis and the political situation. The part that gets to me is a little different from the political issue of his thesis. Rather, I can't believe CNN is calling his work a "research paper" (e.g., in the article linked above).

Let's be clear here. McDonnell's thesis was written at an unabashedly evangelical university, started by Pat Robertson to further a Christian right-wing social agenda. It was originally named Christian Broadcasting Network University (seriously), after Pat Robertson's TV network. Here's what The Washington Post has to say about the thesis:

'The thesis wasn't so much a case against government as a blueprint to change what he saw as a liberal model into one that actively promoted conservative, faith-based principles through tax policy, the public schools, welfare reform and other avenues.

He argued for covenant marriage, a legally distinct type of marriage intended to make it more difficult to obtain a divorce. He advocated character education programs in public schools to teach "traditional Judeo-Christian values" and other principles that he thought many youths were not learning in their homes. He called for less government encroachment on parental authority, for example, redefining child abuse to "exclude parental spanking." He lamented the "purging of religious influence" from public schools. And he criticized federal tax credits for child care expenditures because they encouraged women to enter the workforce.

"Further expenditures would be used to subsidize a dynamic new trend of working women and feminists that is ultimately detrimental to the family by entrenching status-quo of nonparental primary nurture of children," he wrote.'


That's what CNN considers research? As far as I can tell, McDonnell's thesis had nothing but public policy proposals in it. There was no research into anything. Do they have any idea how this makes people perceive real research? How am I, a scientist-in-training, supposed to ever be able to tell any layperson that I do "research" too, and expect them to understand that what I do is as different from McDonnell's thesis as any two pieces of academic work can be? Is it any wonder the average non-college-educated American thinks that science and Christian Science are kinda sorta pretty much the same thing?

I guess I might be hyperbolizing, but I see this kind of minor, subtle slip-up, where academia isn't even the focus of the news article, to be much more detrimental to the dissemination of academic understanding to the public than bad science writing. This is so small that the average person won't notice it enough to really think about the claim that McDonnell conducted research. Where a bad summary of a real research finding in the scientific press can cause laypeople to question the value of a field of study or a particular finding, or may misinform them about the finding itself, it will at least cause them to engage with and think about it. This kind of sloppy writing, on the other hand, both reflects and reaffirms incorrect public opinion about academia mostly without being consciously acknowledged, let alone challenged.

Sunday, August 30, 2009

A quick thought on euthanasia

This, to me, is a puzzle.

Why is euthanasia considered not only an acceptable, but indeed, the single most humane end-of-life procedure when it comes to animals, while evoking dread, hesitation, and often many prohibitory laws when it comes to humans?

I really have no idea. Thoughts?

Tuesday, August 25, 2009

Diachronic vs. synchronic explanations: Why psychology needs both

So I'm at Harvard now, and settling in fine. I don't have internet at home yet, so posting might be a little bit spotty until I do. Now, let's jump into it.

Steve Pinker's "How The Mind Works" is a pretty classic book at this point. New York Times bestseller, Pulitzer finalist, and one of the few real, undiluted and scientific psychology/cognitive science books to have made it into the public consciousness. Pinker wears somewhat different hats in each of his books, and in this one, he is a staunch evolutionary psychologist -- dismissing Lewontin and Gould's "spandrel" argument out of hand (and unconvincingly, in my view), and claiming that psychology must turn to evolution for answers to its most important question: how the mind works.

The philosopher and cognitive scientist Jerry Fodor wrote a book titled "The Mind Doesn't Work That Way", both an obvious jab at Pinker's lofty title, and a serious critique of his ideas. One of Fodor's chief arguments is against evolutionary psychology as a whole. The argument can be summarized like this:

How something came to be is a "diachronic" (across time) explanation, and how something is now is a "synchronic" (at a single point in time) explanation. According to Fodor, diachronic and synchronic explanations are unrelated to each other, because they answer different questions. The former answers the question of "How did X come to be?" or "Why is X the way it is, and not some other way?" The latter answers the question of "How does X work?" Furthermore, Fodor argues that the twain shall never meet. Diachronic investigations are, in principle, incapable of answering how something works, and synchronic explanations are, in principle, incapable of answering how something came to be.

An illustrative analogy is the study of flight and the building of airplanes. If you want to know how to build an airplane, you don't go to the museum and study the history of flight, tracing its progress and evolution from the Wright brothers on. You ask an engineer and get the blueprints for a real, functioning, modern airplane. If you don't have access to that sort of thing, and we don't when it comes to the mind, you try to reverse-engineer a modern airplane. The history of flight, in short, will not help you.

What this all means is if we want to know how the mind works, evolutionary explanations (which are diachronic) are unhelpful. If we want to know why the mind is the way it is, and not some other plausible way -- for example, when we've built a model that mimics a particular function, but doesn't seem to work the way the mind does, and we want to know why -- we can look into evolution. But if we want to know how the mind works in its present form, we need to study it directly. Good old psychology.

When I first heard about this argument, I wasn't just stumped, I was distraught. I'm not the biggest fan of evolutionary psychology, mainly because I think it has some serious problems in its execution and the methodology available to it. But neither am I willing to throw the whole enterprise out or concede that it's irrelevant. Before I had any good reason, I had a strong gut feeling that Fodor was wrong. The good reason took a couple of days.

Fodor's mistake is ignoring the real reason for the existence of evolutionary psychology: the way that something came to be constrains the range of possibilities for how it can be now. We may have the capacity to completely redesign an airplane if we discover a better way for it to be built, but evolution is famous for its inertia. As Pinker points out, the reason the male seminal ducts don't wind straight down, but instead hook around the ureter, is that our reptilian ancestors' testicles were inside their bodies; when our bodies became too warm to produce sperm and the testicles had to descend into the external sack of their present form, evolution could not redesign the plumbing, and the seminal ducts got caught on the ureter "like a gardener who snags a hose around a tree".

Of course, in this case, we can simply look into a human male and see the design of the seminal ducts. The synchronic explanation is available to us. But imagine if it was not -- for example, if we lived in some universe where evolution had been discovered, but where, as in the middle ages, dissection of cadavers was forbidden and imaging technology did not exist. Wouldn't it help to know that our reptilian ancestor's testicles were inside their bodies, and that the ducts wound around the ureter? Wouldn't it help to know even more that non-human mammals, especially those who we share a close evolutionary ancestry with, and whose testicles did descend into an external sack have their ducts snagged on the ureter? Would it really remain unconvincing if we found this design in all mammals with descended testicles? Or would we then think it pretty likely that our bodies have the same design?

The issue is that, while synchronic explanations provide evidence through deductive reasoning, diachronic explanations are necessarily inductive. Barring extreme philosophical skepticism that would claim every bit of reasoning based on observation is inductive, if a dissection or an X-ray-type body image shows seminal ducts going around the ureter, then that is the direct conclusion we deductively draw from that information, coupled with our knowledge about the reliability of the technology used, its functioning, and the trustworthiness of our own senses. On the other hand, there is nothing as compelling about finding the snagged-seminal-ducts design in our closest ape ancestors. It could be that some beneficial mutation took place between them and us, so that the design of our male reproductive system is different. But the more closely related the ancestors in which we find the same design, and the more the findings in other animals support the evolutionary hypothesis that predicts which animals will have the same design (that is, all of the ones with descended testicles that are known to have evolved from reptiles), the more confident we can be.

And in studying the mind, this is the kind of thing we have to do, because synchronic explanations are often unavailable. Medieval sensibilities aside, we simply don't have the tools to directly divine the exact design of our cognitive machinery (and that is still true, despite all recent advances in neuroimaging). Evolutionary psychology, if we are careful with it and remain mindful of its shortcomings, can provide inductive evidence about the design of some particular cognitive system, if we can more directly observe the antecedents of this design in creatures closely related to our evolutionary ancestors of different times, and chart its evolutionary trajectory (what it looked like in fish, in reptiles, in mammals, in monkeys, in apes...). Evolutionary psychology isn't dead or useless. It might, however, be humbled. And given some of its more blatant trespasses in recent years, that's not a bad thing.

Monday, August 17, 2009

A short pause

I'm currently in the process of getting ready to move to Boston, so there are likely not going to be any new posts for about a week, because I just won't have time to sit and write anything thoughtful. I'll resume posting soon (as well as responding to the great comments on some of the previous posts), so check back in a bit!

Friday, August 14, 2009

Why I don't buy the Chinese Room

The Chinese Room is a thought experiment proposed by John Searle, meant to evoke intuitions that disprove the computational theory of mind (roughly, that the mind is a symbol-manipulating, somewhat computer-like machine). Let me summarize the argument quickly.

Imagine you have a person inside a room. The room has an input slot, through which the person receives cards with Chinese symbols written on them. The person takes the card over to a book, which has something like a dictionary of "if... then" commands. If the card received has symbol X printed on it, for example, the book will tell the person to output a card with symbol Y. The output symbols, also in Chinese, are precisely the correct responses to the input symbols. The person will then write symbol Y on a card, and proceed to push it out of the room through the output slot. In this way, it would be hypothetically possible to hold an entire conversation with the room.

The question is -- does the person in the room know Chinese? Searle's argument is that if the computational theory of mind is right, we would have to say yes -- after all, the room is doing what a good computer should. It has a vast memory with instructions on how to proceed (the book), a processor to manipulate the symbols (the person), but it seems intuitive that neither the person nor the entire room understands Chinese. Imagine the person memorizing the book -- the intuition doesn't seem to change. No Chinese understanding going on. For intuitive simplicity, I will refer to the Chinese Person from now on, meaning the person who's memorized the contents of the book. I think this modification (which Searle himself suggested, so I'm not weakening his point any) serves to clarify the intuition just a little.

There have been countless attempts to argue against the Chinese room, find a flaw in the logic, and save the computational theory. The reason for this post is that the problem I've seen in the Chinese room for a while is one I haven't seen mentioned in the criticisms of it. That's not to say it hasn't been pointed out -- I just haven't seen it. So maybe this will be new to some people who already know, and even those who know to hate, Searle's argument.

The problem as I see it is that the book has to be not only readable, but writeable. No proponent of the computational theory has ever suggested anything like a hard-coded read-only set of commands. No one, not even Fodor, who is the most extreme nativist I know of, thinks that we don't learn anything throughout our lifetime. The computational idea is that the mind processes new information into, in some very unspecified sense, a set of symbols, which then modify its internal thought processes, which in turn modify the outputs it produces. Being writeable fits into the other feature that the book must have -- it must provide outputs in a generative way. No computational theorist would ever propose to model the mind as a stock series of canned responses. The book must, in fact, be constantly changing the output symbol it's ready to provide to any given input, based on previous inputs it received, the contents in the book already, performance limitations, etc.

I don't see a way Searle could include writeability and generativity into the Chinese room argument without causing the intuition to fall apart. Imagine a third component of the system -- a set of instructions for how to rewrite the memory book. A metabook. A person memorizing this, and the original book would seem to have what the computational theory requires, but perhaps still be unable to understand Chinese. But, of course, there'd have to be a set of instructions to rewrite the metabook. At some point, early or late, one of the metabooks would need to have instructions for rewriting itself.

I don't know about you, but my intuition falls apart here. At some point, if we have a Chinese Person who has memorized the original book and all its companion metabooks, so that the Person can start legitimate conversations in Chinese, respond appropriately in a context-sensitive way, constantly adjust responses based not only on the Chinese words themselves, but also the speaker they originated from, the context, the weather, and a myriad other possible factors, and learn from interactions in a way that affects future conversations, I'd probably be willing to ascribe knowledge of Chinese to the whole system, er... person.

Unless understanding has to involve consciousness. And there, of course, Searle has me (and everyone else). Really, though, at that point, the argument is identical to Chalmers' zombie argument. I have to concede that unless we figure out what makes us conscious, we're not going to build a human being. But that seems to me to be a different argument than the one Searle wants to run. If I understand him correctly, what he wants to show is something more like: no robot we build or computational explanation we give will ever be able to work the same way people work. And that doesn't seem to follow. It's entirely conceivable to me that we humans work just like the Chinese Person, with an added element of consciousness and qualia, for all we know about these. In the end, if Searle's argument comes down only to the point that a robot copy of a person may still be missing qualia, I'd be willing to agree with him. But I'm not sure that the computational theory of mind suffers any.

Monday, August 10, 2009

Saying things you don't believe: A scientific virtue

In the course of scientific debates and discussions, I often say and claim things I don't actually believe. I may think these things have a good chance of being true, or maybe I just can't immediately think of why they're false, but I don't actually believe them the way that I believe that my lunch was tasty today or that my adviser is currently in Mexico. I may even think that what I'm saying definitely can't be right for a number of reasons that I'm already aware of, but that there is a nugget of truth in it, which can be stripped and pulled out of the whole, clarified and put forward on its own.

I think this is an activity that, outside of scientific discussions, seems pretty incomprehensible. After all, why would any sane person voluntarily debate and defend a position they're really not that sure of (or even blatantly know is wrong) just for fun? Is that person just being contrary? Arrogant?

But I think that when it comes to the deeper questions -- the questions none of us really know the answers to -- saying things we don't believe is really the only way to proceed, because it's the only way to say anything at all. Scientists would be a lot less productive if they waited until they had fully worked-out theories before proposing them to anyone, because there are enormous benefits in the discussion process itself. In many cases, saying things we don't believe is a sort of test -- if I can't think of any reason why what I'm saying is wrong, and you (a thousand yous) can't either, then maybe it's actually right. But of course, at the time I'm saying it, I'm far from sure it's right. I may even know it's wrong, but not know exactly why, or not know how to get at the one bit that seems right. Stating an idea and letting others tear it down is one way -- in my opinion, a very good one -- of getting at the truth. And that makes it, I think, a good scientific skill to have.

It's something I wish were more understood and appreciated in other domains -- in ordinary non-academic discussions of politics, religion and morality. It's especially something I wish were acknowledged by scientists themselves. Too often, I think, scientists feel the need to stand behind their claims, staking their reputations and careers not so much on the interestingness or eventual potential of their idea as on its truth in its current form. This necessarily causes ego to get involved and emotions to run high when theoretical matters are debated. It is as if the person feels that all of the rest of their ideas would cease to have value if the one at hand didn't turn out to be true, while in reality, just about everyone who has ever made an amazing discovery has had incredible amounts of false (not to mention just plain bad) ideas. Take, for example, Newton's weird theological and occult theories. Of course, there are many different reasons egos get involved in science, and a full discussion isn't necessary here (which is lucky for me because, to paraphrase Jerry Fodor, I don't have one). I think, though, that this unfortunate tendency would be curbed somewhat if scientists became more comfortable expressing -- and graduate students were explicitly encouraged to express -- thoughtful, well-reasoned and interesting ideas that are acknowledged to be quite probably false.

Thursday, August 6, 2009

All insurance should be public (and no, I'm not a socialist)

The idea of a public health insurance option is getting kicked around Congress right now. I'm strongly in favor of a public option, but I think both sides of the current debate may be missing a crucial point:

Public health insurance must be cheaper than private health insurance, assuming no shenanigans on either option. Of course, any system could be made artificially more expensive than any other, but let's consider the general case where both are somewhat fair.

The reason for this is really pretty clear. If, on average, $X is needed to treat everyone who needs medical care, an insurance plan that does not seek to make a profit will charge its members a collective sum of $X + $Y, where Y is the necessary operating costs, such as employee salaries. A private insurance company, on the other hand, needs to make a profit. It will necessarily charge $X + $Y + $Z, where Z is the profit that goes to shareholders, executives, etc. Of course, so far in the story, this is no different from any other capitalist enterprise. When buying any product, we pay more than its strict manufacturing cost because whoever is selling it to us is making a profit.

But there's a reason we like capitalism in cases of most products -- for that Z margin by which we overpay compared to a socialized system, we get things in return. We get research and development of better products and services. We get competition between companies, which further leads to better products. We get, essentially, all of the much-touted benefits of the "profit motive" of capitalism.

And here lies the problem -- we do not get any of that with the private insurance industry, for one very simple reason. Insurance can't get any better because it really only does one thing. It is basically a collective money pool that works to average out risk between individual participants. I may not need an expensive operation at all, but if I do, it would bankrupt me, so I choose to pay a regular, smaller sum of money that won't bankrupt me. In a well-calculated insurance plan, the expected value of what I pay should be equal to the expected cost of having no insurance at all (not counting the Y or Z costs, above). The reason I, and you too, need insurance is to smooth out the variance that would bankrupt me if I happened to get unlucky.
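As a toy illustration of this expected-value point (the numbers below are entirely made up), pooling risk leaves each member's expected cost unchanged while eliminating the catastrophic worst case:

```python
import random

random.seed(0)

N = 10_000       # pool members (hypothetical figures throughout)
P_ILL = 0.01     # yearly chance of needing an expensive operation
COST = 50_000.0  # cost of that operation

# Without insurance, each person's expected cost is the same...
expected_individual_cost = P_ILL * COST  # $500

# ...but the realized cost is either $0 or a bankrupting $50,000.
# With a pool, everyone pays an equal share of the year's claims.
claims = sum(COST for _ in range(N) if random.random() < P_ILL)
premium = claims / N

# The premium lands near the $500 expected value, and no member
# ever faces the $50,000 outcome. This premium is the X part of
# the story -- operating costs (Y) and profit (Z) come on top.
```

The pool changes nothing about the average cost; it only trades a small chance of ruin for a certain, affordable payment.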

But the point is, there is no research and development. There is no better product. The only product possible in the insurance industry is this collective pool of money. Even the actuarial methods used to calculate risk, as far as I know, are basically worked out. They're not getting any more accurate, and there's no competition between companies for who has the most accurate expected value calculations, where accuracy would, of course, result in cost savings, which in a competitive free market could be passed on to the consumers, thereby working in a healthy capitalistic way. The only people who do not need insurance are people who are so rich that they could pay any unforeseen medical expense out of their own pocket (Bill Gates, say, should really not have any insurance, either public or private, if he wants to save money. A public plan would cost him $X + $Y, and a private plan, $X + $Y + $Z, but by paying out of pocket, his expense would only be $X).

This is where opponents of public health care will bring up the issue of choice. First, we must distinguish between two kinds of choice. There is the kind of choice politicians often rail about on TV -- choice of doctor or treatment. This is, obviously, a completely independent issue from anything discussed above. There is no reason why a public plan should provide any less choice than a private one. In Canada, where I'm from, I can switch doctors and ask for second opinions as much as any American, and if this is an added cost on the system, a public system should still be able to handle it more cheaply than a private system -- this is again the X+Y versus the X+Y+Z argument from above, only with the extra cost of doctor choice included in X in both insurance systems.

The more relevant type of choice is choice of coverage. This is the only domain, as far as I can see, where private options may have any advantage over public ones. One person may think that the potential cost of operation A is manageable, and so this person may not want to pay costs $X + $Y for insurance that covers it, when they could avoid the insurance and end up paying just $X should they ever need operation A done. However, that person may still want coverage for operations B, C and D, while other people may think A is not out-of-pocket affordable for them, and want it covered too.

This is tricky territory. I admit that the profit motive could be beneficial in providing people with a variety of coverage options. But I still think a public system could offer the same kinds of choices. Just as well as competing companies, a government office could look at which kinds of coverage are most in demand and create different options, with different costs, tailored to that demand.

We can and would want to modify the system further once we get into the issue of social justice -- who should have health insurance -- and away from the preceding strictly economic considerations. For health insurance to be meaningfully universal, there would still have to be a basic level of broad coverage from which members cannot opt out at all. This is necessary to make the system at all progressive: to have the prototypical Bill Gates shoulder some of the costs of healthcare for the poor, who cannot afford to be taxed enough to cover their share of the cost. I don't mean to propose a specific system, and I don't know which system is best, but the main point is that I don't see any inherent reason why public insurance should not work best. Perhaps it can work alongside private options, which could provide choices beyond the basic plan at slightly greater cost, due to the necessary added $Z private profit factor, as above. If people think they can save money by not paying $X + $Y to be covered for a specific set of procedures, they may be willing to pay $X + $Y + $Z for a smaller set of procedures and still come out ahead: the $X terms are of different sizes (covering different procedures), which may make up for the $Z. Alternatively, if people want more coverage than the basic plan offers, a coexisting private plan may provide it, if they are willing to pay the extra $Z profit cost.
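The trade-off just described -- a private plan with a smaller procedure set (smaller $X) undercutting a broader public plan despite the extra $Z -- can also be sketched numerically. Again, every figure here is made up; the point is only the inequality.

```python
# Hypothetical comparison: a private plan covering a narrower set of
# procedures (smaller X) can still undercut a broader public plan,
# provided the savings from narrower coverage exceed the profit margin Z.
# All figures are invented for illustration.

Y = 200  # administrative overhead, assumed equal for both plans
Z = 300  # private profit margin

X_broad = 1000   # expected payout for the broad public procedure set
X_narrow = 600   # expected payout for a narrower private procedure set

public_premium = X_broad + Y          # broad coverage, no profit margin
private_premium = X_narrow + Y + Z    # narrow coverage, plus profit

# The narrower private plan wins here only because the coverage saving
# (X_broad - X_narrow = 400) exceeds the profit margin (Z = 300).
assert private_premium < public_premium
```

Flip the numbers so that the coverage saving is smaller than $Z and the inequality reverses, which is exactly the essay's claim that the $X difference "may make up for the $Z" -- or may not.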

Of course, everything I've said up until now applies not only to health insurance, but to all other forms of insurance as well. And I think the logic extends naturally there. For exactly the same reasons as the arguments above, car insurance should probably be public, or at least have a public option -- an idea I haven't seen proposed by anyone, really.

The key in all of this, I think, is to get away from the idea that a capitalistic, profit-driven system is automatically best. Ethical and social concerns are crucial, especially in domains like healthcare. But even when these are put aside there may be strictly economic reasons for why a more "socialized" system is better, depending on the exact nature of the product or service at hand. Capitalism should be one of several possible options, the merits and flaws of which ought to be compared to other options before the best is chosen. In the realm of insurance, I do not see what a capitalistic structure buys any of us.

Saturday, July 25, 2009

Testing Gandhi

I suggested in a previous post that when Gandhi preached non-violence, his reasoning may have been that it is impossible for a group to continue oppressing another group when that other group is acting in ways that are universally and clearly seen as admirable. I mentioned this idea to Dr. Ori Friedman, and he suggested testing it in a psychological setting. I don't know how this didn't occur to me.

I don't do social psychology, so I doubt I'll ever get around to testing the idea myself. I do love thinking about experimental design, though, and how an idea like this could be tested. And maybe someone will read this and get inspired to look into this question. So here's how I would try to go about it, without having carefully thought any of this through.

I'd start with the individual level first. The line of studies could look something like this:

First, establish that neutral participants think better of people who are exercising active, courageous non-violence (this can be done by telling participants a story about an imaginary situation, showing them a made-up news clip, etc.).

Next, establish that this effect holds even when participants are initially given negative biasing information about the non-violent group (i.e., the same kind of manipulation studies use to induce feelings of group prejudice; I know that Andy Baron has done this kind of work with kids).

Finally, the neatest thing to do would be a Stanley Milgram-type experiment, where participants are first made negatively predisposed towards a group, are next told of the group's non-violent resistance, and are then given the chance to cause pain (or so they think) to a member of the group (a real live research assistant). The control condition would be participants hurting a member of a group they were negatively predisposed towards, but who was not described as a non-violent, Gandhi-esque resister. Of course, this would never pass any ethics committee these days, but a social psychologist could probably come up with a modification that would be more ethical while still getting at the same thing.

I can see problems with these proposals already. How would one know that the non-violent resister is really being portrayed admirably, and, if so, that the admiration evoked is due only to the description of the behaviour as non-violent, rather than to loaded words like "courageous", which of course evoke admiration inherently? I think there should be a way to design the study so that this is taken care of, but I couldn't be sure until I tried.

Anyway, I hope that those of you who are psychology-minded find the idea interesting. And for those without a background in psychology, I hope it is both interesting and a reasonably accurate snapshot of how observations and hypotheses about the world become ideas about human nature, and then studies that can be proposed and developed.

The problem with relativism

Relativism is, broadly, the idea that there are no absolutes within a certain domain -- that everything is "relative", depending on the frame or context within which it exists. As the cognitive scientist Jerry Fodor notes, the domains of relativism are many, mainly within the "social" sciences. There is cultural relativism in anthropology, which claims that values are determined uniquely by one's culture. There is relativism in sociology, where epistemic commitments are seen as a product of class affiliation. And again, in my home discipline of psychology, in "empiricist" theories that claim the mind is infinitely malleable depending on life events and conditions.

What I'm going to argue is not that these theories are wrong for any sort of evidence-based reason, but that maintaining that they are "relativistic" is fundamentally incoherent, even when they are right.

The reason is this: In so far as these theories are scientific -- testable by the scientific method -- they must make systematic predictions. What I mean by systematic, in this case, is "given a set of circumstances or preconditions X, we expect a set of outcomes Y". There is no alternative to this system. Any inquiry is forced to group events into some coherent sets -- for example, differences between "men" and "women", "2-month-old babies" and "5-month-old babies", "electrons" and "protons", Culture A and Culture B, etc. It is impossible to have a science of idiosyncrasies. Even if we were somehow to have a science of one person, for example, "person" is already a grouping, a concept combining, say, a 10-year-old's identity as somehow "the same thing" as a 50-year-old's. I won't belabor this, but it can obviously be taken endlessly further to smaller and smaller scales and groupings.

The problem and the point is that a theory that says, "things are relative, depending on the context", has to specify what the things and the context are. And specifying those necessitates some smudging of the details, so that grouping the individuals can be coherent. In that way, "things" can never be totally "relative", because we are already dealing with abstractions and absolutes -- we want theories that, in the end, make universal statements: "if the context is A, then the outcome will be B, for all instances of A and B". Relativism, no matter how extreme, still has to deal with universal rules whether it wants to or not. It's the only way to do science. At its best, relativism will provide us with exactly the same universals as non-relativism. Those universals will just have a lot of "if... then"-type rules. And the theory will only be coherent or testable as long as those universals make precise predictions without any further "relativistic" leeway. It wouldn't be science if it said, "Well, it turns out the data don't match my predictions, but... it's all relative anyway."

So imagine you have a simple relativistic theory, and a good one. For example, "Which language people speak depends on which language is spoken by the community in which they are raised". This is a good beginning. If we interpret it reasonably and charitably, it's even true. But the problem is, it's still far from being a theory. What this statement leaves out is what constrains the possibilities. Unless the theorist believes that what counts as "language" in this case is completely random -- anything goes, including monkey screeches, melodies, random flailing of the arms that signifies nothing, etc. -- the onus is still on the theorist to specify what the universals of language are. Why it is that some children learn Japanese and other children learn English, while no children ever learn languages without syntactic structure, for example, or ones where word-boundaries blur (that is, where words are not discrete and separable), or any of the countless other possibilities that no one has ever observed and probably can't even imagine.

In other words, any relativistic statement, in order to have scientific meaning, must explain not just how things vary, but why they vary only as they do, and not in some other logically possible way.

This is the killer. In psychology, any empiricist who believes humans start out with a totally blank slate (maybe granting a few general learning mechanisms), must specify how it is that those mechanisms cause us to have the exact kind of minds that we have. Why we think of colors as blue and green instead of grue and bleen, and why we very quickly learn to associate "rabbit" with those hopping animals rather than with the particular configuration the furry limbs have when they're all stuck together the way they are. Similar issues are found in anthropology, sociology, and all the others.

In other words, whether they like it or not, relativistic theories must be theories of universals. In that way, they end up being no different from non-relativistic theories. Maybe non-relativists can say they're interested in what the constraints are for the range of possible things, and relativists are interested in which of the possible things turns out given which sets of conditions. But those are not rival perspectives; they're complementary halves of the same whole. And even those relativists will still need to group different circumstances and outcomes into describable sets, in order to provide any coherent explanations. These will generally be a set of universal "if... then" rules, which can, of course, increase in complexity as the analysis proceeds.

Tuesday, July 21, 2009

Can Gandhi's non-violence work? Part 2

In my post yesterday I tried to find a defensible and reasonable interpretation of Gandhi's doctrine of non-violence, specifically in the most extreme circumstances. I mentioned that I think Gandhi was getting at two points that are often overlooked by his critics, and went into a bit of detail about the first. Today, I'd like to look at the second.

In addition to positing that an oppressive people would always be moved by a virtuous, resistant and non-violent victim, I think Gandhi was also getting at a much more Buddhist/Hinduist benefit to heroic, active non-violence. I group these two traditions because I think that in the context of the following discussion, they basically say the same thing.

Here is another quote from Gandhi about the advantages of using non-violence during the Holocaust: "If the Jewish mind could be prepared for voluntary suffering, even the massacre I have imagined could be turned into a day of thanksgiving and joy that Jehovah had wrought deliverance of the race even at the hands of the tyrant. For to the God-fearing, death has no terror."

The idea Gandhi is getting at is that the length of life (and death itself) does not matter compared to the quality of life. And the quality of life is all about inner spiritual life. A person's satisfaction with his or her own mind and life is flawless when that person behaves with virtue, in accordance with a set of predetermined principles. The idea is really very simple, but I think it is very difficult for us to accept today. After all, I think most of us, myself very much included, would do just about anything to extend the length of our lives (especially if we found out we were dying), but comparatively few of us concern ourselves with whether we are really living in accordance with the principles we think are good, noble and virtuous.

What I find so interesting about this idea is that there is no good way to really argue for it. Whether quality or length of life is more important seems like such a foundational preference -- a premise for many other conclusions and behaviours, but not a consequence of any sort of reasoning itself. The only argument I can think of making in favor of Gandhi's approach is that the Quality view would likely promote greater happiness than the Quantity view. But this is already presupposing that happiness (associated with quality) is more important than quantity, which is really presupposing Gandhi's position in the first place. The logic becomes circular, but I don't see any other way to justify either view yet.

Even if I don't know how to justify which view is "better" (if any), though, it definitely seems to me that Gandhi's view is plausible and shouldn't be dismissed out of hand, even if it's pretty foreign to the way most of us "modern Westerners" seem to think these days.

Monday, July 20, 2009

Can Gandhi's non-violence work? Part 1

This thought is not terribly original -- I've seen it elsewhere -- but I'd like to take it a bit further than the discussions I've seen before. I am obviously not a Gandhi scholar, so my treatment of Gandhi's moral and political philosophy will be necessarily cursory. I hope to get the general idea right, since that is the thing that interests me most.

Mahatma Gandhi's famous doctrine of non-violence brought independence to a nation (two nations, really -- India and Pakistan). That alone serves as lasting testament to the fact that non-violence can defeat an oppressive, violent force. But are there limits? Are there times when non-violence simply cannot cause such a force to stop?

Gandhi insisted on non-violence in every case of oppression, making no exceptions based on severity. Living during World War II, he famously said, "The Jews should have offered themselves to the butcher's knife. They should have thrown themselves into the sea from cliffs." That is a pretty extreme position in favor of non-violence, to say the least. Sounding probably more reasonable, he also wrote, "If there ever could be a justifiable war in the name of and for humanity, a war against Germany, to prevent the wanton persecution of a whole race, would be completely justified. But I do not believe in any war."

Much has been made of Gandhi's position. Most who hear about it, I think, can't help but scoff. And with good reason.

The obvious reaction is that it couldn't possibly work -- sure, maybe against the British, who were unwilling to kill every Indian in India, but not against the Nazis who were very much willing to kill every Jew in Germany and beyond. The strength of Indian non-violence lay in the bewilderment that it caused the British. The Indians were not dispensable in the eyes of the British -- inferior barbarians, maybe, but useful. The British needed the Indian population as a tireless workforce and revenue stream. The Nazis were in a very different position. They didn't need anything from the Jews other than their deaths.

So, of course, if the Jews had submitted en masse, bravely, defiantly, heroically to their destruction, well... they would have received it.

But I think Gandhi's point goes deeper than the critics have given him credit for. I think Gandhi was arguing two main points. I will discuss the first here, leaving the second for my next post.

The first point boils down to the idea that there can never be public support for a policy of destruction against those who are behaving admirably or virtuously (in the strongest sense of those words). This is what I find particularly interesting and original in Gandhi's thought. What he was positing is the existence of some behaviours -- specifically, the display of spiritual strength required for brave, active non-violence in the face of pain and death -- that are essentially universal in evoking admiration. No person of any culture, no matter how oppressive, can view such behaviour without being moved to thinking that oppressing this person is unjustifiable. Certainly, Hitler himself might have been too mad to see it, but what I think Gandhi was implicitly positing is that public support among the Germans would erode despite the Nazi war machine's best efforts.

I don't know if Gandhi was right. I have no sense of what life was really like for Germans at that time, or what public sentiment was. I don't know if the fact that the Nazis were much more determined to kill the Jews than the British were to harm the Indians would have made a difference. It may be that any resistance put up by the Jews, even if effective, would never have reached the German masses, since the media was so tightly controlled.

But I think despite all that, it's clear that the Jews should have done exactly what Gandhi said. After all, it would have been worth a try. They had no other options. They were being sent to die anyway, and from what we can tell, had no method of effective violent resistance even if they had wanted it. The difference between what happened and what Gandhi advocated is that the Jews were (understandably, of course) cowed and meek, whereas Gandhi was advocating mass acts of bravery. If one Jew, as I'm sure did happen, offered non-violent defiance, he would certainly have been killed immediately. But if hundreds or thousands had acted the same way all at once? I find it harder to say.

I'm going to leave off for now. For the sake of readability, I'll split this long thought into two posts -- the second part should be coming tomorrow.

To start

I've never liked blogs. So, of course, I've created one.

The purpose of this space is to provide a venue for thoughts and analysis on the topics that interest me most. Generally, that will mean some blend of psychology or cognitive science, and politics. But if my only goal was to express my thoughts, I'd sing in the shower. What I'm really interested in is starting discussions. I hope that the things I think about are of general interest, not only to those involved or interested in science, but to anyone who enjoys critical thinking and amicable debate. I hope that you will find a lot to agree and disagree with in my future posts. I hope that I can make you think about something new, or in a different way, and that you find it useful and fun. And I hope that through your comments, you will do the same for me.