Consciousness and how it got to be that way

Thursday, July 21, 2011

Neuroeconomics and Delusions

Previously I posted about semantic reasoning and the severity of delusions. Part of a delusion's severity can be thought of as the degree to which the delusional belief affects the individual's actions. Frequently, however, people's non-verbal behavior seems unaffected by their delusions. Informally speaking, there would seem to be two types of delusions that do not affect behavior: the first type is the not-fully-endorsed delusion, and the second is the abstract, behaviorally contentless delusion. Behavioral economics offers insights about both.

The first type of less-behavior-affecting delusion (not-fully-endorsed) occurs when a delusional person has developed a coping strategy to avoid the dangerous or unpleasant behavior that their delusional belief would otherwise seem to require of them if they fully accepted its implications (this is known as not endorsing the delusion). For example, someone believes the CIA is following them and planning to harm them, but they don't attempt to move, stop using trackable electronic communications devices, or alter their schedule. The content of the delusion may also have internally consistent components that prevent it from affecting behavior; e.g., if they start changing their schedule, the CIA will know they know, triggering an assassination.

From the standpoint of the effect on behavior, this type of delusion would not be considered severe (again speaking informally) because, although the person's social relationships may suffer, their life is not otherwise disrupted. In my previous post I stated that one way to measure the severity of a delusion would be to measure how much discomfort the individual was willing to endure to make decisions that accord with the delusion. For a not-fully-endorsed delusion, the answer is: not very much. Such an approach would of course be extremely unethical and therefore not useful to consider. However, an article in Neuroscience and Biobehavioral Reviews suggests this question can be approached non-interventionally from a neuroeconomics standpoint, by looking at how suboptimal a psychotic individual's decision-making is. Less-fully-endorsed delusions would be expected to have less impact on utility maximization than more fully endorsed ones. The neuroeconomics approach is potentially a very productive one for psychiatry because it can measure utility maximization along the entire spectrum from healthy function to badly psychotic, does not get bogged down in epistemology, and, most importantly, is a good proxy indicator for the overall well-being of the individual.
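The idea that severity could be operationalized as a utility shortfall can be sketched as a toy model. All the numbers, action names, and the single-choice setup below are hypothetical illustrations, not anything from the literature: severity is the gap between the expected utility of the optimal choice and the expected utility of the choice the delusion drives, scaled by the degree of endorsement.

```python
# Toy sketch (all utilities hypothetical): delusion severity measured as the
# expected-utility shortfall of decisions driven by the delusional belief.

def expected_utility(action, outcomes):
    """Expected utility of an action over its (probability, utility) outcomes."""
    return sum(p * u for p, u in outcomes[action])

# A hypothetical daily choice: keep your routine, or upend your life to
# evade the CIA. Utilities are arbitrary illustrative units.
outcomes = {
    "keep_routine": [(1.0, 10.0)],   # normal life proceeds
    "evade_cia":    [(1.0, -5.0)],   # quit job, discard phone, move away
}

optimal = max(outcomes, key=lambda a: expected_utility(a, outcomes))

def severity(endorsement):
    """Utility shortfall when the agent acts on the delusion with
    probability `endorsement` (0 = never acts on it, 1 = fully endorses)."""
    chosen = (endorsement * expected_utility("evade_cia", outcomes)
              + (1 - endorsement) * expected_utility("keep_routine", outcomes))
    return expected_utility(optimal, outcomes) - chosen

print(severity(0.0))  # not endorsed at all: no measurable utility loss
print(severity(1.0))  # fully endorsed: maximal utility loss
```

On this framing, a not-fully-endorsed delusion (low `endorsement`) produces a small shortfall even if its verbal content is dramatic, which matches the observation that such delusions leave everyday decision-making largely intact.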

The second type of less-severe delusion (again, in the informal sense of the degree to which it affects behavior) comprises those that are behaviorally contentless; i.e., the delusion is mostly or entirely abstract and contains no actionable content. (Whether such a belief is really a delusion is questionable, since an abstract belief with no external behavioral impact could be argued to be meaningless, but this is more a question for logicians or epistemologists.) These delusions cannot be endorsed, because they have no concrete implications that would make the believer behave differently. Many of them are not technically delusions (by DSM-IV), because they are "culturally appropriate". That is, if someone believes that a statue is alive and can cry figuratively, that at a certain place someone once rode a horse into the sky, or that people are on a mission from a supernatural being to perform some act in their life, these beliefs might not be delusions if they are commonly held in the believer's culture. At first this exception seems an outrageous concession to political correctness, but there are salient characteristics that set "culturally-appropriate" believers apart from isolated cases. For one thing, the type of person who has decided that a statue is alive and cries figuratively - and has decided this in isolation, without meeting anyone else who believes it - is probably different from someone who believes it as a result of being brought up and repeatedly told this by trusted friends and family. Indeed, in many cultures, failing to hold, or at least profess, such beliefs could itself be seen as irrational, since it carries a risk of ostracism or even material harm to property or person.

But what is truly interesting is that these culturally-appropriate false beliefs tend to be behaviorally contentless, an observation about which a good economic argument can be made. Culture-specific beliefs of the type described above seem to be curiously free of direct, concrete behavioral manifestations, aside from very specific rituals at certain times. The beliefs concern entities that the believer cannot define, or cannot agree upon with others who claim to hold the same beliefs (though the believer may strenuously object when told that they are unable to define their beliefs). Furthermore, the beliefs are circumscribed in the sense that their implications are not generalized to the world at large. That is, a crying-statue believer will not generally be curious to investigate what other measurable properties of the statue might differ from those of a normal statue, or whether there are other crying statues with similar properties. The crying-statue believer may even find these kinds of questions offensive and actively refuse to discuss them. This behavior seems to defend a behaviorally contentless belief from being recognized as such. The question then becomes: what is the purpose of such beliefs?

One answer is that they are primarily cultural loyalty signals with no semantic content (equivalent to "Go team!" or a verbal salute), but cannot be recognized as such by the believers, because then they would somehow lose their function. Even if that's the case, it still doesn't explain why these beliefs are behaviorally contentless. This is where a neuroeconomic argument again applies. We should expect such beliefs to tend to be behaviorally contentless, because otherwise they would damage the individuals' ability to maximize utility and would be selected against over time. The extent to which a belief does actually affect behavior will be partly offset by its loyalty-signal value, so we should expect the behavioral contentfulness of culturally-appropriate false beliefs to correlate with their loyalty-signal value. That is, the more useful a belief is for signaling your solidarity, receiving in-group benefits, and avoiding punishment, the more it can afford to make you do things instead of just say things. At the same time, when populations with two sets of cultural beliefs come into contact, the aggregate effect of the population with more severely behavior-affecting false beliefs will become apparent, because the loyalty-signal value of a belief is zero between populations that do not share it.
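The trade-off described above can be sketched as a toy model. All parameters here are hypothetical, chosen purely for illustration: a belief's behavioral cost is offset by a loyalty-signal benefit that accrues only in interactions with co-believers, so a costly belief can be net-positive inside a homogeneous group yet net-negative at a border with outsiders.

```python
# Toy sketch (parameters hypothetical): net utility of holding a belief whose
# behavioral "contentfulness" imposes a cost, partly offset by an in-group
# loyalty-signal benefit that applies only among co-believers.

def net_utility(contentfulness, signal_value, in_group_share):
    """Net utility of holding the belief.

    contentfulness: cost of acting on the belief, paid in every interaction
    signal_value:   benefit from credibly signaling the belief to co-believers
    in_group_share: fraction of interactions with co-believers (0..1)
    """
    benefit = signal_value * in_group_share  # signal pays off only in-group
    cost = contentfulness                    # behavioral cost is unconditional
    return benefit - cost

# Within a homogeneous population (everyone shares the belief), a high signal
# value can sustain a costly, behavior-affecting belief:
print(net_utility(contentfulness=3.0, signal_value=5.0, in_group_share=1.0) > 0)

# At a border between populations, the same belief loses signal value toward
# outsiders and becomes a net cost:
print(net_utility(contentfulness=3.0, signal_value=5.0, in_group_share=0.4) > 0)
```

The model reproduces the qualitative prediction: beliefs can sustain behavioral content only up to roughly their loyalty-signal value, and the difference between populations surfaces where the signal stops paying off.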

There are clear test cases for this theory in the world today where culture-specific-false-belief areas border each other, and where there are differences in terms of utility maximization as seen in economic development. I probably don't need to point them out because several have already been written about extensively in geopolitics books.

Hasler G. Can the neuroeconomics revolution revolutionize psychiatry? Neuroscience and Biobehavioral Reviews. 2011 Apr 29. [Epub ahead of print]
