Consciousness and how it got to be that way

Friday, April 30, 2010

Predictable But Still Interesting Resolution to Bilingual Coma Case

I had previously written here about the case of the Croatian girl who supposedly woke from a coma miraculously speaking German. Of course, it turns out she already spoke German before. It's still neurologically interesting that ability in one language would be preserved rather than the other; Steven Novella covers the case.

Thursday, April 29, 2010

Medication Resources, Psychiatric and Otherwise

I promise a return to a less med-school-centric blog tomorrow. For now, in keeping with the theme, here are some interesting resources for medications and patients:

Psycho-Babble - a discussion forum about psychiatric medications. While it seems on its face like it could be a bad idea, strong moderation keeps it valuable and on-track.

Patients Like Me - a forum where patients can share their experiences with medications. Probably not a place to get real data, but nonetheless it might be worthwhile for marketing and healthcare workers to check out occasionally to see what the dominant threads are. General, not just for psychiatric meds.

For Second Year Med Students

The boards are coming up (as if you need someone else to remind you of that) and if your pathology isn't quite up to snuff, here's a Magic: The Gathering-style fantasy role-playing card game to help you memorize all the nasty unicellular beasties. Here, of course, is B. fragilis:


I think these guys were just huge nerds who wanted to make money by creating their own card-based RPG while seeming all responsible and mediciney. And good on em! (Hat tip Boing-Boing).

Top Psychiatric Prescriptions for 2009

Can be found here. While not of philosophical or scientific interest, it's certainly of professional interest to nervous system health professionals to ask to what degree the revenue growth is due to off-label use (at a guess, much more than half; also note the comparison to simultaneous population growth).

Before starting med school I did clinical research for pharmaceutical companies for 12 years. In no polity that I know of can a drug be marketed "in general", that is, beyond some narrowly defined indication. Consequently the pattern of use of psychiatric drugs is very different than for other therapeutic areas. But practitioners can certainly prescribe them beyond their indications, off-label. Is this inherent to the nature of psychiatric illnesses, or is it just an artifact of our regulatory enterprise? What risks are there to patients as a result of this pattern of use, and have we already seen an impact?

Sunday, April 25, 2010

Croatian Girl Wakes Up from Coma Speaking German

This is more dramatic than "foreign accent syndrome", although the story is short, the source suspect, and even without those two de-weighting considerations I would highly doubt it's real. I would put money down that we're going to find one of the following:

1) It's exaggerated, and at most she has some form of foreign accent syndrome.

2) The girl spoke German before she was in a coma.

Saturday, April 24, 2010

Free Will and Materialism or Epiphenomenalism are Not Mutually Exclusive

Some materialists are a little too eager to claim that free will is necessarily exploded if our consciousness is based in the material world and follows lawful processes. Of course this has implications for morality. However such an eager deconstruction is simplistic, and furthermore cannot be specifically pinned on epiphenomenalism.

Regarding the possibility of free will, we recognize the following categories of relationships between any discrete entities in the universe, which must include conscious entities like humans:

1) Lawful relationships: if operation X happens to entity A, then result B always follows. It is not trivial to note that knowledge of lawful relationships must result from repeated pattern recognition. It is frequently pointed out that after building his system of mechanics, Newton loudly proclaimed what he saw as a pre-determined clockwork universe; despite this he somehow avoided moral nihilism.

2) No relationship (noise): a failure of pattern recognition. Perhaps there is no pattern, or perhaps there is but we're too stupid to see it. Whether there is a generalized way to tell the difference between these two possibilities, short of actually finding the pattern if one exists, is another question. Note that nuclear decay is lawful only en masse but at the level of individual nuclei is random (see the decay arithmetic after this list); quantum mechanics famously emphasizes the non-predictable (non-lawful) behavior of individual particles. Matiyasevich showed, more generally than Goedel, that all such systems must contain "details": true statements that cannot be deduced from the axioms. But this still doesn't get us out of the woods, because intentionality is not random.

3) Free intentionality: goal-seeking behavior that is not random but not forced by physical law to make the specific choices it does. This is most consistent with the folk model of behavior. Many materialists would claim this is a myth forced on us by our own neurology, and that there is no space for any options besides 1 or 2.
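
To make the "lawful only en masse" point in item 2 concrete, here is the textbook decay arithmetic (standard radioactivity, nothing specific to this post):

```latex
% Bulk behavior: a large sample decays with perfect lawfulness
N(t) = N_0 \, e^{-\lambda t}
% Individual behavior: a single nucleus's decay time T is a random variable,
% and the law fixes only its distribution, never the particular event
P(T > t) = e^{-\lambda t}
```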

There are several answers to the problem of free intentionality that do not involve an appeal to faith ("it seems true so it must be"): yes, it does seem true, and while the preceding statement isn't the end of the argument, certainly it's premature to assume that free intentionality must be an illusion because our understanding in 2010 of the way the world works doesn't allow for that class of actions. (Note Newton's smugness.) We have nothing close to a complete understanding of the nervous system, so it's a little soon to be discarding introspection.

Grant for the moment that such a thing as free intentional behavior exists. I would argue (separately) that the property of free intentional behavior exists more in some entities than others, and that this property is lawfully determined. It is less clear exactly what properties such entities must share in order to exhibit free intentionality. Importantly, must free-intentional entities have experience; that is, can "dead" systems or much less advanced systems like cnidarians have free intentionality but not be subjectively aware? Because we're talking about humans in these discussions the assumption is that we would have both free intentionality and experience, but if an argument has been made that they must co-occur, I haven't yet seen it.


Epiphenomenalism

The bogeyman of free will is whether consciousness is an epiphenomenon of the real process of cognition, like a shadow or aftereffect. In some cases it certainly is: EEG studies have shown that the brain often initiates a movement up to 300 milliseconds before the person is aware of having decided to move, and the experimenter watching the trace literally knows before they do that they're about to move. First, because this is sometimes the case does not mean it is always the case.

Second, and more critically, why is it so troubling if our consciousness is on a tape delay? Assuming free intentionality applies to your nervous system but not to your conscious awareness (if it's epiphenomenal), then you subconsciously have free will and become aware of a free intentional decision after it was made. In one sense "you" are just along for the ride, but it's a ride on a computer with free will that has been telling "you" what to do in a deceptive way since the day you were born and which cannot be separated from "you" without the end of your experience (because that means cutting your brain out). As long as the inseparable computer that's giving the orders has free intentionality, we have no right to be upset by the arrangement. The key to whether we can make free intentional acts is therefore not in the length of the tape delay, but in the behavior of the computer that's making decisions for "you". Our understanding of the nervous system does not yet allow us to rule out category #3 above, especially in light of strong introspective evidence.


IF FREE INTENTIONALITY DOES NOT EXIST, WHY DOES IT SEEM LIKE IT DOES?

1) If free intentionality is in fact an illusion - why? Why would it be useful for organisms to deceive themselves into thinking that they have free will? The same question can be asked of conscious experience itself. How could such a thing be evolutionarily selected for, since it seems to have no outward manifestation?

2) For those who believe in a pre-determined universe, this means we're living in a static four-dimensional block of space-time. Why does "now" seem to be a special point in time? By this view, "now" is an illusion. How could organisms have developed that do not sense the full sweep of their existence? Why would this narrow focus on a gradually-shifting, illusorily-special moment have developed?

3) Free intentionality seems to manifest more at longer time scales. It is clear that there are behaviors resulting from Category 1 lawful relationships in every organism, including ourselves. If there is free intentionality, it a) probably occurs at least in the executive decision center and b) manifests over longer periods of time. Case in point: tell a person they have no free will and a common response is to hop on one foot or do something else socially unexpected. Find that person in exactly one month's time, and try to measure how different their life is because they hopped on one foot for five seconds; not very much, I'll wager. It's the person who is making considered decisions based on semantic reasoning whose life can change over time. If free intentionality exists anywhere, it is in this kind of cognition.

Wednesday, April 21, 2010

ADHD, Dopamine, and Nomadism

My uncle Bill was one of those guys with "too much energy". Among other things, starting at age 10 he would climb out his bedroom window at 1 in the morning, walk on the train tracks to the city ten miles away, and be back in bed before dawn. My grandparents only found out about his nightly forays when the police found him outside at 3 a.m. What's odd is that I've heard of several others engaging in this exact same behavior - sneaking out at night to follow train tracks or paths.

This level of restlessness doesn't necessarily require a diagnosis but the repeated pattern is still striking. You can't help but wonder what gives some people this kind of energy and why it expresses itself in terms of "exploratory" behavior. One possibility is that these individuals are acting out behaviors which were adaptive at one time, and that in fact these are goal-less random walks of the sort that predators engage in to find new hunting territory. None of these individuals, that I know of, was diagnosed with ADHD, but it certainly brings to mind the hunter-farmer theory. It would be very interesting to genotype these individuals for DRD4 polymorphisms, since a) there are DRD4 alleles associated with ADHD (7R) and b) in one African population split between farmers and nomads, the 7R allele was associated with greater stature and weight in nomads, but not in sedentary farmers.

The next step is to look at behavioral correlates of the DRD4 allele. In particular:

1) Ability to tolerate pain or negative stimuli while pursuing a goal, DRD4 7R vs. non-7R. This may be a way to measure the lay term "hyperfocus", to see if it's real and if so, how and to what extent it manifests in 7Rs.

2) Comparative activity of reward circuitry in DRD4 7Rs vs. non-7Rs, perhaps by fMRI. An fMRI study has already been done correlating 7R, reward-circuitry activation from food, and weight gain. (A sketch of how comparison #1 might be scored follows below.)
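
Here is a minimal sketch of the kind of group comparison proposed in #1. Everything in it is a placeholder: the cold-pressor tolerance measure and every number are invented for illustration, not data.

```python
# Hypothetical comparison of pain tolerance between DRD4 genotype groups.
# All values are invented; "seconds tolerating a cold-pressor stimulus"
# is just one plausible operationalization of tolerance while goal-pursuing.
from scipy import stats

seven_r     = [41, 55, 38, 62, 49, 71, 44]   # DRD4 7R carriers (made-up data)
non_seven_r = [33, 29, 40, 36, 31, 45, 27]   # non-7R controls (made-up data)

# Welch's t-test: does mean tolerance differ between the genotype groups?
t_stat, p_value = stats.ttest_ind(seven_r, non_seven_r, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```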


Of interest to readers mostly for humor: I already know from an ethnic genotyping study that I have a DRD1 mutation, and although it was novel at the time to the big databases, it was in the middle of an intron so it probably has no effect. But I haven't yet been sequenced at my other receptors and plan to within the next two years. I would put good money on having at least one 7R. Given the greater food rewards, this only supports my argument that people with good manners just don't like food.

Saturday, April 17, 2010

NYT Article on Psilocybin Research

Read it here. It's good to see that laws which are meant to protect people's health are finally allowing important research to go forward instead of obstructing inquiry for no good reason.

Friday, April 16, 2010

Thursday, April 15, 2010

The Universality of Word Boundaries in Human Cognition

"...Twice I have taught intelligent young Indians to write their own languages according to the phonetic system which I employ. They were taught merely how to render accurately the sounds as such. Both had some difficulty in learning to break up a word into its constituent sounds, but none whatever in determining the words. This they both did with spontaneous and complete accuracy. In the hundreds of pages of manuscript Nootka text that I have obtained from one of these young Indians the words, whether abstract relational entities like English that and but or complex sentence-words...are, practically without exception, isolated precisely as I or any other student would have isolated them. Such experiences with naive speakers and recorders do more to convince one of the definitely plastic unity of the word than any amount of purely theoretical argument."

- From Language: An Introduction to the Study of Speech. Edward Sapir, 1921.

Tuesday, April 13, 2010

What Does the Autonomic Nervous System Lack? What Do Birds Have?

It's amazing that we're filled with nerves that can't directly provide us experience. When your stomach growls, it's because your autonomic nervous system is sending waves of contraction down your small intestine to clean out any food material that remains. But even though it's your own intestine, you have no power to start or stop this activity, as you may have discovered to your chagrin in a conference room at one point; you "find out" about your own organs' activity only indirectly, by hearing them as if they're coming from someone else, or feeling them incidentally in consciousness-impinging somatosensors on adjacent muscle or skin. This becomes even stranger when you ponder that we have more autonomic motor neurons than we have consciousness-impinging somatosensors. And yet, somehow, this mass of neurons certainly doesn't seem to be conscious. Why not?

I haven't seen any philosophical explorations of the nature of consciousness in terms of the ANS, but it may be profitable to ask what the autonomic nervous system lacks that the central nervous system has. Clearly a large network of neurons is necessary but not sufficient.

The question is not just why we can't consciously form migrating motor complexes in our intestines. They're connected to the wrong part of the brain for that, and there is no instance of smooth muscle which is under voluntary control. (If you can voluntarily move a muscle, it's striated; most of the rest are smooth, with the notable halfway exception of your heart.) As a speculative aside, smooth muscle is structured in a seemingly jumbled way relative to the machine-like geometry of striated muscle, but I wonder if this says more about the way we perceive patterns spatially and temporally than it does about the respective complexity of the tissues. That is, our space and time perception has developed for obvious reasons on the same scale as the voluntary movement of our bodies - of our voluntary nervous system. But the world sometimes becomes alien and even incomprehensible when we look at satellite images or super slow-mo videos; maybe in a real and objective sense there is a logic to the organization of smooth muscle but our pattern recognition filter is not designed to see it. After all, it took until the 17th century for someone to think of integrating information into a graph; there are many accurate ways to represent data coming in from the world, even if our consciousness doesn't automatically tie stimuli together that way.

Birds are another interesting problem. Where the ANS is missing a function (consciousness), birds are missing a structure, but still have the function. Even if they aren't conscious, birds are certainly intelligent, yet they have only a very thin cerebral cortex. Learning in birds occurs in the Wulst, within the corpus striatum. The cognate structures in mammals are the basal ganglia, which are islands of gray matter near the bottom side of the brain, surrounded by white matter. In mammals, they do have roles in learning and decision-making (especially the caudate nucleus and controversially the subthalamic nucleus, respectively) but they are certainly not sufficient. Why is the corpus striatum in birds adequate for learning? What design features does it share with, and how does it differ from, the cognate structures and the cortex in mammals?

Saturday, April 10, 2010

Semantic Reasoning, and the Spectrum from Heuristics to Delusions

To what extent are heuristics (especially confirmation bias) just milder forms of delusions? How can we distinguish the two, how can we measure them, and how can we (or should we) treat them?

Delusions are fascinating in that someone with delusions has exactly the same sense input that the rest of us do. Unlike a schizophrenic who hears voices that no one else does, or someone who's taken LSD and sees lights and shapes and animals, a delusional person doesn't see or hear anything different. What he does is take the same sense experience the rest of us have, and interpret it differently. He might have heard the same group of schoolgirls laughing on the subway that you or I do, but where you or I are merely annoyed, the delusional person knows that the schoolgirls are laughing at him, because they're part of the conspiracy. A delusional person engages in top-down thinking, organizing their experience of the world around pre-existing beliefs. Psychiatrists commonly state that delusions are notoriously difficult to treat. You can give a full-on hallucinating schizophrenic medicines that will improve her positive symptoms, but there is no drug available that can dislodge a specific belief that doesn't accord with reality.

In fact there is a spectrum of false beliefs. On one end, we have flawed human cognitive shortcuts (heuristics) like confirmation bias, leading to small and often temporarily held, utterly inconsequential false beliefs. I guarantee that you have such quick-and-dirty false beliefs every day of your life, as do I, as does every human who has ever lived, but they're temporary and small and if they run head-first into information that doesn't seem to fit, we throw them out without noticing and adopt a new belief.

But from there we move on to the realm of over-attached ideas, and from there to what most of us (except the delusional) would see as full-on delusions. At first it might seem that a behavioral definition would be the quickest way to measure this spectrum, but there are plenty of people with delusions who can hold down a job and keep the lights on without annoying the neighbors too much; but where and how to draw lines is not an academic question. Psychiatrists have to decide who needs treatment and who doesn't. Let's say your uncle insists that the Apollo moon landings were a hoax. How is this affecting his life? He makes coworkers roll their eyes at lunch and that's about it? Probably doesn't need treatment. On the other hand, imagine you go home and your mother tells you that the neighbors are spying on her and bugging her house and even sent people to follow her on her vacation to Tahiti; she becomes quite agitated when you ask her for evidence, and she insists soon she'll be forced to do something about it. You probably do want her to see somebody about this. And in addition to having a yardstick for what caretakers should do about delusional beliefs, there is the extremely interesting question of why in some people, the gain on the pattern recognition filter seems to be set too high, and they slide down the far side of the distribution from heuristics into delusion.

Although it would be difficult to operationalize the following approach ethically, we might be able to get a quantitative answer for how severe the heuristic/false belief/delusion is based on how much risk the believer is willing to take for it (how it affects their decisions); that is, on what confidence they place in the belief when there are consequences, and how much suffering they're willing to endure for it. That means measuring adherence to delusions in the face of negative reinforcement (or missed rewards). That is to say - would the delusional person make a large bet on their beliefs, with generous odds for their opponent? (If they believe it's clearly true, why not give good odds to the opponent(s), to draw in more suckers?) Would they plan a legal strategy or medical procedure based on this belief? Of course, people do make these kinds of decisions based on false beliefs all the time, which shows the extent to which they take them seriously.
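
As a minimal sketch of this betting-odds yardstick: the mapping from accepted odds to implied probability below is standard bookmaking arithmetic; the function name and the example stakes are my own illustration.

```python
# Belief strength inferred from the odds a person will accept.
# Someone risking `stake` to win `opponent_stake` breaks even only if
# their belief is true with probability stake / (stake + opponent_stake).

def implied_confidence(stake: float, opponent_stake: float) -> float:
    """Minimum probability the bettor must assign their belief for the
    bet to be rational in expectation."""
    return stake / (stake + opponent_stake)

# A believer offering generous odds (risking $1000 to win $10) is acting
# as if they hold the belief with ~99% confidence.
print(implied_confidence(1000, 10))   # ~0.99
# Someone who will only bet at even money is claiming no more than 50%.
print(implied_confidence(100, 100))   # 0.5
```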

There is also a kind of natural selection argument to be made for long- and strongly-held false beliefs. That is, those false beliefs are most likely to survive over time which are attached to behaviors that keep them from coming into contact with clearly opposed reality; you wouldn't have a false belief for long if the belief didn't have these defense strategies. Consequently, delusional people often find ways to avoid entering into arrangements that directly subject the belief to scrutiny (for example, these very sorts of bets or experiments; the contorted explanations of why they won't enter these agreements are a dead give-away). This keeps us from using the previous approach to measure delusional strength, but then possibly the intensity of the protective behavior could be measured. Phobias are similar in that there are also elaborate recursive defenses erected around them; for instance, not only will a severe butterfly-phobic person refuse to talk about butterflies, she will also refuse to talk about having a phobia of butterflies, and refuse to talk about the fact that she won't talk about the phobia, etc. Operationalizing this approach ethically in the laboratory or clinic remains a problem, but people do the experiment on themselves voluntarily. People do in fact risk and lose their health, livelihood or life savings on delusional pursuits; this is why they are treated.

So far I've discussed false beliefs on a spectrum of apophenia, the imposition of patterns on information when it isn't justified, from minor heuristic mistakes to full-on schizoaffective delusions. Humans have developed the unique skill of semantic reasoning, a neat trick that allows us to glean more information from the world than just what our direct senses provide. The unique problem that comes with that skill is that we can make mistakes in those chains of non-sensory association but remain unaware of them.

It bears emphasis that apophenia is a basic activity of human cognition - we face the world with a set of pre-existing ideas, and only very rarely do we independently form a coherent new concept to explain what we've encountered. This is hard. The overwhelming majority of our concepts, even for the most original thinkers among us, are taught to us through language by other humans. Although apophenia implies no requirement for the imposed pattern to be pre-existing, in practice, people don't constantly impose brand new (unjustified) patterns on noise but rather filter everything through a top-down principle that's already there; confirmation bias is therefore a special case, although the most common form, of apophenia.

To see semantic reasoning in a fossilized, easily studied form that shows confirmation bias in spades, try analyzing the rhetorical structure of the arguments you hear over the course of a day (I'm not talking about Aristotle, I mean listening to people at work or in front of you at the grocery store). Stephen Toulmin took an inductive approach to rhetoric and showed that, when making arguments, what humans really do (almost always) is start with the end in mind, and get across to it from their premises on whatever rickety and incoherent rhetorical bridge they can put together. This is how humans actually make arguments, even if it's not an effective way to get at the truth. While it's true that humans do sometimes begin a chain of semantic reasoning without a conclusion in mind, this is a vanishingly small fraction of human semantic reasoning, even in people with good critical thinking skills who are paid to do it. (Critical thinking and self-criticism can be thought of as a form of recursive semantic reasoning that we've been forced to develop to avoid going off the rails constantly.)

It's exactly this semantic reasoning ability that begins to overtake sensory input the further we get toward the delusional end of the spectrum. To test this model, it may be productive to ask:

1) Whether individuals who grow up speaking more than one language are any less likely to become delusional (controlling for intelligence). Since concepts and definitions of words are not exactly analogous between languages, if delusions result from a flawed semantic reasoning process, the cognitive coexistence of 2 or more languages may offset errors.

2) Whether otherwise functional delusional individuals have more difficulty modeling false beliefs in others. This brings up the question of the overlap between delusion and autism, since one of the principal features of autism is the inability to model others' beliefs, especially false ones.

3) Whether individuals with language deficits are less likely to suffer from delusions.

4) Whether delusions and hallucinations are really two entirely different phenomena with different pathologies; this model predicts that delusions and hallucinations are two different phenomena and that there shouldn't be much overlap between the two (no spectrum). After all, if you believe someone is screaming in your ear that the house is on fire, the rational thing to do is run out of the house. Hallucinating people can sometimes be said to react rationally to false stimuli, as opposed to delusional individuals, who do the converse: react irrationally to true stimuli.

5) Lysosomal storage diseases have controversially been argued to be selected for by heterozygote advantage (increased semantic reasoning in heterozygotes). If this is ever established by direct testing of heterozygotes, it would be productive to see if increased semantic reasoning ability has an effect on risk of developing delusional beliefs.


Other Characteristics of Delusions

1. Evangelism. In addition to apophenia, false beliefs that we would normally categorize as delusions often have a compulsively evangelical component. That is, your neighbor insists that the town is poisoning the water supply, and what's wrong with you that you can't see it!? You must believe it! In fact this evangelism extends right up to and through serious consequences, like loss of jobs or relationships. How does this differ from non-delusional false beliefs? Let's pick a belief of mine that I (of course) think is true, but which large numbers of people think is false, that being my position on property dualism. However, if tomorrow I awoke to find that this had somehow become an offensive taboo topic, I would decrease my discussion of it (even if as an oppressed minority I would start working behind the scenes to make it acceptable again). As it is now, most people just don't care, so I generally don't bring it up other than with neuroscience students, philosophically-minded acquaintances, or on my blog.

Most people do hold beliefs which a) are not "mainstream", b) about which we wonder "what's wrong with people" that they don't agree, and c) that we do "evangelize" about - but we can shut up when we need to avoid boring or frightening people, or jeopardizing our careers. Delusional people often have trouble with this restraint, even if the subject of their delusion is something with no immediate threat to their or anyone's safety. (The topic of all humans' epistemic intolerance, far out of proportion to any threat to personal health and safety, is certainly a fertile topic.)


2. Social Context. Part of the official psychiatric definition of delusion contains, strangely enough, a reference to the culture of the people putatively experiencing the delusion. That is, it's not a delusion if everybody where you live believes it. Suffice it to say, that's strange. While I don't intend the post to be an argument against religion, not to address the culture-specific nature of this definition is to ignore the elephant in the room when we talk about delusional beliefs. There are some delusions that individuals develop all on their own, and these "stick out", because they're not culture-bound. Then there are delusional beliefs that are taught. Some of these exist in isolation ("black cats crossing the street in front of you cause bad luck") and some of them are deliberately reinforced by institutions and exist in complexes with other beliefs ("bad things that happen to you now are the result of bad things you did to others in a previous life"). Without arguing that all religion is delusional, believers and non-believers both can agree that some beliefs of some religions certainly are delusional, and while it's sometimes useful to be politically correct about it, no, their kids don't really recover from an illness because they set some photographs in front of a statue.

To illustrate the silliness of it, this means that someone in the U.S. who induced labor early to avoid the bad luck of delivering a child on 6/6/06 is not delusional (because lots of other Americans believe that number is bad, and lots of people actually did this!) but someone in China who did the same thing would be. It's also worth asking what this definition says about people who have extreme non-mainstream beliefs for which they can produce evidence. Was Galileo delusional? It seems a very short step from this to "the majority is always sane".

Of course there's a difference between a psychological definition of a belief as "not [officially] delusional" vs. recognizing it as true. We can recognize the pragmatic aspect of clinical practice and even a need for some political correctness to avoid seeming threatening to the public; to show up at Fatima and start prescribing antipsychotics would probably not get very far, and these people are often functional as part of a large group that shares the delusion. But it still seems prudent to remove this part of the definition of delusional, and make it a practice to categorize some people as "delusional but functional within a culture complex, therefore inadvisable to treat". Naive about the practice of medicine though I still am at this point in my young career, it seems like this would be an honest, appropriate and accurate thing to write in a chart.


3. Self-Reference and Emotional Content. Has anyone ever delusionally believed that a casual acquaintance is being pursued by the CIA (as opposed to the CIA pursuing the delusional person him or herself?) Or has anyone ever become obsessed with spreading the gospel that Kenmore refrigerators in 1999 used 1 1/4" tubing instead of 1 1/2" (and what's wrong with everybody else that they don't know this?!?) I doubt that these kinds of delusions are common; passionately-held beliefs require some degree of inherent excitability. Threats to personal or public safety, or paradigm-shifting facts about the country or our history seem to make frequent appearances as delusions. One telling exception is that there is a class of people probably more in the over-attached idea category rather than fully delusional who we call "crackpots". These are the people who claim to be able to show you that they've disproved relativity, or the Basque language is a form of alien mathematics. Their appeals are to a narrow and obscure slice of the public but tellingly, they focus on the high status people in that field, from whom they demand recognition. Pascal Boyer has an excellent piece on crackpots; similar status-seeking behavior can be found right at UCSD, as it turns out.

The idea that delusions require emotional content is consistent with their position as the organizing principle of semantic reasoning in delusional people, and with what we know about the effects of traumatic experiences on brain architecture and cognition. Building on work by Tsien, Josselyn, and McGaugh, Kindt et al showed that human fear behaviors connected to a learned stimulus can be erased with the off-label use of propranolol, a beta-adrenergic antagonist that's already on the market. If delusions are organized in a similar way, perhaps administration of propranolol during behaviors driven by delusions could have a similar benefit.


4. The Over-Extension of Agency Onto the World at Large. To delusional patients, the world is often purposefully organized toward some end, either very positive or very sinister - otherwise the delusion wouldn't have a strong emotional component. What this means is that everywhere they go, the world itself watches them (with bugs, cameras, and secret agents, or with a powerful protective charm that makes them successful and keeps them from getting hurt in bad situations). These people's agency detectors are over-active.

It's interesting that the dissociative anesthetics are currently considered the best pharmacologic model of schizophrenia and that one of the toxicities of chronic ketamine use is over-active agency detection (for a good example, see the delusions of John Lilly, M.D., of the Solid State Intelligence). While serotonin agonism has been mostly abandoned as a pharmacologic model to study schizophrenia, it should be noted that users of the 5-HT2A agonist DMT report as a residual toxicity a sense of being watched by a disembodied mind. It's worth developing a way to measure this symptom and tracking the effect of serotonin antagonists on this symptom in delusional patients. Again returning to cases of autistics with delusions, it may also be instructive to see if delusional autistics experience these symptoms at the same rate as non-autistic delusional patients, since autistics are known for having an under-active agency detector.


CONCLUSION

As animals that make sense of the world not just through their senses but through chains of semantic reasoning, all humans commit confirmation bias errors that lead to false beliefs. For most of us, these errors are transient, do not dramatically affect our behavior, and can eventually be corrected by further information. For some humans, semantic organization of perception takes on too large a role. Cognition becomes predominantly top-down, and these humans become more heavily invested in false beliefs, to the point where harm to the health or property of themselves or others can occur. People at this end of the spectrum are delusional, and in addition to apophenia their false beliefs have other characteristics. These beliefs become central organizing principles of their cognition in part because they are strongly associated with highly emotional behaviors. Sometimes, these beliefs are reinforced socially by others who share them; the current official definition of delusions is somewhat disturbing with respect to this topic.

We can use willingness to adhere to the delusion in the face of financial or physical harm as a measure of severity. Ultimately, there must be a physical correlate in the brain for persistent, harmful, false beliefs, but efforts to detect or image them (should they ever seem feasible to explore) should be focused on treatment.

For fear memories in general we already have some indication of the physical correlates of the fear-association, as well as an experimentally verified way to erase memory-associated fear behaviors. This same therapy may be productive for delusions. It would certainly have fewer side effects than current antipsychotics. It is also worth asking whether there is a genetic contribution that predisposes individuals to delusion.

Friday, April 9, 2010

Transporters, Zombie Neurons, and the Hard Problem of Consciousness

Most philosophical materialists who investigate the hard problem of consciousness accept that consciousness is associated with matter being arranged within certain bounds: a central nervous system with inputs. Change that arrangement, and you will change or destroy consciousness.

This leads to thought experiments like the transporter problem. If a device exists which can break down the atoms that compose your body and send them somewhere else and recompose them in the same form, many (most?) thought experimenters would argue that the person produced at the other end is not you. This is my position; you're dead the moment that you get broken down. Certainly the person at the other end will step out with all your memories right up until the moment of breakdown and say "Was I ever silly to have doubted that!" - but you are dead. If this seems unclear, imagine several possibilities.

First is that the transporter malfunctions and sends a copy of you (same atoms or not) to the other destination, without breaking you down in the first place. Are you suddenly seeing out of two pairs of eyes simultaneously, one at each location? Another way to imagine it is if it were very low-tech: you're broken down into atoms, and someone records all the information about the arrangement of the atoms in your body with paper and pencil. This record is then sent by Pony Express from St. Louis to San Francisco, in which city chemists laboriously reconstruct "you" from the formula. Again, assuming the technology to perform such a feat chemically ever exists, certainly the person who wakes could honestly say "Wow, the last thing I remember was being broken down in St. Louis, and here I am in San Francisco two years later!" First, the (real in this case) continuity of memory cannot itself be an argument for the continuance of subjective experience - if we load up someone else with your memories, does that make them you? If we load up someone with false memories, does that mean the false life thereby represented actually happened, and the person in those memories is now reified? Certainly in the Pony Express transporter, a person will wake up in San Francisco with your memory, who thinks s/he is you, and is physically identical - but you won't ever wake up again once they take you apart in St. Louis.

Would it make a difference if the chemists in St. Louis send not just the instructions, but vials of the actual atoms of carbon and nitrogen and oxygen they got from your tissues? Self-evidently not, and atoms are equivalent anyway (unless we're going to suppose some elan vital or special consciousness juice for them. Which we're not.)

It's also worth comparing your post-disassembly fate, as a conscious human, to that of Hernán Cortés's ship. He had his men disassemble the ship at the shore of the Gulf of Mexico and carry it inland and uphill hundreds of miles, to be reassembled in Lake Texcoco to attack the then-island capital of Mexico (yes, this is really what they did!) There was a ship in the Gulf; then a bunch of wood and nails and rope, but no ship, getting carried up from the lowlands and back down to Tenochtitlan; and then again a ship in the lake. Where was the ship during the trip? A ship is just a certain arrangement of elements - so it was nowhere. The crucial difference is that a ship has no experience; there is nothing it is like to be a ship, so there is no property to be lost in the transport, regardless of whether they send the original wood and rope or just a set of instructions so the conquistadors can build another ship when they get to the lake.

The problem with the transporter thought experiment is this. If our continued experience is a product of the continued functioning of a specific material arrangement, how can any of us exist for more than a split second? Every second of every day some of your brain cells are dying, some of them are building new connections, molecules are being delivered or carried away by the cerebral vasculature - that arrangement is constantly changing. And yet, of course, despite each specific material arrangement of the brain being constantly destroyed moment-to-moment, we seem to be continuously conscious.

There are several possible ways out of this conundrum.

- The transporter-kills-the-old-you position is incorrect.

- Our seeming to be continuously conscious is an illusion. Each of "us" exists as a conscious entity only for mere fractions of a second, but we cannot tell, because we still have the entire memory of the last incarnation, and of course we're not conscious of our own immediate extinction.

- There's a limit on how much rearrangement can occur to your brain and allow continuity of experience. On one hand, these sorts of absolute differences in kind (rather than spectra of degree) are usually suspect. On the other hand there clearly are limits to the changes that can occur to tissue and allow persistence of the self.


The last option is the most attractive but it suffers from some of the same problems as does property dualism (a more respectable name for panpsychism). For example, if your brain as it is in this nanosecond is conscious, presumably that doesn't preclude your brain as it is in this nanosecond minus exactly one neuron in the dorsolateral prefrontal cortex from simultaneously being conscious (and so on in some enormous factorial function that gives all the possible combinations therein). If this is true and each arrangement is conscious discretely, either a huge number of conscious entities exists co-dominantly within any one brain, or the vast number of human consciousnesses are locked away, looking on from inside as one lucky combination of cells interacts with the outside world.
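
To put a rough number on that "enormous factorial function" (a back-of-the-envelope count only, using the commonly cited figure of roughly 10^11 neurons in a human brain):

```latex
% Arrangements differing from the current brain by removal of exactly k neurons:
\binom{N}{k} = \frac{N!}{k!\,(N-k)!}
% Summed over all possible k, the number of distinct sub-arrangements is
\sum_{k=0}^{N} \binom{N}{k} = 2^{N}, \qquad N \approx 10^{11}
% So if each sub-arrangement were separately conscious, something on the
% order of 2^(10^11) consciousnesses would be co-resident in one skull.
```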

Asking these kinds of questions allows the problem of philosophical zombies to invade one's own skull. There is no way for any of us to determine that any other living thing has subjective experience of the world. As a thought experiment, imagine wiring your own brain up to a friend's, from whom you would receive all of their impressions of the world (a la Being John Malkovich). You would find that you were certainly having an experience not only of your friend's senses but of thoughts, memories, reactions to those subjective experiences, and so on - but the issue is that again, you have no way of knowing whether you're "contaminating" your possibly-zombie friend's brain with your subjective awareness, whether anybody was home in the first place, or whether you're just experiencing dead inputs that your own consciousness is imbuing with subjective experience. You could ask the same question of an inert, 100% certainly-unconscious piece of matter like an eyeball. On its own, an eyeball is not conscious. Wired into your brain, it is providing the raw stimulus for a subjective experience of vision, but only that - the eye itself cannot be the source of experience, only a source of information, if the two have no overlap. Given that on its own the eye is just a photochemical transducer, we might even reasonably ask why we would consider the eye to be the generator of experience, and not the flowers the light is reflecting off of, or the sun that's producing the light in the first place.

We can ask the same question about wiring up to your friend's brain or a new eyeball as we can with new neurons. Imagine the frightening event that you have a severe stroke, and lose large portions of your frontal lobes. Now also imagine that after an intensive surgery, your brain is repaired with frontal lobe sections from an organ donor. The same argument applies; is the new lobe capable of experience on its own, or did your remaining brain "contaminate" what was previously a zombie lobe? (And does consciousness always win, or might a zombie lobe overpower the conscious lobe into zombiedom? If a question seems silly, it probably is; such is usually the case when we dig deeply into the assumptions that make us accept differences of kind in nature.) More realistically - you're conscious now, yet your consciousness is built out of parts that are certainly not conscious, or at the very least much less conscious than you are. And that's not all - "you" were not always conscious, and in fact the arrangement that is you was not even always capable of consciousness. There was a time when you were an embryo and did not even have neurons, much less a brain.

This has become a reductio ad absurdum. The question of philosophical zombies is meant to question the basic assumption of materialist accounts of consciousness, that a certain kind of arrangement of matter is what creates (or enhances) consciousness; if two identically constructed entities differed in their degree of consciousness, the materialist account would fall apart. But once we assume that there is even one conscious entity, then to assume that there could be zombies, or even that a part of the world is unable to provide subjective experience, we must make arbitrary distinctions between the outside world and the central nervous system - while forgetting that the CNS is made up of discrete combinations of elements which in isolation are certainly not conscious, and which themselves originated from an experienceless state. Why assume another human brain that you're wired to, and which is now a causal factor in your experience, is a unique topic for the question of zombiehood, and your contamination of it with your "core" of consciousness? By having an experience of a flower, are you not contaminating the flower with your consciousness in exactly the same way? If it was not capable of interacting with you in an experiential way to begin with, even if it's depending on your eyes and brain to complete the experiential equation, you would not be able to have a conscious experience of it. Extended most generally, all information must be able to provide experience.

This is one approach to solving the problem of second-to-second changes, or of combinations differing by a single cell, each producing a discrete consciousness. Any combination or arrangement whose elements have the basic requirements for consciousness is conscious (there can be no zombies then). Although consciousness can be profoundly reduced as with transporter decompiling, as long as the core basic requirements of the arrangement of entities are met, there will be continuous unified experience. This is also consistent with the panpsychist or property-dualist position advocated by Chalmers, that experience is a basic property of reality, and like gravity, consciousness changes its quality in response to aspects of matter around it.

Thursday, April 8, 2010

The Tactile Binding Problem and Literacy as Synesthesia

The term "binding problem" is used in several ways, which as is often a problem in a convergence discipline that is also "neat", muddies the waters. The clearest meaning is the limited one, where investigators are asking questions about how we can perceive a specific coherent stimulus with specific multiple properties where those properties are received and processed by discrete channels; color and shape in visual perception of an object is a common arena for these questions. In other words, when you look at a green bottle on a black table, you don't have to decide whether the bottle or the table is green or black (or whether the green thing is the bottle-shaped surface or the flat square surface). In fact you can't not see the bottle-shaped surface as green; you have no choice in the matter, since the two properties are combined before they enter your experience.

Sometimes people also use the term binding problem in a more general sense, of how all these stimuli are combined to form our full coherent conscious experience of the world. While a worthy goal, it's easier and more productive at this point to ask questions about the limited definition of the first sense of the term. Read about or click through to one such recent productive investigation here.

Another binding problem that I've not seen investigated is the tactile binding problem. Areas of skin on our torsos are innervated by discrete nerves emanating from specific positions between our thoracic vertebrae - yet to learn this, humans needed to study anatomy, rather than perform introspection on our experience. When something touches your chest and moves down to your navel it's traversing 10 separate nerves, but as with visual binding, there's no sense of "transfer" between channels; it's completely continuous. Why? Is the answer the same kind of answer for visual property binding pairs? Maybe that's another potential angle of investigation; maybe easier.

Note that there are some forms of possible, non-automatic pattern recognition (not "mandatory" pre-conscious pairs) that are highly semantic. That is, I go for a walk in the canyon behind my house, and if I'm paying attention, I see tracks in the dry dirt; if I'm paying more attention, I can differentiate them as coyotes rather than domestic dogs; and if I'm really concentrating I can determine that there were two of them, they were there about dawn, were chasing a rabbit, and ran down into the creekbed after it. Unlike sensory binding it would be easy for this process to be derailed. That is, imagine that I like bunny rabbits and find it unpleasant to think about their being devoured. If I see all the tracks and begin to suspect that Peter Cottontail met an untimely end on this very trail I could choose to distract myself from these highly voluntary semantic pattern-recognition efforts, and never consciously realize what happened. But if I come upon the coyotes at the start of their meal, then, no matter how much I might want to avoid it, I have no free will in the matter; my brain will bind "red" and "bunny rabbit shape" and I will be conscious of it.

Of course, my semantic reasoning about these marks on the ground could have been all wrong, and in fact this is much, much more likely than when I preconsciously integrate "bottle-shaped" and "green" or "bunny rabbit-shaped" and "red". Semantic reasoning allows us to have false beliefs, but even the integration of direct sensory input is not without glitches; optical illusions do occur. Both means of perceptions involve integration of discretely sensed information, it's just that one of those integrations is occurring voluntarily, consciously, and semantically.

What's more interesting (and the subject of another post) is semantic learning that's so well-conditioned it has become automatic and pre-conscious, for example writing to literate people. You can't look at an "A" shape without thinking of the letter, even if it's a natural rock formation that humans have never touched. You can't see the word "cat" without thinking of the animal. (If you disagree, email me with your perfect Stroop test score.) There are various forms of synesthesia, many of them having to do with graphemes, and I submit that literacy is a form of conditioned grapheme-sememe synesthesia that only seems non-miraculous because we've been doing it for a few millennia. (A sememe is just a unit of semantic meaning.) Connections between various forms of synesthesia and reading/writing ability are therefore interesting, and anecdotally there is a positive association between dyslexia and synesthesia.

I Need Mie Gakure! Where Is It!?!?

Two things about Mie Gakure: 1) It appears to be four-dimensional Crystal Castles. 2) I need it now. It was mentioned in XKCD and I'm not the only person freaking out about it; article with video demo here. I don't know whether it will help in discussions of free will but, more importantly, it will be cool.

I wonder if this was the first product placement in XKCD?

Wednesday, April 7, 2010

Piagetian Stages and Animate-Inanimate Distinctions in Language

The animate-inanimate distinction is one of the more strongly recurring parameters in languages. The distinction has varying levels of importance and morphological encoding in languages, from purely semantic as in English (the tell-tale is whether we assign gender) to more complex grammatical structures. Inuit languages even have a fourth person required when there is action between an animate third person agent and an inanimate patient ("He moved the boat"). In many languages, inanimate nouns cannot be the agents of actions against passive patients (you cannot say "The rock hit him", you must use another construction analogous to "He was hit by the rock").

Again we should take advantage of differences in neurology between humans, including those associated with pathology as well as those which we see in the normal range of development. Piaget noted that from the beginning of language production until about age 6, children indiscriminately assign animacy to inanimate objects: the sun shines because it's "happy", the toilet "wants" to suck them in. If we're honest, adults resort to this kind of thinking as a coping strategy when the cognition gets tough; medical students frequently hear during lectures that sodium "wants" to flow into a neuron.

But sodium's desires are a consciously used linguistic conceit, and we would expect that a neurochemistry lecture in Inuit would unmysteriously use animate rules for sodium until switching back after the didactic task is finished. Inuit kids at the toilets-sucking-them-in age, on the other hand, are unable to slice the world into these distinctions at all. The animate-inanimate mistakes made by children speaking these languages at this age may therefore be instructive in investigations of human cognition and the physical correlates of the animate-inanimate distinction.

Tuesday, April 6, 2010

Intelligence, Language and Behavior in AIs and Animals

If someone came to me with a machine that could survive on its own for years in the canyon behind my house, not only finding its way around but scavenging fuel and meeting others of its kind to make more of itself so that there was sustainably and indefinitely a population of these machines back there in the brush, I would be damn impressed. I would even be prepared to call this invention "intelligent". Of course there really are such machines down in the canyon already; they're called coyotes, and they're pretty clever, but they aren't about to pass any Turing tests.

Cognitive philosophy discussions are often confused by equating "language" with "intelligence", or even assuming language is a good quick-and-dirty proxy indicator. Turing himself stressed the dirtiness of the measure at the beginning of the article where he introduced the test and didn't argue that a command of language necessarily required the commander to be thinking and/or conscious. Teaching a box to repeat superficial pleasantries for a finite duration, without that language relating to spontaneous behavior, seems like a fairly superfluous parlor trick and one which doesn't have much to say about intelligence. It's worth asking whether programmers working in a more highly inflected language like Russian are as impressed with this goal; if not, maybe they realize populating a structure with terms whose semantic content doesn't violate category expectations too badly doesn't prove much. Great, you wrote a program to tack case endings on the thousand most commonly used nouns, and some Markov chain rules to decide when to use them!
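
For what it's worth, here is that parlor trick in miniature: a bigram Markov chain over a toy corpus (the corpus is invented for illustration). It emits plausible-looking pleasantries with no model of meaning whatsoever.

```python
import random

# Toy "chat" corpus; any small pile of logged pleasantries would do.
corpus = ("how are you today . i am fine thanks . how is the weather . "
          "the weather is nice today . i am glad to hear that .").split()

# Bigram transition table: each word maps to the words observed after it.
chain: dict[str, list[str]] = {}
for w1, w2 in zip(corpus, corpus[1:]):
    chain.setdefault(w1, []).append(w2)

def babble(start: str = "how", length: int = 12) -> str:
    """Walk the chain, picking each next word at random from observed successors."""
    word, out = start, [start]
    for _ in range(length):
        word = random.choice(chain.get(word, ["."]))
        out.append(word)
    return " ".join(out)

print(babble())  # superficially conversational, semantically empty
```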

It seems clear that in the natural world some form of consciousness and spontaneous problem-solving behavior preceded language. There are animals from several taxa with excellent problem-solving skills (other primates, bears which arguably even have memes, crows, even octopuses) but without language. Leaving aside the hard problem of consciousness, we can measure behavior.

It's perhaps understandable that we're approaching this question bass-ackwards since instead of the natural world where language developed very late on top of and in context with self-replication capabilities, our toolkit is a class of entities which began as symbolic, syntactic rule-followers. But for more fruitful attempts at AI, we would do better to focus on building generalized problem-solvers and forget about chatbots. I suspect Turing would agree. That approach is the most likely one to effectively make the question of whether a machine can think seem as moot as whether a submarine can swim.

A Sketch of a Neuro-Linguistic Theory

Below is a brief sketch of a neuro-structural theory of language with a few supporting comments. Following that is an outline of a program for exploring questions in historical linguistics. If similar work exists or you have thoughts (critical or otherwise) I would greatly appreciate hearing them in the comments.


1. Distinctions occurring universally or re-developing frequently in human language will have physically detectable neural correlates. These correlates should be further investigated by pharmacological disruption of subsystems or review of patients with neurological deficits.


2. The basic unit of human language is the noun. All phrases are noun phrases. It is difficult or impossible to derive coherent information from isolated non-nouns, except for direct sense impressions ("green", "loud"). This has strong implications for cognition.

2-1. Chemical and electromagnetic imaging should eventually reveal nuclei or networks of cells corresponding to specific nouns which, when activated by an utterance, activate earlier than the cells for other words in the utterance, regardless of the language's word order. Children learning a language produce nouns first.

2-2. While nouns are often grouped into genders or other categories based on some attributes, these groupings are invariably irregular, and it is better to use a neutral term like "noun class". Nonetheless, frequently re-occurring categories will have some neural correlate (for example, animate/inanimate oppositions).

2-3. Possible problem for the theory: supposedly the Athabaskan language Hupa of Northern California has a very limited number of nouns (a few hundred), the language being constituted primarily by verbs [reference to come]. This is easily the strangest thing I have ever heard about a language and is reminiscent of Berkeley and Borges [reference to come]. However, I believe there are no native speakers left, I've not yet seen the primary source, and the facts I've been able to gather are scanty. My hunch is that either the primary source was a century-ago grad student who got a little excited or that the primary source has been misinterpreted. If this grammar is accurate, however, my noun-based theory is undone unless there are somehow basic neurological differences between the Hupa and the rest of us. Further militating against such a radical innovation is the fact that no other Athabaskan language demonstrates such alienness, even those also located on the Pacific coast like Port Orford and Tlaskanai [reference to come]. This is a critical point of analysis for the theory, and if any speakers do exist, it would be worth doing neural and genetic investigations to see how they differ.


3. There are primary modifiers that directly modify nouns, in traditional grammar referred to as verbs and adjectives.

3-1. Primary modifiers encode information relating to the noun they modify (number, noun class, case, time, and intention). In many languages (e.g., Japanese, Mohawk) time is encoded on adjectives, which is confusing to speakers of Western Indo-European languages.

3-2. Beyond their morphosyntax, the only distinction within the category of first-order modifiers is semantic, i.e., whether the modifier can mediate a relationship or property between nouns. If so, these are called transitive first-order modifiers; if not, intransitive first-order modifiers. In English, adjectives are intransitive primary modifiers that cannot be encoded with time information.

3-3. Even in languages like English or Spanish, where there is a class of primary modifiers not thought of as encoding time, there is a high degree of interchangeability, to the point where the adjective-verb distinction is unclear. In Western Indo-European languages these are participles. It is not possible to distinguish whether the last word of "She is finished" is an adjective or a verb, because the categories are unnecessary.

3-4. Reflexivity is a form of ergativity; in languages where both exist, like Greenlandic, the ergative blocks the reflexive morpheme. [reference to follow] If there is a "native state" of languages with respect to ergative-absolutive or nominative-accusative alignment, it is that all languages are actually stative-active and contain both systems, but where they are marked, one is much better developed than the other. The alignment system is always an extension of the animate-inanimate system.

3-5. Problem for the theory worth investigating: the relationship and storage of abstract terms in terms of the senses. To borrow and abuse terminology from analytic philosophy, if we understand "cat" as the primary "analytic" element and primary modifiers as parasitic on the noun's concrete qualities, then in neurological terms "cat" cannot merely be a network of primary modifiers, or they would be the elements. Note again that for any materialist theory of language there must be some physical collection of cells in some state in our brains that produces the semantic experience of "cat"; how it is arranged and constituted relative to these other terms is the question.


4. Languages have secondary modifiers, in English called adverbs, which can modify primary modifiers. Their marginal importance is highlighted by their having little or no morphology, fewer or no rules about order in the sentence, and their often being oddly affixed with cognition-related terms ("-mente" in Spanish, "-wise" in English).


5. There are non-content logical operators that mediate relationships between nouns. They are especially incoherent in isolation and are in fact always particles, even in languages that are not considered agglutinating.

5-1. There are two classes of non-content logic operators: those which give information about relationships between nouns (in English, prepositions and conjunctions), and those which do not (in English, articles; in Austronesian, focus markers, though these are actually the same word category). Logic operators which do not provide information about relationships between nouns often provide information about the importance a speaker places on a noun.

5-2. Logic operators are often marked to agree with the nouns they modify.

5-3. Belying their close relationship, logic operators from the two classes frequently merge; this would be better support for the theory if such mergers were shown to occur statistically more often over time than the ancestor words' co-occurrence and phonology would otherwise encourage.

5-4. Like other non-noun, non-primary-sense-datum words, these words are also meaningless unless they are attached to a noun. In that sense they are like particles, although they can have active morphology.

5-5. Languages with rich noun morphosyntax (particularly for grammatical role, i.e. case) will make far less use of non-content logic operators. Witness the parallel and independent developments of increased preposition use and the institutionalization of articles in Western Indo-European languages as case eroded (Old English to Modern English; Latin to Romance).

5-6. A physical correlate of operators is found in EEG studies. Normal English-speaking subjects show a smaller ERP on reading prepositions than on reading nouns, presumably because greater resources are required to recall nouns, which contain extensive learned, networked sensory content. This difference is not evident in schizophrenic English-speakers, who must expend the same electrical effort to recall prepositions as nouns. [Reference to be inserted.] Notably, schizophrenics are grammatically intact but semantically deficient, in terms of logical relationships of words, word choice, focus, and direction of discourse.


5-7. Language-deprived individuals retain an ability to produce discourse with adequate logical word relationships, word choice, focus, and direction; however, they are persistently unable to learn to use logic operators in the correct orientation to nouns, sometimes placing them adjacent to primary modifiers ("the ran"). [reference to follow] EEG studies are predicted to show the normal lower-than-noun ERP for operator words (but this study has yet to be done).

5-8. Schizophrenic speakers of languages with rich focus- and topic-marking systems are excellent cases for this theory, and for others which seek to investigate the relationship between language and cognition, because logical relationships in the speaker's cognition are more exposed. Washoe has a famously complex topic-marking system [reference to follow] but likely has no remaining native speakers. Tagalog or other Austronesian languages may be good alternative candidates.

5-9. It is logical operators that allow recursion. Operators can place phrases in subordination, as a primary or secondary modifier to nouns or primary modifiers respectively, or as equals (we call these operators conjunctions); a toy illustration follows below. Studies of the spontaneous production of recursion in schizophrenics may be useful.
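A minimal sketch of the claim in 5-9, with an invented three-word vocabulary (nothing here is real linguistic data): the only rule that re-invokes the phrase rule is the operator rule, and that alone is enough to generate unbounded nesting.

    import random

    NOUNS = ["dog", "storm", "canyon"]
    OPERATORS = ["and", "with", "near"]  # all treated as phrase-joining logic operators

    def noun_phrase(depth: int) -> str:
        """Base case: a bare noun. Recursive case: an operator subordinates two phrases."""
        if depth == 0 or random.random() < 0.5:
            return random.choice(NOUNS)
        op = random.choice(OPERATORS)
        return f"({noun_phrase(depth - 1)} {op} {noun_phrase(depth - 1)})"

    random.seed(1)
    for _ in range(3):
        print(noun_phrase(3))  # e.g. "((dog and canyon) near storm)"

Delete OPERATORS and only bare nouns remain; the operator is what licenses the subordinate phrase.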


6. Non-content, non-operator terms (exclamations and hand gestures) can be thought of as products of the autonomic nervous system; they can be trained, but their utterance does not carry semantic information, though they may be rich in social signals.


7. Personal pronouns in this framework are similar to their understanding in other theories. Their repeated re-development in human languages in similar roles, and their marking for number, gender, grammatical role, and social status, reveal underlying categories in human cognition that must have (eventually) measurable physical correlates in the nervous system.


8. Word order - word order in any individual is the result of inductive learning; i.e., raise a child hearing only English with VSO order and the child will produce VSO English (hypothetically; experiments testing this hypothesis would be informative).

8-1. There are languages that resist classification as having a specific word order and are best described in terms of statistics, for example Tsimane. [M. Gurven, personal communication] (This is not just the English situation, where discursive functions change word order: normally English is SVO, but we utter OSV statements for contrast: "I don't like spaghetti, but linguini I like.") This should be troubling to strongly rule-based word-order generativists.

8-2. The current theory predicts only that there will be a general statistical tendency across human languages for words to occur earlier in sentences the more basic they are to cognition. Consequently noun-before-verb orders will be more frequent, and nouns will more often than not precede first-order modifiers and particles.

8-3. Investigating the relationships between the most-permitted word orders is not necessarily productive (i.e., SOV languages tend to have postpositions not because of any structural necessity but because there is a plurality of SOV languages, due to the importance of nouns, and more languages have noun-initial phrases, so statistically we should expect lots of SOV and postpositional languages; similarly there is little information in a language like English having the noun before a transitive first-order modifier but after most intransitive first-order modifiers). Much more interesting are arrangements that are universally forbidden. Why are first-order modifiers never, in any language, further from the modified noun than logical-operator phrases acting as first-order modifiers? [Reference by R. Morneau to follow]



Underlying the whole program is the premise that the point of studying language is two-fold: first, to understand human cognition better; and second, to illuminate events in prehistory, a question which can also serve the primary goal. There are historical questions that are interesting for both of these reasons.


Some Investigations With a Novel Historical Linguistics Genetic Approach

A. An approach to determining genetic relationships between languages using phylogenetic methods on morphosyntax, rather than on phonology as with the comparative method, may be fruitful. Morphosyntax must maintain some level of internal consistency and is constrained by neurology, so it is necessarily a more conservative element of language and may enable reconstruction of relationships in deeper time than is possible with phonology-based methods, following the call of Nichols and others to focus on critical languages in the hopes that patterns of prehistoric migrations will be illuminated. Once the genetic patterns of well-attested languages have been reproduced by this method, it can be pushed further back for questions such as those that follow; a toy of the clustering step is sketched just below.
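The languages and feature values here are invented, and simple agglomerative clustering stands in for proper phylogenetic software; the sketch only shows the shape of the computation: languages as binary morphosyntactic feature vectors, distance as the fraction of features that differ.

    import numpy as np
    from scipy.cluster.hierarchy import dendrogram, linkage
    from scipy.spatial.distance import pdist

    # Columns: has_case, has_articles, verb_final, ergative, noun_class
    langs = {            # binary morphosyntactic profiles (invented values)
        "LangA": [1, 0, 1, 0, 1],
        "LangB": [1, 0, 1, 0, 0],
        "LangC": [0, 1, 0, 0, 0],
        "LangD": [0, 1, 0, 1, 0],
    }
    names = list(langs)
    X = np.array([langs[n] for n in names])

    dist = pdist(X, metric="hamming")        # fraction of features that differ
    tree = linkage(dist, method="average")   # UPGMA-style agglomeration
    print(dendrogram(tree, labels=names, no_plot=True)["ivl"])  # leaf order

A real attempt would need a curated typological feature set and a method that models feature gain and loss rather than raw distance, but the conservatism argument above is what would make the resulting trees reach deeper in time.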

B. What, if any, are the relationships between the current language families of Eurasia? How do these compare on the large scale to the genetics of the speakers of those families and the timing of the spread of important memes as shown by archaeology?

C. What, if any, are the relationships between the language families of North America? Given the likelihood that Native Americans are descended from an isolated population of roughly 20,000 that survived in Beringia and began their diaspora near the beginning of the Holocene, [reference to follow] it is likely at the very least that they spoke similar languages at that time, and that deeper connections to northeast Asia exist and could be elucidated.

D. A new approach to the sprachbund problem - can a morphosyntax-based genetic method distinguish between areal (lateral) effects and genetic descent, thus illuminating contacts in prehistory? It's well-known that morphosyntax is the last feature to be borrowed between languages in contact, possibly again because of the need for systemic consistency. After training such a system on the well-attested Balkan sprachbund, it could be applied to investigate possible Uralic contacts with Germanic as the motivation for Germanic's innovations relative to Indo-European, as well as the sprachbund identified in the south Cascades by Delaney.[Reference to follow.]

E. A quantitative approach to model language development over time in the Oceanic branch of Austronesian. The mostly isolated languages of Polynesia can be to linguistics what the Galapagos finches were to Darwin. Estimates of population size over time, travel between islands, and internal constraints of the languages' structures can be built into a model that predicts language innovation rate and type; a toy version is sketched below.
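Every parameter below is invented; the real modeling work is calibrating innovation and contact rates against population estimates and voyaging records. The toy shows only the mechanism: islands accumulate random grammatical innovations, occasional voyaging copies innovations between islands, and isolation therefore predicts divergence.

    import random

    random.seed(42)
    N_ISLANDS, GENERATIONS = 5, 200
    INNOVATION_RATE = 0.05   # chance per island per generation (invented)
    CONTACT_RATE = 0.02      # chance a voyage links two random islands (invented)

    innovations = [set() for _ in range(N_ISLANDS)]
    next_id = 0

    for _ in range(GENERATIONS):
        for isl in range(N_ISLANDS):
            if random.random() < INNOVATION_RATE:
                innovations[isl].add(next_id)   # a new grammatical innovation
                next_id += 1
        if random.random() < CONTACT_RATE:
            a, b = random.sample(range(N_ISLANDS), 2)
            innovations[b] |= innovations[a]    # borrowing via contact

    for i, s in enumerate(innovations):
        print(f"island {i}: {len(s)} innovations")

Divergence between two islands is then roughly the innovations they do not share, and the model's predictions could be checked against attested Oceanic subgrouping.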

F. Grammatical simplification of imperial languages. Latin's grammar simplified quickly during the Empire, and the Chinese languages were prefixing fusional languages as recently as twelve centuries ago. [reference to come] Is this a feature of all languages of multiethnic empires, i.e., did pre-classic Nahuatl require more grammatical decisions-per-syllable than the Nahuatl of Moctezuma's day? And what is the mechanism? (Current theory: non-native speakers moving into the polity introduce imperfections through contact via intermarriage and trade.)

G. What do the patterns of language evolution over time reveal about human neurology? What can we say about the biological basis of cognition based on the rate of recurrence of certain categories or grammatical rules? Bickerton [reference to come] began the investigation of this question in the context of creoles. With reference to creoles but more abstractly, are there general principles of what structural features survive or are innovated when two systems of rules, each with some requirement for internal consistency, collide? (Compare lateral transfer of genes in biology, or rule-based productive memes like music, for example the blues and rock scales in the West, which redeveloped pentatonic scales like those used by most human cultures.)

H. Experimental investigations of the neurologically-mandated structure of language. There are structures in human languages that, while possible and logically consistent, we never see. Attempting some objective measure of complexity, take two groups of subjects and teach each a constructed language (conlang). One group learns a conlang that mimics real grammars, the other a conlang that contains never-observed patterns. Use the same phonology, control for complexity, and see whether there is a difference in error-free production; a sketch of the comparison follows this item.
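The analysis end of such an experiment could be as simple as the sketch below. The error counts are fabricated purely to show the shape of the comparison; a real study would need power calculations, matched training time, and better controls.

    from scipy import stats

    # errors per subject after equal training time (invented numbers)
    naturalistic_conlang = [3, 5, 2, 4, 6, 3, 4]
    impossible_conlang = [7, 9, 6, 8, 10, 7, 9]

    t, p = stats.ttest_ind(naturalistic_conlang, impossible_conlang)
    print(f"t = {t:.2f}, p = {p:.4f}")

If subjects reliably make more errors on never-observed patterns at matched complexity, that is evidence for a neurological mandate rather than mere unfamiliarity.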

I. Experimental investigations of the impact of grammar rules apart from phonology. Take monolingual speakers and again teach them conlangs. One conlang has a grammar identical to their own; another has a grammar imitating a real language of equal complexity. See if there are savings, i.e., whether speakers more quickly and accurately learn the language whose grammar mimics their own even though it shares no word roots at all.

J. An interesting and extremely controversial question is whether there are any genetic differences between individuals causing variation in the neurological hardware that languages run on. This might easily be true in a trivial sense between individuals; that is, there are genes affecting language use that differ within populations. More interesting but more disturbing is the issue of whether there are language differences in genetically distant populations owing to genetically-based cognitive differences between populations. That is, are there strange twists in Khoi-San or Pygmy languages relative to Indo-European that can be attributed to genetic, rather than cultural, isolation? [Reference to come later.] Until recently many scholars put Piraha in this category, but a) evidence increasingly suggests the differences in Piraha language usage are culturally determined, and b) we would also be less likely to find a genetic outgroup in a distant spreading zone than in Africa, or at least in the Old World. [Reference to come later.] This would also be a place to test the lysosomal-storage heterozygote advantage theory of Harpending and Cochran [reference to come].

Schizophrenia and Heterozygote Advantage

The classic work on evolutionary medicine is the 1996 volume Why We Get Sick: The New Science of Darwinian Medicine, by Williams and Nesse. It's not classic enough that I've read it yet, so I may be rehashing some of their arguments. However, on reflection it seems there are four general ways to explain illness.

1) Noise. Entropy is positive; things break, and once you're past the age where your genes have assured their survival into another generation, you're largely expendable. (There are very important wrinkles to this in meme-rich species like us, but the general idea holds.) The idea seems clear enough, until you're explaining to your grandmother who's dying of heart failure that it doesn't matter because her DNA has passed on.

2) Pathogens. They evolve and new ones appear from elsewhere to which organisms have no resistance.

3) Mismatch. Environments can shift rapidly. The mismatch hypothesis was conceived with humans especially in mind: we exist in dramatically different environments from that of our paleolithic ancestors a mere hundred centuries ago, owing almost entirely to physical changes we have made to our own surroundings. (Agriculture, living in cities, reading, mating and dominance patterns, etc.)

4) Heterozygote advantage.


The long and short is that there are a lot of schizophrenia candidate genes and no solid reason to link any of them to specific behaviors or treatment responses; only now are we starting to correlate genotype with response to treatment. The classic case of heterozygote advantage is beta-thalassemia as a form of malaria resistance, but the explanation has since been expanded to cover everything from SNP variants that result in rare monogenic diseases with uneven geographic patterning (like PKU), to controversial behavioral phenotypes (homosexuality), to putative recent natural selection of cognitive abilities (Tay-Sachs).
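For reference, the standard single-locus balancing-selection result from textbook population genetics (not drawn from any of the schizophrenia papers discussed here): with genotype fitnesses

    w_{AA} = 1 - s, \qquad w_{Aa} = 1, \qquad w_{aa} = 1 - t

selection holds the A allele at the stable intermediate equilibrium

    \hat{p}_{A} = \frac{t}{s + t}

Toy numbers: if the risk-allele homozygote costs t = 0.45 and the common homozygote costs s = 0.05 relative to the heterozygote, the risk allele settles at s/(s + t) = 0.10, which under random mating gives 1% affected homozygotes, the right ballpark for schizophrenia's prevalence under a deliberately oversimplified one-locus model.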

The second half of the aughts saw a growing list of candidate genes predisposing to schizophrenia, and Doi et al argue in this article that these candidates should be investigated as possible heterozygote advantage variants. However, behavioral phenotypes are difficult to model, precisely because it's easy to imagine that the context of a behavior would dramatically change its advantageousness. For example, Jones et al found that homozygotes for a COMT variant were more likely to behave aggressively than controls, while heterozygotes behaved less aggressively. Arguably a good polymorphism for modern humans; but was it a good idea during the paleolithic? Other candidates show poorer fertility in siblings and other relatives of schizophrenia patients (which flies in the face of a possible heterozygote advantage). Heterozygotes for a variant in KCNH2 (a potassium channel found in brain and heart, with a primate-specific isoform) identified by gene screen were shown to have lower IQs and slower reaction times than controls. The picture is far from complete precisely because of the number of gene candidates identified during screening over the past five years, on the order of a dozen.

An increasing number of psychiatrists and neuroscientists believe that schizophrenia is not one disease, but rather a spectrum of related disease processes (like cancer) that stem from mutations in any of these candidates. Viewed in this light, the clinical picture of this devastating and prevalent disease (1% of the population) becomes clearer. For any antipsychotic, only 50-60% of the population of schizophrenics will respond. (Imagine if a new ibuprofen analog came out, and 40% of people who took it got absolutely no pain relief. Not only would you consider the drug a poor one, you would want to find out what's different about that 40% that keeps them from getting relief.) Regardless of the involvement of any particular candidate gene, schizophrenia is doubtless a multi-component disease process, so we should expect that there are multiple targets.

It has been argued by Harpending, Cochran, Hawks and others not only that there has been recent selection changing the frequency of genes between populations in historical or near-historical times, but that some of the variants identified affect the central nervous system. For this reason it is all the more interesting that a variant of a schizophrenia gene candidate identified as strongly heritable in Europeans (FXYD6, an ion channel regulator) is not associated with schizophrenia in Han Chinese (Zhang et al).

A logical next step would be to start genotyping schizophrenia patients for the candidate genes, then record efficacy and specific symptom alleviation for different drug therapies. Professor Brian Roth at Duke has done exactly this for the pharmaceutical half of it, though to my meager knowledge, no one has yet sat down to correlate the effectiveness of drugs in treating the positive and negative symptoms of schizophrenia reported by patients who carry different candidate mutations. It seems that we may be trying to treat different diseases with the same drug, and then wondering why it's only 60% effective. This is what I'm considering doing for my independent research project (a mini-PhD required of medical students); if we can improve outcomes for schizophrenics with the therapies already out there, that's a big win for schizophrenia patients and their families, and conceivably a step forward in understanding how the architecture of the human brain results in consciousness.
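For what it's worth, the correlation step I'm imagining is not computationally exotic. A first pass might look like the sketch below; the gene names are real candidates, but every record is fabricated, and "panss_drop" stands in for improvement on a symptom scale like the PANSS.

    import pandas as pd

    df = pd.DataFrame({                      # hypothetical trial records
        "genotype":   ["DISC1", "DISC1", "NRG1", "NRG1", "COMT", "COMT"],
        "drug":       ["drugA", "drugB", "drugA", "drugB", "drugA", "drugB"],
        "panss_drop": [22, 5, 4, 18, 12, 11],   # symptom-scale improvement
    })

    # Mean improvement for each genotype-drug pair; a large spread within a
    # drug's column would suggest we are treating different diseases with one drug.
    print(df.pivot_table(values="panss_drop", index="genotype", columns="drug"))

The hard parts are the clinical ones (recruitment, consistent symptom scoring, enough patients per genotype), not the arithmetic.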

Future Direction

I've often found myself holding back from posting something for fear that a subject matter expert would come along and demolish it as either poorly thought-out or naive and ignorant of pre-existing work. But my to-read list isn't getting any shorter, and this isn't a journal article or a thesis. My reasons for writing this blog boil down to thinking out loud in public so I'm forced into some semblance of coherence.

Consequently going forward there will be more volume, with many posts possibly touching on ground that has already been covered by hard-working full-time philosophers and scientists. I hope to offer at the very least a unique synthesis from the perspective of a medical student interested in questions of consciousness, the structure of the universe and the impact of language on our knowledge of it, evolution in the abstract, and recent cognitive changes in humans in general. If you come back to the blog in the future and know of previous work, feel free to comment and direct me and other readers to primary sources.