Consciousness and how it got to be that way

Sunday, April 19, 2009

Free Will and Chocolate

Kant talked about heteronomy, the condition of an individual's being at least partly under the control of influences other than his or her own reason, and therefore not truly exercising free will. Of course this Enlightenment ideal strikes us as a bit naive today; obviously we recognize that we are animals with a physical form.

But there's no need to put such extreme, stark requirements on free will for its exercise to be unclear. A perfect example is my own appetite for chocolate. At the moment, I have avoided all forms of chocolate for 16 days. As far as I can remember, this is a record for me. In the past, when I wasn't so lucky, I would declare "no chocolate for one month", and then three days later, at a 7-11, I would break down and buy a Hershey bar.

Now, clearly in those three-day cases I was unable to follow my prior edict, but at that moment chocolate was what I wanted. And I acted on the urge. How is this not free will? Because the urge came from a pre-conscious animal drive for sugar and fat, and/or conditioning from previous purchases at that same 7-11? If ultimately what we call reason, and our entire executive center, is a slave of the passions, as Hume suggested and as seems to be the case, how could it matter whether I was acting directly on an animal urge or on some long-term plan that was itself dictated by animal urges?

Tuesday, April 14, 2009

Language as Behavior

There's an old Indian fable that goes like this:

A wealthy, wise traveler who spoke a dozen languages came to the kingdom. He used his learning and wit to quickly attach himself to the Raja as an advisor. The problem was that the advisor spoke every tongue so fluently and perfectly that no one could tell his country of origin, and this concerned the Raja's guards. "What if he is a spy from an enemy kingdom?" The guards arranged meals between the advisor and visitors from a dozen different lands, speaking a dozen languages; during all of them, the advisor conversed as if he were a native, and none of the visitors could detect even the hint of an accent. The guards became desperate, sure that their Raja was allowing his court to be infiltrated by enemies. Finally, one guard had an idea. At lunch one day, he took the teapot from the servant and said "I'll handle this." Instead of pouring the tea, the guard dropped the full pot of hot tea into the advisor's lap, and the advisor promptly leaped to his feet, cursing in Persian.


What does it mean when you stub your toe and grunt or curse? Even if you form a coherent monosyllable, in strict semantic terms it doesn't mean anything (unless you somehow misapprehended your injury as being causally related to copulation, feces, or a deity). It's "meaningful" in the sense that it means, behaviorally, you are suddenly and surprisingly in pain, but that's not linguistic. These kinds of nonsemantic utterances are problematic for philosophers of language because they have no truth value, yet they're clearly important to our linguistic lives. In fact they form a kind of instinctive, basic core of our ability to produce language.

When we think about language, I think we don't take nearly enough advantage of the other slightly less bright critters in our phylum. A few days ago I was at a shoreline park and noticed something interesting. There was a group of plovers pecking the wet sand at the water's edge when two comparatively massive geese came lumbering toward them. The plover on the side of the group closest to the geese piped a little squeaking call, and all the plovers turned and flew away. Later on, I saw a single goose approach some plovers, and again one plover made the same call; again they flew away. Had I witnessed the ploverese word for "goose" or "fly away"?

Of course not. I observed the ploverese equivalent of what you say when you stub your toe: it's behavior. There's no real free will or cognition involved in either act, any more than there's free will when you move your arms to help you run. The plover call is a totally non-arbitrary act that can only be said to represent anything (have any meaning) insofar as birds call to each other as a warning - but that's the same kind of meaning that your toe-stubbing curse has.

Clearly humans were not always the paragon of animals; there must have been a time when our ancestors were limited to non-semantic, set-in-stone behavioral vocalizations, and that's why we often look at chimps. In one instance, researchers reported that when a new food (grapes) was introduced, the chimps began to make a different "excited" sound at feeding time. Could this be the chimp word for "grape", at least in that lab? Or are the chimps just excited in a different way, anticipating the different taste of grapes? Is there a difference?

This is my central thesis: language developed as, and remains at base, an expression of states of the central nervous system. It is mostly a description of what's going on inside, not what's going on outside. Of course, in any organism that wants to get its DNA into the next generation, there will be some connection between the outside world and the organism's internal state (which produces observable behaviors, including language) - but that connection can never be perfect.

This statement attacks the unstated assumption that the primitive content of language is semantic content - that is, that in its most basic form, language began as "grape", not as some excited (but nonsemantic) hooting about getting a certain kind of food. Perhaps the better way to look at language is as a set of behaviors reflecting internal states - vocalizations indicating the fear or hunger or aggression of the organism, which were themselves responses to the outside world. As nervous systems became more complex, their internal states were more and more able to discriminate finer slices of objects and events in the external world, and the vocalizations became correspondingly more complex. Eventually, the ability to retain, process and pass around information would be selected for, and at that point there would be an evolutionary feedback loop. In plovers, the language behavior is extremely non-arbitrary and low-resolution by virtue of being filtered through the bird's simpler, less-networked nervous system. Consequently, that call can carry no subtle gradations of the goose's size or speed or location or disposition, because the plover has no internal state to reflect all those dimensions (even if it can be aware of its own location or disposition).

This immediately puts several problems on new footing. First and foremost, the relatively late spread of genes that influence language (40,000 years ago) makes more sense when you realize that a complex nervous system with the ability to react more "finely" to the outside world would have to appear first. Certain kinds of basic verbalizations (like exclamations of surprise) become less of a puzzle when their lack of semantic content is excused. It is also less surprising that commands are, in most languages, the most basic form of verbs. Refocusing on language as a reflection of internal states takes some pressure off the Hegelian conundrum of definitions: when I say "I want pizza", there's no question about what "pizza" is to me, although that variable may correspond to a state in you that's different. In this light it's amazing that words align with things in the real world as well as they do - but it's good enough for government work. It may be objected that this places the truth value of statements entirely inside the subjective world of the speaker, but in principle, you could look at the neural pathways active during an utterance to see whether that really is what the speaker meant by "pizza" when they said it.

Monday, April 13, 2009

Cognitive Closure

Usefully defined, cognitive closure is a phenomenon whereby concepts or thoughts which are otherwise logically valid, or which accurately reflect some pattern in the real world, are fundamentally unthinkable. The assumption is that limits to cognition in humans are owed to some commitment in our neuronal architecture, and that other conscious beings could conceivably think thoughts which are for us inaccessible. Colin McGinn is well known for discussing the concept in the context of arguing that consciousness is one such cognitively closed arena.

There are at least four senses in which cognitive closure is trivially true. First, in terms of signifier transparency, or trivial closure due to habit: I am a native English speaker, not a Japanese speaker, so when I look at a woody-stemmed plant ten meters tall with leaves and roots, I cannot have the experience of thinking "ki" without it being polluted by thinking "tree". In fact, in a real sense, I question the idea of a "literal" translation. There is just no way to convey in English the exact tone difference between German Sie and du or Spanish usted and tú. But this is nitpicking; no one has exactly the same reaction to every object in the world either, based on their personal experiences (like Dennett's argument that the red you experience can't possibly be the same as the red I do). Sapir-Whorf notwithstanding, this is not the kind of closure that interests us.

Second, and equally trivial, are closures due to linear hardware limitations (storage or bandwidth limits). You and I can't multiply 151,692 by 65,778 in our heads. I don't think this is what we're talking about either.
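To put a number on that example (a throwaway illustration, not part of the argument): the product that's closed to our unaided working memory is of course trivial for a machine.

```python
# The multiplication no one can do in their head, done by a machine in
# a microsecond. The point isn't the answer; it's that "storage or
# bandwidth" closure dissolves the moment the hardware changes.
a, b = 151_692, 65_778
print(a * b)  # 9977996376
```

Which is exactly why this kind of closure is trivial: it's a limit of the substrate, not of the concept.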

Third, and slightly more subtle, is trivial closure due to lack of pattern-recognition ability. Imagine I break the Mona Lisa's face into one of those Wall Street Journal dot portraits of a million black-and-white pixels, a thousand by a thousand, and I give it to you as a single row of a million black-and-white squares, locking you in a room until you can tell me what it is. The chance that you would figure it out before your death is low, but as soon as I told you "It's the Mona Lisa's face in rows of pixels," it would be mere minutes before you had arranged it properly. If that's cognitive closure, then your dog is similarly closed to language: he's been listening to you talk for years now, and all he's figured out is "treat", "walk", and "bad". In fact, when Chomsky discusses this term, this is the sense he means.

It's worth pointing out that even these so-called trivial examples, while not as eerie as the almost Lovecraftian way we think of closure, do in fact bring with them practical consequences. There is no reason to think our intelligence is at the upper bound of what is possible (I certainly hope not); a superintelligent alien could conceivably hold digits in memory and manipulate language in a way that puts us in the role of the aforementioned dog. It is often objected that we now have machines to do our cognition for us, which is a mistake of definition: regardless of whether cognition is computation, it is also an experience. (Another trivial form of cognitive closure is that everyone's cognition is off-limits to everyone else's, because our nervous tissue is not contiguous: not the concept of the first- vs. third-person divide, but the experience of it.)

When you punch a bunch of big numbers into a calculator, you're really handling a cognitive black box; yes, you can check the output for consistency, but the cognitive experience of multiplication is closed to you. Dennett has argued against hardware-limitation closure based on the increasing use of prosthetic apparatus (computers) to perform the calculations, but unless the calculator is wired to your brain and you experience the calculations, you're not experiencing them.

There are many trivial ways to understand closure, but they are frequently confused with the deeper idea that there exist inferences or connections that accurately describe parts of the world and yet are somehow obscured from us by our architecture - not out of hardware limits, pattern recognition, mere linguistic habit, or isolation of tissue. This concept (which I call "strong cognitive closure") suggests far more fundamental limits to our minds, and given the limited and klugey nature of our brains, I'm very tempted to think such a thing may occur. But without a formal way to evaluate closed concepts, if cognitive closure of this kind does exist, the first question is whether we could even have an experience of it. That is to say, would we come to a point in a train of thought, be aware that said thoughts are coherent, but be frustrated and unable to proceed? Or would we be utterly ignorant that there was any barrier we had just bounced off?

McGinn is arguing the first case in his discussions of consciousness, because we're all aware of our frustration with the topic of consciousness and its seemingly incommensurable first- vs. third-person modes. The problem is how we distinguish between something that is truly cognitively closed and something that is just a very thorny problem we haven't solved yet. In other words, is there a way we can ever know for sure that something is cognitively closed?

For example: if we solve the Grand Unified Theory, we'll know it's not cognitively closed to us. But until we do, maybe it is, maybe it isn't. For that matter, even after it's solved and a handful of physicists understand it, it will remain cognitively closed to me and most likely you as well - unless there's a way to show there's a difference between not understanding something right now and not ever being able to understand it in principle. Another chance to clarify what "real" cognitive closure is: certainly my brain as it is now constructed could not understand the G.U.T., because I lack the math. If the G.U.T. is cognitively closed to humans, the structure of our central nervous system ensures that no amount of training could sufficiently alter the brain to accommodate the ideas. Again, is there a way to differentiate between these two?

It's worth pointing out that we increasingly appreciate that the human mind works more like a maze of funhouse mirrors than a crisply calculating abacus - it is full to bursting with blind spots, hang-ups, and heuristics that may not have been much challenged a hundred thousand years ago in Africa, but today frequently get us in trouble (ask the psychologists: anchoring, sunk-cost fallacies, you name it).
The encouraging thing, both in terms of self-actualization and in investigating cognitive closure, is that we have "meta-heuristics" which allow us to occasionally be aware of our own shortcomings in such a way as to avoid those pitfalls. Our minds are clearly inelegant Rube Goldberg contraptions, but that doesn't mean we are helplessly clueless that this is so.

It seems to me that if there were understandable criteria for strong cognitive closure - if we had a list of consistent principles and could say "Anything that requires mental processes X, Y, and Z to understand cannot be understood" - well, then we could understand it. Therefore, if such a thing as strong cognitive closure does exist, it would necessarily include itself among the incomprehensibles, and consequently the second case would obtain - that is to say, we cannot be aware of cognitive closure when it occurs. If so, then the discussion ends here: we can never know whether we've encountered a closure, and it would be exactly as if cognitive closure did not exist.

A theologian once said that God was so perfect, He didn't have to bother existing. So it is with strong cognitive closure. Having trouble understanding your credit card statement, or written Chinese? Weak (trivial) cognitive closure. Unfortunately I can't point you to an example of strong cognitive closure, because whichever position you take on it, for practical purposes, there isn't any.

Cognitive Enhancers

In Nature, Greely et al. write in support of cognitive enhancers. Justin Barnard responds negatively.

We certainly owe ourselves a frank discussion of the potential individual and societal impacts of the increasing use of cognitive-enhancing psychoactives; unfortunately, Barnard does not contribute to it. As is often the case with such arguments, Barnard appeals (unclearly) to the idea that cognitive enhancement is "unnatural" - that humans, and human nature, are not to be evaluated solely in terms of information-processing ability. But Greely et al. do not make such an argument. Their aim is to explore a powerful (and even disruptive) medical technology in terms of expanding its potential benefits and mitigating its risks. It would be equally incoherent for Barnard to object to improved agricultural technology by saying that there is more to man than satiating hunger. Of course there is. The concern is quite wide of the mark, which is this: if we might make our world better with a new technology, we owe it to ourselves to explore that technology.

One problem with arguments like Barnard's about the ethics of self-alteration is that there is always a spectrum. Is it immoral for me to get that third cup of coffee if I'm flagging a little at 3pm? Caffeine is not only a well-established cognitive enhancer; its effects on physical tasks like long-distance running are well known too (here's my recent personal experience in a marathon). Was this "unnatural" of me? Or is it unnatural to raise your kids in a house with lots of books, because access to knowledge and reading adults has been shown to boost kids' achievement later?

Let's look at another field of endeavor where these judgments are made constantly. Competitive cyclists and marathoners train at high altitudes to boost their red blood cell counts. In what way is relocating to a marginal low-ppO2 environment for the sole purpose of training "natural"? Athletes who do well in these sports typically have naturally high red blood cell counts to begin with, and high EPO levels (EPO being the hormone that triggers red blood cell production). So they inherited a few stretches of DNA with less stingy regulatory regions than I did. Is this fair? If it's unnatural for me to just take EPO, how about if I boost my own endogenous production? This was a nifty trick developed a decade ago by Transkaryotic Therapies, a Boston biotech whose gene-activation technology got locked up by the legal team of EPO-hoarding Amgen. Still have the "ick" because it's a drug? Then let's review: going to a mountain so the thin air wrings more red blood cells out of your marrow is okay, but doing the same thing by coaxing your own hormone production up (using your own genes!) is not okay. Aren't these distinctions starting to seem arbitrary?

Clearly, whatever the rules are regarding performance enhancement in a sport, they have to be consistent within that sport; and clearly, commonplace ninety-minute marathons will not have the same impact that chemically-induced geniuses will. This is exactly why the Nature paper recognizes that we have to proceed cautiously and safety is paramount - as with any other chemical we develop and ingest to improve our lives. For instance, the currently-available cognitive enhancer Adderall is merely a mixture of amphetamines. It's speed. It's entirely appropriate that access to this addictive psychoactive substance is controlled. It's also entirely appropriate that we explore the ways (if any) it's acceptable for this drug to be used by healthy people as an enhancer. Maybe for Adderall there are no such ways, but every molecule is different. That is to say, it's completely inappropriate to throw out a whole technology because a single tool has too sharp of an edge.

As a more speculative aside, it is my prediction that by the end of the twenty-first century, medicine will run out of diseases which can be treated by waiting until something breaks and then stopping the out-of-control process or aiding an atrophied one. There are many diseases which result from basic design flaws in the architecture of our tissues and the machinery of our cells - accidents waiting to happen - like back problems, hip and knee issues, cancer and autoimmune diseases. Fixes for these problems will require not only germ-line alterations, but even more profound re-engineering of the fundamental cogs and gears of metabolism. Whether we're ready for such an enterprise is a discussion that will likely happen after you and I have both returned to the elements. I make this point to say that I would not necessarily endorse such a move, and Greely et al are clearly not talking about anything so radical either, despite Barnard's anxiety. The Nature paper is advocating a search for better cups of coffee, and a sober discussion about their risk-benefit profile in general. That's all. Before we toss out an entire potential approach to bettering the human condition, the onus is on the advocates of Barnard's position to articulate their counterarguments more clearly, and with fewer appeals to vague romantic intuitions about the meaning of life.