Consciousness and how it got to be that way

Monday, December 21, 2009

Existence and Consciousness

To ask why there is something rather than nothing seems to assume on some level that it's less natural for the universe to exist than to not exist. It also assumes that some kind of existential inertia means that there will continue to be something rather than nothing.

This second assumption, at least, is not universal among humans, and Robert Nozick covers it in the most effective treatment of this question yet. The Inuit believe that if hunting ceases, even for an instant, the universe will end. Several religious traditions hold that if at any given moment at least one person somewhere is not copying their holy text, reality will sink back into chaos. These examples are interesting, but they are probably better explained as cultural technologies for keeping people motivated to perform important activities than as insightful cosmogonies.

Another question is whether it is even meaningful to ask counterfactuals about the fact of existence itself - whether existence had to exist - as opposed to counterfactuals about finite entities within existence. It is clearer that the pine tree outside my window, or you, might not have existed. In this way existence as a whole, the capacity for things to exist, is qualitatively different from a pine tree.

Changing gears to the ever-popular deep mystery, is it meaningful to talk about a universe that has no consciousness? Is self-awareness, a part of the universe experientially looping back on itself, necessary for existence? There is an intuition (which I share) that questions about necessity of existence and of subjective experience are getting at the same things.

Sunday, December 20, 2009

James Cameron and the Problem of Reference

Artificial languages are interesting, though I always find myself longing for some index of strangeness relative to the creator's native language (pick a Native American language at random: is the conlang ever going to differ more from the creator's native language than that natural language does? I doubt it). That said, James Cameron did it right for the Na'vi language in Avatar, reportedly claiming to have out-Klingon'ed Klingon. The linguist Cameron chose to create Na'vi summarizes its structure, and in the preamble to the article his interviewer offers this gem:

"...since there is already tremendous interest in the [Na'vi] language, and some less-than-accurate information about it is currently floating around online, I asked Paul [the creator] if he could write up a formal description of Na’vi as a Language Log guest post."

The problem of reference is really a set of problems arising in different situations. The answer to this one is, I think, implicit and relatively obvious. Maybe we don't know if the current king of France is bald, but we do know whether Romeo is gay or whether Na'vi is agglutinating.

Thursday, October 22, 2009

Flatland and Free Will

In my previous post I asked why, if we do not have free will and the path of the universe is set in stone, we should have a seemingly privileged timepoint called "now". With no free will, there are no more degrees of freedom as you read this "now", "in the present", than there are for something that happened ten minutes ago, or in 1588. In this setting "now" seems especially arbitrary, and one wonders why nervous systems of this sort, i.e. ones constrained to one gradually changing temporal perspective, would ever appear - since events are all settled anyway. If we live in a four-dimensional block of frozen space-time, why can't we see the whole thing? Why do we seem limited to one slowly shifting level within it?

Another way of looking at it (and responding to TGP's statement that "now" is a given) is to imagine a visit to Flatland. In Abbott's original conception, Flatland appears to three-dimensional beings as a plane in which two-dimensional creatures like squares and circles go about their lives, unaware (and unable to be aware) that they are being observed from above or below by extra-dimensional beings. Abbott used Flatland to argue by analogy how four-dimensional objects would interact with and appear in our own three-dimensional universe (see the link for the full treatment).

If you look at our universe as four-dimensional space-time, then you can consider Flatland to be not two-dimensional but three-dimensional plane-time. In a no-free-will Flatland, their universe would look to us like a tall box, with the set-in-stone time-tracks of every square and circle twisting through it like tunnels in an ant colony. If you wanted to be a three-dimensional sadist, you could climb up on a ladder and look at Mr. Square at the moment of his death in a two-dimensional hospital. Then you climb back down and again insert yourself into Flatland to find him enjoying lunch in a park the day after his twenty-third birthday. "You will die on the following date and time; I know, because I already saw it." Do you see why this is strange? From your three-dimensional standpoint, no-free-will Flatland is a giant, static sculpture. Why would the awareness of any entity in that block be constrained to any one plane within it?

By the same argument, in no-free-will space-land (where we live, if you don't believe in free will anyway), we're stuck in a block of four-dimensional space-time. Four-dimensional sadists are free to go scrambling up and down this block like you just did on Mr. Square's universe, except they are looking for nasty tidbits to relay to unfortunate three-dimensional suckers like you. A four-dimensional sadist could pop in ninety seconds from now and tell you that you will get smooshed by a rabid slime mold on 19 July 2025, and it knows because it already saw it happen. And in a very real sense, in a no-free-will universe, it already has happened. The disconnect is that you haven't experienced it yet, and in a no-free-will universe, that's what seems strange. If the events happening now are just as certain as the events happening then, why isn't seeing the future the same as turning your head to look at the other side of the room you're in? It's all already there.

An implication is that if we again take multidimensional models of the universe literally, then a universe with a finite set of dimensions would necessarily be deterministic. The highest dimension would be a static one, and Mr. Square can't have free will if we don't.

Wednesday, October 21, 2009

Two Questions I Was Apparently Predestined to Ask

To those who think free will is an illusion: why does it seem like we have free will? More to the point, why do we perceive a special point in time we call "now"?

Panpsychist Accounts of Consciousness Are Still Testable

One challenge to David Chalmers' account of panpsychist consciousness is that it is untestable. If you argue that consciousness is everywhere (so goes the objection) then no observation can disprove your theory; therefore, it is not a sound theory.

Is this a valid objection? Chalmers is arguing that consciousness is a primitive feature of existence, like charge or mass: dimensional analysis using the four received basic units (charge, mass, distance and time) cannot in any combination "get us to" experience. One manifestation of mass is gravity. It is continuous throughout the universe; it is everywhere. Can gravity not be tested? The laws surrounding gravitation certainly can be, even though there is nowhere that gravity is truly zero.

If consciousness is (at least partly) epiphenomenal and supervenes lawfully on observable patterns in the material world, then these lawful relationships can and should be tested. The powerlessness of consciousness in epiphenomenal accounts (i.e. that our consciousness is caused, but does not cause anything, and we are in effect just along for the ride) is a problem that we've been wrestling with since Descartes and before, but it is a separate one. To argue the universality of consciousness does not make it any more untestable than gravity.

Thursday, October 8, 2009

Hot Pics!

Let me just be the first to say: that's one good-looking brain. In particular, what a big hippocampus it has (all the better to remember you with):


In fairness, perhaps I am - it is - a biased observer of myitself. Perhaps the study of the mind requires new pronouns.

I didn't just trip and fall into an MRI; I participated as a subject in a memory-task fMRI study at my alma-mater-to-be UCSD, run by the same group that wrote the voodoo correlations paper.

The worst thing about the experience? Trying to stay awake for the whole hour without being able to control any stimuli. I hope I gave them good data.

Monday, October 5, 2009

Resistance to Mutation and Preservation of Self

One of the main principles in living things is the preservation of self at the expense of non-self: the maintenance of order by the absorption of energy, often at the expense of others. Of course, cells and most multicellular organisms are intentionless automata. But for intentional beings like us, whose selves are identified with our consciousness, which is in turn dependent on the continued coherence of one physical form, it's easy to make muddy assumptions about the significance and stability of self in other organisms. It's strange that somehow our intention arises from the behavior of an assembly of these intentionless automata. We are at once watching, and the products of, a process that ensures that entities which take actions to make more of themselves are the entities we see in the world, and it's very difficult for us not to ascribe intention and agency to all living things, even prokaryotes, exhibiting clear functionality as they do.

You might not like it if your children end up being biologically different from you, but bacteria don't and can't care. If a cell doesn't care that it (or its offspring) may change radically through mutation, then why do cells expend such effort preserving consistency of self? We should expect to see most prominently the effects of entities that copy themselves - but because entropy always increases, it's only a matter of time before they change. It doesn't matter that "self" is not consistent over time, just that the cause-and-effect tree continues growing new branches. Yet all cells develop and retain elaborate mechanisms to prevent changes to their DNA. If preservation of a consistent self is not a real end, then why do they bother avoiding mutation?

Of course, the obvious answer is that as life becomes more complex, any mutation is far more likely than not to be injurious rather than beneficial. The more complex the organism (the more elements in the system), the more strongly this holds. Simple one-celled organisms can tolerate a slightly higher mutation rate, because they have fewer metabolic entities interacting inside the cell and few if any controlled external interactions with other cells. By analogy, imagine changing software across the entire General Electric family of companies, versus at a one-office specialty manufacturer. Therefore, in bacteria we should expect, and do observe, a higher mutation rate over time, and more diverse and innovative biochemistry at any one point in time. For example, some bacteria can use sulfur as the terminal electron acceptor, converting it to hydrogen sulfide, in parallel to aerobic organisms like us, who use O2 as the terminal electron acceptor and convert it to water; in fact, there are families of bacteria in which some members use sulfur and some use oxygen (like Syntrophobacterales; imagine if some species of primates were found to be sulfur-reducing and the rest were aerobic - but as said before, you just can't expect that kind of flexibility at G.E.). Viruses have also been able to innovate in nucleic acid reproduction far beyond what cell-based systems use, and they are notoriously sloppy in reproduction, far more so even than bacteria (hence the necessary concept of quasispecies). Although the numbers probably wouldn't surprise us, it would be interesting to define some quantitative index of biochemical innovation per clade.

If cells do not expend effort commensurate with the likely damage from mutations, they will die, and we won't see their descendants. "Commensurate" means that the more likely a mutation is to be deleterious, the more a cell at evolutionary steady state with respect to mutation will spend to make sure it won't happen. Probable fitness cost is determined not just by the chance that a mutation will be good or bad, but by how good or bad it will be. At a guess, a deleterious mutation probably damages the organism's fitness more, on average, than the rare beneficial mutation improves it. It should be possible to add up the energy that (for example) DNA Pol I and other proofreading systems in bacteria require for activity. If we assume that mutation costs are at steady state (a safe first approximation after 3.7 billion years of these molecules running around loose), then this number will be a good reflection of the fitness cost of mutations to these organisms. It's also likely to be higher for multicellular organisms and lower for viruses, on a per-base-pair basis. Even if cells were capable of ensuring 100% fidelity, it's very likely that there's some point of diminishing marginal returns beyond which it's no longer profitable for the cell to bother with proofreading.
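To make the diminishing-marginal-returns point concrete, here is a toy numerical sketch in Python (not a real bioenergetic model; the genome size is roughly E. coli-scale, but the error rates, the fitness cost per mutation, and the assumption that the error rate falls exponentially with proofreading spend are all made up for illustration):

    import math

    GENOME_SIZE = 4_000_000       # base pairs, roughly E. coli scale (for illustration)
    BASE_ERROR_RATE = 1e-5        # per-base error rate with zero proofreading (assumed)
    COST_PER_MUTATION = 0.02      # expected fitness cost per unrepaired mutation (assumed)
    COST_PER_UNIT_SPEND = 0.01    # fitness cost of one unit of proofreading effort (assumed)

    def net_fitness_cost(spend):
        """Energy spent on proofreading plus the expected mutational load.

        Each unit of spend is assumed to cut the error rate by a factor of e.
        """
        error_rate = BASE_ERROR_RATE * math.exp(-spend)
        mutational_load = error_rate * GENOME_SIZE * COST_PER_MUTATION
        return spend * COST_PER_UNIT_SPEND + mutational_load

    # scan a range of proofreading budgets and report the cheapest one
    spends = [i * 0.01 for i in range(1500)]
    best = min(spends, key=net_fitness_cost)
    print(f"optimal proofreading spend ~ {best:.2f} units")
    print(f"residual error rate ~ {BASE_ERROR_RATE * math.exp(-best):.2e} per base")

With these made-up numbers the optimum lands at an intermediate spend: below it, mutational load dominates; above it, the proofreading itself costs more fitness than the mutations it prevents.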

Now imagine a planet with particularly forgiving biochemistry, where mutations are equally likely to be positive or negative, and (further simplifying) the mutations are always equally good or bad. In this scenario (and any scenario more benign than this one), cells that expend any effort trying to stop mutations are wasting their time and are at a fitness disadvantage. Mutation would occur rapidly and there would be no stable lineages. Although you would eventually see reproductive isolation, you most emphatically would not see any one stable species or haplotype predominate over another, aside from the effect that organisms closer to the mean (the ancestral starting point that sets the center of the distribution) would probably predominate in the early period, before the population stabilizes within the bounds of its environment. After that point the allele distribution would shift to become truly random.[1]
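For what it's worth, the no-stable-lineages claim is easy to see in a toy Wright-Fisher-style simulation (the parameters are arbitrary; the only important assumptions are that mutations are fitness-neutral and that nothing is spent preventing them):

    import random
    from collections import Counter

    POP_SIZE = 500
    GENOME_LEN = 50          # short toy genomes
    MUTATION_RATE = 0.02     # per site, per generation; high because no one pays to prevent it
    GENERATIONS = 200

    def mutate(genome):
        return ''.join(random.choice('ACGT') if random.random() < MUTATION_RATE else base
                       for base in genome)

    population = ['A' * GENOME_LEN] * POP_SIZE   # everyone starts at the ancestral haplotype

    for gen in range(GENERATIONS):
        # neutral reproduction: every offspring picks a random parent (no fitness differences)
        population = [mutate(random.choice(population)) for _ in range(POP_SIZE)]
        if gen % 50 == 0:
            top_count = Counter(population).most_common(1)[0][1]
            print(f"gen {gen:3d}: most common haplotype frequency = {top_count / POP_SIZE:.2f}")

The ancestral haplotype dominates early (the "closer to the mean" effect above) and then dissolves; no haplotype stays common for long.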

In contrast, in our world, there are species whose gene pools are stable over long periods of time, relative to the behavior of the cells that make up those species. Therefore, altruism can appear if a gene comes along that gives its cell the ability to recognize other carriers and treat them preferentially, making it more likely that we will see that gene in the future. But in our imaginary world of neutral-or-better and therefore constant mutation, there are no stable species. Unless a gene arises that can somehow measure phylogenetic distance in general and act proportionally to it, there would be little altruism.

Mutation cost is not context-independent, and the following consideration of how to predict and manage mutation cost might seem teleological, but it turns out to have real-world correlates. Imagine (back in our own world now) that there's an organism that's doing badly. Some indicators of its doing badly would be that it doesn't encounter many conspecifics (because they're all dead, or the organism has migrated into a novel environment), or that the organism is always starving, or that it's under temperature stress. If you were that organism, and you had to make a bet about how optimized your genes were for your environment, you'd bet not very - or at least you'd bet slightly worse odds than if you were making the bet while doing okay. (There are some huge leaps there, but you're necessarily making a decision with incomplete information.) Consequently the chance of a mutation having a beneficial effect in an environment where you're doing badly is slightly higher than in one where you're doing well, because you can be a little more confident that you (and your genes) are less likely to be near a summit on a fitness landscape. To put it in the extreme, loser organisms might be better off with just about any change. If there's any way for the organism to recognize its bad fortune and then adjust how much it spends on proofreading - or in some way allow mistakes to be expressed - that's the time.
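The bet can be made quantitative with a minimal sketch on a one-dimensional fitness landscape (a single smooth peak and Gaussian mutational steps are assumptions chosen purely for illustration):

    import random

    def fitness(x):
        # a single smooth peak at x = 0 on a one-dimensional trait axis
        return -x * x

    def beneficial_fraction(position, trials=100_000, step=0.1):
        """Fraction of random mutations that improve fitness from a given starting trait value."""
        wins = 0
        for _ in range(trials):
            mutated = position + random.gauss(0, step)   # a random mutation nudges the trait
            if fitness(mutated) > fitness(position):
                wins += 1
        return wins / trials

    for distance_from_peak in (0.01, 0.1, 0.5, 2.0):
        frac = beneficial_fraction(distance_from_peak)
        print(f"distance {distance_from_peak:4.2f} from peak: "
              f"{frac:.1%} of random mutations are beneficial")

In one dimension the beneficial fraction tops out near 50%; in the many-dimensional trait spaces of Fisher's geometric model the ceiling is much lower, but the trend is the same: the worse your current position, the better the odds that a random change helps.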

As it turns out, such a mechanism exists. Hsp90, a chaperone protein that has homologs in most cells, conceals mutations by correctly folding mutant proteins - except under restrictive conditions, like temperature stress. The mutation rate does not change, but in response to underperformance, Hsp90 can suddenly unmask the accumulated genotypic variation, which then appears as phenotypic variation. Rutherford and Lindquist neatly termed this phenomenon evolutionary capacitance[2], and later groups explored the concept in the abstract[3].

It is worth speculating about what other meta-selection tricks cells might have developed. Are there mechanisms to slow evolution in successful species? In other words, do consistently well-fed organisms and/or ones crowded by the success of their own species (for example, cultured bacteria or New Yorkers) spend more effort on tricks to slow evolution, in recognition that they may well be near a fitness peak, making mutations slightly more likely to be harmful? Cells in active, dense culture (but with sufficient resources) could be tested for mutation rate, controlling for metabolic changes occurring in response to crowding. The interesting result would be if they actually mutate more slowly than before the culture became dense. [Added later: when I wrote this I wasn't aware of the phenomenon of quorum sensing. Best known in bacteria, it also occurs in some metazoans. In fact some work has shown a link between quorum sensing and mutation, but it is not what I had predicted. That is, I had predicted quorum-sensing bacteria that mutate more slowly when they're in crowded conditions with conspecifics, because it's worth the energy to avoid mutation when they're more likely to be in an optimal environment. However, what has been observed in P. aeruginosa is that "high frequency" strains emerge in which certain virulence factors have been induced in a way suggestive of quorum induction, but in which the quorum-sensing genes have been deactivated by mutation more often than would otherwise be expected.]

There are cases where organisms intentionally mutate, the best example of which is the adaptive immune system of vertebrates. (Note in the context of the prior argument that the mutation rate has not been shown to change with stress.) Lymphocytes produce molecules with specific randomly variable peptide sequences (one of these molecule classes is antibodies). Because this hypermutation always occurs in a strictly delineated stretch of amino acid residues within a peptide, the innovation is in effect safely inside a box. That such a clever and complex mechanism should first emerge in response to the constant assault of pathogens is probably not surprising. But if it appeared once - are there organisms with other kinds of built-in selection laboratories for other purposes? It's always easier to disrupt something than improve it, and what lymphocyte hypermutation is doing is disrupting pathogens. If there are any other such selection systems in biology, chances are that their function is to invent new ways to break other organisms, as with the adaptive immune system. A prime place to start looking would be venoms.


REFERENCES AND FOOTNOTES

[1] The thought experiment of the forgiving-DNA planet (with mutations equally likely to help or hurt) concluded that there would be no stable lineages. An added complication is whether mutations would still produce reproductive isolation and speciation (though still without stable lineages within each reproductive silo). Language, which often branches from a common ancestor and can be followed "genetically", follows a very similar pattern, since to a first approximation, phonological and morphosyntactic innovations are neutral to the function of the language. However, reproductive isolation does still occur (an English speaker can't understand a Dutch or German speaker), but there are also dialect spectra (Dutch and German have intermediates that are mutually intelligible with both). It's difficult to say objectively whether these spectra are broader or occur more frequently in language systems than in gene systems.

[2] Rutherford SL and Lindquist S. Hsp90 as a capacitor for morphological evolution. Nature. 1998 Nov 26;396(6709):336-42.

[3] Bergman A and Siegal ML. Evolutionary capacitance as a general feature of complex gene networks. Nature. 2003 Jul 31;424(6948):549-52.

Sunday, September 27, 2009

Malingering and the Deep Mystery of Consciousness

This past Friday I was fortunate to see UCSD neurologist Mark Kritchevsky. It's hard to imagine someone more enthusiastic about medicine or research than this guy. As I decide what to specialize in, it's experiences like these that will add up to nudge me in one direction or the other.

The reason I'm posting is something the dean of medical education at UCSD (Jess Mandel) said to Dr. Kritchevsky during the discussion, which cuts straight to the heart of a) the deep problem of consciousness and b) the way physicians diagnose and treat their patients. In the context of patients who present with acute anterograde amnesia (they can't form new memories, and then the condition resolves itself), Dean Mandel recounted working with a neurologist whose favorite diagnosis was malingering. Every time someone came in with a strange set of symptoms, this particular neurologist thought they were faking. Dean Mandel's (unstated at the time) question was: if this is your go-to diagnosis, then why the heck would you go into neurology?

This also highlights something I recently heard from the neurologist V.S. Ramachandran (whom I was also fortunate to see earlier in the week): if you see a patient and dismiss their complaint as crazy, it's more likely that you, the physician, are just not smart enough to figure out what's going on.

Saturday, September 19, 2009

We're Living In the RNA World: Ribosomes Are the Meaning of Life

[Added later: Clearly the Nobel committee was influenced by this post.]

Our view of the world beyond our senses is necessarily influenced by the instruments we use to investigate it. There is far more protein in cells than nucleic acid, and to the naive observer, the primary sequence of an oligopeptide (with 20 possible monomer units) seems a far richer source of information than that of a nucleic acid, with only 4. Consequently it wasn't until Griffith's bacterial transformation experiments in 1928 that people began to seriously think about nucleic acid instead of protein as the heredity chemical. The final coup came in 1953, when Watson and Crick solved the structure and provided a mechanism for DNA to be the biological information carrier.

It wasn't until the last two decades that we began to understand the importance and diversity of RNA in Earth's biochemistry, possibly because RNA is harder to work with than DNA. Tom Cech and Sidney Altman received the Nobel for their work in the 1980s showing that RNA has catalytic properties stemming from its extra (relative to DNA) 2' hydroxyl. Besides its crucial role in the ribosome, RNA can also catalyze its own splicing.

This last observation was critical for anyone trying to puzzle out the chemical origins of life on Earth. DNA by itself in solution is actually quite boring; it just sits there. If life began with DNA, it would have stopped there too. Crick and Orgel speculated about life's chemical origins in the 1960s and were forced to the idea that there must have been either an all-protein world (from which the transition to DNA-as-heredity-chemical is unclear) or an all-RNA world. They also (pessimistically) speculated that these events were so unlikely that perhaps replicator chemistry diffuses more easily through space than we realize, so that it only had to happen once in any given galaxy to spread to other stars (panspermia, first discussed by Arrhenius over half a century before; for a longer discussion of one angle on panspermia, go here). Once the work on RNA's catalytic (and autocatalytic) abilities began, the RNA world began to seem much more feasible. As we now know, RNA splicing and even the attachment of amino acids to tRNAs both proceed just fine on their own, pointing to RNA's initial favorability. Free nucleotides are still the universal energy currency of cells. The Miller-Urey experiments showed prebiotic processes that could produce amino acids; these studies were soon supplemented by observations that nucleobases occur spontaneously on asteroids and could plausibly have formed prebiotically.

So far this has been merely a review of the RNA world hypothesis. It's anything but proven, but it's the best story we have so far, and we even have pathogens active today that are merely self-reproducing (though host-dependent) RNAs (viroids). The idea is that once nucleic acid had been established as the dominant replicator, some interaction occurred between RNA and DNA that moved DNA upstream in the causal hierarchy - that is to say, put it chemically in charge - possibly owing to its greater stability and conservatism, since it is less reactive and exists as a backed-up, double-stranded template. Like the seawater-like salt concentrations found in the cells of Earth organisms, the activity of catalytic RNAs is a biochemical fossil, along with the messenger intermediates delivering sequence information to the ribosomes, energy carriers like ATP, and cofactors like NAD.


Segregation of Fitness Factors Within Membranes

Whatever the molecular basis for Earth's first chemical replicators, if they had a diffusible element to their reproduction, they had a commonly recognized problem to solve: how to sequester advantage. That is, if there was a developing RNA world where autocatalytic RNAs had developed a set of consistent interactions to polymerize amino acids and take advantage of a more diverse chemistry, how did they keep neighboring strands from benefiting? Imagine an RNA molecule - call it RNA-1 - that has the trick of polymerizing a nifty oligopeptide that fetches ribonucleotides just incrementally faster than any other neighboring RNA in solution, making RNA-1 increase its numbers faster than its competitors - right? Wrong. That nifty oligopeptide is going to diffuse away and is just as likely to help any other RNA in RNA-1's neighborhood. So, from RNA-1's fitness standpoint, why bother? Until there's a closed feedback loop - until that oligopeptide is for some reason more likely to associate with RNA-1 than with RNA-1's neighbors - there's no point. (Frequently overlooked in this scenario is defense: RNA-1 also wants to keep its neighbors from eating up all the free nucleotides around it, or poisoning RNA-1's phosphodiester formation process, or even hydrolyzing RNA-1 itself.)

The solution, of course, was the lipid membranes that now surround all cells (but not all replicators). Membranes form a self-nonself boundary and sequester diffusible benefits while providing a defense against chemical predation. The details of how this might have begun are still up for grabs, since there would have to have been mechanisms to open the membrane from inside and to gather and react to information from outside. While admittedly all this is highly speculative just-so discussion, the central point is that it's very difficult to imagine how a well-elaborated RNA-protein interaction machinery could have developed prior to membrane encapsulation of RNAs and their associated products.


What Was the Mechanism for the Transition to the DNA World?

If we assume that cells are "about" making more nucleic acid, then DNA's stability and conservatism does in fact seem to make it a better reservoir of information. Still, RNA world speculations tend to be a little short on details on exactly how such a transition could have come about. At this point, we're talking about a reproducing RNA molecule surrounded by a membrane with some set of RNA > protein chemical rules - in modern-day terms, we have a lipid bilayer with ribosomal RNA that can reproduce. The rRNA reproduces either on its own or with the help of an RNA-RNA polymerase, like the one influenza still uses today (in fact, many eukaryotes also have endogenous RNA > RNA polymerases).

There are at least two ways we can imagine the transition occurring.

A. An intracellular transition. RNA-protein cells developed a reverse transcriptase that gradually assembled a DNA mirror of the cells' RNA genome. If the advantage of such a DNA mirror was as a backup in the event that the RNA genome is damaged, it could only have been selected for alongside some mechanism to convert DNA back into RNA (in modern terms, transcription). Because of DNA's conservatism, cells that relied more and more on DNA would be favored, eventually leading to a cell in which the only reproduction was of DNA, not RNA. One test for this model is to look for a most recent common ancestor of RNA-dependent RNA polymerases that is older than the last common ancestor of reverse transcriptases, which is in turn older than that of classical DNA-dependent RNA polymerases (with reference to RNA Pol I, which transcribes rRNA), which is in turn older than that of DNA polymerases. It should be pointed out that most eukaryotic cells code for reverse transcriptases, some of which are critical for DNA maintenance, but most of which do not obviously benefit anything but their own reproduction (selfish elements), and which make up substantial portions of eukaryotic genomes. Selfish elements and junk DNA are thought to be absent from prokaryotic genomes due to selection pressure on fecundity.

B. Nucleus-as-endosymbiont. To look at cells in a non-nucleocentric way, eukaryotes have three membrane-bound organelles containing genomes: chloroplasts, mitochondria, and nuclei. Nuclei are unique among these three in that they export nucleic acids to interact extensively outside their own membranes. If there is latitude even in a world of highly "committed" biochemical structures (like the modern one) for the survival of either RNA or DNA viruses, we can presume that there would have been room for a membrane-bound DNA virus in the membrane-bound RNA world. A DNA virus could infect an RNA-only cell (similar to Philip Bell's concept of a DNA virus as the ancestor of all eukaryotic nuclei). DNA viruses in the RNA world would need reverse transcriptase for their reproduction; if we also presume, over multiple infections, a viral pickup of the RNA cell's ribosomal and tRNA genes, those genes would eventually be incorporated into the nucleus - or, today, the DNA molecule. The obvious objection here is that this assumes the first DNA cell was a eukaryote. First, the model could still function without a membrane-bound DNA molecule; we just couldn't explain the membrane-segregated nucleus in terms of endosymbiosis. Second, assuming phylogenetic relationships do not conflict with this account, it can be further argued that, as with selfish elements and junk DNA, ancestral cells having greater complexity than modern bacteria is not implausible; prokaryotes would then be a more stripped-down later version, driven toward greater simplicity by the need for fecundity. That is to say, since the advent of DNA cells, prokaryotes have lost their internal membranes, rather than eukaryotes having gained them. Third, and most important for later points, phylogenetic relationships for the eukaryotic LUCA based on ribosomal RNA are at this stage still unclear.

Both theories fly in the face of the central dogma, but the RNA world is the best supported account of the origin of life, and details of the transition to the DNA world are sketchy. In the first, DNA is merely a backup for the (at that time) true RNA genome - a set of rRNA and possibly tRNA genes - and the DNA backup gradually usurps RNA's role. In the second, a DNA virus remains permanently in a cell and absorbs RNA genes for what would later become our rRNA genes (but coded in DNA).

Speculation about pre-DNA world biochemistry can be disorienting. Taken out of their central-dogmatic context, the definitions of genotype and phenotype become less clear - in the RNA world they overlap strongly - and there seems to be no clear causal starting point in the information cascade. Which leads to the question: is there even such a clear starting point in the modern DNA world?


Why the Centrality of DNA? Cyclic Cause and Effect

Humans are limited in our pattern recognition abilities and when we encounter a new complex phenomenon we necessarily think of it in isolable cause-and-effect narratives. Consequently, it's useful in biomedical research to think of DNA as being at the top of a causal cascade that results ultimately in the reproduction of more DNA (or in the production of protein, which results in the reproduction of more DNA). In this view, cells and whole organisms - phenotypes - are survival machines that DNA uses to make more of itself. Put concretely, the function of an apple tree isn't to make apples; it isn't even to make more apple trees. It's to make more apple tree DNA.

Of course, no DNA molecule ever reproduced itself without the help of a host of specialized proteins, and every DNA molecule in existence today is causally downstream of a set of protein-and-RNA mediated events going back billions of years (just as all of them are causally downstream from DNA). This seems trivial, but leads immediately to the question: why is DNA alone given a privileged place in that cyclic sequence of events?

There are two reasons this is so. First, in the first half of the twentieth century a number of cyclic and stepwise biochemical processes were elucidated, among them the urea cycle and glycolysis, and later the central dogma of molecular biology. In all of these, some entity or set of entities A gives B gives C: a clear and isolable stepwise set of inputs and outputs. Second, and special in the case of DNA, there is information. The carbohydrate monomers in a glycogen molecule are chemically equivalent, a chain of zero-zero-zero-zero. DNA contains a quaternary code that has no clear function apart from its information content - it has a meaning, in terms of corresponding to amino acids. Cells don't consume it for energy; cells don't build walls or tubes out of it. It's there to be read, and the only thing that determines its meaning is other DNA molecules.
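The contrast between the glycogen chain and the quaternary code can be put in numbers with a quick Shannon-entropy calculation (the sequences below are made up for illustration):

    import math
    from collections import Counter

    def entropy_per_monomer(sequence):
        """Shannon entropy, in bits per monomer, of a polymer's primary sequence."""
        counts = Counter(sequence)
        total = len(sequence)
        return -sum((n / total) * math.log2(n / total) for n in counts.values())

    glycogen_like = "G" * 1000                       # identical glucose units: zero-zero-zero-zero
    dna_like = "ATGGCTTACCGATTACGGATCCGTA" * 40      # a quaternary code

    print(f"glycogen-like chain: {entropy_per_monomer(glycogen_like):.2f} bits per monomer")
    print(f"DNA-like sequence:   {entropy_per_monomer(dna_like):.2f} bits per base (max 2.00)")

Zero bits per monomer for the homopolymer, and close to the two-bit-per-base maximum for the DNA-like string.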


Two Thought Experiments, and the Meaning of Life

Is that last statement true at all? Of course not. Without even getting into epigenetic phenomena, the on-its-own-quite-inert DNA molecule doesn't do anything that proteins (and the RNAs that made those proteins) don't let it. Without those RNAs and proteins, DNA "means" nothing. The multiple inputs upstream from DNA, and the relatively pristine outflows downstream from it, make it a convenient point in the process for us to manipulate cells. DNA only matters because of what it means, and because of its association with proteins that actually do the work of the cell. The fact that its conservatism allows us to trace ancestry doesn't force us to conclude that it's the point of the cell. Thought experiment: if tomorrow, physicists found some bizarre particle-physics technique to trace cells based on something in the lipid membranes of lysosomes, would that mean the cell was about lysosomes?

No, you answer - unless that particle physics tag on lysosomes correlated with some property of lysosomes that interacted with the rest of the world in a consistent way, like the DNA-protein feedback loop. Then you'd have something.

Forgive the mental biology, but thought experiments allow us to think about problems without the anesthesia of the familiar masking patterns. So indulge me: imagine that the world's first biochemical string theorist is working on an exotic n-dimensional model of physics and discovers that, unique among known molecules, DNA has a prominent, nonrandom structure in the higher-dimensional model. Applying this theory to phylogenetic trees, it becomes clear that a new property of DNA, its n-dimensional conformation rather than its linear sequence, is actually more conserved over time, and correlates better with protein function, than its 5' to 3' nucleotide sequence. Wouldn't that be exciting? It would become obvious that we shouldn't be so hung up on the primary sequence; we would then reasonably conclude that this higher-dimensional structure is somehow what cells are "about" and what evolution has been selecting for.

Given the title of the post, you've probably guessed where this is leading. My argument is that life on Earth is best understood in terms of an RNA world that to us seems masked by the DNA it uses as a backup. At least as much as it's about DNA, life on Earth is about ribosomes. Apple trees are one way that ribosomes can make more ribosomes, as are slime molds, great white sharks, and koalas. We can think of the function of DNA as a) a backup of ribosomes and b) one stage in the manufacture of all the rest of the ancillary machines that ensure the survival of ribosomes. How is this different from making the same argument about any other cellular element - that the point of life is, say, proteasomes? Because rRNA sequences (not to mention tRNA anticodons) are the most conserved nucleic acids in living things, and what is conserved is the structure of the molecule, over and above its sequence. Even more specifically than ribosomes, what is being preserved and selected for and served in a giant feedback loop by the rest of the structures in the cell is this consistent set of RNA-protein interactions.

There are many possible objections to this shift in viewpoint, and I will address two of them. First, the existence of viruses, which are non-ribosome-containing replicators. Viruses are "genes that got away", but they have no metabolism (or ability to reproduce) independent of ribosomes. In terms of this ribosome-centric hierarchy, viruses are a peripheral curiosity in the same way as prions, which are proteins that reproduce their shape (and therefore their physical properties) but also cannot reproduce chemically in the absence of ribosomes; of course, in terms of biomedical relevance to human life, viruses are anything but a curiosity (which may serve to obscure their hierarchical triviality). On the other hand, viroids are RNA-only replicators which can, in fact, reproduce without ribosomes, using only RNA Pol II, and some are autocatalytic.

A second objection made frequently is that the structure of rRNA and the protein-nucleic acid interactions that structure mediates (and the rich chemistry that sprouted around it on the young Earth) were driven by thermodynamics - that there's only one way to build the interaction surface, and only one set of 20 amino acids that could ever have been chosen. If the shape and activity of a ribosome are really just thermodynamic fate, then there's no heredity information in rRNA - no meaningful travel over time in the fitness landscape, and no possibility of "frozen accidents" that commit a system to climb toward local optima - any more than there is for a star or a fire. Yes, fire spreads and consumes fuel, but the commitments of dumb physics force it to behave the way it does; there's no difference in the flame on a birthday candle whether you lit it from a lighter or at the edge of a burning pine forest. One way to test whether ribosomes are stuck in one shape would be to run synthetic biology experiments with simple ribosome systems in which you swap out the 20 legacy amino acids to search for a more robust set of monomers, with some kind of feedback to allow selection. But that would be an extremely complicated experiment. I rather think that the burden of proof is on the claimant who argues that the anticodon system we ended up with is necessarily the universally best one, and that RNA-descended aliens from Alpha Centauri would necessarily use the same set that we do.


SUMMARY

- Our understanding of molecular biology and the evolution of the cell is constrained by the useful conventions we use to study chemical processes in the cell. Despite the wide acceptance of the RNA World hypothesis, we still view life on Earth as being about DNA.

- The transition from the RNA world to the DNA world may have been mediated either by the development of reverse transcriptase and RNA polymerase activity, or by an endosymbiosis event in which a DNA virus infected an RNA cell and took up its rRNA genes.

- DNA is best thought of as a back-up of a) rRNA and b) all the other types of catalytic molecules (today, mostly proteins) whose function is to ensure the survival of rRNA.

- rRNA genes are the most conserved among living things; notably, the structure of the rRNA itself is even more conserved than the primary sequence.

- A better way of thinking about life than considering it to be about DNA is to think of it as about the propagation of ribosomes, or specifically, about propagating a specific set of protein-RNA interactions mediated by a specific type of chemical structure.

Thursday, July 23, 2009

Cool Evolution Find: Fossil Virus

Not in stone, of course, but in crocodilians' genomes. ERV covers it well. The question for evolution, as she puts it: if this virus was good enough at what it did to work its way into a genome, why does it apparently have no descendants today? Two possibilities come to mind: 1) its descendants are still around, but we haven't discovered them yet (remember Nannarrup hoffmani, a whole new genus of metazoans found 7 years ago in Central Park, New York?); 2) the chances of becoming an ERV do not correlate over time with fitness.

Wednesday, July 22, 2009

Consciousness, Reduction, and Memory

A tool that I think we underutilize in the hard question of consciousness is the idea that if some entities are conscious, and some are not, then there is a boundary between the two categories. My suspicion so far is that any close examination of this boundary inevitably becomes a reductio ad absurdum, and the boundary evaporates, regardless of the examiner's initial intentions; and once the boundary has evaporated, we're left with the unintuitive non-assertion that there is no reason to think everything doesn't have some rudimentary consciousness - or the non-starter that nothing is conscious. The first assertion led Chalmers to his famous and misinterpreted statement about conscious thermostats.

You don't think thermostats are conscious? Fine. What about dogs? That's a slippery slope; as a furry, warm-blooded vertebrate primate, you're subject to some pretty powerful biases about what the signposts of self-awareness are. Roger Penrose once (half-jokingly) conceded insects to the world of unconscious Turing machines, and Dan Dennett immediately challenged him: why? In other words, if dogs are conscious, why not octopi, crabs, E. coli, and the Melissa virus? Where's the line, and what is it? A 40 Hz wave? Unsatisfying, to say the least.

The point is this: if you believe that consciousness only occurs in what we on Earth call living things, then you must also believe that at one point it did not exist. Fair enough: so where and when did the first glimmer happen? It's not obviously a meaningless question to ask whether the first consciousness appeared in a trilobite scuttling about in a seabed on the piece of continental plate that is now Turkmenistan, a minute before sunrise on 28 June, 539,640,122 BCE (I think it was a Tuesday). It's tempting to dismiss point-of-origin stories like this, but the alternative is either to accept some provincial point of origin somewhere since the Big Bang, or to accept that consciousness is a spectrum, in which case we're back to the thermostat (or to none of us being conscious). Both alternatives are counterintuitive, but modern science is littered with such choices; not surprising given how far out of our depth we are, relative to the tasks we evolved for: collecting food, finding mates, and running from predators in East Africa.

So far I have not explicitly stated the materialist assumption that consciousness is related only to the matter of the entity in question, and how that matter is arranged - but there is the stickier question of within what limits that matter can vary and remain conscious. By that I mean: my brain is physically different from yours, and from a monolingual Greenlandic woman's. You could rob each of these three entities of their current level of consciousness by making physical changes to them, but all three were different to start with, so how do you know they were all conscious? Another not obviously meaningless question is whether a capacity for consciousness must necessarily permeate an entire species. Why assume the conscious/non-conscious boundary follows species boundaries? Maybe on that fateful June 28 in the early Cambrian, there was just one single conscious trilobite, surrounded by zombie trilobites. And maybe some humans are conscious and some aren't.

This may seem to point the way to a reductive program to test the boundaries of what can be conscious. We can't go looking for the boundary with a time machine to see where it all began, and of course even if we could, there remains the hard-question challenge that we only get at first-person accounts through third-person reports - we can't build a consciousness meter to wave at trilobites, and they can't tell us that the sunrise was pretty. And even if they could, it wouldn't prove they experienced joy at seeing it; the whole problem is the inviolable subjective first-personness of it. But since we are assuming that consciousness relies on matter and its arrangement (i.e. nervous systems) in a reproducible way, in a pattern that at least some humans can understand, we can still reductively investigate alterations of the material basis of consciousness using human first-person accounts, in ways that don't veer off into other problems of behavior, as such investigations often do. This still won't answer the hard question, but it will at least show us to what things the hard question can apply.

We can't quite cut out brain tissue and ask people whether they're conscious (the idea being to then restore the previous state, in which you knew they were conscious, as near as that can be known by a third party). But what we can do, and have done, is study cognitively abnormal humans who can communicate their experience. These break down into 1) people with some disorder, through either trauma or a congenital condition, and 2) people who change their brain chemistry, either through some activity (meditation, extreme exertion) or by consuming mind-altering compounds. With #1, people usually remain in the same state. With #2, these occasions occur under very uncontrolled conditions, and we have very limited options both ethically ("go run a hundred miles, then meditate for a month, and tell me what it's like") and scientifically: there are only so many agonists for receptor X, and the brain doesn't cooperate in the way they're distributed.

If we have anything to say about it, the number of people in category #1 will hopefully drop. If, as time passes, our ability to reversibly under- or over-stimulate parts of the brain increases, as I hope it will, then future neuroscientists will be able to pick from a suite of compounds that block specific tissues in the brain (not just receptors) from interacting with the rest. Of course, this program might not be able to tell the difference between the basic requirements of consciousness and the provincial arrangements of our own brains - or primate, or mammalian, or vertebrate brains. (Note: I am not advocating the kidnapping of and experimenting on aliens, though if you have one, call me.) It seems to me the two components of consciousness in our normal cognition that would be of immediate interest and are relatively isolable in anatomical terms are memory (sensory, short- and long-term) and goal-oriented behavior, specifically with regard to pain and pleasure.

Regarding reductive investigations of memory: is it possible to remove consciousness in a way that is reportable later? In other words, say you find a molecule that shuts off only the here-and-now experience, but not anything else, including memory. While the subject is under the influence, she's the same as she ever was: lucid, talking, responding - a classic philosophical zombie. You give the wash-out. She reports that she now remembers the conversation, remembers what happened while she was "under", but somehow didn't experience it at the time. Can this even be meaningful?

Science fiction thought experiments of memory implants come to mind: in the movie Blade Runner, androids that live only four years are given childhood memories so they don't realize they're androids; did they experience those childhoods? The reverse situation - that is, experience, but no memory, known as anterograde amnesia - is the subject of the movie Memento (category #1, abnormal human), and does occur in the real world, but there are also abundant real-world examples in category #2, as anyone who consumes alcohol can learn. A personal account illustrates this. At a friend's birthday party I overindulged. Among the many escapades that evening which charmed and delighted my fellow party-goers was the following groan I emitted while sitting in the hot tub: "Oh god, I'm going to be sick...what's the point of blacking out if you have to experience it anyway." At which point they wisely shooed me from the tub, and my prophecy was realized.

I tell this story not to make you more concerned for my liver than my brain, but because the interesting part is that I don't remember it. I did black out - I know this only from (effectively) third-hand accounts. Thanks to my ethanol-clogged NMDA receptors, I have no memory at all of that event (or many others that evening). Did I experience nausea? Where did the experience "go"? What evidence do I have that at that moment I was not a zombie, even to myself? One solution is that I'm silly to worry about where the experience goes - that, at least in human brains, experience requires only sensory memory (in us, a second or so), or that it requires sensory and short-term memory (in us, five to ten minutes). But is there a drug, even in principle, like the one in the experiment above, that could have saved me from the experience of nausea but preserved the memory? That's not an entirely dispassionate question, because I would make that trade in a second.

The second area of investigation - goal-seeking behavior - raises questions about whether it is meaningful to talk about consciousness in the absence of pain or pleasure. I'm not talking about full-body analgesia; I'm talking about not experiencing the psychological pleasure of looking forward to seeing your kids at the end of the day, or the discomfort of worrying about your mortgage. Granted, that's a little more involved than questions about memory.

I think a continuing focus on specific parts of the brain in a reductive search for absolute boundaries of consciousness - if indeed there are any - is wise for more than just theoretical reasons. Any research program that ignores its sources of financial support is one that won't move along very quickly. The hard problem of consciousness, while I consider it the central question of philosophy and science, is not one that promises any immediate application that can return the effort invested in it. I'm obviously sympathetic to philosophy and science for their own sake, or I wouldn't be writing this out of personal passion. But we all want to see progress on this question, and being able to sell the research based on applications to Alzheimer's, ADHD, and schizophrenia would go a long way toward obtaining support and public awareness. Progress will require technology that we don't yet have and that will take money to develop - or technology that we do have but that takes money to obtain and use. And in the end, I can't think of a better outcome anyway than that this research could end up helping human beings suffering from cognitive disorders.

Monday, July 20, 2009

Monkeys Can Respond to Grammar-Like Patterns in Sound

Article here. Essentially, tamarin monkeys showed a capacity for recognizing a pattern of phonemes, and then recognizing when a novel pattern appeared (if an affix was used in the wrong place).

Primates can frequently recognize language-like stimuli when exposed to them, but whether they can be trained to generate them is another question. (And we know they don't generate them spontaneously.) It seems to me there are two take-homes here. First, that many primates (including those not even that closely-related to us) have the hardware for linguistic pattern recognition. This raises the question of whether other linguistic substrates delivered over a time sequence could evince similar responses: chains of images for example. It would be interesting to compare human and non-human primates in these experiments (with sound and non-sound grammar).

The second, and to me more interesting, point to investigate is to what degree non-human primates can associate the patterned sounds they're hearing with semantic content. No, it wasn't a concern at all in this experiment, but it's crucial to the development of language. While Endress et al. conducted this experiment with particular reference to grammar acquisition, a model of vocabulary acquisition in human children uses much the same pattern-recognition skill: children learn words by looking for sequence rules, and take notice when those rules are violated. A child learning English for the first time has no way to know that "word boundary" is two words, or where the dividing line is, until they figure out that English words don't start with "db"; in fact English words hardly contain that sound combination at all (and of course they also hear the words separately). The tamarins were doing some form of this.
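Here's a toy sketch of the kind of statistics that segmentation strategy relies on: track how often letter pairs occur inside known words, and posit a boundary wherever an unlikely pair (like "db") shows up. The mini-corpus and threshold are invented, and real infants track syllable transitions rather than letters, but the principle is the same:

    from collections import defaultdict

    corpus = ["word", "boundary", "bound", "wordy", "dairy", "board", "round"]

    # count how often each letter-to-letter transition appears inside known words
    pair_counts = defaultdict(int)
    first_counts = defaultdict(int)
    for w in corpus:
        for a, b in zip(w, w[1:]):
            pair_counts[(a, b)] += 1
            first_counts[a] += 1

    def transition_prob(a, b):
        return pair_counts[(a, b)] / first_counts[a] if first_counts[a] else 0.0

    def segment(stream, threshold=0.1):
        """Insert a boundary wherever the within-word transition probability dips."""
        out = [stream[0]]
        for a, b in zip(stream, stream[1:]):
            if transition_prob(a, b) < threshold:
                out.append("|")
            out.append(b)
        return "".join(out)

    print(segment("wordboundary"))   # the learner should split at the unlikely "db" pair

Run on the unsegmented stream "wordboundary", the toy learner splits exactly at the d-b junction, the one transition it has never seen inside a word.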

What the tamarins can't do that children can - or at least no one's shown that tamarins can do it, and I don't think anyone expects it - is that once they've parsed out the elements and learned the order they usually occur in, they can build a network and assign each element to an object or attribute they see in the real world. I wouldn't be blown away if a tamarin learned to be "surprised" (as in this experiment) by an "o" coming at the beginning of a word, as opposed to at the end. What the tamarins won't ever learn is that the -o ending means that the word in question is an object that is receiving action, as opposed to performing it. Yet somehow every normal Japanese child learns exactly this by age 5, and lots of other such content-sound relations as well.

Animals clearly can associate a few sets of sounds with concrete, specific content. Anyone who has ever had to spell out "W-A-L-K" to a fellow human in front of a dog knows this. But the extent of language perception in non-human animals is an interesting question because it gives us a chance to do some comparative biology with reference to the following questions. How many of these sets of sounds can the dog (or the tamarin) learn? How generalizable is the ability - i.e., given the internal states of the animal as they reflect the outside world, how broad or distinct are the categories that can be covered by a single signifier? Your dog understands tree, but does it understand redwood? Plant? How complex a relationship between signifiers can the animal construct - e.g., can the animal tell the difference between walk, walked, and don't walk? And to what degree are these things related - that is, do human children get a mnemonic benefit from pinning the sounds they learn into a richer network of semantic content? Pinning down these differences to the activity of physical structures in our brains will go a long way toward understanding how we acquire and process language.

Bias Bias

So we're biased against seeing our biases: while it's nice to have experimental verification, this should not be surprising, else our biases would be subject to examination and we could get rid of them.

A frequent subtext of bias studies is this: "Look how distorted our perspective of the world is. It's good that we study these tendencies so that maybe we can diminish or eliminate them, and people will have a less distorted view of reality." It would be useful to ask what humans would be like without these biases. How would individuals behave differently? What would society look like? Aren't some of these biases fairly obvious beneficial self-deception strategies that evolved as a result of conspecific competition? Would cutting them out of some humans (but not all) actually result in individuals handicapped in the survival and reproduction game, and wouldn't similar strategies redevelop over time? Most importantly, without our biases would we be happier? Is it meaningful to talk about waving a magic wand and re-wiring the brain to eliminate these biases, or are they so deeply wired as to require much more profound commensurate changes to retain a functioning central nervous system?

Sunday, July 19, 2009

What Do Serial Killers and Suicidal Rats Have in Common?

Serial murder is such an astonishingly maladaptive behavior that I've often speculated whether we're not seeing a) a gene that, when heterozygous, is adaptive, but when homozygous, can lead to this behavior, or b) a Toxoplasma gondii-like infection.

Toxoplasma gondii is the pathogen that makes the rodents it infects behave (basically) suicidally around predators, like cats; the cat eats the rat, the organism survives in the cat's gut, and when the cat defecates, it spreads more Toxoplasma. (I first read about this organism and the incredibly specific behavior an infection engenders in the work of Daniel Dennett, who is a big fan of this organism as a metaphor for other replicators.) Now it turns out that humans infected with Toxoplasma are also more likely to behave dangerously, judging by car accident rates. (Hat tip to Marginal Revolution).

Humans engage in many apparently maladaptive behaviors (serial murder among them), and this story gives us no reason to conclude that serial murder is the result of an infection, but it does show that even in humans, complex behaviors can be affected by a parasite in ways similar to what we see in other host species. Behaviors like serial killing are so inexplicable that hypotheses about their origins should include infection as a possible etiology.

I grant the full-on speculative nature of this post. Even if serial killers do turn out to be infected with a T. gondii-like pathogen, it remains to be explained a) whether the infection-induced behavior would have been the same in our hunter-gatherer ancestors (how can you be a serial killer in a band of 25 people?) and b) how exactly the behavior would improve transmission, which is clear in the case of rats and cats.

Thursday, July 2, 2009

Cows, Free Will, and Nuclei

I'm often amazed at the stupidity of cows. I encounter these mooing morons frequently on trail runs that pass through pasture lands. Often there will be a cow standing astride a path that I'm running along at constant speed, gazing in dumb fascination at me as I approach: 200 meters, 100 meters, 50 meters. Even though I've now been approaching the cow at constant speed in a straight line in the open for the last two minutes, it's often not until I'm within 10 meters that the cow realizes I'll eventually reach its position, and that's when it suddenly turns in a panic and runs away.

Of course, in no technical sense was my continuing along the same path "predestined", even if you don't believe in free will - any number of plausible forces could intervene to stop me: I could decide to stop because I just ran up a steep hill, or a meteor could hit me, or I could turn left and run into the grass for no reason - but the cows' failure to move until I'm almost on top of them certainly does not result from any nitpicking of causality. Possibly the cows are conditioned by the ranch hands they see more often, who do stop before they get too close. Or, funnier and just as likely, they're just too damn dumb to recognize the pattern (my straight-line path) and extrapolate it to realize that I'm going to reach their position - not until I'm barely ten meters away.

For the sake of argument let's explicitly assume that there is something in the universe that is not predestined to happen in a certain way at a certain time - and if you'll accept any example, you'll accept that the decay of a single radioactive nucleus fits the bill as non-predetermined. Sure, trillions of them decaying together will fit a pretty nice predictable curve, but it's hopeless to try to predict the decay of individual nuclei; nuclei of the same isotope are, by any measure we know, absolutely indistinguishable, and they decay randomly (that is, without any pattern that we can recognize). The question, then, is: is an individual decay really non-predetermined? Or are we cows that just can't recognize the pattern the nuclei are following? Most importantly, is it possible in principle for a non-pattern-recognizer to ever tell the difference between these two alternatives?
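
A quick simulation makes the asymmetry vivid. This is just a sketch with an arbitrary half-life and ensemble size, but it shows a million individually unpredictable decays still hugging the smooth exponential curve:

```python
import math
import random

# Sketch with an arbitrary half-life: each nucleus decays at a random,
# individually unpredictable moment, yet the surviving fraction of a
# large ensemble tracks the smooth curve N0 * 2^(-t/T).
random.seed(0)
half_life = 10.0            # arbitrary time units
n0 = 1_000_000
decay_rate = math.log(2) / half_life

# Draw each nucleus's decay time from the exponential distribution.
decay_times = [random.expovariate(decay_rate) for _ in range(n0)]

for t in range(0, 41, 10):
    surviving = sum(1 for d in decay_times if d > t)
    predicted = n0 * 2 ** (-t / half_life)
    print(f"t={t:2d}  simulated={surviving:7d}  predicted={predicted:9.0f}")

# Any single entry of decay_times tells you nothing about the next one;
# the aggregate is accurate to a fraction of a percent.
```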

Because the universe contains a finite number of elements, infinite computing power isn't possible; therefore there will never be an infinitely powerful pattern recognizer. Consequently, the question asked in the previous paragraph becomes very important. If it is not possible for a non-pattern-recognizer to tell the difference between inability to recognize a pattern in a system, and actual non-predetermination in that system, then we can never tell if we have free will, or are merely stupid.

Tuesday, June 23, 2009

Is underperformance in the presence of superiors a deceit strategy?

Humans often perform worse on tasks under pressure when in the presence of superiors. This is interesting because evolutionary psychology arguments can be made for the opposite effect (performing better in the presence of superiors). This effect is apparently not a voluntarily controllable one.

A study at John Moores University shows that other primates also underperform on problem-solving tasks in the presence of superiors - but interestingly, this experiment was designed to elicit deception.

This study aimed to correlate monkey species' ability to deceive with the strictness of their social structures, and it found a positive correlation. One of the researchers argues that the less deceptive primates are more like humans, because their social groups are fluid - but that's only been true for a few millennia. Hunter-gatherers fifty thousand years ago would have found it much more difficult to decide to join a new foraging band because they didn't like the scene they were in. So social group plasticity must have been much lower for most of the history of our species, making the ability to deceive more important than these researchers might otherwise argue.

Furthermore, the smarter a species - that is, the better a problem-solver it is - the more important are its interactions with conspecifics, and the less important are its interactions directly with the environment. Who cares if you can forage for tubers - you're an entertainment lawyer! So not only the potential for deception, but also its usefulness, grows in proportion to the intelligence of the animal.

This is not proof that underperformance in the presence of superiors in humans is definitely an unconscious deceit strategy, but the existence of the behavior in other primates, along with its probably greater importance in humans, is reason for further investigation.

Thursday, June 11, 2009

Your Day Is Over

Get the divergence time for any two animals or groups of animals. Awesome. It's like Google Maps to the natural history of Earth. Hat tip to ERV.

Monday, June 1, 2009

Numbers That Have No Meaning

The Planck-time - the smallest slice of elapsed time that we can currently conceive of as physically meaningful - is about 5 x 10^-44 seconds. A year is 31,557,600 seconds long, and the universe is about 1.4 x 10^10 years old. This means that since the Big Bang, there have been about 8.8 x 10^60 Planck-times so far - 8.8 x 10^60 instants, to put it crudely and with apologies to Einstein.
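
That figure is easy to check against the post's own rounded constants; a few lines of arithmetic (mine, for verification only) reproduce it:

```python
# Back-of-the-envelope check using the rounded constants above.
planck_time = 5e-44                 # seconds
seconds_per_year = 31_557_600
age_of_universe = 1.4e10            # years

elapsed_seconds = age_of_universe * seconds_per_year
planck_times_so_far = elapsed_seconds / planck_time
print(f"{planck_times_so_far:.1e}")   # ~8.8e+60
```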

Now let's count things. Defining only fundamental particles as things, some have thrown a dart at the standard model and come up with 10^100 particles, one googol. That'll work for now. The vast number of permutations of this set of individual things is (10^100)!. A scary big number, but still finite. Of course, if you count photons as things, photons vastly outnumber quarks and leptons by a factor of at least a billion. Fine; let's make it (10^109)!. Then the number of instants in which things can have happened (8.8 x 10^60) multiplied by the possible combinations of things in each instant ((10^109)!) is the number of things that can have happened so far in the universe. Let's call this huge but still finite term Ω.
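
Ω itself is far too large to write out, but its order of magnitude is easy to estimate. The sketch below is my own, using the rough figures above and math.lgamma to get the logarithm of the factorial; it shows that even this monster is a perfectly finite number:

```python
import math

# Order-of-magnitude estimate of Omega using the rounded figures above;
# the factorial is handled through lgamma because it is far too large
# to evaluate directly.
instants = 8.8e60          # Planck-times since the Big Bang
things = 1e109             # fundamental particles incl. photons (rough count above)

log10_factorial = math.lgamma(things + 1) / math.log(10)   # log10((10^109)!)
log10_omega = math.log10(instants) + log10_factorial

print(f"log10(Omega) ~ {log10_omega:.3e}")   # ~1.1e111 -- enormous, but finite
```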

You may argue with the figures I've used or even the rather ham-handed back-of-the-envelope calculation here, but my point is that the number of things that can have happened so far is finite, and so is the number of things that can ever happen, whether you expect a Big Rip or proton decay at some point 10^10^70 years from now. In fact the real number of things that can have happened up until this point must be much smaller than what I've proposed; every arrangement of those 10^109 elements is constrained by the previous arrangement, as a result of things like the speed of light and the conservation of energy.

So now we have Ω - so what? What's interesting is that there must also be a number Ω + 1: a number which exceeds the count of possible events times things there are to describe - a number that cannot refer to anything real. Yes, Ω will get larger as time goes on, but it will still be finite, and arithmetic will always allow Ω + 1. That's nothing new; examples abound of theoretical computations that could not be completed before the expected decay of protons, even with the resources of the entire universe's fundamental particles marshalled for the task. Many of them involve board games.

So this means that mathematics - even arithmetic - is richer than it needs to be to describe our impoverished universe, and that there exist numbers which are logically valid but which can never, even in principle, have meaning in physical reality. My intuition is that this says less about reality than it does about mathematics, which is a particularly effective form of language we are still in the early stages of developing to understand the world.

Wednesday, May 13, 2009

Prehistoric Crossings that Could Have Happened

Cryptohistory is full of accounts of putative pre-Columbian contacts between New and Old World civilizations; their frequency in lay literature and on the web depends much more on how interesting the encounter would be than on how likely, based on the evidence, it is to have happened.

For example: the Zuni speak a linguistic isolate and have a bizarrely high frequency of Type B blood - they must be the descendants of lost Japanese pilgrims! Or: it's possible for a twentieth-century Norwegian to sail west from South America across the Pacific on a raft approximating Inca technology - therefore, the Pacific could have been settled by South Americans! We know better today, half a century later, from DNA and linguistic evidence. But did Polynesians come to South America? The discovery of conquistadors' texts saying there were already chickens there suggested this, but so far DNA analysis has been inconclusive. For my money, the only decent evidence so far for trans-Pacific contact prior to Europeans is the linguistic and technological evidence in raft-building by the Chumash on California's central and southern coasts.

But all of these contact theories are relatively recent, in terms of prehistory. The major gene distributions and language families that we see today are the echoes of earlier migrations, the most famous of which is the crossing of the Bering land bridge by Siberians to populate the Americas, 20,000 years ago or more. But we know that sometimes paleolithic people crossed water too: the first Australians were able to cross channels that were still tens of miles wide during the last ice age, at least 40,000 and perhaps as long as 70,000 years ago. In Japan, by at least 30,000 years ago, people were able to make two >50 km open-water jumps across the Tsushima Strait. Consequently there's one other place where I'm curious why there isn't more discussion of a water crossing that apparently never happened - the Strait of Gibraltar.

What is now the Sahara was much wetter as the Earth's climate shifted away from its most recent glacial maximum, and only now are we starting to understand the diversity of the people that lived there. One site in particular (the Tenerian culture at Gobero in Niger), only discovered in 2000, has burials dating to 7500 BCE. These were pastoralists, and it's clear that a greener Sahara could have supported a much denser population of pastoralists then than now. We invariably assume that Europe was colonized from its southeast, through the Middle East. Why could "green Saharans" not have made it across the Strait of Gibraltar to Iberia?

We know for certain that neolithic Afro-Asiatic speakers (the Guanches) settled the Canaries. Of course the interesting possibility is a connection to the Basques. It's possible that some Tenerians did colonize Iberia, but that water crossing is an effective enough barrier to gene flow that the genes coming through the Middle East into Europe swamped the Tenerian genes. A quick test would be an analysis of Basque mitochondrial DNA against Guanche and Gobero samples; if it has been done at this point I'm not aware of it.

Sculpture before Image

In September 2008, archaeologists from the University of Tuebingen in Germany discovered an inch-long ivory carving of a nubile woman that by carbon-dating is on the order of thirty-five thousand years old (to be published in this week's Nature). After reveling in the impact of looking at something carved by cognitively active humans living that long ago, my next thought was whether sculpture pre-dates flat visual art.

The oldest sculptures we have are thousands of years older than the oldest visual art. The oldest cave drawings we have are from Chauvet, which are at most 30,000 years old, and that is an outlier. We have over a hundred Venus-type sculpture artifacts older than that, dating from the period 25,000-29,000 B.C. (see the Venus of Dolni Vestonice). This still isn't a huge sample, and maybe images don't age as well as sculpture - but more importantly, it's reasonable to expect that even older images and sculptures are both waiting to be found, not just around the Alps and Pyrenees but throughout Central Africa.

Alternatively, maybe sculpture really did come first. This makes sense. To us today, drawing is easier than sculpture because, while we're familiar with both, creating a drawing that recognizably represents an animal is less time-, material-, and skill-intensive than creating a sculpture. Now that we have lightweight drawing mediums (nowadays including electrons!), drawings are also more portable. But to someone who has never seen either medium, a sculpture may seem less difficult to "get your head around". A woman is a three-dimensional thing; so is a piece of ivory. The leap to representing a woman (or a horse) as a flat, untouchable image on a surface - as if trapped under ice - must have seemed a real abstraction.

Phonetic writing systems today offer another example of a mode of representation that is anything but obvious to those never before exposed to it. Ask any literate native Chinese speaker. Representing utterances phonetically rather than semantically - that is, using sound-based rather than meaning-based symbols - is so abstract, when thought about from first principles, that it's amazing anyone thought of it at all. It's therefore not surprising that there was a gap of millennia between writing based on visual symbols for words and writing based on phonetically defined symbols, although the non-obviousness of alphabets is difficult for modern literate Westerners or Middle Easterners to fully appreciate today.

The benefit of phonetic systems, once you invent them, is that they are much easier to learn - which is why in Japanese bookstores I go to the children's section and read about cats and bears in the 42 basic phonetic characters of Hiragana, but I'm hopeless if I try to read the grown-up prose in yesterday's Asahi Shimbun. Learning those 42 basic Hiragana characters is on the same order of difficulty as learning the 26 letters of the English alphabet or the 33 of Cyrillic, but wholly different from mastering the number (and visual complexity) of the 3,000+ characters you need for the newspaper. It is also not surprising that, as adults, monoliterate ideogram-readers are more likely to learn an alphabet than monoliterate alphabet-readers are to learn an ideogram system. While there may be some influence behind this asymmetry from the current realities of history and economics, I doubt it will change very much. Similarly, now that we have drawing in addition to sculpture, we demonstrate a preference for it - because it's easier and cheaper.

So did sculpture pre-date image? Carbon-dating of the available evidence supports this so far, and it's consistent with the development of human representation systems for which we have more direct evidence. I expect that there are more caves around the Pyrenees for us to discover with paleolithic artifacts. Chauvet Cave was only found in 1994. I predict that some of these caves will yield more carved figures that are older than any known drawings.

Sunday, April 19, 2009

Free Will and Chocolate

Kant talked about heteronomy, the condition of an individual's being at least partly under the control of influences other than his or her own reason, and therefore not truly exercising free will. Of course this Enlightenment ideal strikes us as a bit naive today, and obviously we recognize that we are animals with a physical form.

But there's no need to put such extreme, stark requirements on free will for its exercise to be unclear. A perfect example is my own appetite for chocolate. At the moment, I have avoided all forms of chocolate for 16 days. As far as I can remember in my life, this is a record. In the past when I wasn't so lucky I would declare "no chocolate for one month", and then three days later at 7-11 I would break down and get a Hershey bar.

Now, clearly in those 3-day cases I was unable to follow my prior edict, but at that moment, chocolate was what I wanted. And I acted on the urge. How is this not free will? Because the urge came from a pre-conscious animal drive for sugar and fat, and/or conditioning from previous purchases at that same 7-11? If ultimately what we call reason, and our entire executive center, is a slave of the passions, as Hume suggested and as seems to be the case, how could it matter whether I was acting directly on an animal urge or on some long-term plan that was itself dictated by animal urges?

Tuesday, April 14, 2009

Language as Behavior

There's an old Indian fable that goes like this:

A wealthy, wise traveler who spoke a dozen languages came to the kingdom. He used his learning and wit to quickly attach himself to the Raja as an advisor. The problem was that the advisor spoke every tongue so fluently and perfectly that no one could tell his country of origin, and this concerned the Raja's guards. "What if he is a spy from an enemy kingdom?" The guards arranged meals between the advisor and visitors from a dozen different lands, speaking a dozen languages; during all of them, the advisor conversed as if he were a native, and none of the visitors could detect even the hint of an accent. The guards became desperate, sure that their Raja was allowing his court to be infiltrated by enemies. Finally, one guard had an idea. At lunch one day, he took the teapot from the servant and said "I'll handle this." Instead of pouring the tea, the guard dropped the full pot of hot tea in the advisor's lap; the advisor promptly leaped to his feet, cursing in Persian.


What does it mean when you stub your toe and grunt or curse? Even if you form a coherent monosyllable, in strict semantic terms it doesn't mean anything (unless you somehow misapprehended your injury as being causally related to copulation, feces, or a deity). It's "meaningful" in the sense that it means, behaviorally, you are suddenly and surprisingly in pain, but that's not linguistic. These kinds of nonsemantic utterances are problematic for philosophers of language because they have no truth value, yet they're clearly important to our linguistic lives. In fact they form a kind of instinctive, basic core of our ability to produce language.

When we think about language, I think we don't take nearly enough advantage of the other slightly less bright critters in our phylum. A few days ago I was at a shoreline park and noticed something interesting. There was a group of plovers pecking the wet sand at the water's edge, when two comparatively massive geese came lumbering toward them. The plover on the side of the group closest to the goose piped a little squeaking call, and all the plovers turned and flew away. Later on, I saw a single goose approach some plovers, and again one plover made the same call; again they flew away. Had I witnessed the ploverese word for "goose" or "fly away"?

Of course not. I observed the ploverese equivalent of what you say when you stub your toe: it's behavior. There's no real free will or cognition involved in either act, any more than there's free will when you move your arms to help you run. The plover call is a totally non-arbitrary act that can only be said to represent anything (have any meaning) insofar as birds call to each other as a warning - but that's the same kind of meaning that your toe-stubbing curse has.

Clearly humans were not always the paragon of animals, and there must have been a time when our ancestors were limited to non-semantic, set-in-stone behavioral vocalizations; that's why we often look at chimps. In one instance, researchers reported that when a new food (grapes) was introduced, the chimps began to make a different "excited" sound at feeding time. Could this be the chimp word for "grape", at least in that lab? Or are the chimps just excited in a different way, anticipating the different taste of grapes? Is there a difference?

This is my central thesis: that language developed as, and remains at base, an expression of states of the central nervous system. It is mostly a description of what's going on inside, not what's going on outside. Of course, in any organism that wants to get its DNA into the next generation, there will be some connection between the outside world and the organism's internal state (which produces observable behaviors, including language) - but that connection can never be perfect.

This statement attacks the unstated assumption that the primitive content of language is semantic content. That is, that in its most basic form, language began as "grape", not as some excited (but nonsemantic) hooting about getting a certain kind of food. Perhaps the better way to look at language is as a set of behaviors reflecting internal states - vocalizations indicating the fear or hunger or aggression of the organism, which were themselves responses to the outside world. As nervous systems became more complex, their internal states were able to discriminate finer and finer slices of objects and events in the external world, and the vocalizations became correspondingly more complex. Eventually, the ability to retain, process and pass around information would be selected for, and at that point there would be an evolutionary feedback loop. In plovers, the language behavior is extremely non-arbitrary and low-resolution by virtue of being filtered through the bird's simpler, less-networked nervous system. Consequently, there can be no subtle gradations in that call conveying the goose's size or speed or location or disposition, because the plover has no internal states to reflect all those dimensions (even if it can be aware of its own location or disposition).

This immediately puts several problems on new footing. First and foremost, the relatively late spread of genes that influence language (40,000 years ago) makes more sense when you realize that a complex nervous system with the ability to react more "finely" to the outside world would have to appear first. Certain kinds of basic verbalizations (like exclamations of surprise) become less of a puzzle when their lack of semantic content is excused. It is also less surprising that commands are in most languages the most basic form of verbs. Refocusing on language as a reflection of internal states takes some pressure off the Hegelian conundrum of definitions: when I say "I want pizza", there's no question about what "pizza" is to me, although that variable may correspond to a state in you that's different. In this light it's amazing that words align with things in the real world as well as they do - but it's good enough for government work. It may be objected that this places the truth value of statements entirely inside the subjective world of the speaker, but in principle, you could look at the neuron pathways active during an utterance to see whether that really is what they meant by "pizza" when they said it.

Monday, April 13, 2009

Cognitive Closure

Usefully defined, cognitive closure is a phenomenon whereby concepts or thoughts which are otherwise logically valid, or which accurately reflect some pattern in the real world, are fundamentally unthinkable. The assumption is that the limits to human cognition are owed to some commitment in our neuronal architecture, and that other conscious beings could conceivably think thoughts which are for us inaccessible. Colin McGinn is well-known for discussing the concept in the context of arguing that consciousness is one such cognitively closed arena.

There are at least four senses in which cognitive closure is trivially true. The first is signifier transparency, or trivial closure due to habit. That is, I am a native English speaker, not a Japanese speaker, so when I look at a woody-stemmed plant ten meters tall with leaves and roots, I cannot have the experience of thinking "ki" without it being polluted by thinking "tree". In fact, in a real sense, I question the idea of a "literal" translation; there is just no way to convey in English the exact tone difference between German Sie and du or Spanish Usted and tú. But this is nitpicking; no one has exactly the same reaction to every object in the world either, based on their personal experiences (like Dennett's argument that the red you experience can't possibly be the same as the red I do). Sapir-Whorf notwithstanding, this is not a kind of closure that interests us.

Second and equally trivial are closures due to linear hardware limitations (storage or bandwidth limits). You and I can't multiply 151,692 by 65,778 in our heads. I don't think this is what we're talking about either.

Third, and slightly more subtle, is trivial closure due to lack of pattern recognition ability. Imagine I break the Mona Lisa's face into one of those Wall Street Journal dot portraits of a million black-and-white pixels, a thousand by a thousand, and I give it to you as a row of a million black-and-white squares, locking you in a room until you can tell me what it is. The chance that you would figure it out before your death is low, but as soon as I tell you "It's the Mona Lisa's face in rows of pixels" it would be mere minutes before you had arranged it properly. If that's cognitive closure, then your dog is similarly closed to language. He's been listening to you talk for years now and all he's figured out is treat, walk, and bad. In fact when Chomsky discusses this term this is the sense he means.

It's worth pointing out that even these so-called trivial examples, while not as eerie as the almost Lovecraftian way we usually think of closure, do in fact bring with them practical consequences. There is no reason to think our intelligence is at the upper bound of what is possible (I certainly hope not); a superintelligent alien could conceivably hold digits in memory and manipulate language in a way that puts us in the role of the aforementioned golden retriever. It is often objected that we now have machines to do our cognition for us, but this is a mistake of definition: regardless of whether cognition is computation, it is also an experience. (Another trivial form of cognitive closure is that everyone's cognition is off-limits to everyone else's, because our nervous tissue is not contiguous: not the concept of the first vs. third person divide, but the experience of it.)

When you punch a bunch of big numbers into a calculator, you're really handling a cognitive black box: you can check the output for consistency, but the cognitive experience of multiplication is closed to you. Dennett has argued against hardware-limitation closure based on the increasing use of prosthetic apparatus (computers) to perform the calculations for us, but unless the calculator is wired into your brain and you experience the calculations, you're not experiencing them.

There are many trivial ways to understand closure, but they are frequently confused with the deeper idea that there exist inferences or connections that accurately describe parts of the world but which our architecture somehow obscures from us - not because of hardware limits, pattern recognition, mere linguistic habit, or isolation of tissue. This concept (which I call "strong cognitive closure") suggests far more fundamental limits to our minds, and given the limited and klugey nature of our brains I'm very tempted to think such a thing may occur. But without a formal way to evaluate closed concepts, the first question is whether, if cognitive closure of this kind does exist, we could even have an experience of it. That is to say, would we come to a point in a train of thought, be aware that said thoughts are coherent, but be frustrated and unable to proceed? Or would we be utterly ignorant that there was any barrier that we had just bounced off?

McGinn is arguing the first case in his discussions of consciousness, because we're all aware of our frustration with the topic of consciousness and its seemingly incommensurable first vs. third person modes. The problem here is in how we distinguish between something that is truly cognitively closed and something that is just a very thorny problem that we haven't solved yet. In other words, is there a way we can ever know for sure that something is cognitively closed?

For example: if we solve the Grand Unified Theory, we'll know it's not cognitively closed to us. But until we do, maybe it is, maybe it isn't. For that matter, even after it's solved and a handful of physicists understand it, it will remain cognitively closed to me and most likely you as well - unless there's a way to show there's a difference between not understanding something right now, and not ever being able to understand it in principle. Another chance to clarify what "real" cognitive closure is: certainly my brain as it is now constructed could not understand the G.U.T., because I lack the math. If the G.U.T. is cognitively closed to humans, the structure of our central nervous system assures that no amount of training could sufficiently alter the brain to accommodate the ideas. Again, is there a way to differentiate between these two?

It's worth pointing out that we're increasingly appreciating that the human mind works more like a maze of funhouse mirrors than a crisply calculating abacus - it is full to bursting with blindspots, hangups, and heuristics that may not have been much challenged a hundred thousand years ago in Africa, but today frequently get us in trouble (ask the psychologists - anchoring, sunk cost fallacies, you name it).
The encouraging thing, both in terms of self-actualization and in investigating cognitive closure, is that we have "meta-heuristics" which allow us to occasionally be aware of our own shortcomings in such a way as to avoid those pitfalls. Our minds are clearly inelegant Rube Goldberg contraptions, but that doesn't mean we are helplessly clueless that this is so.

It seems to me that if there were understandable criteria for strong cognitive closure - if we had a list of consistent principles and could say "Anything that requires mental processes X, Y, and Z to understand cannot be understood" - well, then we could understand it. Therefore if such a thing as cognitive closure does exist, it would necessarily include itself as one of the incomprehensibles, and consequently the second case would obtain - that is to say, we cannot be aware of cognitive closure when it occurs. If so, then the discussion ends here: we can never know if we've encountered a closure, and it would be exactly as if cognitive closure did not exist.

A theologian once said that God was so perfect, He didn't have to bother existing. So it is with strong cognitive closure. Having trouble understanding your credit card statement, or written Chinese? Weak (trivial) cognitive closure. Unfortunately I can't point you to an example of strong cognitive closure, because whichever position you take on it, for practical purposes, there isn't any.

Cognitive Enhancers

In Nature, Greely et al write in support of cognitive enhancers. Justin Barnard responds negatively.

We certainly owe ourselves a frank discussion of the potential individual and societal impacts of the increasing use of cognition-enhancing psychoactives; unfortunately Barnard is not contributing to it. As is often the case with such arguments, Barnard appeals (unclearly) to the idea that cognitive enhancement is "unnatural" - that humans, and human nature, are not to be evaluated solely in terms of information-processing ability. But Greely et al do not make such an argument. Their aim is to explore a powerful (and even disruptive) medical technology in terms of expanding its potential benefits and mitigating its risks. It would be equally incoherent for Barnard to object to improved agricultural technology by saying that there is more to man than satiating hunger. Of course there is. The concern is quite wide of the mark: the point is that if we can make our world better with a new technology, we owe it to ourselves to explore that technology.

One problem with arguments like Barnard's about the ethics of self-alteration is that there is always a spectrum. Is it immoral for me to get that third cup of coffee if I'm flagging a little at 3pm? Caffeine is not only a well-established cognitive enhancer; its effects on physical tasks like long-distance running are well known too (here's my recent personal experience in a marathon). Was this "unnatural" of me? Or is it unnatural to raise your kids in a house with lots of books, because access to knowledge and to reading adults has been shown to boost kids' achievement later?

Let's look at another field of endeavor where these judgments are made constantly. Competitive cyclists and marathoners train at high altitudes to boost their red blood cell counts. In what way is relocating to a marginal, low-ppO2 environment for the sole purpose of training "natural"? Athletes who do well in these sports typically have naturally high red blood cell counts to begin with, and high levels of EPO (the hormone that triggers red blood cell production). So they inherited a few stretches of DNA with less stingy regulatory regions than I did. Is this fair? If it's unnatural for me to just take EPO, how about if I boost my own endogenous production? This was a nifty trick developed a decade ago by TransKaryotic Therapeutics, a Boston biotech whose transposon technology got locked up by the legal team of EPO-hoarding Amgen. Still have the "ick" reaction because it's a drug? Then let's review: going to a mountain so the thin air wrings more red blood cells out of your marrow is okay, but doing the same thing by coaxing your own hormone production up (using your own genes!) is not okay. Aren't these distinctions starting to seem arbitrary?

Clearly, whatever the rules regarding performance enhancement in a sport, they have to be consistent within that sport; and clearly, commonplace ninety-minute marathons will not have the same impact that chemically-induced geniuses will. This is exactly why the Nature paper recognizes that we have to proceed cautiously and that safety is paramount - as with any other chemical we develop and ingest to improve our lives. For instance, the currently available cognitive enhancer Adderall is merely a mixture of amphetamines. It's speed. It's entirely appropriate that access to this addictive psychoactive substance is controlled. It's also entirely appropriate that we explore the ways (if any) in which it's acceptable for this drug to be used by healthy people as an enhancer. Maybe for Adderall there are no such ways, but every molecule is different. That is to say, it's completely inappropriate to throw out a whole technology because a single tool has too sharp an edge.

As a more speculative aside, it is my prediction that by the end of the twenty-first century, medicine will run out of diseases which can be treated by waiting until something breaks and then stopping the out-of-control process or aiding an atrophied one. There are many diseases which result from basic design flaws in the architecture of our tissues and the machinery of our cells - accidents waiting to happen - like back problems, hip and knee issues, cancer and autoimmune diseases. Fixes for these problems will require not only germ-line alterations, but even more profound re-engineering of the fundamental cogs and gears of metabolism. Whether we're ready for such an enterprise is a discussion that will likely happen after you and I have both returned to the elements. I make this point to say that I would not necessarily endorse such a move, and Greely et al are clearly not talking about anything so radical either, despite Barnard's anxiety. The Nature paper is advocating a search for better cups of coffee, and a sober discussion about their risk-benefit profile in general. That's all. Before we toss out an entire potential approach to bettering the human condition, the onus is on the advocates of Barnard's position to articulate their counterarguments more clearly, and with fewer appeals to vague romantic intuitions about the meaning of life.