In my previous post I asked why, if we do not have free will and the path of the universe is set in stone, we should have a seemingly privileged timepoint called "now". With no free will, there are no more degrees of freedom as you're reading this "now", "in the present", than there are for something that happened ten minutes ago, or in 1588. In this setting "now" seems especially arbitrary, and one wonders why nervous systems of this sort, i.e. ones constrained to a single gradually changing temporal perspective, would ever appear - since events are all settled anyway. If we live in a four-dimensional block of frozen space-time, why can't we see the whole thing? Why do we seem limited to one slowly shifting level within it?
Another way of looking at it (and responding to TGP's statement that "now" is a given) is to imagine a visit to Flatland. In Abbott's original conception, Flatland appears to three-dimensional beings as a plane in which two-dimensional creatures like squares and circles go about their lives, unaware (and unable to be aware) that from above and below they are being observed by extra-dimensional beings. Abbott used Flatland to argue by analogy how four-dimensional objects would interact with and appear in our own three-dimensional universe (see the link for the full treatment).
If you look at our universe as four-dimensional space-time, then you can consider Flatland to be not two-dimensional but three-dimensional plane-time. In a no-free-will Flatland, their universe would look to us like a tall box, with the set-in-stone time-track of every square and circle twisting through it like tunnels in an ant colony. If you wanted to be a three-dimensional sadist, you could climb up a ladder and look at Mr. Square at the moment of his death in a two-dimensional hospital. Then you could climb back down, insert yourself into Flatland again, and find him enjoying lunch in a park the day after his twenty-third birthday. "You will die on the following date and at the following time; I know, because I have already seen it." Do you see why this is strange? From your three-dimensional standpoint, no-free-will Flatland is a giant, static sculpture. Why would the awareness of any entity in that block be constrained to any one plane within it?
By the same argument, in no-free-will space-land (where we live, if you don't believe in free will anyway), we're stuck in a block of four-dimensional space-time. Four-dimensional sadists are free to go scrambling up and down this block just as you did over Mr. Square's universe, except that they are looking for nasty tidbits to relay to unfortunate three-dimensional suckers like you. A four-dimensional sadist could pop in ninety seconds from now and tell you that you'll be smooshed by a rabid slime mold on 19 July 2025, and that it knows because it has already seen it happen. And in a very real sense, in a no-free-will universe, it already has happened. The disconnect is that you haven't experienced it yet, and in a no-free-will universe, that's what seems strange. If the events happening now are just as certain as the events happening then, why isn't seeing the future the same as turning your head to look at the other side of the room you're in? It's all already there.
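To make "it's all already there" concrete, here is a toy sketch (in Python; the world, Mr. Square's trajectory, and every coordinate are invented for illustration) of a no-free-will Flatland as one static array, where visiting any moment, future included, is nothing more than indexing:

```python
# Toy block universe: no-free-will Flatland's entire history as one static
# array. Nothing is computed as "time passes"; every event simply sits at
# its coordinates. All contents are invented for illustration.

import numpy as np

WIDTH, HEIGHT, DURATION = 10, 10, 100  # two space axes plus one time axis

# The whole of Flatland history, fixed once and never modified afterward.
block = np.zeros((DURATION, HEIGHT, WIDTH), dtype=bool)

# Mr. Square's set-in-stone time-track: he drifts one cell to the right
# every ten ticks, like a tunnel twisting through the block.
for t in range(DURATION):
    block[t, 5, min(t // 10, WIDTH - 1)] = True

now = 23  # the plane Mr. Square's awareness happens to occupy

present_slice = block[now]         # what "now" looks like from inside
final_slice = block[DURATION - 1]  # what the sadist on the ladder sees

# From the third dimension both slices are equally available and equally
# fixed; the puzzle is why awareness should be pinned to only one of them.
print("Mr. Square's position now:     ", np.argwhere(present_slice)[0])
print("Mr. Square's position at death:", np.argwhere(final_slice)[0])
```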
An implication: if we again assume a literal interpretation of multidimensional models of the universe, then a universe with a finite set of dimensions would necessarily be deterministic. The highest dimension would be a static one, and Mr. Square can't have free will if we don't.
Wednesday, October 21, 2009
Two Questions I Was Apparently Predestined to Ask
To those who think free will is an illusion: why does it seem like we have free will? More to the point, why do we perceive a special point in time we call "now"?
Panpsychist Accounts of Consciousness Are Still Testable
One challenge to David Chalmers' panpsychist account of consciousness is that it is untestable. If you argue that consciousness is everywhere (so goes the objection), then no observation can disprove your theory; therefore it is not a sound theory.
Is this a valid objection? Chalmers argues that consciousness is a primitive feature of existence like charge or mass: that no dimensional analysis combining the four received basic units (charge, mass, distance, and time) can "get us to" experience. One manifestation of mass is gravity, which is continuous throughout the universe; it is everywhere. Can gravity not be tested? The laws surrounding gravitation certainly can be, even though there is nowhere that gravity is truly zero.
If consciousness is (at least partly) epiphenomenal and supervenes lawfully on observable patterns in the material world, then these lawful relationships can and should be tested. The powerlessness of consciousness in epiphenomenal accounts (i.e. that our consciousness is caused, but does not cause anything, and we are in effect just along for the ride) is a problem that we've been wrestling with since Descartes and before, but it is a separate one. To argue the universality of consciousness does not make it any more untestable than gravity.
Thursday, October 8, 2009
Hot Pics!
Let me just be the first to say: that's one good-looking brain. In particular, what a big hippocampus it has (all the better to remember you with).
In fairness, perhaps I am - it is - a biased observer of myitself. Perhaps the study of the mind requires new pronouns.
I didn't just trip and fall into an MRI; I participated as a subject in a memory-task fMRI study at my alma mater-to-be, UCSD, by the same group that wrote the voodoo correlations paper.
The worst thing about the experience? Trying to stay awake for the whole hour without being able to control any stimuli. I hope I gave them good data.
Monday, October 5, 2009
Resistance to Mutation and Preservation of Self
One of the main principles of living things is the preservation of self at the expense of non-self: the maintenance of order by the absorption of energy, often at the expense of others. Of course, cells and most multicellular organisms are intentionless automata. But intentional beings like us, whose selves are identified with our consciousness (itself dependent on the continued coherence of one physical form), easily make muddy assumptions about the significance and stability of self in other organisms. It's strange that our intention somehow arises from the behavior of an assembly of these intentionless automata. We are at once observers of, and products of, a process that ensures that entities which take actions to make more of themselves are the entities we see in the world, and it's very difficult for us not to ascribe intention and agency to all living things, even prokaryotes, exhibiting clear functionality as they do.
You might not like it if your children end up biologically different from you, but bacteria don't and can't care. If a cell doesn't care that it (or its offspring) may change radically when it mutates, then why do cells expend such effort preserving consistency of self? We should expect to see most prominently the effects of entities that copy themselves - but because entropy always increases, it's only a matter of time before they change. It doesn't matter that "self" is not consistent over time, just that the cause-and-effect tree continues growing new branches. Yet all cells develop and retain elaborate mechanisms to prevent changes to their DNA. If preservation of a consistent self is not a real end, why do they bother avoiding mutation?
Of course, the obvious answer is that as life becomes more complex, any given mutation is far more likely to be injurious than beneficial. The more complex the organism (the more elements in the system), the more pronounced this becomes. Simple one-celled organisms can tolerate a somewhat higher mutation rate, because they have fewer metabolic entities interacting inside the cell and few if any controlled external interactions with other cells. By analogy, imagine changing software across the entire General Electric family of companies versus at a one-office specialty manufacturer. Therefore, in bacteria we should expect, and do observe, a higher mutation rate over time, and more diverse and innovative biochemistry at any one point in time. For example, some bacteria can use sulfur as the terminal electron acceptor, converting it to hydrogen sulfide, parallel to aerobic organisms like us, who breathe O2 as the terminal electron acceptor and convert it to water; in fact, there are families of bacteria in which some members use sulfur and some use oxygen (like Syntrophobacterales - imagine if some species of primates were found to be sulfur-reducing while the rest were aerobic; but as said before, you just can't expect that kind of flexibility at G.E.). Viruses have also been able to innovate in nucleic acid reproduction far beyond what cell-based systems use, and they are notoriously sloppy in reproduction, far more so even than bacteria (hence the necessary concept of the quasispecies). Although the numbers probably wouldn't surprise us, it would be an interesting exercise to define some quantitative index of biochemical innovation per clade.
If cells do not expend effort commensurate with the likely damage from mutations, they will die, and we won't see their descendants. "Commensurate" means that the more likely a mutation is to be deleterious, the more a cell at evolutionary steady state with respect to mutation will spend to make sure it doesn't happen. Probable fitness cost is determined not just by the chance that a mutation will be good or bad, but by how good or bad it will be. At a guess, the average deleterious mutation damages an organism's fitness more than the rare beneficial mutation improves it. It should be possible to add up the energy that (for example) DNA Pol I and the other proofreading systems in bacteria require for activity. If we assume that mutation costs are at steady state (a safe first approximation after 3.7 billion years of these molecules running around loose), then this number will be a good reflection of the fitness cost of mutations to these organisms. It's also likely to be higher for multicellular organisms and lower for viruses on a per-base-pair basis. Even if cells were capable of ensuring 100% fidelity, there is very likely some point of diminishing marginal returns beyond which it's no longer profitable for the cell to keep proofreading.
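As a toy illustration of "commensurate" spending (every number here is a hypothetical stand-in, not a measured quantity), this sketch finds the investment level at which one more unit of energy spent on proofreading stops paying for itself:

```python
# Toy model of commensurate proofreading investment. Fidelity improves with
# investment but with diminishing returns, so past some point another unit
# of energy saves less expected fitness than it costs.

import numpy as np

BASE_ERROR_RATE = 1.0  # unrepaired mutations per genome per replication (hypothetical)
COST_PER_ERROR = 5.0   # mean fitness cost of one unrepaired mutation (hypothetical)
EFFICIENCY = 2.0       # how fast fidelity improves per unit of energy invested

def error_rate(investment):
    """Diminishing returns: each extra unit of energy repairs a fixed
    fraction of the remaining errors."""
    return BASE_ERROR_RATE * np.exp(-EFFICIENCY * investment)

def total_fitness_cost(investment):
    """Energy spent on proofreading plus expected loss from residual errors."""
    return investment + COST_PER_ERROR * error_rate(investment)

investments = np.linspace(0, 5, 1000)
best = investments[np.argmin(total_fitness_cost(investments))]
print(f"optimal proofreading investment ~ {best:.2f} energy units")
print(f"residual error rate at optimum  ~ {error_rate(best):.2f} per genome")
# Past this optimum, marginal fidelity gains cost more than they save:
# even a cell capable of 100% fidelity shouldn't pay for it.
```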
Now imagine a planet with particularly forgiving biochemistry, where mutations are equally likely to be positive or negative and (further simplifying) always equally good or bad. In this scenario (and in any scenario more benign than it), cells that expend any effort trying to stop mutations are wasting their time and are at a fitness disadvantage. Mutation would occur rapidly and there would be no stable lineages. Although you would eventually see reproductive isolation, you most emphatically would not see any one stable species or haplotype predominate, aside from the fact that organisms closer to the mean (the ancestral starting point that sets the center of the distribution) would probably predominate in the early period, before the population reaches the bounds of its environment. After that, the allele distribution would drift toward truly random.[1]
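A minimal simulation makes the point, under arbitrary assumed values for genome length, population size, and mutation rate: with all mutations neutral and no proofreading, the population wanders away from the ancestral haplotype until allele frequencies are effectively random:

```python
# Toy simulation of the forgiving-biochemistry planet: every mutation is
# fitness-neutral, so nothing anchors any haplotype and lineages drift
# without limit. All parameters are arbitrary illustrations.

import random

GENOME_LEN = 100
POP_SIZE = 200
MUTATIONS_PER_GEN = 3  # per genome per generation; no proofreading at all
GENERATIONS = 500

ancestor = [0] * GENOME_LEN
population = [list(ancestor) for _ in range(POP_SIZE)]

def distance(a, b):
    return sum(x != y for x, y in zip(a, b))

for _ in range(GENERATIONS):
    # Neutral reproduction: every genome is equally fit, so parents are
    # drawn uniformly at random.
    population = [list(random.choice(population)) for _ in range(POP_SIZE)]
    for genome in population:
        for _ in range(MUTATIONS_PER_GEN):
            genome[random.randrange(GENOME_LEN)] ^= 1  # flip one site

mean_div = sum(distance(g, ancestor) for g in population) / POP_SIZE
print(f"mean distance from ancestral haplotype after {GENERATIONS} "
      f"generations: {mean_div:.1f} / {GENOME_LEN}")
# Early on, genomes cluster near the ancestor (the center of the
# distribution); with enough generations each site approaches 50/50 and
# the haplotype distribution becomes effectively random - no stable lineages.
```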
In contrast, in our world there are species whose gene pools are stable over long periods of time, relative to the behavior of the cells that make up those species. Altruism can therefore appear if a gene comes along that gives its cell the ability to recognize other carriers and treat them preferentially, making it more likely that we see that gene in the future. But in our imaginary world of neutral-or-better and therefore constant mutation, there are no stable species. Unless a gene arises that can somehow measure phylogenetic distance in general and act proportionally to it, there would be little altruism.
Mutation cost is not context-independent, and the following consideration of how an organism might predict and manage it may seem teleological, but it turns out to have real-world correlates. Imagine (back in our own world now) an organism that's doing badly. Some indicators of doing badly: it doesn't encounter many conspecifics (because they're all dead, or because it has migrated into a novel environment), or it is always starving, or it's under temperature stress. If you were that organism and had to bet on how well optimized your genes were for your environment, you'd bet not very - or at least you'd give slightly worse odds than if you were making the bet while doing okay. (There are some huge leaps there, but you're necessarily making a decision with incomplete information.) Consequently, the chance of a mutation having a beneficial effect in an environment where you're doing badly is slightly higher than in one where you're doing well, because you can be a little more confident that you (and your genes) are not near a summit on the fitness landscape. To put it in the extreme, loser organisms might be better off with just about any change. If there's any way for the organism to recognize its bad fortune and then adjust how much it spends on proofreading - or in some way allow mistakes to be expressed - that's the time.
As it turns out, such a mechanism exists. Hsp90, a chaperone protein with homologs in most cells, conceals mutations by correctly folding mutant proteins - except under restrictive conditions, like temperature stress. The mutation rate does not change, but in response to underperformance, Hsp90 can suddenly unmask the accumulated genotypic variation, which then appears as phenotypic variation. Rutherford and Lindquist neatly termed this phenomenon evolutionary capacitance[2], and later groups explored the concept in the abstract[3].
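A minimal sketch of the capacitance idea (the buffering rule and the numbers are illustrative assumptions, not Hsp90's actual biochemistry): variation accumulates silently while the buffer is on, and stress releases it as phenotypic variance all at once - exactly when, per the argument above, variation is most worth expressing:

```python
# Sketch of evolutionary capacitance in the Rutherford & Lindquist sense:
# hidden genotypic variation accumulates while a chaperone masks it, and
# stress releases it all at once as phenotypic variation. The mutation
# rate itself never changes.

import random

POP_SIZE = 1000

# Each individual's cryptic variation: deviation from the canonical
# phenotype that Hsp90-style buffering normally hides.
cryptic = [random.gauss(0, 1.0) for _ in range(POP_SIZE)]

def phenotype(hidden_variation, stressed):
    # Unstressed: the chaperone folds mutant proteins correctly, so the
    # phenotype collapses to the canonical value regardless of genotype.
    # Stressed: the chaperone is titrated away and genotype shows through.
    return hidden_variation if stressed else 0.0

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

for stressed in (False, True):
    phenos = [phenotype(c, stressed) for c in cryptic]
    label = "stressed (buffer off)" if stressed else "unstressed (buffer on)"
    print(f"{label}: phenotypic variance = {variance(phenos):.2f}")
# Same genotypes, same mutation rate; only the *expression* of accumulated
# variation changes with the organism's circumstances.
```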
It bears speculating what other meta-selection tricks cells might have developed. Are there mechanisms to slow evolution in successful species? In other words, do consistently well-fed organisms, and/or ones crowded by the success of their own species (cultured bacteria, say, or New Yorkers), spend more effort on tricks to slow evolution, in recognition that they may well be near a fitness peak, making mutations slightly more likely to be harmful? Cells in active, dense culture (but with sufficient resources) could be tested for mutation rate, controlling for metabolic changes that occur in response to crowding. The interesting result would be that they actually mutate more slowly than before the culture became dense. [Added later: when I wrote this I wasn't aware of the phenomenon of quorum sensing. Best known in bacteria, it also occurs in some metazoans. In fact, some work has shown a link between quorum sensing and mutation, but not the one I predicted. I had predicted quorum-sensing bacteria that mutate more slowly in crowded conditions with conspecifics, because crowding suggests an optimal environment and therefore makes it worth the energy to avoid mutation. What has been observed in P. aeruginosa instead is the emergence of "high frequency" strains in which certain virulence factors have been induced in a way suggestive of quorum induction, but in which the quorum-sensing genes have been deactivated by mutation more often than would otherwise be expected.]
There are cases where organisms mutate on purpose, the best example being the adaptive immune system of vertebrates. (Note, in the context of the prior argument, that this mutation rate has not been shown to change with stress.) Lymphocytes produce molecules with specific, randomly variable peptide sequences (one of these molecule classes is antibodies). Because this hypermutation is always confined to a strictly delineated region of the encoding gene, and therefore of the peptide, the innovation is in effect safely inside a box. That such a clever and complex mechanism should first emerge in response to the constant assault of pathogens is probably not surprising. But if it appeared once - are there organisms with other kinds of built-in selection laboratories, for other purposes? It's always easier to disrupt something than to improve it, and what lymphocyte hypermutation is doing is disrupting pathogens. If there are other such selection systems in biology, chances are their function is to invent new ways to break other organisms, as with the adaptive immune system. A prime place to start looking would be venoms.
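Here's a sketch of the "innovation in a box" idea, with a made-up scaffold sequence and window coordinates (nothing here corresponds to a real immunoglobulin): mutation is free to explore inside the delineated window and forbidden everywhere else:

```python
# Sketch of hypermutation confined to a delineated window, as in lymphocyte
# somatic hypermutation: only the variable region is randomized while the
# framework is left intact. Sequence and coordinates are invented.

import random

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"      # the 20 amino acids
FRAMEWORK = "MKLWVTFISLLFLFSSAYS"      # hypothetical constant scaffold
VARIABLE_START, VARIABLE_END = 7, 12   # the only positions allowed to change

def hypermutate(seq, n_mutations=3):
    """Randomize residues only inside the permitted window."""
    seq = list(seq)
    for _ in range(n_mutations):
        pos = random.randrange(VARIABLE_START, VARIABLE_END)
        seq[pos] = random.choice(ALPHABET)
    return "".join(seq)

variants = {hypermutate(FRAMEWORK) for _ in range(5)}
for v in sorted(variants):
    print(v)
# Every variant differs only in positions 7 through 11: the scaffold that
# keeps the molecule functional is never at risk, while the recognition
# surface explores sequence space freely.
```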
REFERENCES AND FOOTNOTES
[1] The thought experiment of the forgiving-DNA planet (with mutations equally likely to help or hurt) concluded that there would be no stable lineages. An added wrinkle is that mutations would still produce reproductive isolation, and therefore speciation, though without stable lineages inside each reproductive silo. Language, which often branches from a common ancestor and can be traced "genetically", follows a very similar pattern, since to a first approximation phonological and morphosyntactic innovations are neutral to the function of the language. Reproductive isolation still occurs (e.g., an English speaker can't understand a Dutchman or a German), but there are also dialect spectra (e.g. Dutch and German have intermediates mutually intelligible to both). It's difficult to say objectively whether these spectra are broader or occur more frequently in language systems than in gene systems.
[2] Rutherford SL and Lindquist S. Hsp90 as a capacitor for morphological evolution. Nature. 1998 Nov 26;396(6709):336-42.
[3] Bergman A and Siegal ML. Evolutionary capacitance as a general feature of complex gene networks. Nature. 2003 Jul 31;424(6948):549-52.