One of the central principles of living things is the preservation of self at the expense of non-self: the maintenance of order by the absorption of energy, often at the expense of others. Of course, cells and most multicellular organisms are intentionless automata. But to intentional beings like us, whose self is identified with our consciousness, which in turn depends on the continued coherence of one physical form, it's easy to make muddy assumptions about the significance and stability of self in other organisms. It's strange that our intention somehow arises from the behavior of an assembly of these intentionless automata. We are at once the observers, and the products, of a process that ensures that entities which act to make more of themselves are the entities we see in the world, and it's very difficult for us not to ascribe intention and agency to all living things, even prokaryotes, exhibiting clear functionality as they do.
You might not like it if your children turned out biologically different from you, but bacteria don't and can't care. If a cell doesn't care that it (or its offspring) may change radically through mutation, then why do cells expend such effort preserving a consistent self? We should expect to see, most prominently, the effects of entities that copy themselves - but because entropy always increases, it's only a matter of time before the copies change. It doesn't matter that "self" is not consistent over time, only that the cause-and-effect tree keeps growing new branches. Yet all cells develop and retain elaborate mechanisms to prevent changes to their DNA. If preservation of a consistent self is not a real end, why do they bother avoiding mutation?
Of course, the obvious answer is that as life becomes more complex, any given mutation is far more likely to be injurious than beneficial. The more complex the organism - the more elements in the system - the more pronounced this effect is likely to be. Simple one-celled organisms can tolerate a slightly higher mutation rate, because they have fewer metabolic entities interacting inside the cell and few if any controlled external interactions with other cells. By analogy, imagine changing software across the entire General Electric family of companies, versus at a one-office specialty manufacturer. Therefore, in bacteria we should expect - and do observe - a higher mutation rate over time, and more diverse and innovative biochemistry at any one point in time. For example, some bacteria use sulfur as the terminal electron acceptor, converting it to hydrogen sulfide, in parallel to aerobic organisms like us, who use O2 as the terminal electron acceptor and convert it to water; in fact, there are families of bacteria, like Syntrophobacterales, in which some members use sulfur and others use oxygen. (Imagine if some species of primates turned out to be sulfur reducers and the rest aerobic - but as said before, you just can't expect that kind of flexibility at G.E.) Viruses have innovated in nucleic acid replication far beyond what cell-based systems use, and they are notoriously sloppy reproducers, far more so even than bacteria (hence the necessary concept of the quasispecies). Although the numbers probably wouldn't surprise us, it would be an interesting exercise to define some quantitative index of biochemical innovation per clade.
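The complexity argument can be made concrete with a toy calculation (all numbers are illustrative, not measured values): if an organism depends on n internal interactions, and a random mutation disrupts any one it touches with some fixed probability, the chance the mutation is tolerated shrinks geometrically with n.

```python
# Toy sketch of the complexity argument (all numbers illustrative):
# a random mutation disrupts any given interaction with probability
# p_disrupt, and fitness requires all n interactions to keep working,
# so the chance a mutation is tolerated falls geometrically with n.

def p_tolerated(n_interactions, p_disrupt=0.05):
    """Probability that a random mutation leaves all interactions intact."""
    return (1 - p_disrupt) ** n_interactions

simple_cell = p_tolerated(20)     # few interacting parts: roughly a third tolerated
complex_org = p_tolerated(200)    # many interacting parts: vanishingly few tolerated
```

Under these assumed numbers, the one-office shop tolerates about a third of random changes, while the G.E.-scale organism tolerates almost none - which is the whole analogy in one exponent.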
If cells do not expend effort commensurate with the likely damage from mutations, they will die, and we won't see their descendants. "Commensurate" means that the more likely a mutation is to be deleterious, the more a cell (at evolutionary steady state with respect to mutation) will spend to make sure it doesn't happen. Probable fitness cost is determined not just by the chance that a mutation will be good or bad, but by how good or bad it will be. At a guess, the average deleterious mutation damages an organism's fitness more than the rare beneficial mutation improves it. It should be possible to add up the energy that (for example) DNA Pol I and the other proofreading systems in bacteria require for activity. If we assume that mutation costs are at steady state (a safe first approximation after 3.7 billion years of these molecules running around loose), then this number will be a good reflection of the fitness cost of mutations to these organisms. It's also likely to be higher for multicellular organisms and lower for viruses, on a per-base-pair basis. Even if cells were capable of ensuring 100% fidelity, there's very likely some point of diminishing marginal returns beyond which it's no longer profitable for the cell to keep proofreading.
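The diminishing-returns point can be sketched with a hypothetical cost model (every parameter here is made up for illustration): suppose proofreading spend suppresses the mutation rate exponentially, and each mutation carries a fixed expected fitness penalty. The total cost - mutation damage plus proofreading - then has an interior minimum, and spending past it is a net loss.

```python
import math

# Hypothetical cost model: a cell spends energy e on proofreading,
# which suppresses the mutation rate exponentially,
# mu(e) = mu0 * exp(-e / scale). Each mutation carries an expected
# fitness penalty 'harm'. Total cost = mutation damage + spend.

def total_cost(e, mu0=1.0, scale=2.0, harm=50.0):
    return mu0 * math.exp(-e / scale) * harm + e

# Scan spends for the minimum. Beyond it, an extra unit of
# proofreading buys less fidelity than it costs, which is why
# perfect (100%) fidelity is never worth paying for.
spends = [i * 0.1 for i in range(200)]
best_spend = min(spends, key=total_cost)
```

With these illustrative numbers the optimum lands well short of maximal proofreading: the cell at evolutionary steady state spends exactly until marginal fidelity gained equals marginal energy paid.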
Now imagine a planet with particularly forgiving biochemistry, where mutations are equally likely to be positive or negative and (further simplifying) always equally good or bad. In this scenario (and any scenario more benign than it), cells which expend any effort trying to stop mutations are wasting their time and are at a fitness disadvantage. Mutation would proceed rapidly and there would be no stable lineages. Although you would eventually see reproductive isolation, you most emphatically would not see any one stable species or haplotype predominate, aside from the early tendency of organisms closer to the mean (the ancestral starting point that sets the center of the normal distribution) to predominate before the population reaches the limits of its environment. After that, the allele distribution would become truly random.[1]
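The thought experiment is easy to simulate in miniature (a sketch under stated assumptions: one numeric "trait" per individual, symmetric unit-step mutations every generation, no selection). The population mean stays near the ancestral state, but the spread grows without bound - no stable lineage persists.

```python
import random

# Sketch of the forgiving-planet scenario: every mutation is neutral
# (help and harm equally likely and equally sized), so nothing anchors
# lineages. Each individual is reduced to one numeric trait; each
# generation, every individual's trait steps +1 or -1 at random,
# with no selection at all.

random.seed(42)  # fixed seed so the run is reproducible

def drift(pop_size=500, generations=200):
    pop = [0] * pop_size                 # all start at the ancestral state
    for _ in range(generations):
        pop = [x + random.choice((-1, 1)) for x in pop]
    return pop

final = drift()
mean_trait = sum(final) / len(final)     # stays near the ancestral mean...
spread = max(final) - min(final)         # ...but the spread keeps widening
```

Run longer and the spread only grows; the early clustering around the ancestral mean is transient, exactly as the scenario predicts.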
In contrast, in our world there are species whose gene pools are stable over long periods of time, relative to the lifespans of the cells that make up those species. Therefore altruism can appear: if a gene comes along that gives its cell the ability to recognize other carriers and treat them preferentially, that gene becomes more likely to be seen in the future. But in our imaginary world of neutral-or-better and therefore constant mutation, there are no stable species. Unless a gene arose that could somehow measure phylogenetic distance in general and act proportionally to it, there would be little altruism.
Mutation cost is not context-independent, and the following consideration of how to predict and manage it might seem teleological, but it turns out to have real-world correlates. Imagine (back in our own world now) an organism that's doing badly. Some indicators would be that it doesn't encounter many conspecifics (because they're all dead, or because it has migrated into a novel environment), or that it is always starving, or that it's under temperature stress. If you were that organism, and you had to bet on how well optimized your genes were for your environment, you'd bet not very - or at least you'd give slightly worse odds than if you were making the bet while doing okay. (There are some huge leaps there, but you're necessarily making a decision with incomplete information.) Consequently, the chance of a mutation having a beneficial effect in an environment where you're doing badly is slightly higher than in one where you're doing well, because you can be a little more confident that you (and your genes) are less likely to be near a summit on the fitness landscape. In the extreme, loser organisms might be better off with just about any change. If there's any way for the organism to recognize its bad fortune, and then reduce how much it spends on proofreading - or in some other way allow mistakes to be expressed - that's the time to do it.
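The "loser's bet" intuition can be checked with a toy model in the spirit of Fisher's geometric model (the landscape and step size here are invented for illustration): fitness peaks at trait value 0, a mutation perturbs the trait by a random step, and we estimate the chance the step helps at different starting distances from the peak.

```python
import random

# Toy fitness-landscape bet: fitness is highest at trait value 0,
# a mutation shifts the trait by a uniform random step, and a
# mutation is "beneficial" if it moves the trait closer to the peak.

random.seed(2)  # fixed seed so estimates are reproducible

def p_beneficial(x, step=1.0, trials=20000):
    """Monte Carlo estimate of the chance a random step improves fitness."""
    wins = 0
    for _ in range(trials):
        dx = random.uniform(-step, step)
        if abs(x + dx) < abs(x):    # closer to the peak at 0
            wins += 1
    return wins / trials

near = p_beneficial(0.1)   # organism doing well, near the peak
far = p_beneficial(5.0)    # organism doing badly, far from the peak
```

Far from the peak, about half of all steps help; near the peak almost none do. That asymmetry is exactly the reason a struggling organism's odds on any given mutation are slightly better.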
As it turns out, such a mechanism exists. Hsp90, a chaperone protein with homologs in most cells, conceals mutations by correctly folding mutant proteins - except under restrictive conditions, like temperature stress. The mutation rate does not change, but in response to underperformance, Hsp90 can suddenly unmask the accumulated genotypic variation, which then appears as phenotypic variation. Rutherford and Lindquist neatly termed this phenomenon evolutionary capacitance[2], and later groups explored the concept in the abstract[3].
It bears speculating what other meta-selection tricks cells might have developed. Are there mechanisms to slow evolution in successful species? In other words, do consistently well-fed organisms, or ones crowded by the success of their own species (for example, cultured bacteria or New Yorkers), spend more effort on tricks to slow evolution, in recognition that they may well be near a fitness peak, where mutations are slightly more likely to be harmful? Cells in active, dense culture (but with sufficient resources) could be tested for mutation rate, controlling for the metabolic changes that occur in response to crowding. The interesting result would be if they actually mutate more slowly than before the culture became dense. [Added later: when I wrote this I wasn't aware of the phenomenon of quorum sensing. Best known in bacteria, it also occurs in some metazoans. In fact some work has shown a link between quorum sensing and mutation, but not the one I had predicted. That is, I had predicted quorum-sensing bacteria that mutate more slowly in crowded conditions with conspecifics, because avoiding mutation is worth the energy when they're more likely to be in an optimal environment. What has been observed in P. aeruginosa instead is that "high frequency" strains emerge in which certain virulence factors have been induced in a way suggestive of quorum induction, but in which the quorum-sensing genes have been deactivated by mutation more often than would otherwise be expected.]
There are cases where organisms intentionally mutate, the best example being the adaptive immune system of vertebrates. (Note, in the context of the prior argument, that this mutation rate has not been shown to change with stress.) Lymphocytes produce molecules with randomly variable peptide sequences at specific positions (one of these molecule classes is antibodies). Because this hypermutation is always confined to a strictly delineated stretch of amino acid residues within a peptide, the innovation is in effect kept safely inside a box. That such a clever and complex mechanism should first emerge in response to the constant assault of pathogens is probably not surprising. But if it appeared once - are there organisms with other kinds of built-in selection laboratories for other purposes? It's always easier to disrupt something than to improve it, and what lymphocyte hypermutation does is disrupt pathogens. If there are other such selection systems in biology, chances are their function is to invent new ways to break other organisms, as with the adaptive immune system. A prime place to start looking would be venoms.
REFERENCES AND FOOTNOTES
[1] The thought experiment of the forgiving-DNA planet (with mutations equally likely to help or hurt) concluded that there would be no stable lineages. An added complication is that mutations would still produce reproductive isolation, and with it speciation, though still without stable lineages within each reproductive silo. Language, which often branches from a common ancestor and can be traced "genetically", follows a very similar pattern, since to a first approximation, phonological and morphosyntactic innovations are neutral to the function of the language. Reproductive isolation still occurs (i.e., an English speaker can't understand a Dutchman or a German), but there are also dialect spectra (i.e., Dutch and German have intermediates that are mutually intelligible with both). It's difficult to say objectively whether these spectra are broader or occur more frequently in language systems than in gene systems.
[2] Rutherford SL and Lindquist S. Hsp90 as a capacitor for morphological evolution. Nature. 1998 Nov 26;396(6709):336-42.
[3] Bergman A and Siegal ML. Evolutionary capacitance as a general feature of complex gene networks. Nature. 2003 Jul 31;424(6948):549-52.