Consciousness and how it got to be that way

Wednesday, July 28, 2010

Reflections on Wigner 1: Humility in Pattern Recognition - Mathematics as a Special Case

It's often asked how natural selection could have produced something like the mathematical ability of modern humans. Why can an ape, designed to mate, fight, hunt and run on a savanna, and to perceive things that occur on a time scale of seconds to minutes and a size scale of a centimeter to a few hundred meters, even partly understand quarks and galaxies? Implicit in the question is an admiration for that ability and for the power of mathematics, as well as an assumption held by physicists that deserves examination.

The physicists' assumption is that the whole of nature, or at least the important parts of it, can be described by mathematics. In "The Unreasonable Effectiveness of Mathematics in the Natural Sciences", Wigner observes: "Galileo's restriction of his observations to relatively heavy bodies was the most important step in this regard. Again, it is true that if there were no phenomena which are independent of all but a manageably small set of conditions, physics would be impossible." Another way of saying this is that the regular relationships in nature most easily recognizable by our nervous systems are the parts of nature we are most likely to notice first; seasonal agriculture preceded gravitation for this reason. But there is a circular, self-selection issue here in the interesting correspondence between the empirical behavior of nature and the mathematical relationships humans are capable of understanding, which is that:

a) Humans can understand math.

b) What we have most clearly and exactly understood of nature so far (physics) employs math.

c) Therefore, math uniquely and accurately describes nature.

Point b may be true only because our limited pattern recognition ability (even including infinitely recursive propositional thinking like math within that term) allows us to recognize only a certain limited group of relationships among all possible relationships in nature. In other words, of course we've discovered physics, because those relationships are the ones we can most easily recognize! It's as if someone with a ruler goes around measuring things, and at the end of the day looks at the data she's collected and is amazed that it was exactly the kind of data you can collect with a ruler.
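
A toy simulation makes the circularity concrete. This is only a sketch under invented assumptions (the uniform "complexity scores" and the threshold of 20 are arbitrary stand-ins, not anything measurable): an observer who can only recognize relationships below some cognitive threshold ends up with a catalog that is 100% recognizable, no matter how much of nature was filtered out along the way.

```python
import random

random.seed(0)

# Hypothetical "relationships in nature", each with an arbitrary complexity score.
relationships = [random.uniform(0, 100) for _ in range(10_000)]

THRESHOLD = 20  # invented stand-in for the limits of human pattern recognition

# The observer's catalog: only the relationships simple enough to recognize.
catalog = [r for r in relationships if r <= THRESHOLD]

print(f"fraction of nature catalogued: {len(catalog) / len(relationships):.2f}")
# Every entry in the catalog is recognizable *by construction* -- being amazed
# at this is the ruler-wielder's mistake described above.
```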

This discussion is far from an attack on the usefulness of mathematics. If you have a model that worked in the past, bet on it working in the future; and the fact that not everything in the universe has yet been shown to be predictable by mathematical relationships is certainly not cause to say "We've been at it in earnest for a few centuries and haven't shown how math predicts everything; time to quit." But it also certainly isn't time to say that math can show or has shown everything important, and that the rest is necessarily detail. The whole endeavor of truth-seeking has, I think, at least something to do with decreasing suffering and increasing happiness, both very real parts of nature, and as yet there are very few mathematical relationships concerning them. I look forward to the day that such relationships are shown, but we cannot assume that they exist, or that if they don't, suffering is unimportant.

One problem is that if indeed there are relationships in nature un-graspable by human cognition or mathematics (and note that I've made no argument as to whether those two things are the same), how could we know? Such a relationship would just look like noise, and we couldn't tell whether a knowable relationship was there and had yet to be pulled out, or whether there was nothing to know (or nothing knowable). We might at least know whether such unknowable information, or "detail", could exist, if we had some proof within our propositional system that there are statements which are true but cannot be deduced from the system. And in Gödel's incompleteness theorems we have just such a proof.

If we regard mathematics as a formalist does, as a trick of our neurology that happens to correspond usefully enough to nature, the question of why math is useful at all becomes even more important. But if we inject a little humility into our celebration of our own propositional cleverness, the matter seems less pressing.

We have no reason to believe that the total space of comprehensible relationships in nature is not far, far larger than what is encompassed by "mathematics", even in math's fullest extension. If this is the case, it is easier to see how our mathematical ability could be a side effect of natural selection and the nervous system it created. By giving us a larger working memory than our fellow species, along with some ability to assign symbols arbitrarily, that nervous system does allow us to use propositional thinking to see nature beyond the limitations of our senses - but just barely.

In this view, we can perceive just the barest "extra" layer of nature beyond what our immediate senses give us, and mathematics seems far less surprising or miraculous. There is still reason enough to investigate math's unreasonable effectiveness, but we shouldn't insist on being shocked that it could have been produced by the hit-and-miss kluges of evolution. But I've made another circular assumption here, which is:

a) Evolution proceeds according to natural law.

b) Evolution will therefore favor replicators that have some appreciation of some of those natural laws and modify their behavior accordingly.

c) Therefore, our ability to perceive the laws that have impacted our own survival, and maybe a few extra ones of the same form, should be expected.


There are two mysteries, then: first, that any type of regular pattern exists in nature at all, and second, that we are able to apprehend these patterns, particularly through mathematics. The second mystery probably dissolves: it seems special only because of the likely incompleteness of math as a tool to describe nature, math's status as a special case of perception stemming from our own neurology, and the circular basis of our wonder at this as-yet early phase in our use of it. But the first question, of how or why even partly regular relationships appear to exist at all in nature, regardless of how we perceive them, remains untouched by this essay.

Tuesday, July 27, 2010

In the Land of the Blind

...the one-eyed man must remember the majority is always sane. The Country of the Blind by H.G. Wells explores how an entirely blind civilization might view the universe, and how sighted people might interact with them. The Churchlands' infra-people, while argumentative, seem quite diplomatic by comparison.

Sunday, July 25, 2010

Redwoods Aren't That Ancient or Special


Redwood Preserve in the Oakland Hills.


It's a common claim on informational signs in California parks that "redwoods were around at the time of the dinosaurs", or some such statement. While they're certainly amazing organisms, are these really tree-coelacanths?

Timetree consistently gives a divergence of trees in the family Cupressaceae (redwoods, junipers, various cypresses) at 80 MYA. We know from well-preserved fossilized trees that there were trees growing in the late Cretaceous that looked like modern redwoods, in the same place that modern redwoods grow. (This particular petrified forest near Napa, California is by far the most amazing petrified forest I've ever seen. I had to touch the trees to convince myself they're stone and not wood.)

Because fossilized "redwoods" date back to just after the putative divergence time, it's likely that modern redwoods are merely the more-ancestral-appearing descendants (relative to junipers and cypresses) of these ancestral trees. The size and bark of the fossilized trees look similar to today's redwoods, but that certainly doesn't tell the whole story about them, and barring miraculously preserved 65-million-year-old Cretaceous tree DNA, that's about all we'll get. Consequently those old trees would more appropriately be called ancestral cypresses. Maybe the Ancient Giant Cypress?

Either way, redwoods are still pretty special, though not because their chromosomes somehow resisted entropy for 65 million years.


Saturday, July 3, 2010

Do Small Biotechs Really Produce More AND BETTER Drug Candidates?

...and if so, why?

I occasionally post about biotechnology industry issues here insofar as they're relevant to the more central topics of this blog; the productivity of the private-sector biotech research enterprise directly bears on the tools we will have in the future to investigate cognition, as well as to treat patients with cognitive and neurological disorders. If you're an academic scientist or philosopher and you find all this very dry, I would advise you to at least skim it so you can get an idea of what goes on in the evil world of industry. One thing I will say in defense of the private sector: workers are much, much better treated than they are in academia, not just in terms of money, but in working conditions and general treatment by superiors.

It's a cliche that Big Pharma can't find its own leads and has bought its pipeline from biotech for the past 10-15 years, with biotech serving effectively as free-range R&D (until the round-up). Having spent most of my time before medical school consulting at smaller biotech companies, and several times finding myself with free time because one of those companies was bought for its portfolio and closed, I've spent my fair share of time wondering about this question. Yet I can't actually recall seeing an analysis of biotech vs. big pharma output, or in particular of candidate quality as judged by ROI or absolute annual sales. But let's assume the disparity is real. Big pharmas certainly assume it is: they sometimes try to duplicate the perceived success of small biotechs by putting together small entrepreneur-like groups, as Glaxo has done. So what is it, exactly, that is more productive about small biotechs?

1) The most obvious: small biotechs have a much greater incentive to get their (usually lone) drugs into clinical trials - if they don't, they disappear. Big pharma management is not so incentivized, and the timelines of individual drugs are sometimes adjusted to fit the portfolio. What's being maximized is completely different for a start-up biotech and a multi-drug big pharma: overall sales in big pharma, and speed to first-in-human and to market in biotech (where it equates to survival, and therefore to financial incentive).

2) Small biotechs may produce more candidates, but on average lower quality candidates. Because of money and therefore time limitations, they're willing to push through the first lead where the Glaxos of the world have the cash to keep tweaking the structure. You would think this would necessarily mean that the big pharmas then wouldn't be interested in these low-quality candidates, but a) not all decisions are rational, and hype and groupthink have effects in the real world ("We have to buy them to get the first XYZ inhibitor!") and b) the first-in-humans candidate of a given class is often "lower quality" than what might have been the second-in-humans, which as mentioned the biotech won't wait around to discover; the perception and impact of the quality difference is highly context-dependent.

3) At biotech start-ups, scientists have the greatest influence on senior management or are senior management. Typically the management of the group closest to revenue generation is the one that has the most influence over the CEO. In big pharmas, this means sales. In a company that doesn't yet have any sales, this means clinical, or (if even earlier in the cycle) chemists and biologists. Once sales obtains this position, the amount of time the CEO spends thinking about sales increases and development plans tend to be de-emphasized (until everyone panics and it's too late.) I had long suspected that Genentech's success owed to its keeping scientists in key decision-making positions and after having consulted there I'm convinced that this is the case.

4) There are scale-dependent effects that would be present in any organization but are exacerbated by the uniquely long product development cycle in pharmaceuticals. Another exacerbating factor is the level of government oversight in the industry and the consequences of regulatory transgressions, which leads to what are referred to in politics as Olsonian veto blocs: large groups of people who have a say in the process and have nothing to lose by saying "no" but everything to lose by saying "yes" at an inappropriate time. In the pharmaceutical world this means legal, regulatory, and QC - absolutely necessary to the industry, but their influence on timelines seems to be strongly scale-dependent. In my own experience in the industry, some of the most focused "how do we get this done" people I've worked with were in QC at the biotech level; some of the most obstructionist were in QC at the big pharma level. In general, a company with a large revenue stream should be expected to be much more risk-averse than a company with no profits. In the same vein, once a drug is approved, any new investigation could yield either a new indication that would provide some new revenue, or new safety findings that would diminish revenue across the board for the whole molecule, so post-marketing investigations are usually handled with kid gloves.

5) Also scale-dependent are free-riders. At a smaller company, free-riding is obvious to all, more immediately detrimental to the future of the entire company, and more quickly punished. This is not the case at large companies with deeper pockets, many of whose employees seem to be benefiting from a kind of corporate welfare state. This situation often arises at low surface-area-to-volume companies, where most employees interact only with other employees rather than with customers, vendors, or industry contacts outside the company. It would be worth seeing whether there's a sweet spot for company size, in terms of first-in-human clinical trials per person-year as a function of headcount, including outlicensed compounds. Anecdotally, I have also noticed an odd scale-dependent increase in the proportion of people who have ever worked in government - not from related agencies like the FDA, but from local governments or other areas.

[Story time - and if you know me personally, you know which company I'm talking about. I couldn't help but reflect that the strategy of employees of one big pharma subsidiary company where I worked was exactly that of a parasite in the gut of a large, warm mammal that can afford to miss a few calories here and there. The downside to the strategy is that they're super-specialized to thrive only in that environment; that is, their skillset degenerates into "how to stay employed at ABC Big Pharma". Consequently sometimes they have to transfer between mammals of the same species (i.e. subsidiaries) to survive. The day it was announced this particular subsidiary was being shut down by the parent, I saw groups of people openly weeping as if Princess Diana had died all over again.]


CONCLUSION

If you didn't get enough speculation already, read on. This part also has colorful analogies that I think are nonetheless useful.

- Dunbar's number applies to organized humans in all activities. There has been work done on Amazonian hunter-gatherers showing that there are village sizes beyond which there tend to be fission events. It's not that the village hits 150 and everyone draws straws to determine who moves, but there are dynamics that invariably take advantage of a trigger event to cause the split (the chief and his brother have a fight, there's a food shortage and some families move to find better hunting areas, etc.) This suggests that there are in general optimum sizes for human social organizations. This research may have a direct bearing on the productivity of small vs. large companies.

- The biotech industry in each part of the country where there is an active scene (the Bay Area, Seattle, San Diego, and Boston) is a notoriously small world. People often end up working together in different combinations at different companies, merely being re-sorted based on skillsets. Edward Bellamy's 1888 utopian novel Looking Backward describes a system where workers have general industrial skills and are (centrally) resourced to new factories based on need. Of course Bellamy was arguing from a socialist standpoint, but in biotech it seems the free market has already generated exactly this arrangement.

- The pharmaceutical industry is not the only one dominated by deep-pocketed, century-old behemoths that present barriers to entry and snap up competition as it first evolves from the primordial slime and takes its first stumbling steps in an established jungle. If biotechs are, as everyone assumes, more productive than big pharma, this is bad for patients and bad for the economy, and yet there is no check on the growth of the largest companies. It's as if we're at the end of the Cretaceous (with animals so large they need second brains to coordinate their movements) or in the middle of the Second World War (when the incentive to build ever-bigger battleships yielded the monster Yamato). In both cases, conditions changed (climate and aircraft, respectively) and selection no longer favored the most massive, but it's hard to see how this trend will ever reverse itself, since it's hard to see how capital accumulation can ever be economically selected against. That is, I don't know what would be capitalism's equivalent of Chicxulub or P-51 Mustangs that would obviate the uneven accumulation of capital, so for now we're stuck with biotech serving as free-range R&D for big pharma.

This is cross-posted with a different introduction at my economics and social science blog, The Late Enlightenment.

Failed Alzheimer's Trial Data Pooled and Made Available

Biopharmas that have had failed Alzheimer's drugs are pooling their data in an archive. This is excellent news for several reasons. The first is that sharing data means a better chance at success in the future with this therapeutically tricky disease, which has sent more than its share of clinical programs to their graves.

This is a solution I hope we see employed beyond this one disease. Several times I've worked on molecules that were killed either for business reasons (the molecule didn't fit the portfolio; an acquisition occurred and the acquirer wanted only other drugs from our pipeline; the market changed during development; etc.) or for scientific ones: there were toxicities, or we found another molecule that was better. But it was always frustrating to think, particularly in the case of acquisitions, that the data was locked away on a server somewhere, never to be seen again, despite being potentially of scientific use to future research programs. For Alzheimer's, at least, this is no longer the case.

A secondary benefit is that the debate over pharmas "hiding" negative trial data will be quelled, again at least for Alzheimer's. The failures will all be out there for everyone to learn from.

Friday, July 2, 2010

Why Modern Music Is Too Hard, But Visual Art Isn't

An interesting piece in the New York Times discusses why much of the last century's classical composition and dense prose may never find an audience, while modern visual art presents a more effortlessly coherent experience. The definition of complexity in music is given as non-redundant events per unit time, but I'm not sure how they're measuring the pattern-recognition challenge in visual art and prose. The money quote in the article has to do with why it's much easier for the uninitiated to enjoy a Pollock piece:
The word "time" is central to [critic] Mr. Lerdahl's argument, for it explains why an equally complicated painting like Pollock's "Autumn Rhythm" appeals to viewers who find the music of Mr. Boulez or the prose of Joyce hopelessly offputting. Unlike "Finnegans Wake," which consists of 628 closely packed pages that take weeks to read, the splattery tangles and swirls of "Autumn Rhythm" (which hangs in New York's Metropolitan Museum of Art) can be experienced in a single glance. Is that enough time to see everything Pollock put into "Autumn Rhythm"? No, but it's long enough for the painting to make a strong and meaningful impression on the viewer.

In a word: the constraints of sensory memory, which are determined by the sensory modality being used (hearing, vision, language, etc.).
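
The article's measure invites a back-of-the-envelope formalization. Here's a minimal sketch (the event counts and durations are invented for illustration, not taken from the article): music fixes the clock, so the composer sets the rate of non-redundant events; a painting leaves the clock to the viewer.

```python
def forced_rate(n_distinct_events: int, seconds: float) -> float:
    """Non-redundant events per second that a medium imposes on its audience."""
    return n_distinct_events / seconds

# A 10-minute piece with 3,000 distinct musical events forces 5 events/second
# on the listener, who cannot slow it down.
print(forced_rate(3000, 600))           # 5.0

# A painting with the same notional 3,000 "events" can be taken in over a
# two-second glance or an hour; the viewer chooses the effective rate.
for dwell_seconds in (2, 60, 3600):
    print(forced_rate(3000, dwell_seconds))
```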

Rapid Evolution in Tibetans - and Who Else?

The Science paper isn't up yet but probably will be by the time you read this. Tibetans split from the Han Chinese less than 3,000 years ago, and already they've built up a whole repertoire of genetic low-oxygen adaptations.

While that may not be a surprise, the speed with which it occurred probably is. More and more, it's becoming clear that the cultural choices we make (what we eat, where we live) affect our genes over time. So this leads us back to the elephant in the room of human evolution studies: there are genes which affect cognition. Why do we think these haven't been differentially selected as well?

Pattern Recognition in Numbers and Tiles

All numbers can be represented accurately with an infinite string of digits, whether they are rational or irrational. This seems trivial for rational numbers and especially for "round" numbers, but it's easy to be confused by writing conventions and the coincidence of the numeric base that we use. In base-10, we omit zeroes, so that 1.5 is really 1.50 with zeroes repeating to infinity. There's no information in the infinite string of 0's, so we can omit them and still accurately represent the number (we compress it by writing the repeating part in shorthand). As for "not round" rational numbers, the vast majority of their patterns will not be immediately obvious to humans, given our limited pattern recognition and the numeric base that we use. In base-10, the ratio that produces 0.142857-repeating is not obviously 1/7, but in base-7 it is - because in base-7, it's 0.1.
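
A minimal sketch of this base-dependence: long division expands a fraction in any base, and tracking remainders detects the repeating block. (The function name and the digit cap are mine, for illustration.)

```python
from fractions import Fraction

def expand(fraction, base, max_digits=50):
    """Expand a fraction's fractional part in the given base by long division,
    returning (digits_before_repeat, repeating_block)."""
    num, den = fraction.numerator, fraction.denominator
    remainder = num % den
    seen = {}      # remainder -> position where it first appeared
    digits = []
    while remainder and remainder not in seen and len(digits) < max_digits:
        seen[remainder] = len(digits)
        remainder *= base
        digits.append(remainder // den)
        remainder %= den
    if remainder and remainder in seen:
        start = seen[remainder]
        return digits[:start], digits[start:]
    return digits, []  # terminating expansion

# 1/7 looks opaque in base 10 but trivial in base 7, as noted above.
print(expand(Fraction(1, 7), 10))  # ([], [1, 4, 2, 8, 5, 7])  i.e. 0.(142857)
print(expand(Fraction(1, 7), 7))   # ([1], [])                 i.e. exactly 0.1
```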

Some numbers have been proven to be irrational. The earliest of these for which we still have a record was Euclid's reductio ad absurdum for the square root of 2. The same has been done for pi and e, among other constants; but as yet, there is no generalized method for proving a number's irrationality.
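
For reference, that reductio fits in a few lines (in modern notation, not Euclid's own):

```latex
% Suppose sqrt(2) were rational, i.e. sqrt(2) = p/q in lowest terms.
\begin{align*}
\sqrt{2} = \frac{p}{q},\quad \gcd(p,q)=1
  &\implies p^2 = 2q^2 \\
  &\implies p \text{ is even; write } p = 2k \\
  &\implies 4k^2 = 2q^2 \implies q^2 = 2k^2 \\
  &\implies q \text{ is even, contradicting } \gcd(p,q)=1.
\end{align*}
```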

A proof that there can be no general method for proving a number's irrationality, or at least a proof that the irrationality of some numbers can never be established, would be worth having. Here is a very unorthodox, "practical" argument: there are finitely many particles in the universe with which to compute these numbers, and they will exist for a finite time - not nearly long enough to run through all the operations needed to test candidate ratios for all the (infinitely many) irrational numbers between 0 and 1, regardless of what those operations are.
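
To put rough (and generous) numbers on that argument, using commonly cited order-of-magnitude figures: roughly 10^80 particles, each performing at most one operation per Planck time, over the universe's roughly 10^17-second history so far:

```latex
\[
  N_{\mathrm{ops}} \;\lesssim\; 10^{80} \times 10^{43}\,\mathrm{s}^{-1} \times 10^{17}\,\mathrm{s}
  \;\approx\; 10^{140},
\]
% a large but finite number, whereas the irrational numbers in (0,1) are
% uncountably infinite: no exhaustive procedure can ever test them all.
```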

What is interesting about this problem is that proving rationality ultimately means recognizing some periodicity (and proving irrationality, its absence), so proving ir/rationality is, more generally, a pattern recognition problem. Other problems which are essentially pattern recognition problems include Kolmogorov complexity, i.e. compressibility, which, although archiving applications approximate it all the time, is in the absolute sense non-computable; and tiling problems, where the question of whether a given set of tiles admits an infinite (plane-covering) solution is famously undecidable in principle by any algorithm. Are there other properties that these non-computable pattern-recognition problems have in common?
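
Since compressibility is the computable face of this, here is a sketch using zlib as a crude stand-in for Kolmogorov complexity. (True Kolmogorov complexity is uncomputable; a general-purpose compressor only gives an upper bound on it.)

```python
import random
import zlib

def compressed_fraction(data: bytes) -> float:
    """Compressed size / original size: a computable upper-bound proxy
    for Kolmogorov complexity."""
    return len(zlib.compress(data, 9)) / len(data)

periodic = b"142857" * 1000    # highly patterned, like a rational's digits

random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(6000))  # looks pattern-free

print(compressed_fraction(periodic))  # small: the compressor found the pattern
print(compressed_fraction(noisy))     # ~1.0: no pattern found -- though, as
# noted above, a subtler pattern could still be hiding, and no algorithm can
# rule that out in general
```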

Thursday, July 1, 2010

Could the Flynn Effect Be the Result of Decreased Parasite Loads During Pregnancy?

Paper here. The paper is looking at differences between IQs based on current epidemiological stats, but it seems the next step would be to look at differences in the same population over time. The policy implications need not be stated. Via Dienekes.