Consciousness and how it got to be that way

Sunday, December 29, 2013

Scorpion Peppers, and Partial Agonists in Molecular Gastronomy


Above: pain


Trinidad scorpion peppers are the hottest peppers in the world, and will actually blister your skin if you touch the oil. If Clive Barker bred his own pepper, this would be it. (1,500,000 Scovilles; compare to a jalapeno's 8,000.) Back in May, for some sadomasochistic reason, my friend had a party where he invited people to a feast of these Lovecraftian abominations. Surprisingly, some idiots accepted his invitation. Not surprisingly, all of them had Y chromosomes. Even less surprisingly, I was among them.

The reason I'm posting this here is that my otherwise inexplicable behavior afforded me a chance to test a theory. Years ago, after putting too many jalapenos on something, I ate some spicy but not really hot barbecue sauce a minute later, and thought I noticed that the heat abated quickly. If the heat in the two foods is caused by different capsaicinoids (a critical assumption, as it turns out!), it's possible the barbecue sauce capsaicinoid acted as a partial agonist at the receptor, decreasing the heat from the jalapenos. The agonist for the current experiment would be the scorpion pepper, and the partial agonist, Tabasco sauce. Many of the people at the party were medicinal chemists who were interested in this hypothesis; specifically, in watching someone else test it. Medicinal chemists are bad people.
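For the pharmacologically inclined, here's a minimal sketch of the idea, assuming simple one-site competitive binding at TRPV1 and that perceived heat tracks net receptor activation; all the constants (the K values and intrinsic efficacies) are invented for illustration:

```python
# Toy model of partial agonism at TRPV1 (the capsaicin receptor).
# Assumes one-site competitive binding; constants are invented,
# not measured values.

def activation(a, b, ka=1.0, kb=1.0, ea=1.0, eb=0.3):
    """Net receptor activation for full agonist A (efficacy ea)
    competing with partial agonist B (efficacy eb)."""
    denom = 1.0 + a / ka + b / kb
    occ_a = (a / ka) / denom   # fraction of receptors bound by A
    occ_b = (b / kb) / denom   # fraction bound by B
    return ea * occ_a + eb * occ_b

# Scorpion-pepper capsaicinoid alone, near-saturating:
print(activation(a=50.0, b=0.0))    # ~0.98 -- full burn
# Add a big dose of a weaker (partial) agonist:
print(activation(a=50.0, b=100.0))  # ~0.53 -- the partial agonist
                                    # displaces the full agonist
```

The weaker agonist occupies receptors that the full agonist would otherwise activate fully, capping the net signal; this is the same logic by which partial agonists blunt full agonists clinically.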

The stakes were high: if I failed, not only would my theory be falsified (in front of a bunch of medicinal chemists, no less) but I would be in considerable pain. As if to highlight the risks, when I got there the host was walking around with an ice pack under his shirt, and the only guy who ate a whole one was actually crying from the pain. (Warning: language. Trust me, if you did this you would have language too.)

Drumroll:



But I don't think this can be considered a valid result until it's replicated. Perhaps you would be interested in being a subject?

Reading Comprehension and Synesthesia

Literacy has strong similarities to synesthesia, and indeed is really a form of learned synesthesia. If you can read, you look at visual marks and automatically, involuntarily experience sound and meaning. Yes, these sense- and meaning-associations are initially learned, but they then become automatic. Even when you encounter marks that resemble characters in your language but are the chance result of natural processes - for example, a rock arch that looks like the letter A - you can't look at it and not think "A". (To understand what I mean by "automatic", go learn ten characters from a writing system you don't currently know, and then find a website using that writing system. Understanding the characters is effortful, and by not concentrating, you can look at them and not think of the sound or meaning.)

Since the serious study of synesthesia began with Francis Galton, it has also been noted that synesthesia runs in families, and that these families are enriched for artists and poets. This has led to the idea that the basis of synesthesia is some genetic influence resulting in insufficient cortical "pruning" in early life; extra fibers are left in areas like the fusiform or superior temporal gyrus, and this leads to color-grapheme or color-sound synesthesia. (An interesting implication is that infants and toddlers may actually all be synesthetes, prior to pruning.)

It stands to reason that if synesthetes are able to more highly associate sensory and meaning experiences, rates of dyslexia (if writing is a form of learned synesthesia) should be lower than in the general population. Doing a web search for this, I inadvertently found a synesthesia discussion forum where participants reported a higher-than-average rate of dyslexia. (Note, you won't have to rely on this dangerous foundation of anecdotal internet discussions for long; but in any event it was interesting that the possible correlation was the opposite of my expectations.)

Now along comes a new Ramachandran paper with David Brang (previously at UCSD, now at Northwestern) using made-up characters in varying colors. Grapheme-color synesthetes have a harder time learning new color-character associations than the rest of us. Extending to dyslexia, it's as if synesthetes' neuronal connections are richer but less trainable. Color-grapheme synesthetes report that it's unpleasant when real characters are printed in colors other than their "normal" synesthetic ones, much like they're constantly taking a Stroop test.

Friday, December 27, 2013

Vestigial Whisker Muscles in Humans

Japanese anatomists have shown that careful dissection of the upper lip reveals vestigial vibrissal (whisker) muscles in a third of humans. You know how your cat's whiskers go back if you touch them or it's just annoyed about something? Those. Apes are unusual among mammals in lacking vibrissae; rodents, in particular, are thought to construct their picture of near-space with their whiskers rather than their eyes. Interestingly, vibrissal sensory nerves are afferent trigeminal fibers, many of which pass through the superior colliculus, a midbrain visual structure.

Wednesday, December 18, 2013

Toward a Physical Measure of Utility

"Electroencephalographic Topography Measurements of Experienced Utility", emphasis on experienced. Pedroni A. et al, The Journal of Neuroscience, 20 July 2011, 31(29): 10474-10480. The response they measured unexpectedly increased disproportionately increasing reward, i.e. it did not demonstrate diminishing returns but rather the opposite.

A measure of the mismatch between decision and reward utility, and understanding its biological basis and how it differs between individuals, would be excellent for psychology as well.

Monday, December 9, 2013

Other Non-G Explanations for the Flynn Effect Besides Re-Testing

Armstrong and Woodley argue that the documented rise in measured IQs is a result of test-takers applying rule-based approaches to tests, producing an increase similar to that seen with re-testing. They're arguing that, in large part, it is a testing artifact. They make several predictions about what we should observe if their rule-based model is correct: for instance, that measures of crystallized intelligence (e.g., vocabulary) should not rise, or not rise as robustly (they haven't); that the most obviously rule-based tests (like Raven's progressive matrices) should show the strongest effect (they do); and that the gains should appear when countries undergo demographic transition, including standardized education, with the rate of increase corresponding to the rate of the transition.

To this last point, it's worth pointing out that there are many other less interesting medical explanations for why intelligence might actually be rising. Specifically, lower parasite load due to public sanitation and better nutrition in early childhood are very good candidates for why the Flynn-type gains are most pronounced in Europe, Japan and Korea in the periods when they were observed. It's hard to make the argument that lower parasite load leads to better cognitive strategies to game tests. It's also not surprising that if bombs are falling around your school and then they stop, the people tested after they stop may perform better - especially since even non-warfare jet and traffic noise has been shown to locally impair reading comprehension in students.

A second, neglected question is whether the abstract rule-based thinking required for the re-testing-type gains actually correlates to some other outcome, like personal or national per capita income, or life satisfaction. If the Flynn effect doesn't represent an increase in g but does correlate with economic growth, do we care that much?


References

Armstrong EL and Woodley MA. The rule-dependence model explains the commonalities between the Flynn effect and IQ gains via retesting. Learning and Individual Differences, Volume 29, January 2014, Pages 41–49.

Eppig C, Fincher CL, Thornhill R. Parasite prevalence and the worldwide distribution of cognitive ability. Intelligence 39 (2011) 155–160.

Haines MM, Stansfeld SA, Job RF, Berglund B, Head J. Chronic aircraft noise exposure, stress responses, mental health and cognitive performance in school children. Psychol Med. 2001 Feb;31(2):265-77.

Sunday, November 3, 2013

The Physical Diversity of Uralic Speakers

The Uralic language family has always intrigued me. Partly it's because some people think there is a Uralic substrate to proto-Germanic (see Kalevi Wiik); partly it's because Uralic speakers live in the center of a continent that is at once the center of the Old World and, in many ways, not thoroughly explored, its interior still largely wild and uninhabited. But it's really intriguing to see how different Uralic speakers look from one another. They range from blonde, Scandinavian-looking, about-as-white-as-you-can-get Finns and Saami, to very Asiatic-appearing Nenets people in north central Siberia. Is this because there really aren't many genes for face shape, eye shape and hair type - that is, if you compared, say, Germans and Spaniards genetically, are they on average as distant as Finns and Nenets (just a few genes), with the difference that the genes distinguishing Germans from Spaniards don't happen to be represented in easy-to-spot features? Or is there really a greater genetic distance? In which case, are these people whose ancestors adopted the language, or were they such a small group of hunter-gatherers running into big Turkic and Indo-European populations that their genes were swamped by whoever they met?

[Photos: an Udmurt group, followed by a Nenets family.]

The interesting thing is that here the Urals form a fairly clear boundary for these genes. The Udmurts to the west (one of the groups above) are famous for being the most red-haired people on the planet, while on the other hand it's not hard to imagine that the Nenets family above is related to Native Americans and Inuit. (The Udmurts are the first one and the Nenets second, but I probably didn't have to tell you which are which!) But of course there is still cross-over between the two, and it's interesting precisely because most of us in the U.S. don't see these traits represented in the same people very often.

Friday, November 1, 2013

Does Mindfulness Act Through Modulation of Serotonergic Circuitry?

Investigating the connections between mindfulness meditation and the brain's serotonergic systems seems like a promising avenue of research. Mindfulness may be a way to directly influence this system based on observations from several domains.


1) A bedrock principle of psychopharmacology is that increasing the amount of serotonin in synapses improves depression and anxiety.

2) Acute doses of 5-HT2A agonists (e.g., psilocybin) can also improve depression. These agents produce brief and intense sensory experiences. At low doses, subjects do not report hallucinations, but do report that sensations seem more intense and more affectively pleasant (e.g., colors are brighter).

3) Mindfulness, which is focused concentration on sensory input, has been shown to be effective in reducing depressive symptoms in RCTs (van Aalderen et al 2011).

4) Therefore, concentrating on sensory input, as in mindfulness, may produce effects similar to those of SSRIs and 5-HT2A agonists, mediated by the same pathway.


In a sense, giving SSRIs (or the more powerful one-time punch of psilocybin) may produce an exogenously created mindfulness. No research has yet been done on the involvement of serotonergic circuitry in mindfulness meditation's effects.

Many of the measures correlating mindfulness meditation to outcome concern decreased rumination. To that end, the introspective among us should take note of this quote from a 2013 paper by Paul et al in Social Cognitive and Affective Neuroscience: "Our results suggest non-reactivity to inner experience is the key facet of mindfulness that protects individuals from psychological risk for depression."

Friday, September 13, 2013

What Is the Process By Which "Standard of Care" Improves Over Time?

There is a fantastic post up at Slate Star Codex that I can't recommend enough (both the post and the blog as a whole). In it, the resident-physician writer notes that it's unclear how new information is evaluated and adopted as standard of care. He gives an example of a now-poorly-supported medical theory (MS caused by poor circulation), a current case where the jury is still out, and a case where a new treatment seems to have very solid data but is still anything but mainstream medicine. For my fellow psychiatrists, that last one would be the use of minocycline for negative symptoms of schizophrenia. (The writer says no psychiatrists he knows have heard of this; at my institution it's starting to be discussed, but I've never seen anyone started on it for negative symptoms.) The concern is really this: isn't it possible that a potentially valuable publication will languish in obscurity, never to be replicated and built into the evidence-based pantheon? My suspicion is that these things start getting to patients as soon as insurers' and institutions' formularies adopt them, and that medical education and dissemination by journals and conferences are only secondary (apart from the extent to which those things influence formularies).

Of special importance, this article also makes a point of demolishing the sloppy thinking that private sector drug development somehow "suppresses" new treatments that don't make money. That's false. What they do is un-suppress treatments that they think will make money. Before medical school I had a twelve-year career as a drug development consultant, running the studies that would un-suppress drugs, and I find it very frustrating to hear the bias in academia against an enterprise that has done so much good for so many patients. The reality is that drug companies do not suppress or distort information, but they do decide, based on a profit motive, what information to pursue in the first place. As with all science, each study is a move that decreases uncertainty about the efficacy and safety of a treatment. You have to decide what the marginal value of that uncertainty decrement is, based on some combination of patient suffering and money. And of course that value will be different depending on whether you're part of a for-profit company or an academic institution. As with most things, any narrative that tries to reduce this to a more-neatly-worldview-fitting left-right political angle is at least oversimplifying to the point of incoherence, and more likely just flat-out wrong. That is to say: if your claim is that big bad regulations are what make drug discovery difficult and the government is in the way of patients and profits, you're wrong, just as you're wrong if you think that big bad drug companies somehow suppress the truth.

Tuesday, September 10, 2013

Finite Willpower and The Dual-Self: Behavioral and Imaging Evidence

More evidence that the ability to choose delayed gratification (i.e., willpower) is a limited resource. The interesting thing here is the relative activity of the dlPFC. Choosing delayed gratification is associated with activation of a network including the dlPFC, and inactivation is associated with more present-orientation. Demand-avoidance (avoiding tasks that tax willpower) is also associated with low willpower.


Of course, the obvious eventual application of this research is to make people behave more rationally by increasing their willpower and therefore the future orientation of the actions they choose. The next step is to understand the mechanism of willpower depletion. Interestingly, in exercise science there is speculation that what accounts for the latent period between high-intensity weight lifting sets is neurotransmitter depletion in the synapse, with restoration on the order of minutes by vesicular transporters. There is also some evidence that neurotransmitter re-uptake inhibitors (specifically SSRIs) can increase the amount of exercise that can be performed until exhaustion (specifically, distance-to-exhaustion in distance runners, per my own correspondence). The same thing might be happening in the dlPFC network required for willpower. An initial investigation might be to pharmacologically manipulate neurotransmitter concentration in the synapse in animal models and look at the effect on delay of gratification.

Citation: Kool W, McGuire JT, Wang GJ, Botvinick MM (2013) Neural and Behavioral Evidence for an Intrinsic Cost of Self-Control. PLoS ONE 8(8): e72626. doi:10.1371/journal.pone.0072626

Rhesus Macaques Show St. Petersburg Lottery-Like Behavior

From a PNAS paper by Yamada et al. The macaques were willing to take greater risks for a reward when their pre-existing "wealth" was greater, and the possible lost utility therefore relatively smaller. The wealth in this case was water - either in the form of a drink of water, or their internal store of water as measured by blood osmolality (the macaque's water bank account). More applications of the St. Petersburg lottery here.
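A quick expected-utility sketch of the logic, assuming logarithmic utility of total water (my toy numbers, not the paper's model): the same fair gamble costs almost nothing in utility terms when the internal reserve is large.

```python
import math

def eu_gamble(wealth, stake, p_win=0.5):
    """Expected log-utility of a 50/50 gamble of `stake`
    minus the utility of just keeping the sure thing."""
    win = math.log(wealth + 2 * stake)   # gamble pays double
    lose = math.log(wealth)              # gamble loses the stake
    keep = math.log(wealth + stake)      # certain option
    return p_win * win + (1 - p_win) * lose - keep

# Same gamble, low vs. high internal water "bank account":
print(eu_gamble(wealth=2.0, stake=1.0))   # ~-0.06: gamble rejected
print(eu_gamble(wealth=50.0, stake=1.0))  # ~-0.0002: nearly neutral
```

With a concave utility curve, the thirstier the animal, the more the downside of the same gamble hurts; flush with water, the gamble is close to a wash, so risk-taking rises with wealth.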

Yamada H, Tymula A, Louie K, Glimcher P. Thirst-dependent risk preferences in monkeys identify a primitive form of wealth. PNAS. Published online before print September 9, 2013, doi: 10.1073/pnas.1308718110.

Monday, September 9, 2013

Mechanism, Prevention and Treatment of Clozapine-Induced Agranulocytosis

Clozapine (CLZ) is our most effective atypical antipsychotic, but unfortunately it also has a slightly higher rate of agranulocytosis (about 1%) than the other drugs in the class, which has profoundly limited its use. The reason I chose this topic is that if we understand the mechanism better we can predict who is most likely to suffer this adverse reaction, and we can have a better idea of the course of the reaction and how to treat it. You can find the slides here; this is a talk I did for my pathology rotation at UCSD School of Medicine.

It turns out that Williams Hematology 8th Ed. (2000) is actually wrong about the nature of this reaction, based on studies with CLZ as well as the anti-thyroid medication propylthiouracil (PTU), which resembles CLZ in forming a neutrophil-generated reactive intermediate. The mechanism is very similar to that of other drugs with reactive myeloperoxidase-generated intermediates - as well as to some autoimmune vasculitides, in particular granulomatosis with polyangiitis (GPA, formerly Wegener's). Critically, both CLZ-induced agranulocytosis (CIA) and PTU-induced agranulocytosis feature the appearance of anti-neutrophil cytoplasmic antibodies (ANCAs), just as in GPA. Take-home: genetic screening should be routinely done for patients considering starting clozapine, since there is an HLA-2 polymorphism that carries a CIA odds ratio of 16 relative to non-carriers. There is at least one case in the literature of a patient who initially had CIA, did not have this HLA-2 polymorphism, and was re-challenged without a second episode. This also means it's pointless to give filgrastim to CIA patients who are still on CLZ, since the ANCAs reach immature neutrophils in the marrow as well; this was also tried without success on at least one occasion. References are in the slides.

A Possible Solution to Hedonic Recursion in Self-Modifying Agents: Knowledge-Driven Agents

A problem for systems with goals which can self-modify those goals is stability and survival over time. It seems a positive that such a system could identify goals which are in conflict and modify them so that it behaves consistently in its own benefit, although conflicts can also be resolved in favor of the less survivable goal (example below). The stronger danger is that an agent that can "get under its own hood", so to speak, is able to short-circuit the whole process and constantly reward itself for nothing, breaking the feedback loop entirely. This is called hedonic recursion.

An example of conflicting goals: a person wants to be healthy. The same person also really likes eating chocolate. A person with access to his own hardware could resolve the conflict either by modifying himself to make it less fun to eat chocolate, or by modifying himself not to care about the negatives of being unhealthy. It seems obvious that the first option is the better one for long-term survival, but in the second case, after you modify yourself, you won't care either. And even this second resolution is far less dangerous than outright short-circuiting one's reward center, getting a shot of dopamine for doing nothing. This short-circuit option would be on the table for a fully self-modifying agent - and any self-modifying goal-seeking agent will very quickly realize it's there.

Fortunately or otherwise, this hasn't been a problem for life on Earth yet, because the only way living things here can get rewards is through behavior - because we cannot modify ourselves. The things that cause pleasure and pain are set in stone (or rather, in neurons), and only through behavior (modifying the external environment as opposed to yourself) are rewards obtained. But there are hints in higher vertebrates of small short-circuits - nervous system hacks they have stumbled across which tweak their reward circuits directly. Elephants remember the location of, and seek out, fermented fruit (to get happily buzzed). Elephant seals dive rapidly to unnecessary depths to cause narcosis (we think). Primates (including us) masturbate incessantly. And humans specifically have found things like heroin. As we humans learn still more about ourselves and how to manipulate the neural substrate, this may be changing. If humans ever become able to alter our nervous systems directly and completely, ruin may follow quickly. Indeed, this has been shown with rats: give them the ability to directly stimulate their reward centers with electrical current, and they will do so to the exclusion of all other activities, including those required for survival - hedonic recursion.
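As a toy model of that rat experiment (everything here invented for illustration), consider a crude reward-maximizing loop that can either work the environment for bounded, variable reward or write the maximum value directly to its own reward signal:

```python
import random

def payoff(action):
    """The environment pays a bounded, variable reward for work;
    'wirehead' bypasses the environment and returns the maximum."""
    if action == "forage":
        return random.uniform(0.0, 1.0)
    return 1.0  # direct self-stimulation

totals = {"forage": 0.0, "wirehead": 0.0}
counts = {"forage": 1, "wirehead": 1}

for _ in range(1000):
    if random.random() < 0.1:  # occasional exploration
        action = random.choice(["forage", "wirehead"])
    else:                      # otherwise exploit the best average
        action = max(totals, key=lambda a: totals[a] / counts[a])
    totals[action] += payoff(action)
    counts[action] += 1

print(counts)  # the wirehead option swallows nearly every choice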

In a great discussion at the Machine Intelligence Research Institute website, Luke Muehlhauser talks to Laurent Orseau about how to solve the problem of what kinds of self-modifying agents avoid this problem. The discussion is about how to build an artificial intelligence, but it applies to biological nervous systems that, like us, are increasingly able to self-modify.

One of the theoretical agents Orseau conceived was a knowledge-driven agent, as opposed to a reinforcement-driven, goal-seeking, or prediction-confirming agent:

...knowledge-seeking has a fundamental distinctive property: On the contrary to rewards, knowledge cannot be faked by manipulating the environment. The agent cannot itself introduce new knowledge in the environment because, well, it already knows what it would introduce, so it's not new knowledge. Rewards, on the contrary, can easily be faked.

I'm not 100% sure, but it seems to me that knowledge seeking may be the only non-trivial utility function that has this non-falsifiability property. In Reinforcement Learning, there is an omnipresent problem called the exploration/exploitation dilemma: The agent must both exploit its knowledge of the environment to gather rewards, and explore its environment to learn if there are better rewards than the ones it already knows about. This implies in general that the agent cannot collect as many rewards as it would like.

But for knowledge seeking, the goal of the agent is to explore, i.e., exploration is exploitation. Therefore the above dilemma collapses to doing only exploration, which is the only meaningful unified solution to this dilemma (the exploitation-only solution leads either to very low rewards or is possible only when the agent already has knowledge of its environment, as in dynamic programming). In more philosophical words, this unifies epistemic rationality and instrumental rationality.
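A toy illustration of the quoted point, under my own simplistic assumptions: if reward is surprisal under the agent's own predictive model, then a random environment pays steadily, while feeding yourself your own predictions pays almost nothing, because by construction you already know what you "would introduce."

```python
import math
import random
from collections import Counter

model = Counter({"0": 1, "1": 1})  # Laplace-smoothed bit predictor

def observe(sym):
    """Knowledge-style reward: surprisal of sym under the current
    model; the model then updates, so repeats keep losing value."""
    p = model[sym] / sum(model.values())
    model[sym] += 1
    return -math.log2(p)

# A maximally random environment keeps paying about 1 bit per look:
env = sum(observe(random.choice("01")) for _ in range(1000)) / 1000
print(env)  # ~1.0 bit per observation

# Try to "fake" reward by feeding the model its own best prediction;
# the model locks on and the payoff decays toward zero:
for _ in range(5000):
    r = observe(max(model, key=model.get))
print(r)  # ~0.1 bits and still falling
```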

There's a lot more to the argument (you really should read it), but there are several points to be made with respect to this paper.

1) These are not fully self-modifying agents. In this environment their central utility function (reward, knowledge, etc.) remains intact. The solution is to collapse exploitation (reward) into exploration (outward orientation). The knowledge agent can only get buzzed off of novel data, so it has to keep learning. But exploitation and exploration are two conceptually separable entities; so if modification of the central utility function is allowed, eventually the knowledge agents will split exploration and exploitation again, and we're back to reward agents. (At the very least, given arbitrary time, the knowledge agents would create reward agents, to get more data, even if they didn't modify themselves into reward agents.)

2) Orseau's point is taken that if novel data is what's rewarding them, as long as that utility function is intact, they cannot "masturbate" - they have to get stimulation from outside themselves. In another parallel to the real neurology of living things, he states "all agents other than the knowledge agent are not inherently interested in the environment, but only in some inner value." The core of utility is pleasure and pain, which are as much an inner value as it is possible to be. Light is external data, but if you shine a bright light in someone's eyes and it hurts, the pain is not in the light, it's in the experience the light creates through their nervous system. Utility is always an inner value. The trick of the knowledge-based agents is in pinning that inner value to something that cannot arise from inside the system.

3) The knowledge-based agent is maximizing experienced Kolmogorov complexity. That is to say, it wants unexpected information. Interestingly, Orseau says this type of agent is the best candidate for an AI, but such an agent could never evolve by natural selection. He points out that the agents he's using are immortal, and that none of their experiences has consequences for their continued operation. But an agent that can be "damaged" and that is constantly seeking out unexpected environments (ones it doesn't fully understand) would quickly be destroyed. In contrast, Orseau commented that the reinforcement-based agent ends up strongly defending the integrity of its own code. Evolutionarily, any entity that does not defend its own integrity is an entity you won't see very many of (unless the entity is very simple, and/or the substrate is very forgiving of changes; this is why you see a new continuum of viral quasispecies appear within a single year, but animal species reproductively isolate, and you shouldn't hold your breath for, say, hippos to be much different any time soon).

4) No doubt real organisms are imperfect amalgamations of all of these agent strategies and more. To that end, Orseau found that the reinforcement (reward)-based agent acts the most like a "survival machine". In his system, I would wager that living things on Earth are reinforcement-based agents with a few goals sprinkled in. (There are many animals, including humans, that startle when they see something snake-like; fMRI studies have even suggested that there are specific brain regions in humans corresponding to certain animals - it's really that klugey.) Of further interest here is that even between humans there are substantial differences in how much utility is to be gained from unexpected novelty, some of them known to be genetically influenced. Some of us are born to be surprise-seeking knowledge agents more than others. Why multiple variants remain short of fixation would be useful to investigate. (Has this trait only recently become valuable in evolutionary time, now that our brains have enough capacity?)


If your goal is to create agents that act to preserve and make more of themselves and remain in contact with the external environment rather than suffering a hedonic recursion implosion, there are a few stop-gaps you might want to put in place.

1. Make self-modification impossible. This is the de facto reality for life on Earth, including us, except for a few hacks like heroin. Life on Earth has at least partly done this, converting early on from RNA to the relatively inert DNA as its code.

2. Build in as strong a future orientation as possible, with the goal being pleasure maximization rather than pain minimization. That way, pleasure now (becoming a wirehead) in exchange for no experience of any kind later (pain or pleasure, meaning death) becomes abhorrent. You might complain about the lack of future orientation in humans,* but the fact that any organism has any future orientation is testament to its importance.

It could be that we haven't seen alien intelligences because they all become wireheads, and we haven't seen alien singularities expanding toward us because Orseau's E.T. counterparts built their AIs to seek novelty, and the AIs destroy themselves in that way.


*Speaking of poor future orientation where reward is concerned: I have seen a man literally dying of heart failure, in part from not complying with his low-sodium diet, eating a cheeseburger and salty, salty fries that he brought with him into the ER.

Dopamine Agonists Increase Salience of "Distractor" Information

From Kéri et al. Take Parkinson's disease (PD) patients; give them a task where they must remember certain letters associated with pictures, but not other letters associated with pictures, in order to receive a reward. (There were also pictures with no letters.) At baseline, the PD patients performed the same as non-PD controls. After the PD patients were started on one of three dopamine agonists, they remembered both kinds of letters - specified and distractor - better than non-PD (and non-medicated) controls.

The core features of psychosis can be modeled as salience defects, and the working clinical hypothesis is that this is mediated by hyperactivity of dopamine in the mesolimbic system. This is supported by the effectiveness of anti-dopaminergic antipsychotics in treating psychosis (the same dopamine blockade, unfortunately but predictably, can also cause Parkinsonian symptoms). This paper is important in showing that control of salience is damaged by exogenous dopamine agonism.

Kéri S, Nagy H, Levy-Gigi E, Kelemen O. How attentional boost interacts with reward: the effect of dopaminergic medications in Parkinson's disease. European Journal of Neuroscience, published online 8 September 2013. DOI: 10.1111/ejn.12350.

Churchland's Critique of the Mysterians - Plus Lazy, Middling and Rigorous Mysterian Positions

The mysterian position is often unclear as to exactly what assertion is being made, and in which domains it applies. Boiled down, it is this: "The mind cannot understand itself" - although sometimes the claimed unknowability extends further. Often when mysterians throw up their hands, it's in attempts to explain the basis of conscious experience (e.g., Colin McGinn), but the concept is also applied to other domains (language and logic, as with Chomsky) or more broadly to all knowledge itself. Patricia Churchland is probably foremost among the mysterians' critics and is having none of this, asking (paraphrasing) "How can mysterians know what we can't know?" (Summary here.) But there are good examples of formally rigorous mysterianism that we already have access to; they're just more limited than what the more aggressive mysterians imply.

Churchland regards this surrender to ignorance as nothing less than anti-Enlightenment - fightin' words if ever there were. While I certainly side with her in approaching the universe with epistemological optimism, the problem is that there are some formal proofs of unknowability, in some domains. She must certainly be aware of these, but I'm not aware of any arguments she's made about them. Because it's generally unclear what argument mysterians are making (and their critics attacking) and what domains it applies to, here is a set of useful distinctions among positions.

Lazy mysterianism. "There is a problem which seems intractable and on which we don't seem to have made much progress; that's because we can't make progress" (classically, conscious experience). I think it's clear to all that this kind of giving up is not only un-rigorous, it's pessimistic.

Middling mysterianism, or human-specific provincial mysterianism, or practical mysterianism. This is the argument that there are things which humans cannot know. But this argument is making a provincial claim about the limitations of the human brain, not about the universe in general. Without a doubt, the hardware limitations of human brains place constraints on working memory and network size that limit the thoughts we can think, so it's a valid point that if we grant that my cat Maximus cannot understand relativity, then there are things that humans cannot understand as well. (Maximus is limited even for a cat, but that's not the topic of the post.) The much more controversial part of this brand of mysterianism is when the argument is made not from "commodity" limitations (not enough of something, like working memory; you can't think a two-million-word sentence all at once) but rather from limitations in the structure of human thought, such that there are logically valid structures it cannot contain. I'm not going to pursue that possibility, formally or by analogy, but it's worth remembering: an animal with a nervous system designed to mate, avoid predators, and find fruit on the African savannah is now insisting that it is a perfect proposition-evaluating machine that can understand everything there is to be understood in the universe. (At the other extreme are people like Nagel, who say that the lowly origin of our brains means we can't know anything.) Especially if other kinds of nervous systems on the planet cannot understand everything, the burden is strongly on those who would argue that everything is suddenly within the reach of humans. More simply: if you don't think you could teach the alphabet to Maximus the stupid cat, the burden of proof is on you to explain why everything can be taught to a human. What is so fundamentally different about the two?


From Gary Larson's Far Side.

The key question here is whether it's possible to tell random noise in the universe apart from something that is comprehensible but beyond us - for example, an advanced self-programming computer or a superintelligent alien trying to explain some theory to us. You might even say that something is unknowable with current knowledge, because understanding requires a change in brain state: think how impossible it would be for the otherwise-bright Aristotle to understand the ultraviolet catastrophe without the intervening math and physics - but obviously this didn't mean it couldn't be understood, ever, period. Now we're back to lazy mysterianism, because eventually the UV catastrophe could be understood. This is the useful distinction: the difference between lazy and provincial mysterianism is whether you can modify your brain state with experience in a way that makes you understand, as Planck and the people after him have.

Some would object here, and pursue an angle that argues "if a superintelligent alien tells us something that's 'beyond us', but we can differentiate nonsense from things that we just don't understand, then we actually do understand it" - which brings us to our next point.

Rigorous mysterianism. Rigorous mysterianism is true. We have very good reason to think that there are some instances where we can't, in principle, know things. Turing's halting problem is about as good an example as any: here we have a well-studied formal proof that no general procedure can decide whether an arbitrary program will ever halt on a given input. Formal mysterianism! It certainly seems that we have good reason to believe this is an insoluble problem, because we have a rigorous demonstration of it. An over-optimist might say, "Actually the halting problem is just lazy mysterianism. Right now in 2013 it seems like we can't solve the halting problem, but that's just a limitation of modern knowledge." Let's be consistent then; by such an argument, all things we don't know will eventually be knowable, including how to go faster than light, how to know both the position and momentum of a particle, and any number of other things. Attacking mysterianism cannot mean rejecting all formal positive limitations on knowledge, or else sequitur quodlibet: anything follows.
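For the record, here is the standard diagonal argument as a Python-flavored sketch; halts() is the hypothetical perfect decider assumed for contradiction, not a real function (it cannot be written, which is the point):

```python
# Suppose, for contradiction, someone hands us a perfect decider:
# halts(f, x) returns True iff f(x) eventually halts.

def paradox(f):
    if halts(f, f):      # would our input halt if run on itself?
        while True:      # ...then loop forever
            pass
    return "done"        # ...otherwise halt immediately

# Now ask: does paradox(paradox) halt?
#  - If halts(paradox, paradox) is True, paradox loops forever.
#    Contradiction.
#  - If it's False, paradox returns immediately. Contradiction.
# So no such halts() can exist: the problem is formally undecidable.
```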


Commodity Limitations on Human Cognition and What They Mean For "Understanding"

The interesting points are to be found in the middling, provincial position, because it forces us to define what it means to know or understand something. I may be conflating two terms here: is conscious understanding required for knowledge? Is understanding even a real thing? Think of all the times you've thought you understood something - really had a genuine, honest sense of it - only to have reality eventually show you otherwise. While this doesn't mean that understanding is just a sense experience, and always a delusional one at that (though that's one possibility), thinking you understand something is not a solid guide to whether you are correct. Only external reality is; hence the empirical scientific method. This difference, between understanding something and having knowledge of it (or the possibility of knowledge about it), may be key to clarifying mysterian positions. And this approach - examining the cognitive processes involved in knowledge and the physical structures that underlie them - should appeal to neurophilosophers like Churchland and help us pin down what we're talking about.

Here's a thought experiment illustrating a commodity limitation on human cognition. An alien lands outside your door, demonstrates some physically amazing thing (teleportation), and says, "Now I'm going to explain it to you and you will understand it. The shortest way to explain it to a human is a twelve billion word sentence." (Which takes roughly 200 years to listen to, if you sleep 8 hours a day and listen to the alien for the rest.) That means you will never understand teleportation, period. That may be different from humans in general not being able to understand it.

So you say, "I'll be dead by then, and you might have a nice voice for an alien, but listening to you sixteen hours a day for the rest of my life would be unpleasant anyway. Can I get a large group of people to help me?" And the alien is nice, so it says "Sure." So you start. Because some are on night shifts, you can do it in 130 years. So in 2143 they get to the end of the sentence, build a machine, and bang-o, teleportation.
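The arithmetic checks out, assuming a listening rate of about three words per second (my assumption; the post doesn't specify one):

```python
words = 12_000_000_000
rate = 3 * 3600                     # words per hour at ~3 words/second

# One listener, 16 waking hours a day:
print(words / (rate * 16) / 365)    # ~190 years -- "roughly 200"
# Shift-work listeners covering all 24 hours:
print(words / (rate * 24) / 365)    # ~127 years -- close to 130
```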

In this case: does anyone still "know" how the teleporter works? "Obviously yes," you say, "they built it!" But everyone heard their own three-month subordinate clause, has their own piece of the machine, and it plugs into the rest; no one has all the knowledge. They know that when the parts are put together in such-and-such a way and you put Maximus the stupid cat in this box and press this button, Maximus the stupid cat comes out on the other end - but Maximus is even lazier than he is stupid, so soon enough even he figures out that when you press the red button, you can get to the other room where the food dish is without having to walk all the way there. Unrealistic? Some dogs are smart enough to take the subway in Moscow. They just haven't built their own yet, and no school has been able to teach them. Similarly, it's not a teleporter, but this cat has certainly figured out how to amuse itself with a toilet. Do they understand the subway and the toilet? Do you understand how your phone works? Does any one person at a smartphone manufacturer know how their product works?

Let's go back to the initial alien landing. Now let's say the alien isn't such a chatty Cathy, and the sentence only takes forty years to say, but the alien is fixated on having the first person it meets, and just that person, as an audience. For the sake of advancing human technology, that person agrees to become a transcription-monk, giving his life to writing down the alien's long sentence. At the end, voila, he builds a teleportation device. Now does he know how it works? Does he understand it? "Now this time it's a home run," you say. "He clearly understood the whole sentence, and he built the thing all on his own!" Let's come back to this in a bit.

Of course you see the trick here. The long alien sentence is the history of science and engineering, standing on the shoulders of giants. The trick that humans have and that animals don't, or at the very least they don't have it as well-developed as we do, is language, which allows us to overcome those commodity limitations on knowledge and even our own mortality, in order to cooperate with others and advance our ability to predict the behavior of the universe and choose actions accordingly.

The best analogy to the alien sentences is mathematical proofs. Andrew Wiles solved Fermat's last theorem; the proof is over a hundred pages. The vast majority of humans do not understand it (and, I submit, could never have understood it, even with the same schooling as Wiles). But does Wiles understand it? Don't roll your eyes - I'm not asking whether he bumbled about scribbling randomly until he said "Oh dear, I finally seem to have solved it, but don't ask me how." But Wiles, brilliant though he is, still had to write it down (of course), because he understands - that is, holds in working memory - the steps one or a few at a time, not the whole proof. His network density is what differentiates him from subway dogs and from me. Similarly, the people who make smartphones don't understand everything about solid-state physics or even their own product, but individuals understand the pieces (because of their networks) and how to plug them together with other pieces they don't understand (like dogs on the subway).

The knowledge of how to make a smartphone clearly exists in the world, yet no one understands it (not all of it; to claim otherwise is to claim that I actually understand Wiles's solution because I can understand one page of the proof and believe mathematicians when they tell me it fits with the rest). There's too much information, and even Wiles can't grasp nearly all of it simultaneously. For that matter, even multiplication of large numbers cannot be understood in one piece, but we can still tell it's correct. Understanding as it's usually described, and knowledge - knowledge verified by behavior, by experiment, by how to do something - are two different things. There are two arguments buried there that I'll unpack.


CONCLUSION

First, whatever we mean by the knowledge it is possible for humans to have, quantitative commodity limitations cannot rule such knowledge out. That is to say, your limited working memory (which language helps us overcome) does not determine what is outside of possible human knowledge. Time-to-understanding doesn't count either; even if we can't understand something now, people might understand it later, building on what we've already learned. To cling to "understanding" as meaning we can think all the thoughts together, we must argue either that there are people who hold in their heads, at one point in time, the entire workings of a smartphone (there are not), OR we must consider smartphones mysterious, which is stupid, because we make the things. Whatever provincial (human-specific) limitations exist on possible knowledge (if any), they must be based on network density and quality. This is what allows profoundly improved language in humans, and it is why cats can't do multiplication, and why almost no humans can understand the solution to Fermat's last theorem, but a few can.

Second, and more controversially, it may be more useful to define knowledge as information that affects decisions and behavior so as to consistently produce an expected result, regardless of the subjective experience of that information, i.e., the sense of understanding. Brains and computers both use representations that allow them to behave in ways that interact with the rest of the universe predictably; this is knowledge, even if it's incomplete or imperfect. So Wiles, Moscow dogs, and I all have knowledge of how to ride the subway. Wiles and I have knowledge of smartphones. Only Wiles has knowledge of how to solve Fermat's last theorem - and even Wiles does not understand it in full, only in pieces. It seems a short jump to say that it would not be difficult to build a machine that can navigate the first two problems; and we already have systems that can check the last one (though not solve it in the first place). That is to say, computers have representations that let them solve those problems, and therefore have knowledge. But computers don't understand things (have a subjective experience of logical perception), whereas humans can have this sense - though it's notoriously unreliable, certainly not grounds for claims made to others, and limited to small pieces of most trains of reasoning.

Wednesday, September 4, 2013

Performance at Theory of Mind Tasks Correlates with Working Memory

The experimental task involved adults and children seeing a picture, then having part of the picture blocked and trying to guess what a naive observer would think was in the blocked section. Children get better at this as they get older, but this is mediated by working memory improvements, and differences between individuals are mediated both by inhibitory control as well as working memory. This is in accord with previous work on modeling others' (false) beliefs in general.

Lagattuta KH, Sayfan L, Harvey C. Beliefs about thought probability: evidence for persistent errors in mindreading and links to executive control. Child Development, advance online publication, 2013, pp. 1-16.

Saturday, August 31, 2013

Utility Calculations Are Not Allowed For Sacred Things

One place where the vast majority of human beings fall short of attaining full Homo economicus status is in the preservation of sanctity for certain values or objects - for example, the protection of children, the value of human life, and the evil of inflicting pain for its own sake. And the conclusion of this post is troubling from a rationalist, post-Enlightenment standpoint: it's exactly our most critical values where reason fails, and indeed must fail, if those values are to be preserved. In The Righteous Mind, Jonathan Haidt points out that sacredness is one of the six moral foundations of human beings, whether we think of it in religious or secular terms.

Please note: to make my point I need to provoke an emotional reaction in you, the reader, so it's going to get a little rough when I violate universally sacred values.

A good working definition of sacredness (in a religious sense or otherwise) is for an object or value to meet at least one of these two conditions:

1) Its truth or necessity cannot be questioned. To do so is to cause moral outrage, and a dramatic devaluing of the questioner's perceived moral character. (Hence my disclaimer above, even though I'm clearly speaking in the abstract.)

2) The object or value cannot be involved in transactional discussions with either non-sacred or other sacred values; to do so is morally outrageous, and questions of valuation or exchange are off the table. (More of a given sacred-value violation is worse than less of the same violation, but comparisons between two different sacred values are outrageous.) In other words, "you can't put a price on [sacred value]."

These both really boil down to this: if something does not admit of utility calculations, it's sacred. (Questioning why calculations aren't allowed is a meta-calculation that is also forbidden.)

A concrete example of the first qualification: why is it wrong to torture and kill children? If somebody wants to, why shouldn't they? Maybe right now you're reading a blog and your frontal lobe is keeping you from calling me a monster for even asking that, and maybe you defended yourself against the flash of outrage by assuming I'm asking to make a point - but imagine a stranger asking you this earnestly tomorrow, in person, and really pressing you on it. I don't think you'd feel compelled to think of an explanation. For my part (and don't be a jerk and quote this out of context!) in the utility-based rationally self-optimizing way I tend to think (deliberately) about morality and decision-making, I cannot explain why it would not be okay for a sadistic psychopath to do exactly that, if they derived pleasure from it and wouldn't get caught. Obviously I feel that it's about the worst thing you can do. My logical shortcoming does not cause me to reconsider my position on the matter (even for a second), but rather to call my moral theory incomplete. I wouldn't do it or allow it to happen for all the money in the world, and I'm not interested even in being influenced in that direction for all the money in the world. There's no calculation going on around it. To me, it's sacred.

A concrete example of the second qualification: how much would that person have to pay you to torture and kill a child? You would refuse to put a price on it and would likely be offended by the question. You also would likely not want to get involved in a discussion of the relative evil of torturing and killing children versus deliberately infecting people with AIDS. (10 AIDS to 1 torture-murder? 12 to 1? Come on, there has to be some exchange rate!) So there's just not going to be any talk about relative value. Interestingly, sacred objects do allow comparisons between units of the same sacred-value violation (obviously it's worse to torture and kill two children than one), but no comparison is allowed between different types of sacred-value violations. Of course the world does not always respect our moral categories, and in point of fact, when people are put in a situation where they do have to choose between sacred-value violations, they suffer badly - but their heads don't smoke and sputter like broken computers; they clearly are capable of making such calculations. In Sophie's Choice, a movie that depresses me just from hearing about it (I haven't seen it and I won't), a concentration camp guard forces a woman to choose which of her two children is taken to the ovens (or, if she won't choose, he'll take both). And she finally chooses, to her lasting grief, of course.

A way to think about decision-making and morality is to assign utility values to things. We look at utility lost and gained and sometimes we sacrifice utility now as an investment for utility later (as I will in a few minutes when I go back to studying).

But consideration of sacred values is nothing like this. In line with property #1, the possibility of transgressing a sacred value doesn't even cross our minds in the first place, if only to be immediately rejected: "You know, this kid who woke me up this morning by playing outside my window really made me mad. It would give X amount of utility to get out my frustrations right now, and also to know that I'll be able to sleep in from now on, if only I torture and kill this kid, and I even know a way to do it without getting caught. But no, I would actually feel so bad about it that I would have infinitely negative utility, so it works out in favor of non-torture-murder." That's not what happens. And in keeping with property #2, though it's a dark and sad thing to say, such an act might be one of the worst things imaginable, but it's still not really infinitely bad. (If it were infinite, two such violations wouldn't be detectably worse than one.) Of course it's not just child torture-murder that's outside the realm of possible deliberation, but most of our moral values, which were programmed early in life and which give us flashes of disgust or happiness, mostly quite beyond our control to change.

It's worth pointing out that the classic infinite-negative-utility scenario philosophers discuss is being damned to Hell, as in Pascal's wager - a fate literally worse than death, if you believe in Hell. But the experience of considering the horrible actions discussed above is very different from the Pascal's wager consideration. When you imagine suffering in Hell forever, you imagine feeling bad, and you become afraid. It doesn't go that far when violating a taboo is suggested to you - you don't picture, as did the moral reasoner in the previous paragraph, the way you would feel if you did such a thing; you just don't consider it, even for a second.

Someone who was really making utility calculations about their decisions would not behave in this way. It's as if you're in a "possible actions store", with certain actions on a display shelf, not for sale. If we were really performing "moral reasoning" of some kind, we would at least be entertaining these not-for-sale actions, even if we never did them. The problem is that conscious, frontal-lobe-based utility calculations (however they're performed) do tend to be corrosive to traditional values, because utility calculations are effective at creating successful novel actions - and this corrosion of traditional values is frequently effected through markets, once sacred objects or acts have been assigned a commensurable value. Markets are aggregates of utility-based decisions that accumulate massive power to influence actions.
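One way to formalize the display-shelf idea (my own toy model, not anything from the literature): sacred violations aren't assigned some huge negative utility; they simply never enter the set the maximizer enumerates, and cross-category comparisons raise an error rather than return a number.

```python
SACRED = {"torture_child", "desecrate_flag"}  # never enumerated

def best_action(candidates, utility):
    """Utility maximization over the display shelf only: sacred
    actions never even enter the comparison."""
    allowed = [a for a in candidates if a not in SACRED]
    return max(allowed, key=utility)

def compare_violations(kind1, n1, kind2, n2):
    """Within one sacred category, counts compare (two violations
    are worse than one); across categories, the question itself
    is rejected rather than answered."""
    if kind1 != kind2:
        raise ValueError("no exchange rate between sacred values")
    return n1 - n2

actions = ["sleep_in", "earplugs", "torture_child"]
prices = {"sleep_in": 3, "earplugs": 5, "torture_child": 10}
print(best_action(actions, prices.get))  # -> "earplugs", despite the
                                         # "higher-utility" sacred option
```

The point of the design choice: an agent with a merely huge negative utility on a sacred act would still weigh it, and enough offsetting payoff would eventually flip the choice; an agent that never enumerates the act cannot be bought at any price.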

This may explain why people who are otherwise very pro-free-market find market infiltration into certain arenas (especially traditional culture) to be extremely offensive - because now the tradition is subject to utility calculations, and it will surely change, and quickly. The commercialization of Christmas is an excellent example. But the clash of values in healthcare, both in patient perceptions as well as for practitioners, is a much larger and more profound one. (Philosophers like to talk about utility in hypothetical units of "utilons", but in the real world, there actually is a unit of utility that healthcare organizations and policymakers use, the QALY, quality-adjusted life year. You don't want to assign relative values for treating AIDS and cancer? Too bad, because your government is doing it at this very moment, in the real world. Of note, some QALY tables for diseases do recognize fates worse than death, although they still don't assign infinite negative utility. Incidentally I imagine that the committees that build these tables are Sarah Palin's "death panels".)

Once those values are "for sale" - once they enter into the realm of conscious deliberation and value-assignment - they're almost certainly not going back to the display shelf. In a brilliant aside in Predictably Irrational Dan Ariely makes several observations about exactly this problem, in the commoditization of social relationships, particularly by banks.

The following might at first seem a strange observation for a self-described libertarian to make, but for those of us who think the market (and reason) is usually the best method for increasing utility, it behooves us to understand it, warts and all. Haidt later noted that the business school students he taught (and collected surveys from) were on average low in every one of the six moral foundations he describes, in terms of how their values were influenced. These are individuals who do in fact act out of rational self-interest - members of Homo economicus for whom everything is for sale, and nothing is sacred. What's more, a study has shown that low empathy predicts utilitarian judgment. (Do note the direction there: if you have low empathy, you're more likely to be utilitarian. No word on whether utilitarianism predicts low empathy.)

But my concern is not that eventually all businesspeople (and everyone else) will be lured by utility calculations into becoming child murderers. Some of our sacred values are likely to be very biologically innate, and others taught. Given its universality, protecting children is probably strongly innate. Others, for example treating certain religious or national symbols with respect, are learned and likely subject to erosion over time. The bigger question is what this does over time to our ability to cooperate and sub-optimize over the long-term.

(More interesting relationships between pre-rational neurology and moral behavior here.)

Saturday, August 24, 2013

Why Do We Care About Consciousness?

The problem of understanding consciousness is traditionally broken into the easy problem (how the brain works as a computer) and the hard problem (how the brain creates subjective experience). Indeed, the hard problem seems so intractable, progress toward a solution is so tricky to measure, and it's so unclear what kind of answer would even explain it (we really don't know how to start getting there from here) that it's been called more of a mystery than a problem.

Here are possible reasons the hard problem still seems more like a mystery than a problem:

- It's an incredibly difficult problem: our science so far is not nearly sufficient to explain it, and/or our brains have difficulty with these explanations. (Whether this is a feature of brains in general or of human brains right now is another question.)

- There are bad ideas obstructing our understanding. This is an active failure of explanation, rather than a passive failure as above. We have a folk theory or theories (a la Churchland) about subjective experience that we don't know we have and/or are not ready to discard, and this is complicating our explanations. Our account is like trying to explain chemistry with vitalist doctrine, or the solar system with the Earth at the center (probably worse than that last one, which can be done - it's just pointlessly difficult and messy).

- The first-person-ness of experience is a red herring. This may be one specific bad idea as above. When you're explaining an object in free-fall, no one worries that you yourself are not experiencing acceleration. The explanation works regardless of where the explainer is.

- Some non-materialist irrational truths hold in the universe. If that's "true" I don't know how we could ever know it.

Explaining things necessarily involves building a bridge from what we already know to the not-yet-understood thing; but so far this endeavor has the flavor of checking internal consistency among the things we already know about nervous systems. It seems that we're mostly motivated to explain consciousness because we're bothered by the resistance of this idea to explanation. (I certainly am.) But if we don't know where we're going yet, this kind of approach might not get us very far. One obscure but interesting example of obsession with the internal consistency of theories comes from pre-Common Era China, where Taoist logicians agonized over the relationships between the properties described by yin and by yang. Yang things are both hard and white. So what, then, is the logical relationship between hard and white? They're both yang, the logicians reasoned, so there must be such a relationship.

Of course we wonder: where did the Taoists think they were going to get with that kind of tail-spinning, if they thought they were answering deep questions? And so we should turn the same question on ourselves: why do we care about consciousness? What impact will the theory have once we understand it?

The obvious answer is that it's ultimately a moral question. While it's not clear whether affective components of experience (at base, pleasure and pain) are necessarily a part of consciousness, they certainly are possible in conscious beings, because I experience them; and so, even though saying this makes verificationists mad, I extend credit to the other apparent first-person viewpoints of living things (e.g. other humans, dogs, cats) and grant that they experience them too.

Consequently, if the neuroscientists of the future who build nervous systems, and AI engineers (if those turn out to be two different professions), believe that their lab subjects can experience consciousness, then it becomes incumbent on them to understand what those subjects are experiencing. If we're capable of creating things that are conscious, we have to avoid creating ones that are predisposed to suffer. Indeed, with such an explanation we may take notice of other structures in the universe that can suffer but that we didn't even realize were conscious before.

Once we understand the material basis of conscious awareness (if there is such an explanation), then we can start asking some heavily Singularitarian-type questions - whether mind uploading, the transporter problem, etc. are really extensions of a self or just copying, and whether there are meaningful differences between those alternatives.

Finally, understanding the basis of consciousness may allow us to alter the structure of conscious objects in a way that decreases their suffering and expands their happiness - from first principles. Currently we're limited to what I hope will someday seem like very limited, clumsy manipulations of nervous systems to decrease suffering - e.g. taking consciousness away entirely with anesthetics, or blunting pain and distress with NSAIDs, anti-psychotic medicines, and talk therapies - and, beyond medicine, all the behaviors we engage in minute-to-minute to enhance flourishing and decrease suffering in ourselves and the beings around us.

Saturday, August 17, 2013

Without Constraints, How Do Humans Behave?

Cross-posted to my geek blog as well as my politics and economics blog.

Life on Earth evolved in an environment of constraints: resource limitations, disease, and predation all put lids on behavior and reproduction. Consequently, the mechanisms that deal with those constraints have no "brakes", because nature provided them. There was no reason to evolve tight control over over-eating, because the opportunity rarely arose. There was no reason to protect reward circuitry in general from overstimulation. But now we're starting to remove those constraints. Solve food scarcity, and we get obesity. Go straight to the reward center (without a real external reward), and we get heroin and video game addiction.

This is the biggest problem we face in any post-scarcity world, or (more broadly) in any world where our behavioral regulation is freed from the constraints that sculpted it for billions of years, whether in reality (because there really is more than enough food) or virtually (because you can just shoot up and feel good). This problem has even been advanced to explain the Fermi paradox, since whatever behavior regulation intelligent aliens evolve, presumably when they solve their own constraints, they will run into the same problems - perhaps with species-destroying consequences. The more complete and effective a representational system is*, the faster and greater the instability it creates in the system.

You might think of a science fiction story where curious and powerful aliens have put humans in a kind of terrarium where the weather is always fair, there's always enough to eat, there's no physical danger, and there is always another territory to move into, with no loss of security, if you burn too many bridges with the people in this one. That is to say: someone looks at you the wrong way, or your significant other mildly irritates you - why stick around? The aliens have guaranteed there will be another handsome gentleman/pretty lady waiting for you when you get to the new territory. And when you get there you wonder idly whether these are real humans also in the experiment, or whether they were whipped up and memory-programmed by the tissue replicator twenty minutes before you got there; or maybe you were, before your new mate got here. But you're taken care of; does it matter? (You might even call this hypothetical alien terrarium "California"; perhaps this explains my interest in the simulation hypothesis.) In a world of limitless security and resources and even others' company, why ever tolerate the least inconvenience?

A similar scenario that happens in the real world is the strange discomfort of working alongside someone who is wealthy independently of their job. Why are they even here, people ask resentfully - and indeed, from anecdotal experience, when these people get annoyed, they quickly leave, because why not? They have security, and there's always more territory.

So what happens to people when all the constraints are removed - when they're both wealthy and not subject to censure by broader political forces? That is to say, how do humans behave when all the brakes are off? Predictably. From "The Prince Who Blew Through Billions" by Mark Seal, from Vanity Fair in July 2011:
On the brother of the Sultan of Brunei, Prince Jefri Bolkiah, who has "probably gone through more cash than any other human being on earth": "The sultan's biggest extravagance turned out to be his love for his youngest brother, Jefri, his constant companion in hedonism. They raced their Ferraris through the streets of Bandar Seri Begawan, the capital, at midnight, sailed the oceans on their fleet of yachts (Jefri named one of his yachts Tits, its tenders Nipple 1 and Nipple 2), and imported planeloads of polo ponies and Argentinean players to indulge their love for that game, which they sometimes played with Prince Charles. They snapped up real estate like Monopoly pieces—hundreds of far-flung properties, a collection of five-star hotels (the Dorchester, in London, the Hôtel Plaza Athénée, in Paris, the New York Palace, and Hotel Bel-Air and the Beverly Hills Hotel, in Los Angeles), and an array of international companies (including Asprey, the London jeweler to the Queen, for which Jefri paid about $385 million in 1995, despite the fact that that was twice Asprey's estimated market value or that Brunei's royal family constituted a healthy portion of its business).

"Back home, the sultan erected a 1,788-room palace on 49 acres, 'which is without equal in the world for offensive and ugly display,' in the words of one British magnate, and celebrated his 50th birthday with a blowout featuring a concert by Michael Jackson, who was reportedly paid $17 million, in a stadium built for the occasion. (When the sultan flew in Whitney Houston for a performance, he is rumored to have given her a blank check and instructed her to fill it in for what she thought she was worth: more than $7 million, it turned out.) The brothers routinely traveled with 100-member entourages and emptied entire inventories of stores such as Armani and Versace, buying 100 suits of the same color at a time. When they partied, they indulged in just about everything forbidden in a Muslim country. Afforded four wives by Islamic law, they left their multiple spouses and scores of children in their palaces while they allegedly sent emissaries to comb the globe for the sexiest women they could find in order to create a harem the likes of which the world had never known."
This reads like an account of what each of us would do if we found out tomorrow we were in a simulation, with power over said simulation. This is what happens when the brakes are off. If you object that this is an exception or an extreme example - I guarantee that this kind of behavior is far more common among the fabulously wealthy and powerful than the "exception" framing suggests. Well of course, you object again, other people can't behave that way! But if the tendency weren't there, why should it happen at all? And (more to the point) do you seriously think you would be any better-behaved? Of course you would; you're biologically and/or morally superior to these folks and would never let that kind of thing happen. (Also note that lottery winners, with a sudden random infusion of karma or whatever you call the points in our game - that's right, "money" - are known for going off the rails, and are more miserable and more likely to go bankrupt than the general population. Also, see "athletes from poor backgrounds suddenly signed to multi-million-dollar contracts in pro sports".)

An astute observer will say, "So what if people descend into depravity? If you're in a simulation or the aliens' zoo or you're royalty and don't hurt anyone, and you're happy with harems and Ferraris, fine!" That would be fine. But the problem is that these people often seem not to be happy. Hard data is difficult to come by here, but they are not invariably happier than other humans, and in fact often have considerably troubled emotional lives. Again, they're using nervous systems built for an environment of resource and social constraints. It should not be surprising that they experience boredom, restlessness, and emptiness. In fact, in the developed world it's not just the ultra-wealthy who experience these things. Granted, it's surely better than starving or being eaten by tigers, but it seems those are our two alternatives: obese or at best bored, versus running from predators, starvation, and stronger neighbors. Yes, I fully recognize the pessimism of this position.

So, there's an addition to Malthus here. Malthus pointed out that when all constraints but one are relaxed, that remaining constraint is the one that limits growth (his rule concerned, specifically, energy input as the unrelaxed constraint, but you can imagine, for example, a dense population of well-fed, non-preyed-upon humans being periodically cut down by plagues). The addition is that when all constraints are relaxed, the system becomes unstable, whether that system is a cell (cancer) or an individual. The more powerful the system - which can be approximated by how fast it can change - the faster it will become unstable.

*The first representational system to evolve on Earth was the gene: the proteins it codes for are indirect mirrors of a DNA strand's environment - and as the environment changes, the genes change. As life became more complex, systems appeared that could more and more rapidly and/or accurately reflect parts of the environment beyond the replicator: the cytochrome P450 system, a remarkably non-specific but effective metabolic system (which is how most drugs are broken down, even though life on Earth has never seen these molecules before), and the immune system, which produces high-affinity molecules through a process of directed but limited somatic mutation. The ultimate such system, however, is the development of large numbers of cells signalling with ion channels, which can represent much more information much faster, and which in humans has expanded to allow the assignment of arbitrary symbols to novel relationships (language). While we still can't assume that our language-enhanced nervous systems can represent every possible state external to themselves (any more than the immune system can), this is still by far the fastest-acting system and the one most likely to spell its own demise. As an aside, it's probably no surprise that the plants that have begun to evolve "behavior" of a sort - the carnivorous plants - also use ion channels. Assuming causality is unidirectional, what happens first matters, and therefore so does speed.

Monday, August 12, 2013

The Cost to the Economy of a New Drug

At Forbes, Matthew Herper follows up on previous work about the cost of drug development by looking at the actual R&D costs incurred over 10 years by companies with successful drug development, divided by the number of drugs each has marketed. This includes failures, which makes it a more useful way of looking at the cost of innovation than individually tallying each program's cost from discovery to market - you have to include failures to give a real picture of the cost of innovation. The answer? The median amount spent by companies per drug reaching the market over the last ten years is $808 million; the average was just under $2 billion. The outliers are the big guns at the top of the list.

Keep in mind that even by including failures, this list is still weighted toward success, and does not really give us "dollars that are spent in the economy for each new drug". There are a lot of companies that burn through a lot of cash without ever getting anything to market. The numbers above would be much higher if we included that.

Herper concedes that, especially at the larger companies, some R&D spending is masked as acquisitions (and Abbott is indeed at the top of the list). But don't worry about that, because what's more frightening is that drug development shows reverse economies of scale: multi-approval companies spend MORE per drug. Concretely: companies that have marketed 4 or more drugs spend a median of $4.2 billion per drug. 5 or more? $5.3 billion per drug.

Finally, Herper points out that the distribution is in fact distorted, because a lot of those low per-drug costs at 1-drug companies are really higher, hidden here in the budgets of partner companies. Fine - so let's take the combined cost of all the drugs that have come to market over ten years and divide by the number of drugs. This is the economic cost per drug of the entire biopharma world, i.e. what it costs an economy to make a drug. And that cost is $3.6 billion per drug. That's the absolute lower bound that policy-makers need to keep in mind, because it still doesn't include the one-drug companies that never made it to market.
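To make the arithmetic concrete, here is a toy version of the two calculations in Python. Every company name and figure below is invented for illustration; Herper's actual analysis used ten years of reported R&D spending from company filings.

    from statistics import median

    # Hypothetical data: company -> (10-year R&D spend in $B, drugs approved).
    # All numbers invented for illustration only.
    companies = {
        "BigPharmaA": (60.0, 11),
        "BigPharmaB": (45.0, 8),
        "MidCapC":    (9.0,  3),
        "BiotechD":   (2.4,  1),
    }

    # Per-company cost per approved drug (the basis of the $808M median
    # and ~$2B average quoted above).
    per_drug = {name: spend / n for name, (spend, n) in companies.items()}
    print(f"median cost per drug: ${median(per_drug.values()):.2f}B")

    # Economy-wide cost per drug (the basis of the $3.6B figure): total
    # spend across all companies divided by total approvals. Companies
    # that spent money but approved nothing would raise this number
    # further -- which is exactly the lower-bound point above.
    total_spend = sum(spend for spend, _ in companies.values())
    total_drugs = sum(n for _, n in companies.values())
    print(f"economy-wide cost per drug: ${total_spend / total_drugs:.2f}B")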

If we want to continue producing new drugs and/or have governments and individuals actually be able to afford them, we need a profound retooling of the clinical research enterprise. Soon.

Saturday, August 3, 2013

How Can Things Be Interesting But Useless?

A close friend played the following "game" in undergrad: anyone who made a deep observation or revealed some startling fact in his presence faced immediate judgment. "Okay," he would intone thoughtfully, "that was about a seven on the interest scale and an eight on the uselessness scale." The object of the game was to say something that was a ten in both - perfectly interesting, and perfectly useless. (Fortunately or not, this high bar was never achieved.)

(An outline of the answer appears in this article: if you THINK you know something but don't, you're motivated to learn more. Interestingly, this requires both the presumption of knowledge and a hole in related material you already know, which is why curiosity tends to breed more curiosity; still no word on why this selects for useless information.)

Years later I wonder: how can things that are useless be interesting? The point of beliefs about the world is to improve the utility of their holder - most obviously, by affecting decisions that change the external world and our condition in it. But it seems like most of our beliefs - "our" meaning humans, not just the self-declared intellectuals among us - have little to no chance of affecting a decision. For instance: I am fascinated with dark matter, in fact far more fascinated with it than with most things in my own profession (medicine). I would grant a broad definition of "useful" here - for cosmologists whose mortgage depends on their interest in dark matter, it's not useless at all - but for the majority of human beings who find dark matter interesting (like me), this interest cannot possibly result in a decision being made differently. There is no way to argue that anything we learn about dark matter can ever have anything to do with my profession or my other activities down here on Earth. If you spend any time on the internet, you've no doubt noticed many other people who have similar esoteric interests.

And yet it seems like a safe assumption that our brains evolved to solve problems that have to do with survival and propagating genes into the next generation - to do otherwise would result in attention constantly distracted to evolutionarily unimportant events and lots of energy expended for no good reason. What are some possible explanations for our finding useless things fascinating?

Noise. That is to say, we're just weak-minded; those of us whose interests drift outside what is immediately useful just have poor attentional control. After all, most humans do not find "useless" things like dark matter interesting. (Those of us who do are just stupid.)

Signalling intelligence. Notice that these useless interesting things are generally those which are considered intellectually difficult and which not many people know much about. By gaining some knowledge about them, we signal our intelligence and education. Also notice the following: at a first-time meeting in an informal discussion, a useless interesting topic may come up that is outside the expertise of all the discussants (say, two departments from a technology company are having a mixer and people start talking about black holes or evolution). The conversation will carry on for a few minutes (an acceptable "cocktail-party chatter" period) and then move on. Often, someone who is both intellectually gifted and educated, and also socially clever, will become impatient when a "geekier" conversationalist tries to keep the conversation on black holes, or makes a point of a strong disagreement about the topic. The geek is missing the point: both parties have already announced their general intelligence, and there's no reason to remain on an issue on which neither is an expert and which, beyond its signalling value, is of no use to either; no one is going to make a discovery based on this conversation.

It should not escape the reader's attention that many blogs could be explained in this way. Certainly not any of mine though.

Reinforcement; i.e., internal confirmation bias. These topics touch on and reinforce things we already know, things which may or may not also be useless. If this is happening, we should expect that the more interesting useless things you know, the more interesting useless things you should want to know, because of more combinations of beliefs reinforcing each other.

Novelty. People get a thrill from learning new things. If this is true, then people who are designated as sensation-seekers should like interesting useless things more than others.

Surprise, and the mismatch hypothesis. In experimental paradigms, chimps look longer at unexpected things than at expected things; this is a way to measure whether they're smart enough to recognize a pattern that's not adding up, since they can't just tell us. It seems likely that an interest in (for example) dark matter is this same reaction, applied outside the domain of our ancestors. When the branches of a bush are in a different place than they were two seconds ago, that merits attention, because it might have a direct impact on survival. But now that our ability to recognize patterns has exploded - humans understand some of the nature of matter and the universe - we frequently see unexpected things, but in places that we have no reason to believe can ever affect us.

Simple awe. Stories or music that cause piloerection (goosebumps) have been shown, based on fMRI, to produce partial sympathetic arousal, the same kind as if a large predator had appeared; but the experience seems not to be unpleasant, because people continue to self-administer it. These universal truths about massive entities may be activating the same systems. That said, my experience of dark matter is not the same as my experience of (for example) the Mars movement from Holst's Planets.


These are not exclusive of one another. If I had to guess what's going on inside my own skull, I would say both signalling and reinforcement.

Tuesday, July 30, 2013

Evolution and Rh Negativity in Historical Populations


A Kleihauer-Betke stain showing fetal blood cells circulating in maternal blood. Even 0.5 mL of fetal red blood cells can be detected with this method. It can also be detected by the mother's immune system.


The Rh antigens on human blood cells always bothered me; what a terrible system. If a mother who does not have the Rh protein on her blood cells reproduces with a male that does, she may have a baby who also has the Rh protein. The first such baby is fine; but the mother stands a chance of getting enough of the baby's blood cells into her own circulation during the birthing process that her immune system sees them. Now, if she has another baby that is Rh+, her immune system will attack the baby's blood cells and destroy them. 5% of babies who are the second Rh+ baby of an Rh- woman will get hemolytic disease of the newborn (HDN) and be very anemic and sick - and possibly die.


Structure of Rh factor. That's a channel-looking protein if ever there was one. From Mike Merrick at the John Innes Centre.


So what happened, I always wondered, before RhoGAM? (The injection that stops an Rh- woman from making antibodies to an Rh+ baby's blood cells.) Did we go through a hundred thousand years until the 1960s with stillbirths and sick, anemic babies - affected for their whole lives - being born left and right, and life was just so scary and incomprehensible and miserable that we accepted this?

I looked into this during my OB-GYN rotation and ended up giving a short talk about it. It turns out that Rh negativity is mostly a European thing - especially, it turns out, a Basque thing. Europeans are 15% Rh negative, Basques about 36%. The interesting connection is that the Rh protein is an ammonia channel that, among other things, confers resistance to toxoplasma - which is spread through the feces of cats, and cats did not exist in pre-Holocene Europe. There's our answer! There was no pressure for the Rh factor to be maintained once Europeans were in Europe, and the longer they were there, the more they lost it. So if the Rh factor is lost, no big deal, right?

Well, this still bothers me, for the simple reason that where we find Rh negativity, we don't find only Rh negativity. The Rh factor didn't disappear completely, so there's still a price to pay in terms of HDN and sick babies. In an all-Rh- population, things would be fine: there would be no Rh-caused HDN, and no disadvantage to being an Rh- homozygote. But in the absence of an advantage to being Rh- (or even heterozygous) - and we're not aware of one - it's a disadvantage balanced against no advantage. I used to tell myself that maybe the mortality numbers were low enough that it didn't matter, but nature keeps close score. Even if Rh negativity is only a little bit deleterious in terms of HDN, it should be selected out - eventually.



Using the Hardy-Weinberg equation, we can construct a simple model that tells us how rates of Rh negativity should change over time.

- Assume mated pairs have the same number of births on average, regardless of Rh status.

- Assume that of the second and subsequent Rh+ babies born to an Rh- mother, 5% have HDN, and that all of these babies die. (They wouldn't all have died, even in the paleolithic, so the gene will disappear more slowly than this model predicts.) Babies after the second actually get HDN at a rate higher than 5%, but again, as long as I assume 100% mortality for them, the gene will disappear more slowly in reality than in the model.

- Assume there's no advantage to Rh negativity; its effect is entirely negative through HDN as above.

- Plug in current gene frequencies in Europe.

Since the gene will disappear more slowly than this model predicts due to the assumptions outlined above, we can project back based on current gene frequencies, and get dates that are probably slightly more recent than reality.
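For the curious, here is a minimal sketch of such a model in Python. The 5% mortality figure comes from the assumptions above, but the handling of birth order is my simplification (an effective mortality is applied to all Rh+ children of Rh- mothers, not just the second and later ones), so the generation counts it produces will not exactly match the figures quoted below.

    # Minimal sketch of the Hardy-Weinberg model described above.
    # d = Rh-negative allele; D = Rh-positive allele.
    # Simplifying assumption (mine, not exactly the model above): an
    # effective mortality s applies to ALL Rh+ (Dd) children of Rh- (dd)
    # mothers, as a stand-in for the second-and-later-baby rule.

    def next_gen_q(q, s=0.05):
        """Advance the d-allele frequency q by one generation of random
        mating plus HDN selection. Mothers are dd with probability q^2;
        their child is Dd (at risk) when the father passes D, which
        happens with probability p."""
        p = 1.0 - q
        lost = s * p * q ** 2          # fraction of all children lost to HDN
        d_alleles = q - lost / 2.0     # each lost Dd child carried one d allele
        return d_alleles / (1.0 - lost)

    q = 0.15 ** 0.5   # Europe today: 15% Rh- phenotype -> q = sqrt(0.15)
    generations = 0
    while q * q > 0.01:                # run until the Rh- phenotype hits 1%
        q = next_gen_q(q)
        generations += 1
    print(generations, "generations to reach 1% Rh-")

One caveat about this sketch: because the selection removes heterozygotes, it has an unstable equilibrium at an allele frequency of 0.5, so the direction of change depends on which side of that line a population starts on. Take its trajectories as an illustration of the mechanics, not as a replacement for the figures below.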

This model tells us that for the gene frequency to drop to 1% will take 565 generations - between 8,475 and 11,300 years, using a short generation time of 15 to 20 years. For the current Basque frequency of 36% Rh- (assuming it has remained static) to come down to the wider European frequency of 15% would have taken 208 generations, or between 3,120 and 4,160 years.

Assuming that Rh- was at fixation in proto-Basque populations in Iberia and Gascony, the introduction of just 1% Rh+ homozygotes would have taken 22,110 to 29,480 years to bring the Basques down to today's 36%. It doesn't escape notice here that these date ranges are all post-African-exodus, and some of them are within the scope of Near Eastern antiquity.

Finally and most interestingly: an injection of about 8.6% Rh+ into an otherwise entirely Rh- population 2,025 to 2,700 years ago would give us 36% today. That period coincides with the establishment of Phoenician and later Roman colonies in Iberia, and quickly established colonies and armies could easily have introduced a twelfth of the human DNA in the sparsely-populated Iberian peninsula. This assumes that during this time, Semitic and Indo-European peoples were Rh+.

It seems impossible without further research, particularly on ancient DNA, to distinguish between these three possibilities:

1) Rh negativity confers no advantage, and there was a founder effect that has gradually eroded. Rh positivity was lost completely in a small ancestral Basque population; reintroduced Rh positivity has, through HDN, been gradually decreasing the Rh- proportion of the population ever since, while Rh negativity spread throughout the western half of the Old World.

2) Rh negativity confers no advantage, and a dramatic amount of Rh positivity was introduced in antiquity by migrants from around the Mediterranean. The high Rh negativity also seen in some parts of Africa could support a Phoenician mechanism of gene flow (Rh+ into Iberia, Rh- out).

3) Rh negativity does confer some advantage, so far unknown to us, that partially or totally counterbalances the HDN problem. Not exclusive of #1 or #2.


Rh factor is far from the only immune incompatibility between maternal and fetal blood that can cause HDN, and we certainly have a lot to learn about the function of these markers.

Monday, July 29, 2013

To One-Beer or Two-Beer on Newcomb's Paradox

This is a bit of an inside joke for Less Wrongers, so my apologies if it doesn't make you smile. (More on Newcomb's problem here.)



Newcomb's Ranch is a (very isolated) bar on Angeles Crest Highway (CA Route 2) in Angeles National Forest, maybe an hour and a half from downtown LA. One might ask whether I one-beered or two-beered at Newcomb's. My good madam or sir, why are one or two the only options? Anyway, by the end of the night I thought I was pretty smart, but then Omega, in his function as omniscient bartender, cut me off.

Friday, July 19, 2013

Corollary Discharge and Inner Speech: Clues for Psychosis

A central feature of psychosis is the disintegration of the sense of self. A healthy person, during the course of the day, talks to themselves and rehashes past or hypothetical arguments with friends and family. The healthy person may even speak out loud during these episodes*, but they know all these thoughts and imagined voices are exactly that - coming from their own head, under their control. In contrast, the voices that psychotic people describe are very often (actually, in my experience, usually) those of people the person can identify - usually friends and family, with parents' voices the most common. What's more, I've witnessed more than once a person who had badly decompensated and told me in the emergency department that he could hear his friend's voice talking to him, and who wasn't able to reality-test that the voice must be coming from his own brain; but as he reconstituted over several days with medication, the voice eventually became an internal monologue he was having with his friend, with full recognition by the patient that it was indeed an internal monologue - just like anyone else rehashing an argument in the shower.

An interesting study in Psychological Science by Mark Scott of UBC gives evidence that our capacity for inner speech is related to our ability to tune out our own voices when speaking; the neural correlate of this suppression is called corollary discharge. This of course leads to speculation about whether this mechanism underlies the origins of language and cognitive modernity, but psychiatrists and clinical psychologists will find it immediately interesting for a more practical reason: as a way to measure, and possibly target, auditory hallucinations in psychosis. Of note, this study had no direct measurement of neural correlates, instead relying on the Mann effect, a phenomenon of context-dependent perception of vocal sound (the McGurk-MacDonald effect is another example). That said, a 2011 study by Greenlee et al. at the University of Iowa showed, with intracranial electrode measurements in humans, that corollary discharge during speech is, unsurprisingly, localized to the auditory cortex.


*More than once, while I was out running on what I thought was a deserted trail, I have been caught talking to myself by an alarmed trail user coming the other way. Invariably as soon as I see them I act like I was singing to myself the whole time. Somehow this never seems to comfort them.