Consciousness and how it got to be that way

Saturday, August 31, 2013

Utility Calculations Are Not Allowed For Sacred Things

One place where the vast majority of human beings fall short of full Homo economicus status is in the preservation of sanctity for certain values or objects: for example, the protection of children, the value of human life, and the evil of inflicting pain for its own sake. And the conclusion of this post is troubling from a rationalist, post-Enlightenment standpoint: it is exactly our most critical values where reason fails, and indeed must fail, if those values are to be preserved. In The Righteous Mind, Jonathan Haidt points out that sacredness is one of the six moral foundations of human beings, whether we think of it in religious or secular terms.

Please note: to make my point I need to provoke an emotional reaction in you, the reader, so it's going to get a little rough when I violate universally sacred values.

A good working definition of sacredness (in a religious sense or otherwise) is for an object or value to meet at least one of these two conditions:

1) Its truth or necessity cannot be questioned. To do so is to cause moral outrage, and a dramatic devaluing of the questioner's perceived moral character. (Hence my disclaimer above, even though I'm clearly speaking in the abstract.)

2) The object or value cannot be involved in transactional discussions with either non-sacred or other sacred values; to do so is morally outrageous, and questions of valuation or exchange are off the table. (More of a given sacred-value violation is worse than less of the same violation, but comparisons between two different sacred values are outrageous.) In other words, "you can't put a price on [sacred value]."

These both really boil down to this: if something does not admit of utility calculations, it's sacred. (Questioning why calculations aren't allowed is a meta-calculation that is also forbidden.)

A concrete example of the first qualification: why is it wrong to torture and kill children? If somebody wants to, why shouldn't they? Maybe right now you're reading a blog and your frontal lobe is keeping you from calling me a monster for even asking that, and maybe you defended yourself against the flash of outrage by assuming I'm asking to make a point - but imagine a stranger asking you this earnestly tomorrow, in person, and really pressing you on it. I don't think you'd feel compelled to think up an explanation. For my part (and don't be a jerk and quote this out of context!), in the utility-based, rationally self-optimizing way I tend to think (deliberately) about morality and decision-making, I cannot explain why it would not be okay for a sadistic psychopath to do exactly that, if they derived pleasure from it and wouldn't get caught. Obviously I feel that it's about the worst thing you can do. My logical shortcoming does not cause me to reconsider my position on the matter (even for a second), but rather to call my moral theory incomplete. I wouldn't do it or allow it to happen for all the money in the world, and I'm not interested even in being influenced in that direction for all the money in the world. There's no calculation going on around it. To me, it's sacred.

A concrete example of the second qualification: how much would that person have to pay you to torture and kill a child? You would refuse to put a price on it, and would likely be offended by the question. You also would likely not want to get involved in a discussion of the relative evil of torturing and killing children versus deliberately infecting people with AIDS. (10 AIDS infections to 1 torture-murder? 12 to 1? Come on, there has to be some exchange rate!) So there's just not going to be any talk about relative value. Interestingly, sacred objects do allow comparisons between units of the same sacred-value violation (obviously it's worse to torture and kill two children than one), but there's no comparison allowed between different types of sacred-value violations. Of course the world does not always respect our moral categories, and in point of fact, when people are put in a situation where they do have to choose between sacred-value violations, they suffer badly - but their heads don't smoke and sputter like broken computers; they clearly are capable of making such calculations. In Sophie's Choice, a movie that depresses me just from hearing about it (I haven't seen it and I won't), a concentration camp guard forces a woman to choose which of her two children is taken to the ovens (or, if she won't choose, he'll take both). And she finally chooses, much to her lasting grief, of course.

A way to think about decision-making and morality is to assign utility values to things. We look at utility lost and gained and sometimes we sacrifice utility now as an investment for utility later (as I will in a few minutes when I go back to studying).

But consideration of sacred values is nothing like this. In line with property #1, the possibility of transgressing a sacred value doesn't even cross our minds in the first place, not even to be immediately rejected: "You know, this kid that woke me up in the morning by playing outside my window really made me mad. It would give X amount of utility to get out my frustrations right now, and also to know that I'll be able to sleep in from now on, if only I torture and kill this kid, and I even know a way to do it without getting caught. But no, I would actually feel so bad about it that I would have infinitely negative utility, so it works out in favor of non-torture-murder." That's not what happens. And in keeping with property #2, though it's a dark and sad thing to say, such an act might be one of the worst things imaginable, but it's still not really infinitely bad. (If it were infinite, two such violations wouldn't be detectably worse than one.) Of course it's not just child torture-murder that's not within the realm of possible deliberation, but most of our moral values, which were programmed early in life and which give us flashes of disgust or happiness, mostly quite beyond our control to change.
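The parenthetical arithmetic point - that literally infinite negative utility would make two violations no worse than one - can be made concrete with a toy sketch. The utility figures here are invented purely for illustration; conveniently, IEEE-754 floating-point infinities behave exactly the way the argument requires:

```python
# Toy illustration: why "infinitely bad" breaks utility comparisons.
# With a finite (if enormous) negative utility, two violations are
# measurably worse than one; with infinities, the distinction vanishes.

FINITE_VIOLATION = -1e12           # arbitrarily large but finite disutility
assert 2 * FINITE_VIOLATION < FINITE_VIOLATION   # two are worse than one

INFINITE_VIOLATION = float("-inf")

# Floating-point arithmetic mirrors the philosophical point exactly:
assert 2 * INFINITE_VIOLATION == INFINITE_VIOLATION
assert not (2 * INFINITE_VIOLATION < INFINITE_VIOLATION)
```

So any utility theory that wants "two torture-murders are worse than one" to come out true has to assign the act a finite (merely enormous) disutility - which is property #2's point: sacred values aren't on the scale at all.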

It's worth pointing out that the classic infinite-negative-utility scenario philosophers discuss is being damned to Hell, as in Pascal's wager - a fate literally worse than death if you believe in Hell. But the experience of considering the horrible actions I discussed above is very different from the Pascal's wager consideration. When you imagine suffering in Hell forever, you imagine feeling bad and you become afraid. It doesn't go that far when violating a taboo is suggested to you - you don't picture, as did the moral reasoner in the previous paragraph, the way you would feel if you did such a thing; you just don't, even for a second, consider it.

Someone who is really making utility calculations about their decisions would not behave in this way. It's as if you're in a "possible actions store", with certain actions on a display shelf, not for sale. If we were really performing "moral reasoning" of some kind, we would at least be entertaining these not-for-sale actions, even if we never did them. The problem is that conscious, frontal-lobe based utility calculations (however they're performed) do tend to be corrosive to traditional values, because utility calculations are effective at creating successful novel actions - and this corrosion in traditional values is frequently effected through markets, once sacred objects or acts have been assigned a commensurable value. Markets are aggregates of utility-based decisions that accumulate massive power to influence actions.

This may explain why people who are otherwise very pro-free-market find market infiltration into certain arenas (especially traditional culture) extremely offensive - because now the tradition is subject to utility calculations, and it will surely change, and quickly. The commercialization of Christmas is an excellent example. But the clash of values in healthcare, both in patient perceptions and for practitioners, is a much larger and more profound one. (Philosophers like to talk about utility in hypothetical units of "utilons", but in the real world there actually is a unit of utility that healthcare organizations and policymakers use: the QALY, or quality-adjusted life year. You don't want to assign relative values to treating AIDS and cancer? Too bad, because your government is doing it at this very moment, in the real world. Of note, some QALY tables for diseases do recognize fates worse than death, although they still don't assign infinite negative utility. Incidentally, I imagine the committees that build these tables are Sarah Palin's "death panels".)
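For readers unfamiliar with the mechanics: a QALY is just years of life weighted by a quality factor between 0 (death - or below, in the tables that recognize fates worse than death) and 1 (full health), and interventions get compared by cost per QALY gained. A minimal sketch, with every number invented for illustration (real weights come from published valuation tables, not from me):

```python
# Minimal QALY arithmetic. Quality weights run from 1.0 (full health)
# down through 0.0 (death); some tables allow negative weights for
# health states judged worse than death.

def qalys(years, quality_weight):
    """Quality-adjusted life years for a health state held over `years`."""
    return years * quality_weight

def cost_per_qaly(cost, qalys_gained):
    """The comparison a policymaker actually makes between treatments."""
    return cost / qalys_gained

# Hypothetical treatment: 10 years at quality 0.7 instead of
# 2 years at quality 0.4, for a cost of $50,000.
gained = qalys(10, 0.7) - qalys(2, 0.4)   # 7.0 - 0.8 = 6.2 QALYs
print(cost_per_qaly(50_000, gained))      # ≈ $8,065 per QALY
```

The offensive part, from the sacred-values standpoint, is precisely that this arithmetic puts AIDS and cancer treatments on one commensurable scale.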

Once those values are "for sale" - once they enter into the realm of conscious deliberation and value-assignment - they're almost certainly not going back to the display shelf. In a brilliant aside in Predictably Irrational Dan Ariely makes several observations about exactly this problem, in the commoditization of social relationships, particularly by banks.

The following might at first seem a strange observation for a self-described libertarian to be making, but for those of us who think the market (and reason) is usually the best method for increasing utility, it behooves us to understand it, warts and all. Haidt later noted that the business school students he taught (and collected surveys from) were on average low in every one of the six moral foundations he describes, in terms of how much each influenced their values. These are individuals who do in fact act out of rational self-interest - members of Homo economicus for whom everything is for sale, and nothing is sacred. What's more, a study has shown that low empathy predicts utilitarian judgment. (Do note the direction of the claim: if you have low empathy, you're more likely to make utilitarian judgments. No word on whether utilitarianism predicts low empathy.)

But my concern is not that eventually all businesspeople (and everyone else) will be lured by utility calculations into becoming child murderers. Some of our sacred values are likely to be strongly innate, and others taught. Given its universality, protecting children is probably strongly innate. Others, for example treating certain religious or national symbols with respect, are learned and likely subject to erosion over time. The bigger question is what this does over time to our ability to cooperate and sub-optimize over the long term.

(More interesting relationships between pre-rational neurology and moral behavior here.)

Saturday, August 24, 2013

Why Do We Care About Consciousness?

The problem of understanding consciousness is traditionally broken into the easy problem (how the brain works as a computer) and the hard problem (how the brain creates subjective experience). Indeed, the hard problem seems so intractable, progress toward a solution is so tricky to measure, and it's so unclear what kind of answer would even count as an explanation (we really don't know how to start getting there from here) that it's been called more of a mystery than a problem.

Here are possible reasons the hard problem still seems more like a mystery than a problem:

- It's an incredibly difficult problem: our science so far is not nearly sufficient to explain it, and/or our brains have difficulty with these explanations. (Whether this is a feature of brains in general or of human brains right now is another question.)

- There are bad ideas obstructing our understanding. This is an active failure of explanation, rather than a passive failure as above. We have a folk theory or theories (à la Churchland) about subjective experience that we don't know we have and/or are not ready to discard, and this is complicating our explanations. Our account is like trying to explain chemistry with vitalist doctrine, or the solar system with the Earth at the center (probably worse than that last one, which can be done; it's just pointlessly difficult and messy).

- The first-person-ness of experience is a red herring. This may be one specific bad idea as above. When you're explaining an object in free-fall, no one worries that you yourself are not experiencing acceleration. The explanation works regardless of where the explainer is.

- Some non-materialist, irrational truths hold in the universe. If that's "true", I don't know how we could ever know it.

Explaining things necessarily involves trying to build a bridge from what we already know to the not-yet-understood thing; but so far this endeavor has the flavor of checking internal consistency between the things we already know about nervous systems. It seems that we're mostly motivated to explain consciousness because we're bothered by the resistance of this idea to explanation. (I certainly am.) But if we don't know where we're going yet, this kind of approach might not get us very far. One obscure but interesting example of obsession with the internal consistency of theories comes from pre-Common-Era China, where Taoist logicians agonized over the relationships between the properties described by yin and by yang. Yang things are both hard and white. So what, then, is the logical relationship between hard and white? They're both yang, they reasoned, so there must be such a relationship.

Of course we wonder: where did the Taoists think they were going to get with that kind of tail-spinning, if they thought they were answering pretty deep questions? And so we should turn the same question on ourselves: why do we care about consciousness? What impact will the theory have once we understand it?

The obvious answer is that it's ultimately a moral question. While it's not clear whether the affective components of experience (at base, pleasure and pain) are necessarily a part of consciousness, they certainly are possible in conscious beings, because I experience them; and even though saying this makes verificationists mad, I give credit to the apparent first-person viewpoints of other living things (e.g. other humans, dogs, cats) that they experience them too.

Consequently, if the neuroscientists of the future who build nervous systems, and AI engineers (if those turn out to be two different professions), believe that their lab subjects can experience consciousness, then it becomes incumbent on them to understand what those subjects are experiencing. If we're capable of creating things that are conscious, we have to avoid creating ones that are predisposed to suffer. Indeed, with such an explanation we may take notice of other structures in the universe that can suffer but that we didn't even realize were conscious before.

Once we understand the material basis of conscious awareness (if there is such an explanation), we can start asking some heavily Singularitarian-type questions - whether mind uploading, the transporter problem, etc., are really extensions of a self or just copying, and whether there are meaningful differences between those alternatives.

Finally, understanding the basis of consciousness may allow us to alter the structure of conscious objects in a way that decreases their suffering and expands their happiness - from first principles. Currently we're limited to what I hope will someday seem very limited, clumsy manipulations of nervous systems to decrease suffering - e.g. taking consciousness away entirely with anesthetics, or blunting suffering with NSAIDs, anti-psychotic medicines, and talk therapies - and, beyond medicine, all the behaviors we engage in minute-to-minute to enhance flourishing and decrease suffering in ourselves and the beings around us.

Saturday, August 17, 2013

Without Constraints, How Do Humans Behave?

Cross-posted to my geek blog as well as my politics and economics blog.

Life on Earth evolved in an environment of constraints: resource limitations, disease, and predation all put lids on behavior and reproduction. Consequently, the mechanisms to deal with those constraints have no "brakes", because nature provided them. There was no reason to have tight control over over-eating, because such a situation rarely arose. There was no reason to protect reward circuitry in general from overstimulation. But now we're starting to remove those constraints. Solve food scarcity, and we get obesity. Go straight to the reward center (without a real external reward), and we get heroin and video-game addiction.

This is the biggest problem we face in any post-scarcity world, or (more broadly) in any world where our behavioral regulation is freed from the constraints that sculpted it for billions of years, whether in reality (because there really is more than enough food) or virtually (because you can just shoot up and feel good). This problem has even been advanced to explain the Fermi paradox, since whatever behavior regulation intelligent aliens evolve, presumably when they solve their own constraints, they will run into the same problems - perhaps with species-destroying consequences. The more complete and effective a representational system is*, the faster and greater the instability it creates in the system.

You might think of a science fiction story where curious and powerful aliens have put humans in a kind of terrarium where the weather is always fair, there's always enough to eat, there's no physical danger, and where there is always another territory to move into, with no loss of security, if you burn too many bridges with the ones in this one. That is to say, someone looks at you the wrong way, or your significant other mildly irritates you - why stick around? The aliens have guaranteed there will be another handsome gentleman/pretty lady waiting for you when you get to the new territory. And when you get there you wonder idly if these are real humans also in the experiment, or were whipped up and memory-programmed by the tissue replicator twenty minutes before you got there; or maybe you were, before your new mate got here. But you're taken care of; does it matter? (You might even call this hypothetical alien terrarium "California"; perhaps this explains my interest in the simulation hypothesis.) In a world of limitless security and resources and even others' company, why ever tolerate the least inconvenience?

A scenario similar to this happens in the real world: the strange discomfort of working alongside someone who is wealthy independent of their job. Why are they even here, people might ask resentfully - and indeed, from anecdotal experience, when these people get annoyed, they quickly leave, because why not? They have security and more territory.

So what happens to people when all the constraints are removed, when they're both wealthy and not subject to censure by broader political forces? That is to say, how do humans behave when all the brakes are off? Predictably. From "The Prince Who Blew Through Billions" by Mark Seal, from Vanity Fair in July 2011:
On the brother of the Sultan of Brunei, Prince Jefri Bolkiah, who has "probably gone through more cash than any other human being on earth": "The sultan's biggest extravagance turned out to be his love for his youngest brother, Jefri, his constant companion in hedonism. They raced their Ferraris through the streets of Bandar Seri Begawan, the capital, at midnight, sailed the oceans on their fleet of yachts (Jefri named one of his yachts Tits, its tenders Nipple 1 and Nipple 2), and imported planeloads of polo ponies and Argentinean players to indulge their love for that game, which they sometimes played with Prince Charles. They snapped up real estate like Monopoly pieces—hundreds of far-flung properties, a collection of five-star hotels (the Dorchester, in London, the Hôtel Plaza Athénée, in Paris, the New York Palace, and Hotel Bel-Air and the Beverly Hills Hotel, in Los Angeles), and an array of international companies (including Asprey, the London jeweler to the Queen, for which Jefri paid about $385 million in 1995, despite the fact that that was twice Asprey's estimated market value and that Brunei's royal family constituted a healthy portion of its business).

"Back home, the sultan erected a 1,788-room palace on 49 acres, 'which is without equal in the world for offensive and ugly display,' in the words of one British magnate, and celebrated his 50th birthday with a blowout featuring a concert by Michael Jackson, who was reportedly paid $17 million, in a stadium built for the occasion. (When the sultan flew in Whitney Houston for a performance, he is rumored to have given her a blank check and instructed her to fill it in for what she thought she was worth: more than $7 million, it turned out.) The brothers routinely traveled with 100-member entourages and emptied entire inventories of stores such as Armani and Versace, buying 100 suits of the same color at a time. When they partied, they indulged in just about everything forbidden in a Muslim country. Afforded four wives by Islamic law, they left their multiple spouses and scores of children in their palaces while they allegedly sent emissaries to comb the globe for the sexiest women they could find in order to create a harem the likes of which the world had never known."
This reads like an account of what each of us would do if we found out tomorrow we were in a simulation, with power over said simulation. This is what happens when the brakes are off. If you object that this is an exception or an extreme example - I guarantee that this behavior happens far more among the fabulously wealthy and powerful than among the rest of us. Well of course, you again object: other people can't behave that way! But then, if the tendency weren't there, why should it happen at all? And (more to the point) do you seriously think you would be any better-behaved? Of course you would; you're biologically and/or morally superior to these folks and would never let that kind of thing happen. (Also note that lottery winners, with a sudden random infusion of karma or whatever you call the points in our game - that's right, "money" - are known for going off the rails, and for being more miserable and more likely to go bankrupt than the general population. Also see "athletes from poor backgrounds suddenly signed to multi-million-dollar contracts in pro sports".)

An astute observer will say, "So what if people descend into depravity? If you're in a simulation or the aliens' zoo, or you're royalty, and you don't hurt anyone - if you're happy with harems and Ferraris, fine!" That would be fine. But the problem is that these people often seem not to be happy. Data is hard to come by here, but they are not invariably happier than other humans, and in fact often have considerably troubled emotional lives. Again, they're using nervous systems built for an environment of resource and social constraints. It should not be surprising that they experience boredom, restlessness, and emptiness. In fact, in the developed world it's not just the ultra-wealthy who experience these things. That said, it's surely better than starving or being eaten by tigers, but it seems those are our two alternatives: obese or at best bored, versus running from predators, starvation, and stronger neighbors. Yes, I fully recognize the pessimism of this position.

So, there's an addition to Malthus here. Malthus merely pointed out that when all constraints but one are relaxed, the remaining constraint will limit. (His rule concerned energy input specifically as the unrelaxed constraint, but you can imagine, for example, a dense population of well-fed, non-preyed-upon humans being periodically cut down by plagues.) The addition is that when all constraints are relaxed, the system becomes unstable, whether that system is a cell (cancer) or an individual. The more powerful the system - which can be approximated by how fast it can change - the faster it will become unstable.
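One crude way to formalize the Malthusian half of this is Liebig's law of the minimum: the binding (smallest) constraint sets the effective limit. A toy sketch, with constraint values invented for illustration:

```python
# Liebig's-law-style sketch of the Malthus point: among several active
# constraints, the tightest one sets the effective limit on the system.

def effective_limit(constraints):
    """The binding constraint: the smallest of the current limits."""
    return min(constraints.values())

limits = {"food": 0.9, "disease": 0.6, "predation": 0.8}
assert effective_limit(limits) == 0.6          # disease binds

# Relax all constraints but one: that one still limits (Malthus).
limits_relaxed = {"food": 0.9, "disease": 100.0, "predation": 100.0}
assert effective_limit(limits_relaxed) == 0.9  # food now binds

# Relax everything: no constraint binds at all - the regime the post
# argues is unstable, because the system's brakes were always external.
limits_gone = {k: float("inf") for k in limits}
assert effective_limit(limits_gone) == float("inf")
```

The post's addition is about the last case: the model simply stops returning a finite limit, which is exactly the point - nothing internal takes over when the external constraints go away.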

*The first representational system to evolve on Earth was the gene: the proteins it codes for are indirect mirrors of a DNA strand's environment - and as the environment changes, the genes change. As life became more complex, systems appeared that could more and more rapidly and/or accurately reflect parts of the environment beyond the replicator: the cytochrome P450 system, a remarkably non-specific but effective metabolic system (which is how most drugs are broken down, even though life on Earth has never seen these molecules before), and the immune system, which produces high-affinity molecules through a process of directed but limited somatic mutation. The ultimate such system, however, is the development of large numbers of cells signalling with ion channels, which can represent much more information much faster, and which in humans has expanded to allow the assignment of arbitrary symbols to novel relationships (language). While we still can't assume that our language-enhanced nervous systems can represent every possible state external to themselves (any more than the immune system can), this is still by far the fastest-acting system and the one most likely to spell its own demise. As an aside, it's probably no surprise that the plants that have begun to evolve "behavior" of a sort - the carnivorous plants - also use ion channels. Assuming causality is unidirectional, what happens first matters, and therefore so does speed.

Monday, August 12, 2013

The Cost to the Economy of a New Drug

At Forbes, Matthew Herper follows up on previous work about the cost of drug development by looking at the actual R&D costs incurred by companies with successful drug development over 10 years, divided by the number of drugs each has marketed. This includes failures, which is a more useful way of looking at the cost of innovation than just looking individually at the cost of each program from discovery to market - you have to include failures to give a real picture of the cost of innovation. The answer? The median amount spent by companies per drug brought to market over the last ten years is $808 million; the average was just under $2 billion. The outliers are the big guns at the top of the list.

Keep in mind that even by including failures, this list is still weighted toward success, and does not really give us "dollars that are spent in the economy for each new drug". There are a lot of companies that burn through a lot of cash without ever getting anything to market. The numbers above would be much higher if we included that.

Herper concedes that, especially in the larger companies, some of the R&D spending is masked as acquisitions (and Abbott is indeed at the top of the list). But don't worry about that, because what's more frightening is that drug development shows reverse economies of scale: multi-approval companies spend MORE per drug. Concretely: companies that have marketed 4 or more drugs spend a median of $4.2 billion per drug. 5 or more? $5.3 billion per drug.

Finally, Herper points out that the distribution is distorted because a lot of those low per-drug costs at 1-drug companies are really higher, hidden here in the budgets of partner companies. Fine - so let's take the combined cost of all the drugs that have come to market over ten years and divide by the number of drugs. This is the economic cost per drug of the entire biopharma world, i.e. what it costs an economy to make a drug. And that cost is $3.6 billion per drug. That's the absolute lower bound that policy-makers need to keep in mind, because it still doesn't include the one-drug companies that never made it to market.
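The arithmetic behind both figures (per-company median versus economy-wide cost per drug) is easy to reproduce. A sketch with invented company rows standing in for Herper's Forbes dataset - the real numbers are his, not mine:

```python
import statistics

# Hypothetical (company, 10-year R&D spend in $B, drugs approved) rows,
# standing in for Herper's dataset.
companies = [
    ("BigPharmaA", 60.0, 8),
    ("BigPharmaB", 45.0, 11),
    ("MidCapC",     9.0, 3),
    ("OneDrugD",    1.2, 1),
    ("OneDrugE",    0.8, 1),
]

# Per-company cost per approved drug. Failures are implicitly included,
# since ALL of a company's R&D spend sits in the numerator.
per_drug = [spend / drugs for _, spend, drugs in companies]
print(f"median per drug: ${statistics.median(per_drug):.2f}B")

# Economy-wide cost per drug: total spend over total approvals - the
# figure that pools the partner-company budgets the median misses.
total_spend = sum(spend for _, spend, _ in companies)
total_drugs = sum(drugs for _, _, drugs in companies)
print(f"economy-wide: ${total_spend / total_drugs:.2f}B per drug")
```

Note how the economy-wide figure exceeds the per-company median even in this toy: the big spenders' budgets dominate the pooled numerator, which is the same reason Herper's $3.6 billion sits well above the $808 million median.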

If we want to continue producing new drugs and/or have governments and individuals actually be able to afford them, we need a profound retooling of the clinical research enterprise. Soon.

Saturday, August 3, 2013

How Can Things Be Interesting But Useless?

A close friend played the following "game" in undergrad: anyone who made a deep observation or revealed some startling fact in his presence faced immediate judgment. "Okay," he would intone thoughtfully, "that was about a seven on the interest scale and an eight on the uselessness scale." The object of the game was to say something that was a ten in both - perfectly interesting, and perfectly useless. (Fortunately or not, this high bar was never achieved.)

(An outline of the answer is in this article: if you THINK you know something but don't, you're motivated to learn more. Interestingly, this requires the presumption of knowledge, and a hole in related material you already know, which is why curiosity tends to breed more curiosity; still no word on why this selects for useless information.)

Years later I wonder: how can things that are useless be interesting? The point of beliefs about the world is to improve the utility of their holder - most obviously, by affecting decisions that change the external world and our condition in it. But it seems like most of our beliefs - "our" meaning humans', not just the self-declared intellectuals' among us - have little to no chance of affecting a decision. For instance: I am fascinated with dark matter, in fact far more fascinated with it than with most things in my profession (medicine). I would grant a broad definition of useful here - for cosmologists whose mortgage depends on their interest in dark matter, it's not useless at all - but for the majority of human beings who find dark matter interesting (like me), this interest cannot possibly result in a decision being made differently. There is no way to argue that anything we learn about dark matter can ever have anything to do with my profession, or with my other activities down here on Earth. If you spend any time on the internet, you have no doubt noticed many other people who have similar esoteric interests.

And yet it seems like a safe assumption that our brains evolved to solve problems that have to do with survival and propagating genes into the next generation - to do otherwise would result in attention constantly distracted to evolutionarily unimportant events and lots of energy expended for no good reason. What are some possible explanations for our finding useless things fascinating?

Noise. That is to say, we're just weak-minded; those of us whose interests drift outside what is immediately useful just have poor attentional control. After all, most humans do not find dark matter and similarly "useless" things interesting. (Those of us who do are just stupid.)

Signalling intelligence. Notice that these useless interesting things are generally those which are considered intellectually difficult and which not many people know much about. By gaining some knowledge about them, we signal our intelligence and education. Also notice the following: at a first-time meeting in an informal discussion, a useless interesting topic may come up that is outside the expertise of all the discussants (say, two departments from a technology company are having a mixer and people start talking about black holes or evolution). The conversation will carry on for a few minutes (an acceptable "cocktail-party chatter" period) and then move on. Often, someone who is both intellectually gifted and educated, and also socially clever, will become impatient when a "geekier" conversationalist tries to keep the conversation on black holes, or makes a point of a strong disagreement about the topic. The geek is missing the point that both parties have already announced their general intelligence, and there's no point in remaining on an issue on which neither is an expert, and outside of signalling value it's of no use to either; no one is going to make a discovery based on this conversation.

It should not escape the reader's attention that many blogs could be explained in this way. Certainly not any of mine though.

Reinforcement; i.e., internal confirmation bias. These topics touch on and reinforce things we already know, things which may or may not also be useless. If this is happening, we should expect that the more interesting useless things you know, the more interesting useless things you should want to know, because of more combinations of beliefs reinforcing each other.

Novelty. People get a thrill from learning new things. If this is true, then people who are designated as sensation-seekers should like interesting useless things more than others.

Surprise, and mismatch hypothesis. In experimental paradigms, chimps look longer at unexpected things than at expected things; this is a way to measure whether they're smart enough to recognize some pattern that's not adding up, since they can't just tell us. It seems likely that an interest in (for example) dark matter is this same reaction, but applied outside the domain of our ancestors. When the branches of a bush are in a different place than they were two seconds ago, that merits attention, because it might have a direct impact on survival. But now that our ability to recognize patterns has exploded - humans understand some of the nature of matter and the universe - we frequently see unexpected things, but in places that we have no reason to believe can ever affect us.

Simple awe. Stories or music that cause piloerection (goosebumps) have been shown by fMRI to produce partial sympathetic arousal, the same as if a large predator had appeared; but the experience seems not unpleasant, because people continue to self-administer it. These universal truths about massive entities may be activating the same systems. That said, my experience of dark matter is not the same as my experience of (for example) the Mars movement from Holst's The Planets.

These are not exclusive of one another. If I had to guess what's going on inside my own skull, I would say it's both signalling and reinforcement.