Problems can arise when we misunderstand the rhetorical intentions surrounding informally stated hypotheses, which are certainly not limited to scientific endeavors. A stated hypothesis is a very strange kind of propositional utterance. When we state a hypothesis, we don't know whether it's true, but we make the statement as if it were true anyway - and we don't consider this lying or equivocating either. Why not? It has something to do with your audience knowing why you're uttering a proposition of uncertain truth value, which is exactly why problems can arise when they don't know your proposition is a hypothesis. For example, Edward Sapir explicitly advanced the Penutian language family as a hypothesis to be investigated, and the theory was quickly adopted as gospel by the linguistic community, much to his distress. Fortunately it has largely been supported by data. It doesn't always work this way, in science or everyday conversation.
Since most languages mark questions and subjunctives explicitly, a fanciful solution would be a hypothesis particle. A simple syllable would follow any informal statement of a hypothesis (-ba would work), and it would mean this: "I make no assertion about the truth of the proposition preceding this particle, but I want to learn the truth of this proposition, and I invite close scrutiny and criticism of this proposition in service of this goal."
But reflect further on rhetorical intention and truth value in utterances. Our classification of rhetorical intentions is poor, since we fail to recognize classes of utterances that are quite often explicitly encoded. A simple epistemological model of language is that all contentful utterances are commands, either to commit an action directly or to react to provided information, even if that reaction is just the listener updating his or her model of the world. As such we would expect most utterances to contain signals for rhetorical intention above and beyond the content of the sentence: there is the proposition being uttered, and there is the intention the utterer has about how the audience should react. Analytic philosophers attempted to approach natural language propositionally, although their conclusions were sometimes provincial, hobbled as they were by an impoverished knowledge of the variety of language structures outside Western Indo-European. Here is a phylogeny of coherent utterances which includes rhetorical intention.
1) COMMAND: "Get out of my house."
Extra-contential rhetorical tag: none.
1.1) Commands are the basic form of language, so it is not surprising that the command forms of verbs are usually morphosyntactically as simple as, or simpler than, even the infinitive.
2) CONTENTFUL EXCLAMATION: "A blue hummingbird!"
Extra-contential rhetorical tag: "Recognize this object/event I have verbally pointed to."
2.1) Some languages (e.g. Tagalog, Washoe) do encode this intention explicitly and have focus markers which explicitly declare what the speaker wishes the listener to focus on. These markers occur throughout sentence structures and are not limited to noun-phrase exclamations like the one above.
2.2) Dependent phrase structure is always just another example of recursive phrase structure. That said, shorthand often evolves for parsimony (e.g., "There is a red book on the table" is really just shorthand for, and content-wise exactly the same as, "There is a book that is red on the table"). In languages with copulae, like English, there superficially seems to be a distinction between main and dependent phrases only because of the phonetic realization (or lack thereof) of the recursion, but languages that lack copulae illustrate the principle more clearly.
2.3) There is also an argument that constructions like "there is", or intransitive words equivalent to "exist", are really just reflexive copulae that tie off structural loose ends. Therefore the statement above is equivalent to the proposition "There is a hummingbird that is blue!" Potential investigation: ergative/absolutive languages with reflexive morphemes are known not to "cross systems" - e.g. East Greenlandic, in which using the ergative blocks use of the reflexive marker; you can only use one system at a time. So how do reflexive copular constructions behave in ergative/absolutive languages that have both copulae and reflexivity?
3) DECLARATION: "I am going running at 3pm."
Extra-contential rhetorical tag: "I want you to accept as true the meaning of this proposition, and update your model of the world accordingly."
3.1) Although seemingly the most basic form of utterance, declarative propositions are not even close to the entirety of the contentful utterances we make. Still, they are the unmarked (zero-grade) form in all languages that do mark rhetorical intention.
4) YES/NO QUESTION: "Is he very tall?"
Extra-contential rhetorical tag: "I want you to reformulate this utterance as a proposition and then tell me your evaluation of its truth value."
4.1) Questions are usually marked, either by word-order changes, explicit particles (like Japanese -ka), or tone. English has few minimal pairs where pitch or stress makes a difference (e.g. PERmit vs. perMIT), and such pairs are semantically related, unlike in full tone languages. Still, if the concept of minimal pairs is extended to rhetorical intention, tone is indeed explicitly encoded and certainly distinguishes minimal pairs. ("He is running for governor." "He is running for governor?" These sentences mean different things.)
4.2) Yes/no questions are therefore actually different kinds of utterances than those containing interrogative pronouns. In fact some languages do mark them differently: Latin attached the enclitic -ne, usually to the verb, only in questions that did not contain interrogative pronouns.
4.3) In all cases that I know of, the verb dominates other parts of speech in taking on the question particle - that is, if there's a verb in a sentence and the language has a question particle, the particle attaches to the verb. (Case in point: in Japanese, -ka typically goes on the verb, but in a single-noun utterance requesting clarification, the question particle can go on the noun. "He's working in the city," one speaker says, and the other says "Takamatsu-ka?", i.e. "In Takamatsu?")
I have argued previously that verbs and adjectives are both first-order modifiers, but that some first-order modifiers can modify two nouns simultaneously (these are called transitive verbs.) In this view, nouns alone cannot create a proposition since there is no relationship stated between them without verbs. Therefore, it makes sense that the rhetorical marker would be placed on the verb that changes the utterance from a list into a proposition.
5) INTERROGATIVE-PRONOUN-CONTAINING QUESTION: "What is the best restaurant in Portland?"
Extra-contential rhetorical tag: "This is a proposition whose truth value cannot be evaluated since I have deliberately used a placeholder ('what'). I want listeners to reformulate the statement as a proposition but include information that can be plugged into the placeholder slot in such a way as to make the proposition true."
5.1) Interrogative-pronoun-containing questions are also usually marked in some way (by word order, tone, and/or explicit morphemes).
5.2) Languages often have multiple interrogative pronouns for different types of nominal information, but never to my knowledge are there dedicated adjectival, verbal, or other interrogative words. Interrogative pronouns can be pressed into one-off service as verbs and even productively undergo morphosyntactic operations (imagine a woman who has just been told her twelve-year-old son was seen driving to school: "He was what-ing to school?"). Nonetheless these kinds of operations on interrogative pronouns are never formalized.
6) CONTINGENT DECLARATION: "If it rains today, you're on your own."
Extra-contential rhetorical tag: "I have explicitly marked off a proposition whose truth value is influenced by other propositions stated in close proximity and whose truth I may not be certain of, or by the way I obtained the information."
6.1) There are two sub-structures here: one is the typical if-then formulation for subordinate clauses we normally think of, but there is also the case of evidential markers, most famously found in Tupi-Guarani languages. Both systems are ways of explicitly marking the truth-weighting that the listener should give to the proposition so marked.
Though it doesn't merit a separate entry here, it's interesting that hypotheses aren't exactly questions, but they aren't exactly subordinate clauses either (though a hypothesis can be stated as both.) It's my suspicion that humans not engaged in research do not engage in extended hypotheticals - the propositions they are unsure about tend to be simple enough that their hypotheses are all contained in single clauses delineated by if-then markers. For most humans, thoughts complicated enough to require more than one sentence and which are of uncertain truth are merely deception, not hypotheses to be tested.
However if English does follow my humorous suggestion to develop an explicit hypothesis particle and a seventh utterance category, then I should re-state my earlier sentence as "A simple epistemological model of language is that all contentful utterances are commands, either directly to commit an action or to react to provided information, even if that's just for the listener to update his/her model of the world-ba."
Sunday, December 19, 2010
The Verb Regularization Rate in English
"The half-life of an irregular verb scales as the square root of its usage frequency: a verb that is 100 times less frequent regularizes 10 times as fast." From Lieberman et al in Nature. An interesting question is to what other morphosyntactic rules this generalizes to, like noun plurals (and to what extent is it influenced by phonetic realization. My guess: not very much.) Pinker and many others knew qualitatively that the less a verb is used, the more likely it is to become regular in a given time period. Now we have the quantitative rule.
Monday, September 20, 2010
Gene Ancestry Visualization Tool?
Most people are familiar with the radial-wheel format of showing the descent of species, like this one:
Image credit Bristol University Department of Chemistry
What would be really cool, and may already exist (which is why I'm posting this), is if the same tool existed, except for gene ancestry. (And here is where I make obvious my ignorance of both bioinformatics and the descent of genes.) Genes can often be shown to originate by duplication events. I'm probably being naive about the extent to which the ancestry of genes results from similar events, or can be traced back to common ancestors shared with other genes in the same genome. But unless there was a lateral transfer event, genes all have to have common ancestry with other sequences in the organism, correct?
Ultimately we'd see another radial circle, except with genes radiating out from a common ancestor. The outer surface would be the human genome as it is now. You could place nested concentric circles representing previous phases of the genome (see mock-up below). Yes, I know the "surface" separating primates from non-primates (for example) doesn't represent any qualitative break, and the circle would be pretty lumpy since not all genes or gene families would branch and mutate at the same rates. (Genes I picked are illustrative only.)
Now, there are lots more genes than you could legibly read on a computer-screen-sized wheel, but this is also true for the species-descent wheel, which uses representative organisms. In my dream app, you could select certain gene families, or genes with products that catalyzed certain classes of reactions or interacted with other gene products. The point of such a visualization is that you could more easily see when pathways started appearing or becoming more complex. Of course there will have been lots of genes lost in the interim, so we couldn't get a complete picture of what the genome was like at each point in the past. (Going back to the Grand-Daddy autocatalytic RNA would be cool, but unlikely.) If the gene-product features are sufficiently advanced you could even conceivably find "holes" in pathways at ancestral stages where there must have been gene products whose genes are no longer in the genome.
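To make the idea concrete, here is a minimal sketch (not a real tool) of how such a radial layout could be computed: present-day genes sit on the outer rim, ancestral duplication events on inner concentric rings. The tree shape and gene names below are invented purely for illustration; a real version would be fed actual gene-family phylogenies:

```python
import math
from dataclasses import dataclass, field

@dataclass
class GeneNode:
    """A node in a toy gene-ancestry tree: leaves are present-day genes,
    internal nodes are ancestral (duplication) events."""
    name: str
    children: list = field(default_factory=list)

# Invented example: one ancestral gene duplicating into a small family.
toy_tree = GeneNode("ancestral_globin", [
    GeneNode("alpha_like", [GeneNode("HBA1"), GeneNode("HBA2")]),
    GeneNode("beta_like",  [GeneNode("HBB"),  GeneNode("HBD")]),
])

def count_leaves(node):
    return 1 if not node.children else sum(count_leaves(c) for c in node.children)

def radial_layout(node, depth=0, start=0.0, span=2 * math.pi, out=None):
    """Assign each node an (angle, ring) pair: the ring number grows with
    depth (outer rim = present-day genes), and the angle is the midpoint
    of the angular span occupied by the node's descendant leaves."""
    if out is None:
        out = {}
    cursor = start
    for child in node.children:
        frac = count_leaves(child) / count_leaves(node)
        radial_layout(child, depth + 1, cursor, span * frac, out)
        cursor += span * frac
    out[node.name] = (start + span / 2, depth)
    return out

for gene, (angle, ring) in radial_layout(toy_tree).items():
    print(f"{gene:16s} ring {ring}  angle {math.degrees(angle):6.1f} deg")
```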
If there are any tools similar to this already out there, I would greatly appreciate a comment. This would be a valuable tool to investigate the evolution of pathways and systems. Especially neurons, of course.
Sunday, September 19, 2010
Lewy Body Dementia and Alpha Synuclein as a Lipid Manipulator
In terms of understanding moment-to-moment awareness, cognitive hypofunction disorders with clear genetic contributions are often uninteresting. That is, the problem is usually that the wiring isn't there, or is wrong. The reason this is uninteresting is that if you're interested in what underlies moment-to-moment consciousness, you have to look for properties of the brain that change on the same time-scale, allowing our subjective awareness to represent some of the information in the outside world.
For this reason, pharmacology often presents more research opportunities than disease states, in particular NMDA antagonists and 5-HT2A agonists. But there is a disease state which is notorious for hour-to-hour (or longer) fluctuations in cognitive status: dementia with Lewy bodies. As you can see from the linked reference, it's not yet clear whether the disease is a sub-type of Alzheimer's, or a distinct condition.
Lewy bodies are alpha-synuclein plus ubiquitin inclusions that appear in neurons in specific parts of the brain in disease states; the ubiquitin suggests that these are clumps of protein the neuron is trying to degrade. Their presence does not necessarily indicate that the patient had any of the additional symptoms of Lewy body dementia. Clinical Lewy body dementia is associated with symptoms above and beyond what Alzheimer's patients suffer. It also has significant overlap with Parkinson's; specifically, patients exhibit both the motor decline of PD and REM sleep behavior disorder. Unlike Alzheimer's, onset is not insidious. Unlike either Alzheimer's or PD patients, and most relevant here, Lewy body dementia patients usually have recurrent visual hallucinations, and are extremely sensitive to dopaminergic- and cholinergic-modifying medications.
When we think of real-time changes to nervous systems, we usually think of information being transmitted in an electrochemically mediated way, by neurotransmitter vesicle diffusion and membrane depolarization. But membrane potentials would also be dramatically and rapidly affected by changes in lipid membrane properties, so I had previously considered whether there were proteins expressed in the brain that manipulated or maintained membrane lipid content. It's interesting that alpha synuclein a) is known to be located on the cell membrane in some fraction, b) is natively unfolded in the cytosol, c) interacts with polyunsaturated fatty acids, d) interacts with membranes in a way that correlates with serine phosphorylation, and e) still hasn't been assigned a clear function.
This is why a recent Journal of Molecular Neuroscience paper by Riedel et al at the University of Oldenburg is important. Using an oligodendroglial cell line, they demonstrated the creation of alpha-synuclein aggregates (in vitro Lewy bodies) both in cells that had a point mutation in alpha-synuclein predisposing them to aggregate formation, and in wild-type cells. This was done by adding DHA (an omega-3 polyunsaturated fatty acid) and then hydrogen peroxide for oxidative stress. Alpha-synuclein aggregates formed both in the mutant cell line and in the wild type, though the mutant cells' aggregates were bigger (all compared to untreated controls).
My hypothesis is that alpha-synuclein is responsible for lipid processing of neuronal membranes to maintain electrochemical constancy, in response to physiologically rapid (minutes to hours) changes in the environment of the cell. In addition to the specific deficits in Lewy body dementia (associated with the brain region where the Lewy bodies appear), this may also explain the rapid fluctuation in cognitive status - cell membranes are unable to respond to a changing electrochemical environment because there's a problem with the protein that controls their lipid content. When alpha-synuclein catches up or the triggering physiological change (pH, solute concentration) reverses to previous levels, the cognitive deficits may disappear.
These findings, though they show an interaction, are therefore causally backwards relative to my hypothesis - here, changes to lipids are initiating aggregation. It could be that once aggregation begins, alpha-synuclein function is off-line, and any new alpha-synuclein produced by the cell gets immediately caught in the tangle and can't perform. (There's evidence that there's more alpha-synuclein than normal at the membrane in disease states.) Here are future experiments, which in my quick survey of the literature I may have missed: 1) patch clamp recordings of dopamine receptors on cells in culture with loss-of-function mutations or knockouts of alpha synuclein, especially in response to differences in charge, pH, and buffer concentration (to mimic physiologic changes in extracellular fluid); 2) measurement of individual fatty acid chains in knockouts relative to controls, in terms of their incorporation into cell membranes.
I'm both excited and nervous because in the coming weeks I will be interacting with patients in the clinic who have this disease, which is why I'm motivated to understand it.
Saturday, August 21, 2010
Thoughts on Newcomb
I'm currently reading Robert Nozick's Socratic Puzzles. It contains two essays about Newcomb's Problem. If you've not encountered Newcomb before, a brief description follows, and if you want more, the most discussion I've seen anywhere is at Less Wrong. I can sum up this post thusly: how can Newcomb be a hard problem?
Imagine a superintelligent being (a god, or an alien grad student as Nozick imagines, or more plausibly a UCSD medical student. It's up to you.) This superintelligent being says that it can predict your actions perfectly. It shows you two boxes, Box #1 and Box #2, into which it will place money according to rules that I will shortly give. As for you, you have two options: either open both boxes and take the money from both if there is any, or open only Box #2 and take the money from just Box #2. Now here are the rules, and the kicker. Since the being can predict your actions perfectly, it does the following trick. If it predicts that you're going to take just Box #2, it will place a thousand dollars in box #1, and a million dollars in Box #2. So in this instance, you will get a million dollars, but you'll miss out on the thousand in box #1. On the other hand, if it predicts that you will take both boxes, the being will place a thousand dollars in box #1, but place nothing in Box #2. In that case, you end up with just a thousand dollars. So in other words: the being always puts a thousand dollars in Box #1, whereas in Box #2 there's either a million, or nothing.
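The rules are easier to see laid out mechanically. Here is a tiny sketch using the box labels and dollar amounts as given above, with the perfection of the predictor expressed by the prediction always matching the choice (the function name and string labels are just my own conveniences):

```python
def payoff(choice, prediction):
    """Payout rules as stated: Box #1 always holds $1,000; Box #2 holds
    $1,000,000 only if the being predicted a one-box choice."""
    box1 = 1_000
    box2 = 1_000_000 if prediction == "one-box" else 0
    return box2 if choice == "one-box" else box1 + box2

# With a perfect predictor, prediction == choice:
for choice in ("one-box", "two-box"):
    print(choice, payoff(choice, prediction=choice))
# one-box 1000000
# two-box 1000
```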
So, now the superintelligent being has gone back to its home planet of La Jolla, and you are left wondering what to do. Assuming you want the most money possible, which option do you pick and why?
Figure 1. Decision table for Newcomb's Problem.
There's been a lot of discussion about Newcomb's Box, and not all of the responses adhere to the standard one-or-both answer. But I take the point of this particular logic-koan to be that we're to decide based on the givens of the problem which of the options we would take, so cute answers about trying to cheat, making side bets, etc. are wasting our time. If we're going to introduce those kinds of non-systematic "real world" options into this exercise, then we're going to need a lot more context than we currently have to make a decision. In fact after ten years living in Berkeley I'm surprised that I haven't yet met someone on a street corner claiming to be an alien with a million dollars for me, but if I did I would walk away and not play at all. (Come to think of it, I frequently get similar spontaneous offers of a million dollars or more in my spam folder which I ignore at my peril.)
My own answer is to take only Box #2, expecting to get a million dollars. Why? Because I want a million dollars, and the superintelligent alien is apparently smart enough to know that I'll gladly cooperate and not try to make myself unpredictable (more on this in a moment). Why try to be a smart-ass about it? (It's both to your disadvantage and not even possible anyway per the terms of the problem.) The being told you where it would put the million dollars (or not) based on your actions, and it's a given in the problem that the being is perfect at predicting your actions. This is what gives the both-boxers fits. They say one-boxers are idiots because the being might get my choice wrong: it might put nothing in Box #2 because it thought I would choose both. Then, if I open only Box #2, I get zero (because the alien thought I was going to take both and at least get a grand, but it was having an off day).
I will be beating the following dead horse a lot here: the problem states you have a reliable predictor. Why does Figure 1 above even have a right-side column? If you assume the being is fallible then you're not thinking about Newcomb's problem as stated any more: you're ascribing properties to the being that either conflict with what is given in the problem, or you're making stuff up. (Maybe the alien is fallible and copper and zinc are toxic to it! That way it won't predict in time that I'm going to kill it by throwing my spare pennies and brass keys at it, and then I can get the full amount from both boxes! Sucker. Ridiculous? No more than worrying about the given perfect predictor's not being perfect.)
Figure 2. Correction to Figure 1. This figure is the actual table for Newcomb's Problem; Figure 1 is somebody else's problem - not Newcomb's - one that features fallible aliens.
Complaints about the logic of the Box #2-only response (which is the majority's response, if the ones Nozick cites in one of his essays are representative) typically focus on two things. One, that we're assuming reverse causality, that we must think our choice of the boxes will make there be a million dollars in it; and two, that it suggests we don't have free will. I dismiss the second objection out of hand because the whole point of the problem is that the being is a reliable predictor of human behavior - for that one aspect of your behavior, in this problem, no, you don't have free will. Look: we already accepted a being with near-perfect predictive powers. Without that, then the problem changes and we have to guess how likely the being is to get it right. But as long as we have Mr./Ms. Perfect Predictor, then the nature or mechanism is unimportant. You can justify how it accomplishes this however you like (we don't have free will in this respect, or the alien can travel through time) but the point is, any cleverness or strategy or philosophizing you do has already been taken into account by the alien.
But things can be predicted in our world, including human behavior, and for some reason this doesn't seem to provoke outcries about undermining the concept of free will. Like it or not, other humans predict things about you all the time that you think you'd have some conscious control over - whether you'll quit smoking, your credit score, your mortality - and across the population, these predictions are quite robust. They don't always have the individual exactitude that our alien friend does of course. But at the very least you must concede that if our alien friend is even as smart as humans, after playing this game multiple times with us, its ability to predict which box you take would be greater than random chance, and you would get some information about which box you should pick based on this. Being completely honest, I think a lot of the resistance to one-boxing comes from the repugnance with which some people regard the idea that their behavior is extremely predictable. (Hey! News flash: it is.) Nozick even offers additional information in his example by saying that you've seen friends and colleagues play the same game, and the being predicted their choice reliably each time. Come on Plato, do you want a million dollars or not? Absolute no-brainer!
The first objection (regarding self-referential decision-making) is slightly more fertile ground for argument, and it's the one to which Nozick devotes the most time. The idea is that you're engaging in circular logic: I'm deciding to one-box, therefore the being knew I would one-box, therefore I should decide to one-box. (Again: what's the whole point of the exercise? That whatever decision you're about to make, the being knew you would do it, including all the mental gyrations you're going through to get to your answer.) Nozick gives the example of a person who doesn't know whether his father is Person A or Person B. Person A was a university scientist and died of a painful disease in mid-life which would certainly be passed on to all offspring; children of Person A would be expected to display an aptitude for technical subjects. Person B was an athlete, and likewise his children would be expected to display an athletic character. So the troubled young man is deciding on a career, noting that he has excelled equally in both baseball and engineering. "I certainly wouldn't want to have a painful genetic disease. Therefore, I'll choose a career in baseball. Since I've chosen a career in baseball, that means my true prowess is in athletics and therefore B was my father, and I won't get a genetic disease. Phew!"
Yes, that would be a ridiculous decision process. The difference between the two is this: the category the decider is in the whole time is defined in Newcomb as definitely affecting the decision, whereas in Nozick's parallel it does not (he could've gone either way). Whatever you decide in Newcomb, the alien knew you would go through your whole sequence of contortions, and you were in that category all the while. Whether such a deterministic category is meaningful is a different and probably more interesting question than Newcomb as-is. Here's another example: you're in a national park, following a marked trail. You get some way along the trail until you come to a frighteningly steep rock face with only a single cable hammered into it. You reason, "I am about to proceed up these cables. If I'm about to do it, it's only because my action was anticipated by the national park people who designed the map and trails and can predict my actions as a reasonably fit and sensible hiker, and furthermore they put these cables here; they're not in the business of encouraging people to do foolishly dangerous things. Therefore, because I am going to do it, it is safe and I should do it." (Any reader who's ever braved the cables on Half Dome in Yosemite by him or herself without knowing ahead of time what they were getting into has had this exact experience.) This replicates the decision process relating to the for-some-reason mysterious perfect predictor: "I am about to open Box #2 only. If I'm about to open it, the superintelligent being would have put a million dollars in it. Therefore I should open Box #2 only." In fact, all the time we go through such circular reasoning processes as they relate to other human beings who are predicting our actions either in general or specifically for us: I am going to do A, and A wouldn't be available unless other agents who can predict my actions reasonably well knew I would come along and do A, therefore I should do A. This still may be an epistemological mess (something I'm not going to debate here) but the fact is that we use this kind of reasoning constantly, living in a world shaped in the to-us most salient ways by other agents who can predict our actions.
Incidentally, I intentionally used the example of the national park because the fact that we use that kind of reasoning becomes obvious when you're trying to decide whether to climb something or undertake an otherwise risky proposition in a wilderness area, rather than on developed trails with markers; you become acutely aware that this circular justification heuristic based on other agents predicting your actions is suddenly unavailable, and then when it becomes available again (five miles further on, you run across an old trail) the arrangement seems quite obvious.
As a final note, as in other games (like Prisoner's Dilemma) the payouts can be critically important to how we choose. As the problem is traditionally stated (always a thousand in box #1, either zero or a million in box #2), it actually makes the decision quite easy for us, even if we're worried about the fallibility of our brilliant alien benefactor (which again, if we are, then what's the point of this whole exercise!?!?). Making a decision that throws away a thousand for a crack at a million is not for most humans in Western democracies a bad deal. (If someone could show me a business plan that had a 50% chance of turning a thousand bucks into a million within the few minutes that the Newcomb problem could presumably take place in, I'd be stupid not to do it!) On the other hand if I lived in the developing world and made $50 a month and had six kids to feed, I might think harder about this. (This is the St. Petersburg lottery problem, in which the expected utility of the same payout differs between agents based on their own context, and can be applied to other problems as well.) Similarly if it were five hundred thousand in Box #1 and a million in Box #2, things would be more interesting, for my own expected utility at least. Opening a box expecting a million and getting nothing doesn't hurt so much if you would have only got a thousand by playing it safe and opening both; it would be pretty bad if you'd expected a million and got nothing but could still have half a million if you'd played it safe. (For me. Bill Gates would probably shrug.)
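For anyone who insists on a fallible predictor anyway, the expected-value arithmetic behind the paragraph above is easy to write down. The break-even accuracy in the comment is my own illustrative calculation, not anything from Nozick:

```python
def expected_value(strategy, p, box1=1_000, box2=1_000_000):
    """Expected payout when the being predicts your strategy correctly
    with probability p."""
    if strategy == "one-box":
        return p * box2  # a wrong prediction leaves Box #2 empty
    # two-box: a correct prediction empties Box #2, a wrong one fills it
    return p * box1 + (1 - p) * (box1 + box2)

for p in (1.0, 0.9, 0.5):
    print(f"p={p}: one-box {expected_value('one-box', p):>11,.0f}   "
          f"two-box {expected_value('two-box', p):>11,.0f}")

# With the alternate payouts discussed above ($500,000 always in Box #1),
# one-boxing only wins when the predictor is accurate enough (p > 0.75):
print(expected_value("one-box", 0.8, box1=500_000),
      expected_value("two-box", 0.8, box1=500_000))
```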
Overall, the whole exercise of Newcomb's Box, as given, seems to me uninteresting and obvious. But enough smart people have gone on debating it for long enough that I must be some kind of philistine who's missing something about it. Nonetheless the arguments I've seen so far are not compelling; feel free to share more.
Wednesday, August 4, 2010
Hints That You're Living in a Simulation; Plus, What Is a Simulation?
See Bostrom's simulation argument for background. From a practical standpoint, you might be suspicious that you live in a simulation if you inhabit a world with the following characteristics:
Hint #1) Limited resolution. A simulation would be computation intensive. It would be useful to have tricks that increase the economy of operations, but in ways that do not compromise the consistency of the simulation to the players. One such trick would be to set an absolute upper limit to resolution (or a lower limit to the size of the elements that make up the "picture") that is below the sensory threshold of the players. These elements could variously be called pixels or quarks. Similarly, it would behoove the simulators to set a maximum time resolution, i.e. a maximum frames-per-second, also called Planck times. Furthermore, the simulation's computing power is spared by a statistical method of calculating relationships between entities in the simulation (i.e. quantum mechanics), even though it may look, at the scale of the game players or simulated entities, as if the universe maintained quantitative relationships in terms of integers calculated to arbitrary precision. (Related question: is it possible in principle, given the physics of our universe, for something the size of a bacterium or virus to "be conscious of" this gap between the behavior of the Newtonian and quantum realms, at a very basic sensory level? If not, isn't it interesting that our universe is such that there can be no consciousness operating on scales that would expose the twitching gears behind the scenes?)
Hint #2) There are limitations on which spaces within the game can be occupied by players or sims. In the old Atari 2600 Pole Position game, you couldn't just go driving off the track and through the crowd even if you didn't care about losing points; the game just wouldn't let you. Similarly, the total space in our apparent universe that we occupy, or directly interact with, or for that matter even get any significant amount of information from, is an infinitesimally small part of the whole. Unless you're in a submarine or in orbit, you don't go more than 200 meters below sea level or 13,000 m above it. (That's a volume of 2.1 x 10^18 m^3 in which, for all practical purposes, the entirety of human history has occurred; double that figure, and that's the volume in which all of evolutionary history has occurred.)
Hint #3) Beyond the "active game volume" as described above, dab a few pixels here and there in an otherwise almost entirely dark and empty volume. Make them so far away that sims can't possibly interact with them. Reveal additional detail as necessary whenever someone happens to look more closely at them. (And there's another trick: objects in this simulation are only loosely defined until one of the players interacts with them, "collapsing the wave function". Yeah, that's what the programmers will call it, that's the ticket.)
Hint #4) Even within that limited location, make the active game volume wrap around. That way the simulators get rid of edge-distortion problems, as in Conway's Life (see the wrap-around sketch after this list of hints). A sphere is the best way to do this; therefore, work out the physics rules of the simulation to favor spheres.
Hint #5) Make each state of the simulation dependent on previous states of the simulation, but simplify by dramatically limiting the number of inputs with any causal weight. The simulators can limit computation by having only mass, charge, space, and their change over time determine subsequent frames.
Hint #6) If for some reason it is important for the entities in the simulation to remain ignorant of their existence as part of a simulation, the simulators could make sure the entities are accustomed not only to these kinds of stark informational discontinuities but to profound differences in the quality of awareness, both within themselves and between each other. That is, the sims will accept not just that the vast majority of the universe (as seen in the sky at night) is interactively off-limits to them, but also that their own awareness thereof, and ability to connect the dots, will dramatically vary over time. That way, if there is any need to interfere and make adjustments (to stop someone from figuring out the game) it won't strike the sims as strange. (Forgetfulness, deja vu, mental illness, drugs, varying intelligence or ability to concentrate on math, death of player-characters before they can learn too much?)
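As a footnote to Hint #4, here is a minimal sketch of the wrap-around trick as it appears in Conway's Life: indexing the grid modulo its dimensions makes the world a torus, so there are no edges to produce boundary artifacts (a torus rather than a sphere, but the edge-elimination idea is the same; the grid and glider below are just the standard toy example):

```python
def step(grid):
    """One generation of Conway's Life on a toroidal (wrap-around) grid."""
    rows, cols = len(grid), len(grid[0])

    def live_neighbors(r, c):
        # The modulo makes the right edge adjacent to the left edge and
        # the bottom adjacent to the top: no boundary to distort the rules.
        return sum(grid[(r + dr) % rows][(c + dc) % cols]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0))

    return [[1 if live_neighbors(r, c) == 3
                  or (live_neighbors(r, c) == 2 and grid[r][c])
             else 0
             for c in range(cols)]
            for r in range(rows)]

# A glider on a small torus glides off one side and re-enters on the other
# instead of dying at an edge.
glider = [[0, 1, 0, 0, 0],
          [0, 0, 1, 0, 0],
          [1, 1, 1, 0, 0],
          [0, 0, 0, 0, 0],
          [0, 0, 0, 0, 0]]
for _ in range(8):
    glider = step(glider)
print(*glider, sep="\n")
```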
Hint #6 does raise a very important question: why would the simulators give a damn whether we knew we were in a simulation? So what? What would we do about it, sue them? If Pac-Man woke up and deduced that he was a video game character, but still experienced suffering and mortality the same way, why would it matter? By this same view, there's an easy answer to whether we should behave differently if we're actually in a simulation: no. Whether or not our universe is in reality just World of Warcraft from the sixth dimension, if we simulated beings can suffer (and I know I can), then the moral rules are exactly the same as before.
It's also worth asking for some humility, and asking why we humans always assume that we would be the purpose of any such simulation. We could be merely incidental consciousnesses that are necessary for harboring the populations of simulated bacteria that the simulators are really studying. Or the simulators could be cryonicists who preserve pets, and the most popular pets in their dimension look like what we call raccoons, and our universe is actually the raccoon heaven in which their beloved masked companions await a cure for the disease that forced the owners to put them on ice. In fact the raccoon-heaven simulation would contain a whole suite of ecosystems, all of them purely simulated (with the exception of raccoons) to keep up the appearance of a full biosphere. So the point of such a simulation would be to fool raccoons - or maybe even mice. (Again, why would they care about fooling everyone? If the simulators are reading this, just give me more juicy steaks and I won't make problems. It doesn't cost you anything!)
While the raccoon thought experiment is meant to be whimsical, a healthy respect for our own ignorance is always in order for these kinds of speculations. After all, assuming what we have guessed about the rest of the (for the sake of argument, simulated) universe is accurate, then there might be "aliens" (other non-human intelligences within the simulation) who may very well be much brighter than us. So even if the simulation is somehow arranged around the most intelligent entities within it (as we assume), those entities need not be human. Even if we're simulated, and we have a real brain and body in the "real" universe that's similar to our form in this one, this simulated universe might be designed for Martians (who are brighter than us) and be much less pleasant than our home dimension.
Finally, the very idea of a simulation is poorly defined. Mostly we think of something like the almost completely controlled full-world simulation of The Matrix, but let's explore boundary cases. If I wear rose-colored glasses, is that a simulation (or a red world)? What about LSD that causes me to see unidentified animals scurrying past in my peripheral vision? What about DMT that causes a complete dissociation of external stimuli from subjective experience? What if I have a chip implanted that displays blueprints of machinery in my visual field a la the Terminator - is that a simulation? What about a chip that makes me see a tiger following me around that isn't there? (Hypothetical given current limitations.) What if I hear voices telling me to do things that are produced by tissue inside my own skull, by no conscious intent of anyone? (Not at all hypothetical.)
One of the interesting points in the popular movie Inception is the way that external stimuli appear in dreams. This gives us a hint as to what we mean by simulation, and why we care. Most of us have had experiences where the outside world "intruded" into a dream, with the stimulus obvious after we awoke. I once dreamed that a dimensional portal slid open in front of me with an ominous metallic resonance, and I stepped through it, suddenly speeding over the red, rocky surface of Mars. Then I realized it was my father opening his metal closet door in the next room, and I was looking into that room at the red-orange carpeting. Before I was fully awake I had received the sound stimulus, but I had built a world out of it that most of us would not regard as real. (The experience of speeding over Mars was quite real, even if most humans would have formed a more accurate representation of that auditory stimulus.) So, a better way of asking "how do I know 'this' is reality, rather than another dream, or a simulation?" is to ask "how do I know I am perceiving 'true' stimuli, without mapping them unnecessarily onto internal stimuli, giving me as accurate and un-contorted a view of the world as possible?"
And indeed in certain ways, we certainly are dreaming, in the sense of injecting internal stimuli and filtering external stimuli through them. (Notably, it is possible to view schizophrenics as people who experience dreams even while awake and filter their perceptions accordingly.) First and most obviously, because our sense organs are limited in what they can detect, we're obtaining only a slice of possible data. Second, the world we knit together is the result of binding of sensory attributes into object/events, as well as pattern recognition. The limitations of our nervous systems, and the associations we are able to make, profoundly influence the representation we build of the world we're perceiving.
Third, and most significantly, a large part of our experience is non-representational: emotions, pleasure and pain do not exist outside of nervous systems, or rather the events to which those experiences correspond are almost entirely contained within nervous systems. Yes, to be precise the experience of light does not exist until the triggering of a cascade of electrochemical events by radiation incident on pigments in retinal cells; but light, which is what is represented in our experience, exists traveling across the universe. Pain and happiness do not. These are internal stimuli that add a non-representational layer to reality, even more certainly than my dream of the Mars overflight.
A good working definition of a simulation as it is commonly understood is when the majority of one's external stimuli are supplied deliberately by another intelligence to produce experiences that do not correspond to physical reality external to the nervous system (or computational equivalent). This avoids taking a position on AI; the sims may or may not be entities separate from the computation. I.e., you might be in a sensory deprivation tank like Neo, or you might be a computer program. The question of reality versus dreams or simulations is not one of discrete "levels" as we've come to think of it in popular culture. It is rather a question about how we know our experiences correspond in some consistent way with events separate from our nervous systems.
Labels:
delusions,
dreams,
hallucination,
schizophrenia,
simulation
Monday, August 2, 2010
Looking for Neurological Differences Between Nouns and Verbs
Just ran across this poster presented at the Organization for Human Brain Mapping's annual meeting in 2004. Sahin, Halgren, Ubert, Dale, Schomer, Wu and Pinker looked at the fMRI and EEG changes associated with a number of language tasks, and one of the questions they asked was whether activation characteristics were different for nouns and verbs. This study did not find that they were.
In my sketch of a neurolinguistic theory, verbs are first order modifiers and are distinct from adjectives in that they mediate properties and relationships between nouns. (In this sense, intransitive verbs are more similar to adjectives than to transitive verbs.) I also postulate that nouns and first order modifiers should have identifiably different neural correlates. I have not yet completed a literature search (obviously, if I'm citing posters from 2004). However, even if such different neural correlates obtain, I think the task design here was not necessarily adequate to capture them, because the participants were asked to morphologically modify the nouns and verbs in isolation, rather than in situ, in grammatical relation to each other.
Another interesting experiment would be to give the participants nonsense words and new affixing rules that do not reveal the part of speech of the nonsense word (e.g., "if the word has a t in it, add -pex to the end; otherwise, add -peg"), and look for any difference relative to the neural correlates of morphological tasks done with real words.
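For what it's worth, the affixing rule as stated is trivial to implement as a stimulus generator; here is a sketch (only the -pex/-peg rule comes from the paragraph above, and the nonsense-word list is hypothetical filler):

```python
def affix(word: str) -> str:
    # The part-of-speech-blind rule from above: -pex if the word contains a "t",
    # otherwise -peg.
    return word + ("pex" if "t" in word else "peg")

nonsense_words = ["blicket", "dax", "wug", "toma"]  # hypothetical stimuli
print([affix(w) for w in nonsense_words])
# ['blicketpex', 'daxpeg', 'wugpeg', 'tomapex']
```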
Strong AI, Weak AI, and Talmudic AI
Yale computer scientist David Gelernter argues here that the Judaic dialectic tradition will help us to reason our way through the moral morass of the first truly intelligent machine. I had first written this off as an article in the genre of "interesting collision of worldviews". But in the near future the cognitive science debates we're having today will seem luxuriously academic and unhurried, because for several reasons involving computing and neuroscience these will soon be more than intriguingly difficult questions. Even if we can all agree that suffering must be the basis of morality, we will need a way to know that, on that basis, it's not okay to disassemble someone in a coma, but it is okay to disassemble a machine that can argue for its own self-preservation.
Sunday, August 1, 2010
John Searle Must Be Spamming Me
Because with all the comment-spam, the waiting-to-be-moderated comments list looks like a Chinese chat-room. I have yet to see any Hindi. And anyway I'm sure the machine producing the spam doesn't understand the symbols.
Wednesday, July 28, 2010
Reflections on Wigner 1: Humility in Pattern Recognition - Mathematics as a Special Case
It's often asked how natural selection could have produced something like the mathematical ability of modern humans. Why can an ape, designed to mate, fight, hunt and run on a savanna, and perceive things that occur on a time scale of seconds to minutes and a size scale of a centimeter to a few hundred meters, even partly understand quarks and galaxies? Implicit in this question is an admiration for that ability, and for the power of mathematics, as well as an assumption held by physicists that should not be surprising.
The physicists' assumption is that the whole of nature, or at least the important parts of it, can be described by mathematics. In "The Unreasonable Effectiveness of Mathematics in the Natural Sciences", Wigner observes "Galileo's restriction of his observations to relatively heavy bodies was the most important step in this regard. Again, it is true that if there were no phenomena which are independent of all but a manageably small set of conditions, physics would be impossible." Another way of saying this is that those regular relationships in nature most easily recognizable by our nervous systems are those parts of nature which we are most likely to notice first; seasonal agriculture preceded gravitation for this reason. But there is a circular, self-selection issue here about the interesting correspondence between the empirical behavior of nature and the mathematical relationships humans are capable of understanding, which is that:
a) humans can understand math.
b) What we have most clearly and exactly understood of nature so far (physics) employs math.
c) Therefore, math uniquely and accurately describes nature.
Point b may be true only because our limited pattern recognition ability (even including infinitely recursive propositional thinking like math within that term) only allows us to recognize a certain limited group of relationships among all possible relationships in nature. In other words, of course we've discovered physics because those relationships are the ones we can most easily recognize! It's as if someone with a ruler goes around measuring things, and at the end of the day looks at the data she's collected and is amazed that it was exactly the kind of data you can collect with a ruler.
This discussion is far from an attack on the usefulness of mathematics; if you have a model that worked in the past, bet on it working in the future, and the fact that not everything in the universe has yet been shown to be predictable by mathematical relationships is certainly not cause to say "We've been at it in earnest for a few centuries and haven't shown how math predicts everything; time to quit." But it also certainly isn't time to say that math can show or has shown everything important, and that the rest is necessarily detail. The whole endeavor of truth-seeking, I think, has at least something to do with decreasing suffering and increasing happiness, both very real parts of nature, and as yet there are very few mathematical relationships concerning them. I look forward to the day that such relationships are shown, but we cannot assume that they exist, or that if they don't, suffering is unimportant.
One problem is that if indeed there are relationships in nature un-graspable by human cognition or mathematics (and note that I've made no argument as to whether those two things are the same), how could we know? It would just look like noise, and we couldn't tell if a knowable relationship was there and had yet to be pulled out, or there was nothing to know (or knowable). We might at least know whether such unknowable information, or "detail", could exist, if we had some proof within our propositional system that there are statements which are true but cannot be deduced from the system. And we have just such a proof.
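The proof alluded to is, presumably, Gödel's first incompleteness theorem, which in rough form says that for any consistent, effectively axiomatized formal system F strong enough to encode basic arithmetic,

\[
\exists\, G_F \ \text{such that}\ \mathbb{N} \models G_F \ \text{and}\ F \nvdash G_F ,
\]

i.e. there is an arithmetical sentence that is true but cannot be deduced within F.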
If we regard mathematics as a formalist does, that math is a trick of our neurology that corresponds usefully enough to nature, the question of why math is useful at all becomes even more important. But if we inject a little humility into our celebration of our own propositional cleverness, the matter seems less pressing.
We have no reason to believe that the total space of comprehensible relationships in nature is not far, far larger than what is encompassed by "mathematics", even in math's fullest extension. If this is the case, it is easier to see how our mathematical ability is a side effect of natural selection and the nervous system it created. By giving us a larger working memory than our fellow species along with some ability to assign symbols arbitrarily, that nervous system does allow us to use propositional thinking to see nature beyond the limitations of our senses - but just barely.
In this view, we can perceive just the barest "extra" layer of nature beyond what our immediate senses do, and mathematics seems far less surprising or miraculous. There is still reason enough to investigate math's unreasonable effectiveness but we shouldn't insist on being shocked that it could have been produced by the hit-and-miss kluges of evolution. But I've made another circular assumption here, which is:
a) evolution proceeds according to natural law
b) evolution will therefore favor replicators that have some appreciation of some of those natural laws, and modify their behavior accordingly
c) therefore, our ability to perceive the laws that have impacted our own survival, and maybe a few extra ones of the same form, should be expected
There are two mysteries then: first, that any type of regular pattern exists in nature, and second, that we are able to apprehend these patterns, particularly through mathematics. The second mystery probably disappears, seeming special only because of the likely incompleteness of math as a tool to describe nature, math's special case as a method of perception stemming from our own neurology, and the circular basis of our wonder at this as-yet early phase in our use of it. But the first question, of how or why even partly regular relationships appear to exist at all in nature, regardless of how we perceive them, remains untouched by this essay.
Tuesday, July 27, 2010
In the Land of the Blind
...the one-eyed man must remember the majority is always sane. The Country of the Blind by H.G. Wells explores how an entirely blind civilization might view the universe, and how sighted people might interact with them. The Churchlands' infra-people, while argumentative, seem quite diplomatic by comparison.
Sunday, July 25, 2010
Redwoods Aren't That Ancient or Special
Redwood Preserve in the Oakland Hills.
It's a common claim on informational signs in California parks that "redwoods were around at the time of the dinosaurs", or some such statement. While they're certainly amazing organisms, are these really tree-coelacanths?
Timetree consistently gives a divergence of the trees in the family Cupressaceae (redwoods, junipers, various cypresses) at 80 MYA. We know from well-preserved fossilized trees that there were trees growing in the late Cretaceous that looked like modern redwoods, in the same place that modern redwoods grow. (This particular petrified forest near Napa, California is by far the most amazing petrified forest I've ever seen. I had to touch the trees to convince myself they're stone and not wood.)
Because fossilized "redwoods" date back to just after the putative divergence time, it's likely that modern redwoods are merely the more-ancestral-appearing descendants (relative to junipers and cypresses) of these ancestral trees. Although the size and bark of the fossilized trees look similar to those of redwoods today, that certainly doesn't tell the whole story about them, and barring miraculously preserved 65-million-year-old Cretaceous tree DNA, that's about all we'll get. Consequently those old trees would more appropriately be called ancestral cypresses. Maybe the Ancient Giant Cypress?
Either way, redwoods are still pretty special, although not in the way that their chromosomes somehow resisted entropy for 65 million years.
Saturday, July 3, 2010
Do Small Biotechs Really Produce More AND BETTER Drug Candidates?
...and if so, why?
I occasionally post about biotechnology industry issues here insofar as they're relevant to the more central topics of this blog, and the productivity of the private sector biotech research enterprise directly bears on the tools we will have in the future to investigate cognition as well as to treat patients with cognitive and neurological disorders. If you're an academic scientist or philosopher and you find all this very dry, I would advise you to at least skim it so you can get an idea of what goes on in the evil world of industry. One thing I will say in defense of the private sector: workers are much, much better treated than they are in academia, not just in terms of money, but in working conditions and general treatment by superiors.
It's a cliche that Big Pharma can't find its own leads and has bought its pipeline from biotech for the past 10-15 years, which serves effectively as free-range R&D (until the round-up). Having spent most of my time before medical school consulting at smaller biotech companies, and several times finding myself with free time because one of those companies was bought for its portfolio and closed, I've spent my fair share of time wondering about this question. However, I actually can't recall seeing an analysis of biotech vs. big pharma output, or in particular of the quality of candidates judged by ROI or absolute annual sales. But let's assume that the disparity is real. Big pharmas certainly do - they sometimes try to duplicate the perceived success of small biotechs by putting together small entrepreneur-like groups, as Glaxo has. So what is it, exactly, that is more productive about small biotechs?
1) The most obvious: small biotechs have a much greater incentive to get their (usually lone) drugs into clinical trials - if they don't, they disappear. Big pharma management is not so incentivized, and timelines of individual drugs are sometimes adjusted to fit the portfolio. What's being maximized is completely different for a start-up biotech and a multi-drug big pharma. Overall sales is what's being maximized in big pharma, while speed to first-in-human and to market is being maximized in biotech (it equates to survival and therefore financial incentive.)
2) Small biotechs may produce more candidates, but on average lower quality candidates. Because of money and therefore time limitations, they're willing to push through the first lead, whereas the Glaxos of the world have the cash to keep tweaking the structure. You would think this would necessarily mean that the big pharmas then wouldn't be interested in these low-quality candidates, but a) not all decisions are rational, and hype and groupthink have effects in the real world ("We have to buy them to get the first XYZ inhibitor!"), and b) the first-in-humans candidate of a given class is often "lower quality" than what might have been the second-in-humans, which as mentioned the biotech won't wait around to discover; the perception and impact of the quality difference is highly context-dependent.
3) At biotech start-ups, scientists have the greatest influence on senior management or are senior management. Typically the management of the group closest to revenue generation is the one that has the most influence over the CEO. In big pharmas, this means sales. In a company that doesn't yet have any sales, this means clinical, or (if even earlier in the cycle) chemists and biologists. Once sales obtains this position, the amount of time the CEO spends thinking about sales increases and development plans tend to be de-emphasized (until everyone panics and it's too late.) I had long suspected that Genentech's success owed to its keeping scientists in key decision-making positions and after having consulted there I'm convinced that this is the case.
4) There are scale-dependent effects that would be present in any organization but are exacerbated by the uniquely long product development cycle in pharmaceuticals. Another exacerbating factor is the level of government oversight in the industry and the consequences of regulatory transgressions, leading to what are referred to in politics as Olsonian veto blocs: large groups of people who have a say in the process and have nothing to lose by saying "no" but everything to lose by saying "yes" at an inappropriate time. In the pharmaceutical world this is legal, regulatory, and QC - absolutely necessary to the industry, but their influence on timelines seems to be strongly scale-dependent. In my own experience in the industry, some of the most focused "how do we get this done" people I've worked with were in QC at the biotech level. Some of the most obstructionist were in QC at the big pharma level. In general a company with a large revenue stream should be expected to be much more risk-averse than a company with no profits. In the same vein, once a drug is approved, any new investigations could potentially yield a new indication that would provide some new revenues, or new safety findings that would diminish revenues across the board for the whole molecule, so post-marketing investigations are usually done with kid gloves.
5) Again scale-dependent are free-riders. At a smaller company, free-riding is obvious to all, more immediately detrimental to the future of the entire company, and more quickly punished. This is not the case at large companies with deeper pockets, many of the employees of which seem to be benefiting from a kind of corporate welfare state. This situation often arises at low surface-area-to-volume companies, where most employees interact only with other employees rather than with customers, vendors, or industry contacts outside the company. It would be worth seeing whether there's a sweet spot for company size in terms of a relationship between number of personnel vs. first-in-human clinical trials per person*year, including outlicensed compounds. Anecdotally, I have also noticed an odd scale-dependent increase in the proportion of people who have ever worked in government - not from related agencies like FDA, but from local governments or other areas.
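As a back-of-the-envelope illustration of the "sweet spot" metric just proposed (first-in-human starts per person-year versus headcount), here is a sketch; every number in it is an invented placeholder rather than data:

```python
# Hypothetical companies: (name, headcount, person-years observed,
# first-in-human starts including outlicensed compounds). Placeholder values only.
companies = [
    ("BiotechA",    40,    200,  2),
    ("BiotechB",   350,   1750,  6),
    ("PharmaC",  20000, 100000, 40),
]

for name, headcount, person_years, fih in companies:
    rate = fih / person_years  # FIH starts per person-year
    print(f"{name:>9}  headcount={headcount:>6}  FIH per person-year={rate:.4f}")
```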
[Story time - and if you know me personally, you know which company I'm talking about. I couldn't help but reflect that the strategy of employees of one big pharma subsidiary company where I worked was exactly that of a parasite in the gut of a large, warm mammal that can afford to miss a few calories here and there. The downside to the strategy is that they're super-specialized to thrive only in that environment; that is, their skillset degenerates into "how to stay employed at ABC Big Pharma". Consequently sometimes they have to transfer between mammals of the same species (i.e. subsidiaries) to survive. The day it was announced this particular subsidiary was being shut down by the parent, I saw groups of people openly weeping as if Princess Diana had died all over again.]
CONCLUSION
If you didn't get enough speculation already, read on. This part also has colorful analogies that I think are nonetheless useful.
- Dunbar's number applies to organized humans in all activities. There has been work done on Amazonian hunter-gatherers showing that there are village sizes beyond which there tend to be fission events. It's not that the village hits 150 and everyone draws straws to determine who moves, but there are dynamics that invariably take advantage of a trigger event to cause the split (the chief and his brother have a fight, there's a food shortage and some families move to find better hunting areas, etc.) This suggests that there are in general optimum sizes for human social organizations. This research may have a direct bearing on the productivity of small vs. large companies.
- The biotech industry in each part of the country where there is an active scene (the Bay Area, Seattle, San Diego, and Boston) is a notoriously small world. People often end up working together in different combinations at different companies, merely being re-sorted based on skillsets. In Edward Bellamy's 1888 utopian novel Looking Backward, he describes a system where workers have general industrial skills and are (centrally) resourced to new factories based on need. Of course Bellamy was arguing from a socialist standpoint but in biotech it seems that the free market has already generated exactly this arrangement.
- The pharmaceutical industry is not the only one that is dominated by deep-pocketed century-old behemoths that present barriers to entry and snap up competition as it first evolves from the primordial slime and takes its first stumbling steps in an established jungle. If biotechs are, as everyone expects, more productive than big pharma, this is bad for patients and bad for the economy, and yet there is no check on the growth of the largest companies. It's as if we're at the end of the Cretaceous (with animals so large they need second brains to coordinate their movements) or in the middle of the Second World War (where the incentive to build ever-bigger battleships yielded the monster Yamato). In both cases, conditions changed (climate and aircraft, respectively), and selection no longer favored the most massive, but it's hard to see how this trend will ever reverse itself, since it's hard to see how capital accumulation can ever be economically selected against. That is, I don't know what would be capitalism's equivalent of Chicxulub or P-51 Mustangs that would obviate the uneven accumulation of capital, so for now we're stuck with biotech serving as free-range R&D for big pharma.
This is cross-posted with a different introduction at my economics and social science blog, The Late Enlightenment.
Failed Alzheimers Trial Data Pooled and Made Available
Biopharmas that have had failed Alzheimers drugs are pooling their data in an archive. This is excellent news for several reasons. The first is that sharing data means a better chance at success in the future in this therapeutically very tricky disease, which has sent more than its share of clinical programs to their graves.
This is a solution I hope we see employed outside this one disease. Several times I've worked on molecules that were killed either for business reasons (didn't fit in the portfolio; an acquisition occurred and the acquirer only wanted other drugs from our pipeline; the market changed during development; etc.) or for scientific reasons - there were toxicities, or we found another molecule that was better. But it was always frustrating to think, particularly in the case of acquisitions, that the data was locked away on a server somewhere never to be seen again, despite being potentially of scientific use to future research programs. For Alzheimers at least this is no longer the case.
A secondary benefit is that the debate about pharmas "hiding" negative trial data will be quelled, again at least for Alzheimers. The failures will all be out there for everyone to learn from.
Labels:
alzheimers,
biotechnology,
business,
process of science
Friday, July 2, 2010
Why Modern Music Is Too Hard, But Visual Art Isn't
An interesting piece in the New York Times discusses why much of the last century's classical composition and dense prose may never find an audience, while modern visual art presents a more effortlessly coherent experience. The definition given for complexity in music is non-redundant events per unit time, but I'm not sure how they're measuring the pattern recognition challenge in visual art and prose. The money quote in the article has to do with why it's much easier for the un-initiated to enjoy a Pollock piece:
In a word, the constraints of sensory memory, determined by the sensory modality which is being used (hearing, vision, language, etc.)
The word "time" is central to [critic ]Mr. Lerdahl's argument, for it explains why an equally complicated painting like Pollock's "Autumn Rhythm" appeals to viewers who find the music of Mr. Boulez or the prose of Joyce hopelessly offputting. Unlike "Finnegans Wake," which consists of 628 closely packed pages that take weeks to read, the splattery tangles and swirls of "Autumn Rhythm" (which hangs in New York's Metropolitan Museum of Art) can be experienced in a single glance. Is that enough time to see everything Pollock put into "Autumn Rhythm"? No, but it's long enough for the painting to make a strong and meaningful impression on the viewer.
Rapid Evolution in Tibetans - and Who Else?
The Science paper isn't up yet but probably will be by the time you read this. Tibetans split from the Han less than 3,000 years ago, and already they've built up a whole repertoire of genetic low-oxygen adaptations.
While that may not be a surprise, the speed with which it occurred probably is. More and more, it's becoming clear that cultural choices we make (what we eat, where we live) over time affect our genes. So this leads us back to the elephant in the room of human evolution studies. There are genes which affect cognition. Why do we think these haven't been selected differentially as well?
Pattern Recognition in Numbers and Tiles
All numbers, rational or irrational, can be represented exactly by an infinite string of digits. This seems trivial for rational numbers and especially for "round" numbers, but it's easy to be confused by writing conventions and by the coincidence of the numeric base we use. In base 10 we omit trailing zeroes, so 1.5 is really 1.50 with zeroes repeating to infinity. There's no information in the infinite string of 0's, so we can omit them and still represent the number exactly (we compress it by writing the repeating part in shorthand). As for "non-round" rational numbers, the vast majority of their patterns will not be immediately obvious to humans, given our limited pattern-recognition abilities and the numeric base we use. In base 10, the ratio that produces 0.142857-repeating is not obviously 1/7, but in base 7 it is - because in base 7, it's 0.1.
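To make the base-dependence concrete, here's a minimal Python sketch (my own illustration; the function name and limits are made up for this post) that expands a fraction in an arbitrary base by long division and reports where the digits start to repeat:

```python
# Minimal sketch: expand p/q in a given base by long division and detect
# where the expansion begins to repeat (a repeated remainder => repetition).
def expand_fraction(p, q, base=10, max_digits=50):
    """Return (digits, repeat_start); repeat_start is None if the expansion terminates."""
    digits = []
    seen = {}                      # remainder -> position where it first appeared
    r = p % q
    while r != 0 and len(digits) < max_digits:
        if r in seen:              # same remainder again: digits repeat from here
            return digits, seen[r]
        seen[r] = len(digits)
        r *= base
        digits.append(r // q)
        r %= q
    return digits, None            # terminated: only (omitted) zeroes remain

print(expand_fraction(1, 7, base=10))  # ([1, 4, 2, 8, 5, 7], 0): 0.142857 repeating
print(expand_fraction(1, 7, base=7))   # ([1], None): in base 7, 1/7 is just 0.1
print(expand_fraction(3, 2, base=10))  # ([5], None): 1.5 terminates (trailing zeroes)
```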
Some numbers have been proven to be irrational. The earliest of these for which we still have a record was Euclid's reductio ad absurdum for the square root of 2. The same has been done for pi and e, among other constants; but as yet, there is no generalized method for proving a number's irrationality.
A proof that there can be no general method for establishing a number's irrationality - or at least a proof of whether the irrationality of some numbers can never be settled - would be worth having. Here is a very unorthodox practical argument: there are finitely many particles in the universe with which to compute these numbers, and they will exist for a finite time - not nearly long enough to run through the operations needed to check even all the (infinitely many) irrational numbers between 0 and 1, regardless of what those operations are.
What is interesting about this problem is that proving rationality ultimately means recognizing some periodicity (or, for irrationality, proving its absence), so proving ir/rationality is at bottom a pattern-recognition problem. Other problems that are essentially pattern-recognition problems include Kolmogorov complexity - that is, compressibility, which archiving applications approximate all the time but which is non-computable in the absolute sense - and tiling problems, where it is famously undecidable in principle whether a given tile set admits an infinite (plane-covering) solution. Are there other properties that these non-computable pattern-recognition problems have in common?
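As a crude illustration of the compressibility point - with the caveat that an off-the-shelf compressor only gives an upper bound on Kolmogorov complexity, and a failure to compress proves nothing - here's a small Python sketch (purely illustrative, not from any cited source) comparing a periodic digit string with an arbitrary one:

```python
import random
import zlib

# Compressed size is a crude, computable stand-in for the (uncomputable)
# Kolmogorov complexity: it can only ever give an upper bound.
def compressed_size(s: str) -> int:
    return len(zlib.compress(s.encode()))

periodic = "142857" * 100          # 600 digits of 1/7, endlessly repeating
random.seed(0)
arbitrary = "".join(random.choice("0123456789") for _ in range(600))

print(compressed_size(periodic))   # small: the compressor finds the period
print(compressed_size(arbitrary))  # much larger: no obvious pattern to exploit
```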
Thursday, July 1, 2010
Could the Flynn Effect Be the Result of Decreased Parasite Loads During Pregnancy?
Wednesday, June 30, 2010
Dreaming About Former Stress Experiences
It's a cliche that in the industrialized world, adults tend to dream about missing exams in high school or college. You know the ones - you have a test or a class you have to be in but you don't know where it is, the hallways don't make sense, your school or the buildings aren't the way you remember them, and it's a disaster. You wake up and you're maybe even mildly amused; thank goodness you'll never have to deal with that again. Why do we seem only to have these dreams when we're no longer in that situation?
After a 13-year hiatus from education I began medical school at age 35, leaving a career in biomedical research consulting. At one point prior to starting med school I joked to a colleague that I could no longer laugh off those missed-exam dreams because there are many, many more quite real exams in my future. But what I am amused by is that now, I no longer have missed-exam dreams, and instead I've started having anxiety dreams about situations associated with my pre-medical school career. Usually, in these new dreams, I'm arriving late at an airport only to find out that my flight has just left. That this change occurred less than a year after I left that career certainly suggests that there's some mechanism which pegs anxiety to stress experiences that we no longer have, or that are associated with some earlier phase of our identities.
What could be going on here? There are lots of people that change careers and/or go back to school, so has anyone else had this experience?
Friday, June 25, 2010
Hupa is Not Unique Among Languages in its Use of Verbs
In a sketch of a neurolinguistic theory I posted previously, I mentioned a possible empirical problem with the theory. Specifically, I posit that the basic neurological unit of language is the noun. If this were the necessary structure of language based on human anatomy, a non-noun-based language would falsify the theory. The Wikipedia article on Hupa previously stated:
Morphologically, it is remarkable for having an extremely small number— perhaps less than one hundred— of basic (monomorphemic) nouns, as nearly all nouns in the language are derived from verbs.
I have long been interested in Hupa as a result of this statement. The first question we might ask is whether such a dramatic innovation is restricted to Hupa or in fact appears in some form in other lower or Pacific Athabaskan languages. There is no report of such structures in Upper Umpqua or the Rogue River languages.
Once you read the grammar and vocabulary of Hupa published by P. E. Goddard in 1905, barely half a century after the Hupa's first contact with Europeans, the answer is clear. The claim about Hupa's use of verbs and paucity of nouns, which is not referenced, is totally inconsistent with Goddard's work. Goddard lists 130 nouns straight away in the first 20 pages. Not all are monomorphemic, but the non-compound morphemes for the obligately affixed nouns all seem to be unique. Furthermore, there is a discussion of verb nominalization on pages 21-23, and while the morphology is more elaborate than in Western Indo-European languages, it's nothing as dramatically novel as the statement suggests - certainly not showing that "nearly all nouns in the language are derived from verbs." In fact Athabaskan languages in general have elaborate verb morphology, but again, it doesn't replace nouns.
For a time I had thought that Hupa was a real-life example of the fanciful verb-based language of Tlön imagined by Borges in Tlön, Uqbar, Orbis Tertius. But it's not. It's worth pointing out that there do seem to be some innovations in Hupa relative to other Athabaskan languages, but this is to be expected for a language isolated for centuries with close trading relationships with an Algic language (Yurok) and a probable isolate (Karuk). Certainly the differences are not so profound.
I subsequently revised the Wikipedia article. In the meantime I continue to look for languages that falsify the neurolinguistic sketch. One interesting possibility is that we might find dramatic differences in language based on geography, with phylogenetic patterning. That is, is there something different about Andamanese + New Guinean + Australian languages (earliest out of Africa) vs. sub-Saharan African languages vs. all other languages? I'm not asking whether there could be differences based on language descent, which there necessarily will be; I'm asking the far more controversial question of whether there may be genetic innovations that result in different wiring and therefore differences in language structure. So far we have not found differences so profound as to warrant speculating about population-wide differences in the underlying hardware. If we ever do find differences in a language or group of languages as dramatic as the one that had been suggested here, or that Daniel Everett suggested for Piraha (which also appears to collapse under scrutiny), I submit that it might be profitable to look for anatomic and genetic differences. Such diversity of language and neurology would absolutely be a windfall for understanding the physical basis of language and cognition.
As an aside, I have been to the Hoopa Nation in Northern California several times. It's absolutely beautiful country and I highly recommend a visit. Unfortunately the Hupa language is not a living language, although it is being preserved by the efforts of Danny Ammon and others who make their resources available for the rest of us.
Trinity River south of Hoopa, California, by Trinityalpsphoto
Cranial Blood Flow Differences Between Populations?
A provisionally published paper in BMC Research Notes by Farhoudi et al uses transcranial Doppler to investigate flow rates in major cerebral arteries. They found that the averages for their sample from northwest Iran were higher than previously described norms.
The numbers are presented as-is with no p-value, so caution is in order before concluding these differences are necessarily real (though Farhoudi et al don't make any claims beyond presenting the numbers). What is immediately interesting is whether there are real population differences, and what that would mean for investigative methods like fMRI studies. There's also the question of whether there are clinical correlations (different stroke rates and outcomes in some populations?) or even behavioral/cognitive connections - though for that last question I'll hope that Razib Khan picks up the story.
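For what it's worth, the kind of sanity check one would want before reading much into the comparison is a test of the reported means against the published norms. Here's a hedged Python sketch using only summary statistics; all numbers are placeholders I made up for illustration, not values from Farhoudi et al:

```python
from math import sqrt
from scipy import stats

# One-sample, two-sided t-test of a reported group mean against a fixed norm,
# computed from summary statistics alone (mean, SD, n).
def one_sample_t(sample_mean, sample_sd, n, norm_mean):
    se = sample_sd / sqrt(n)
    t = (sample_mean - norm_mean) / se
    p = 2 * stats.t.sf(abs(t), df=n - 1)
    return t, p

# Placeholder example: hypothetical mean flow velocity 68 cm/s (SD 12, n = 100)
# against a hypothetical published norm of 62 cm/s.
t_stat, p_value = one_sample_t(68.0, 12.0, 100, 62.0)
print(f"t = {t_stat:.2f}, p = {p_value:.3g}")
```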
A Competition Mechanism for Attention
A nucleus in the midbrain of owls is found by Asadollahi et al to encode salience (i.e., relative strength) of visual and auditory stimuli. This is strong support for the existence of the previously theorized salience map and is a big step in the study of attention. Nature paper here.
Future questions: how is remote sense data (visual, auditory) integrated with contact data (touch, pain, and taste)? How are attention conflicts resolved - that is, what happens when an agent voluntarily wants to focus on a stimulus that is weaker and no more irregular than those around it? In the case of humans trying to focus on one visual stimulus known to be important despite sensory distraction, might we predict inhibitory projections from the temporal cortex to this nucleus or to its projections?
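To make the competition idea concrete, here's a toy Python sketch (entirely my own construction, not the model in the Nature paper) of a winner-take-all readout over combined salience maps, with an optional top-down bias standing in for voluntary attention:

```python
import numpy as np

# Toy winner-take-all: sum bottom-up salience maps across modalities, optionally
# add a top-down bias, and report the location of the strongest combined signal.
def winner_take_all(visual, auditory, top_down=None):
    combined = visual + auditory
    if top_down is not None:           # voluntary goal can rescue a weak stimulus
        combined = combined + top_down
    return np.unravel_index(np.argmax(combined), combined.shape)

rng = np.random.default_rng(0)
visual = rng.random((5, 5)) * 0.2      # weak background activity
auditory = rng.random((5, 5)) * 0.2
visual[1, 3] = 1.0                     # one strong bottom-up visual stimulus

print(winner_take_all(visual, auditory))        # (1, 3): bottom-up winner

bias = np.zeros((5, 5))
bias[4, 0] = 1.5                       # goal-driven boost at a weak location
print(winner_take_all(visual, auditory, bias))  # (4, 0): top-down override
```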
Sunday, June 13, 2010
DNA Testing and Inconsistent Laws
The following seems a little bass-ackwards: FDA doesn't regulate companies that sell bullsh*t medicine - that is, "herbal supplements" which these highly profitable companies strongly imply can treat disease (and are sometimes unsafe, just like real pharmaceuticals.) But this same agency is now warning DNA testing companies that they're out of compliance, even for products that aren't intended to diagnose disease. I have two 23andMe kits sitting in my office right now that I haven't sent in yet, and I'm going to be pretty annoyed if FDA's ruling affects them. I suspect I'm not the only one. Following FDA submission rules is enormously expensive and time-consuming, and most of these companies don't have the resources to do it. Unless we think of a sensible solution, good-bye DNA testing industry.
This personal genomics/personalized medicine revolution everyone is talking about won't get here if it's made illegal. The problem is not that there should be no regulation; it's that there should be consistent, non-stupid regulation, and instead the U.S. is going to damage, early in its development, a domestic industry that will be very important in the near future.
[Added later: Alex Tabarrok at Marginal Revolution says it much more elegantly and directly: "The idea that the FDA can regulate and control what individuals may learn about their own bodies is deeply offensive and, in my view, plainly unconstitutional."]
Monday, May 31, 2010
The Loneliness of the Seminal Computer Scientist
(Granted, Sillitoe's story has a better ring to it than the title of this post.) As is probably already well known among mathematicians and computer scientists, Alan Turing was quite an accomplished distance runner. When I first saw his 1947 Leicestershire marathon time of 2:46, I thought there might be some confusion; Europeans use the term "marathon" much more loosely than North Americans do. On this side of the Atlantic, a marathon is 26.2 miles, period. So I looked it up, and there's even a newspaper clipping image which specifies that yes, this is a real marathon time. 2:46 would be a damn good time in 2010, let alone in 1947 with bad running gear and probably bad nutrition and training. My own PR is 3:13, and I will likely never approach Turing's time regardless of how much EPO and 'roids I consume.
Sunday, May 23, 2010
Hemineglect and Delusions
Hemineglect is among the more bizarre neurological conditions (which is also to say, devastating to the patient). In brief, the patient ignores one or the other half of space, right or left, up to and including his or her own body. They won't register stimuli on the neglected side and will even ignore their own bodies on that side, sometimes claiming that their limbs aren't their own: if you hold their arm up in front of them and ask them whose arm it is, they'll often insist it belongs to a family member who's hiding nearby. (Yes, really.) A neurologist related to me that these patients will even sometimes request to be moved to a new bed in the hospital because there's someone else lying in bed with them (as in, the neglected half of their own body. Yes, really.) To these patients, a circle has only 180 degrees. The neglected half of space might as well be the fourth dimension.
Needless to say, natural experiments like these cases are a rich substrate for neurophilosophy. One aspect of neglect syndromes that I find interesting is that some of this behavior apparently amounts to a delusion, in the strict sense of a steadfast false belief. Neglect patients will sometimes complain that the hospital isn't feeding them enough, and of course when the nurse or physician comes into the room, they see a plate of food that's exactly half-eaten. So they turn the plate 180 degrees - and the patient grumbles "Good," and continues eating. See the disconnect here? If I were at dinner and said "Wow that green curry was good but I wish there were more," and my dining companion was able to magically produce more curry out of the fourth dimension before my eyes like some kind of a 3D chef visiting Flatland, of course I would be utterly amazed - but I haven't heard of such a reaction in the anecdotal reports I've heard from neurologists so far (I have not yet interacted with a neglect patient).
I'll do my best to put myself in the neglect patient's place again. Most of us believe that we have exactly 2 arms and 2 legs, and would react incredulously if a researcher told us that no, in fact we had four arms and four legs, but we were only using two of each. The researcher says to me "Fine, I can prove it." In an empty room with just her and me, she holds up an arm in front of me that looks just like my other two arms - skin color, size, etc. - and says it's my arm. I can't feel it or move it, I'm somehow unable to see what it connects to, and it seems to have appeared out of thin air (just like the green curry). But having four arms is ridiculous! Yet I trust this researcher; she seems incredibly earnest, she can reproduce this trick any time I ask with no preparation, and as I soon discover, so can anyone else I ask, including people with no possible connection to the researcher. All of them can hold up in front of me one or two arms that look like my own arms.
In such a position, I would be forced to conclude, as bizarre as it seems, that the evidence points to some kind of perceptual defect on my part. As strange as it is, and as much as I absolutely cannot understand where this arm is coming from or how it connects to me, I eventually have to accept the incredible truth (after many, many trials) that I and everybody else have four arms, and that there's something strange about my perception that keeps me from seeing them. And even if I remained incredulous, certainly I would at least want to know how they were doing this amazing trick. But severe neglect patients not only show no curiosity about things that could disturb their limited perception of space; they make up impossible stories about where their limb is coming from when it's presented to them. Clearly the deficit that produces their inability to fully represent space is neurological, rather than psychogenic. But isn't this part of the behavior arguably delusional?
Saturday, May 15, 2010
Ultrasound, Neuronal Excitability, and Hallucinations
A few years ago it was announced that a U.S. patent had been filed by Thomas Dawson on behalf of Sony for direct neural input of sensory information. There are a few related patents: the most recent by the same inventor is here, but the one that I believe attracted media attention is here.
The patents describe a method of stimulating multi-modality sensory experience through the use of sound energy. The basic idea is that neuronal excitability can be increased with ultrasound (review here), although to date it seems that all the work has been done either with CNS neurons in culture or with PNS neurons, rather than with CNS neurons inside a spine or skull. Obviously, if these patents represent functioning technology the implications are profound. For that reason I scrutinized 6,536,440 for evidence that the concept had in fact been reduced to practice, which (naively, I'm told) I had thought was still a requirement for the issuance of a patent. There's precious little in the document to suggest anyone is ready to build a transducer capable of producing any sensory experience in subjects, let alone a coherent one.
Of course ultrasound is already used in medical imaging all the time, in a range similar to that reported in the review (optimal transcranial transmission of ultrasound is at about 7 x 10^5 Hz, but the in vitro studies showed neuronal excitability changes at higher frequencies, around 2-7 x 10^6 Hz). Medical imaging ultrasound uses frequencies up to 10^7 Hz, but the intensity range is 1-10 W/cm^2, and the imaging device is rarely applied to the skull (useless for imaging, because bone blocks commercial ultrasound). That said, I'm unaware of anecdotal reports of patients hallucinating in any modality during ultrasound imaging, a very common outpatient procedure, and a quick search of PubMed reveals no such cases among the first 30 articles.
Sunday, May 9, 2010
A Possible Mechanistic Answer to Penrose et al
Paper here, "Informal Concepts in Machines", Kurt Ammon. Argues that algorithms exist which can perform computations beyond the limits of Turing machines. This is one answer to the problem of how, if humans are machines, we can definitively decide non-computability (one of Penrose's challenges).
Sunday, May 2, 2010
Junk DNA and Cancer
In Nature, Lamprecht et al provide evidence that the presence of long terminal repeats (LTRs, a form of junk DNA) increases the risk of humans developing certain cancers, especially lymphomas. [Added later: in 2007 the ENCODE study was supposed to show evidence that intergenic regions are pervasively transcribed, but a large segment of the field believes this to be an artifact.]
Initially the discovery that our genomes were to a first approximation entirely composed of non-coding garbage characters was a surprise. But on further evolutionary reflection, it made sense: DNA is about copying itself, and it sometimes codes for proteins in networks with other pieces of DNA as a replication strategy. Consequently we should expect that most DNA is passively or selfishly just along for the ride, except in highly fecundity-dependent species where the extra time and energy make a fitness difference (like bacteria). Before you make the effort to back-of-the-envelope calculate the daily cost of replicating the 97% extra noncoding portion of our genome, realize that I have doubtless expended more calories typing this blog post than I will expend from all the DNA replication and proofreading I do during the entire day.
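If you insist on the back-of-the-envelope anyway, here's a Python sketch; every constant below is a rough assumption of mine, good to an order of magnitude at best, and not a measured value from any cited source:

```python
# Rough, hedged back-of-envelope: energy spent per day replicating the ~97%
# noncoding portion of the genome. All constants are order-of-magnitude guesses.
AVOGADRO = 6.022e23
ATP_J_PER_MOL = 5.0e4            # ~50 kJ/mol usable energy per ATP hydrolysis
ATP_PER_NUCLEOTIDE = 2           # ~2 high-energy bonds per nucleotide incorporated
DIPLOID_BP = 6.4e9               # diploid human genome, base pairs
CELL_DIVISIONS_PER_DAY = 1e11    # very rough; published estimates vary severalfold
NONCODING_FRACTION = 0.97

nucleotides_per_division = 2 * DIPLOID_BP          # both new strands
atp_per_day = (CELL_DIVISIONS_PER_DAY * nucleotides_per_division
               * ATP_PER_NUCLEOTIDE * NONCODING_FRACTION)
kcal_per_day = atp_per_day * ATP_J_PER_MOL / AVOGADRO / 4184

print(f"~{kcal_per_day:.2f} kcal/day for the noncoding 97%")
# On these assumptions: a small fraction of one food Calorie per day -
# consistent with the claim that the energetic cost is negligible.
```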
But even if the energy expended is not a problem, this paper revives the debate, because it shows that there still is a fitness cost for junk DNA in multicellular organisms - but it's paid in terms of cancer risk rather than energy cost. It's a little harder to write this off as fitness noise. We're back to the old question of what the advantage is for multicellular organisms to carry so much junk DNA.
Friday, April 30, 2010
Predictable But Still Interesting Resolution to Bilingual Coma Case
I had previously written here about the case of the Croatian girl who supposedly woke from a coma miraculously speaking German. Of course, it turns out she already spoke German before. It's still neurologically interesting that ability in one language would be preserved and not the other; Steven Novella covers the case.
Thursday, April 29, 2010
Medication Resources, Psychiatric and Otherwise
I promise a return to a less med-school-centric blog tomorrow. For now, in keeping with the theme, here are some interesting resources for medications and patients:
Psycho-Babble - a discussion forum about psychiatric medications. While it seems on its face like it could be a bad idea, strong moderation keeps it valuable and on-track.
Patients Like Me - a forum where patients can share their experiences with medications. Probably not a place to get real data, but nonetheless it might be worthwhile for marketing and healthcare workers to check out occasionally to see what the dominant threads are. General, not just for psychiatric meds.
For Second Year Med Students
The boards are coming up (as if you need someone else to remind you of that) and if your pathology isn't quite up to snuff, here's a Magic: The Gathering-style fantasy role-playing card game to help you memorize all the nasty unicellular beasties. Here, of course, is B. fragilis:
I think these guys were just huge nerds who wanted to make money by creating their own card-based RPG while seeming all responsible and mediciney. And good on em! (Hat tip Boing-Boing).
Top Psychiatric Prescriptions for 2009
Can be found here. While not of philosophical or scientific interest, it's certainly of professional interest to nervous-system health professionals to ask to what degree the revenue growth is due to off-label use (at a guess, much more than half; also note the comparison to simultaneous population growth).
Before starting med school I did clinical research for pharmaceutical companies for 12 years. In no polity that I know of can a drug be marketed "in general" - that is, beyond some narrowly defined indication - though practitioners can certainly prescribe drugs that way, and the pattern of such use for psychiatric drugs is very different from that in other therapeutic areas. Is this inherent to the nature of psychiatric illnesses, or is it just an artifact of our regulatory enterprise? What risks are there to patients as a result of this pattern of use, and have we already seen an impact?
Sunday, April 25, 2010
Croatian Girl Wakes Up from Coma Speaking German
This is more dramatic than "foreign accent syndrome", although the story is short, the source suspect, and even without those two de-weighting considerations I would highly doubt it's real. I would put money down that we're going to find one of the following:
1) It's exaggerated, and at most she has some form of foreign accent syndrome.
2) The girl spoke German before she was in a coma.
Saturday, April 24, 2010
Free Will and Materialism or Epiphenomenalism are Not Mutually Exclusive
Some materialists are a little too eager to claim that free will is necessarily exploded if our consciousness is based in the material world and follows lawful processes. Of course this has implications for morality. However, such an eager deconstruction is simplistic, and furthermore cannot be specifically pinned on epiphenomenalism.
Regarding the possibility of free will, we recognize the following categories of relationships between any discrete entities in the universe, which must include conscious entities like humans:
1) Lawful relationships: if operation X happens to entity A, then B always results. It is not trivial to note that knowledge of lawful relationships must result from repeated pattern recognition. It is frequently pointed out that after building his system of mechanics, Newton loudly proclaimed what he saw as a pre-determined clockwork universe; despite this he somehow avoided moral nihilism.
2) No relationship (noise): a failure of pattern recognition. Perhaps there is no pattern, or perhaps there is but we're too stupid to see it. Whether there is a generalized way to tell the difference between these two possibilities without already knowing the pattern (if one exists) is another question. Note that nuclear decay is lawful only en masse; at the level of individual nuclei it is random, and quantum mechanics famously emphasizes the non-predictable (non-lawful) behavior of individual particles. Matiyasevich showed, more generally than Gödel, that all systems must contain "detail" - true statements that cannot be deduced from the axioms always exist. But this still doesn't get us out of the woods, because intentionality is not random.
3) Free Intentionality: Goal-seeking behavior that is not random but not forced by physical law to make the specific choices it does. This is most consistent with the folk model of behavior. Many materialists would claim this is a myth forced on us by our own neurology, and that there is no space for any options besides 1 or 2.
There are several answers to the problem of free intentionality that do not involve an appeal to faith (i.e., "It seems true so it must be"): yes, it does seem true, and while that isn't the end of the argument, it's certainly premature to assume that free intentionality must be an illusion just because our understanding in 2010 of the way the world works doesn't allow for that class of actions. (Note Newton's smugness.) We have nothing close to a complete understanding of the nervous system, so it's a little soon to be discarding introspection.
Grant for the moment that such a thing as free intentional behavior exists. I would argue (separately) that the property of free intentional behavior exists more in some entities than in others, and that this property is lawfully determined. It is less clear exactly what properties entities must share in order to exhibit free intentionality. Importantly, must free-intentional entities have experience; that is, can "dead" systems, or much less advanced systems like cnidarians, have free intentionality but not be subjectively aware? Because we're talking about humans in these discussions, the assumption is that we would have both free intentionality and experience, but if an argument has been made that they must co-occur, I haven't yet seen it.
Epiphenomenalism
The bogeyman of free will is whether consciousness is an epiphenomenon of the real process of cognition, like a shadow or aftereffect. In some cases it certainly is: EEG studies have shown that the brain commits to a movement up to 300 milliseconds before the person is consciously aware of deciding, and the experimenter watching the trace literally knows before the subject does that he or she is about to move. First, that this is sometimes the case does not mean it is always the case.
Second, and more critically, why is it so troubling if our consciousness is on a tape delay? If free intentionality applies to your nervous system but not to your conscious awareness (if the latter is epiphenomenal), then you subconsciously have free will and become aware of a free intentional decision only after it has been made. In one sense "you" are just along for the ride, but it's a ride on a computer with free will that has been telling "you" what to do in a deceptive way since the day you were born, and which cannot be separated from "you" without the end of your experience (because that would mean cutting your brain out). As long as the inseparable computer that's giving the orders has free intentionality, we have no right to be upset by the arrangement. The key to whether we can make free intentional acts is therefore not in the length of the tape delay, but in the behavior of the computer that's making decisions for "you". Our understanding of the nervous system does not yet allow us to rule out category #3 above, especially in light of strong introspective evidence.
IF FREE INTENTIONALITY DOES NOT EXIST, WHY DOES IT SEEM LIKE IT DOES?
1) If free intentionality is in fact an illusion - why? Why would it be useful for organisms to deceive themselves into thinking that they have free will? The same question can be asked of conscious experience itself. How could such a thing be evolutionarily selected for, since it seems to have no outward manifestation?
2) For those who believe in a pre-determined universe, this means we're living in a static four-dimensional block of space-time. Why, then, does "now" seem to be a special point in time? On this view, "now" is an illusion. How could organisms have developed that do not sense the full sweep of their existence? Why would this narrow focus on a gradually shifting, illusorily special moment have evolved?
3) Free intentionality seems to manifest more at greater time scales. It is clear that there are behaviors resulting from Category 1 lawful relationships in every organism, including ourselves. If there is free intentionality, it (a) probably occurs at least in the executive decision center and (b) manifests over longer periods of time. Case in point: tell a person they have no free will and a common response is to hop on one foot or do something else socially unexpected. Find that person exactly one month later and try to measure how different their life is because they hopped on one foot for five seconds; not very much, I'll wager. It's the person who is making considered decisions based on semantic reasoning whose life can change over time. If free intentionality exists anywhere, it is in this kind of cognition.