Cognition and Evolution

Consciousness and how it got to be that way

Monday, February 15, 2021

Some Medical Hypotheses

Many people in medical fields accumulate points of curiosity that are outside their specialization or that they otherwise never have time to follow up on. Here are several. As always, nothing here should be taken as medical advice.
  1. The decline in lung diffusing capacity with age as measured by DLCO (about 1% a year) results partly from the gradual accumulation of small subclinical pulmonary emboli. This predicts that people on blood thinners should show a slower decline. Also, part of the increased all-cause mortality seen in people who sit a lot relative to those who don't is the result of such emboli, in the lungs and elsewhere, suggesting people who sit less should also show a decreased rate of lung function loss. (This second part of the hypothesis is appealing because you can't undo sitting mortality by adding exercise, just as you can't undo that PE from your flight to JFK by hitting the gym after you get off the plane. Also, frequent sitting is evolutionarily recent, and our ancestors most likely had more exsanguinating traumas than we do, so even without sitting, the balance in our current environment is still tilted too far toward clotting.) Of note, capillary microthrombi do account for some of the dysfunction in COVID hypoxemia, though I am unaware to what extent this mechanism accounts for persistent hypoxemia in recovered COVID patients (Dhont et al. 2020).

  2. One of the functions of a four-chambered heart is to prevent clots from reaching end organs. In the brains of animals with less dependence on complex behavior and/or without small capillaries, this is less of a problem. True cold-blooded modern reptiles do not have small capillaries and have three-chambered hearts. Dinosaurs, birds, and mammals are all warm blooded (homeothermic) and have four-chambered hearts. A four-chambered heart provides an additional aperture that thrombi have to pass through, and for developmental reasons may make patent foramen ovale-type defects less likely.

  3. Mammalian red blood cells are enucleate. The prevailing theory is that this is because mammals have the tiniest capillaries of all vertebrate classes (even more so than birds, consistent with being warm blooded). The new hypothesis here is that mammals' on average more communal living makes them more susceptible to viruses. (Yes, birds live communally, but in terms of physical contact mammals on average spend more time directly touching.) The majority of cells in blood are RBCs. A virus adapted for infecting nucleate RBCs could do quite a bit of damage in animals that had them. But in mammals, these viruses would only enter an empty shell. Note that the next most common type of blood cell, the neutrophil, is programmed to self-destruct in 24 hours, and indeed ejects its nucleus as its main defense, frustrating any pathogen that needs time to do its work and that, on top of that, would have to be adapted to both intra- and extracellular conditions. I am not aware of any virus which infects and reproduces using the translational machinery of avian red blood cells, but the existence of such viruses would support this theory. In fact, RBCs in non-mammalian vertebrates do have active adaptive immunity functions (Nombela and Ortega-Villaizan 2018). That the maturation of red blood cells is directly dependent on ejection of the nucleus suggests this is an important pathway (Testa 2004), which may also be an adaptation for cancer resistance in long-lived species that thus far only mammals have taken advantage of. Of course many viruses interact with RBCs in mammals, but do not (cannot!) use them to reproduce.

Dhont, S., Derom, E., Van Braeckel, E. et al. The pathophysiology of ‘happy’ hypoxemia in COVID-19. Respir Res 21, 198 (2020).

Nombela I. and Ortega-Villaizan MdM. Nucleated red blood cells: Immune cell mediators of the antiviral response. PLoS Pathog. 2018 Apr; 14(4): e1006910. Published online 2018 Apr 26. doi: 10.1371/journal.ppat.1006910

Testa, U. Apoptotic mechanisms in the control of erythropoiesis. Leukemia 18, 1176–1199 (2004).

Sunday, June 7, 2020

Parasite Burdens and the Flynn Effect

The Flynn Effect is the real, not-test-based increase in IQ seen in first-world countries, about 3 IQ points a decade. In the last couple of decades the effect has leveled off in much of the developed world. There's a lot of discussion over why this should be.

One obvious candidate is parasite burden. As countries develop, public sanitation gets better, and public health improves. If the cause is public health (pathogens plus nutrition) plus offering standardized schooling to all, you would expect to see an eventual plateau in developed countries, with developing countries beginning to follow the same trend.

Any parasite which directly damages the brain is an obvious candidate as one causative agent. This is especially interesting when you read that up to one-third of people in, e.g., Peru have radiographic evidence of neurocysticercosis - tapeworm damage in the brain. This study shows that of people with evidence of the disease, 18.2% have an IQ < 70 in childhood. Starting to connect dots, we can make an estimate of the IQ improvement from eradication of neurocysticercosis alone.

  • Let's assume that (as the Peruvian study showed) 33% of people have neurocysticercosis.

  • Let's assume that of the people with neurocysticercosis, 18.2% (about 6% of the total population) have IQ < 70, and that the mean IQ of this subgroup is 69. This is obviously simplified and actually quite conservative, but the higher we make the mean for this subgroup, the more modest the effect of eradicating neurocysticercosis.

  • Let's also assume that the 70 and above IQ folks are evenly distributed between 70 and 100. Also simplifying, but I doubt neurocysticercosis makes many people smarter.

  • With those assumptions, a 3-point IQ increase in the general population could be brought about by a one-third decrease in NC cases.

  • Of course the 3-point-per-decade IQ trend goes on for more than 3 decades (by which time all three-thirds of would-be neurocysticercosis patients would have been prevented from getting it), so it can't just be that.
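The bullet-point arithmetic above can be made explicit in a few lines. This is only a sketch under the stated assumptions plus one of my own (that people unaffected by neurocysticercosis average IQ 100, which the study does not give); with these numbers the gain comes to roughly 2 points per one-third reduction in cases and about 6 points for full eradication, so the exact figure is quite sensitive to the assumed unaffected mean.

```python
# Back-of-envelope population IQ gain from reducing neurocysticercosis (NC).
# Assumptions from the bullets above, plus MEAN_HEALTHY = 100 (my addition).

P_NC = 0.33          # fraction of population with NC (Peruvian study)
P_LOW = 0.182        # of NC cases, fraction with IQ < 70
MEAN_LOW = 69        # assumed mean IQ of that subgroup
MEAN_REST = 85       # NC cases with IQ >= 70, uniform on [70, 100] -> mean 85
MEAN_HEALTHY = 100   # assumed mean IQ of people without NC (my assumption)

def population_mean(nc_fraction):
    """Population mean IQ when `nc_fraction` of people have NC."""
    mean_nc = P_LOW * MEAN_LOW + (1 - P_LOW) * MEAN_REST
    return nc_fraction * mean_nc + (1 - nc_fraction) * MEAN_HEALTHY

baseline = population_mean(P_NC)
one_third_fewer = population_mean(P_NC * 2 / 3)
eradicated = population_mean(0.0)

print(f"baseline mean IQ:        {baseline:.1f}")
print(f"1/3 fewer NC cases: gain {one_third_fewer - baseline:.1f}")
print(f"full eradication:   gain {eradicated - baseline:.1f}")
```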

To test the hypothesis, we could look at average IQ increases going forward in developing countries currently being de-wormed. You could also look at existing Flynn Effect curves for the developed world and compare them against the percentage of the population on clean public water supplies. Of course it's hardly controversial that lower parasite burden would correlate with better outcomes, and indeed de-worming projects have already shown an improvement in school attendance in participating areas. And while parasitic diseases cause massive human suffering, this is still interesting for purely pragmatic reasons: a country's economic well-being is linked to its average IQ.

Monday, April 27, 2020

Is a Virus Alive?

The pandemic has brought this question much more public attention than usual. It seems to be an interesting question - but on scrutiny, the problem evaporates.

Viruses are replicators. The question of whether they (or anything else) are alive is not a useful one.

It boils down to this: in most of these discussions, what we're really asking, when we ask if COVID-19 is "alive", is whether it can make us sick. If it can replicate, it can make us sick, and we know that viruses can replicate. Categorizing things as "alive" turns out to be an arbitrary exercise that neither organizes our knowledge nor adds information - it's much like asking if a submarine swims. What this exposes is that we have no definition of "alive" to begin with. "Alive" is just the label in English for an intuitive category in our animal brains having to do with animacy or agency, and at the molecular level or with non-intuitive strange entities like viruses (or slime-molds, or jellyfish) these intuitions fail us.

More explanation:
  1. Specific to COVID-19, most of the time people ask "Is it alive?" when we're talking about the virus "remaining alive" for certain lengths of time on surfaces. Of course what we really care about is whether it can make you sick. Poison oak oil (urushiol) can cause a Type IV allergic reaction after decades. Is it alive?

  2. "Make you sick" corresponds to "reproduction". Fire, stalagmites, and black holes (if you follow Lee Smolin's argument) all grow and/or reproduce. Why aren't those alive?

  3. You might have rolled your eyes when I mentioned fire, without ever having wondered whether it is a living thing. We instinctively recognize there's a distinction, but it's worth spending time on. There IS something qualitatively different between a virus and fire. Viruses are discrete entities that are alike - with elements ordered in a certain way - despite having been made from those elements when they were NOT so ordered. But fire does not carry historical information in this way. That is to say - if two coworkers get infected with COVID-19, despite being genetically different people with different cells, they will produce identical viruses. You can tell the viruses came from other coronaviruses. In contrast, if you light two identical sticks, one by sticking it in a campfire and the other with a cigarette lighter, it doesn't matter - they will burn the same way. You can't tell where that fire is "descended" from.

  4. Being more specific, viruses and people are both replicators. That is a useful category which encodes a qualitative difference. Fire is not a replicator. Viruses are. While fire might not be an interesting boundary case, transposons, prions and computer viruses might be. Viroids shouldn't really be considered a boundary case since they're really just naked viruses that take advantage of intercellular junctions in plants, but somehow people seem to think viroids are less alive than viruses.

  5. Interestingly, we don't have to be explicitly taught what things are alive and what things are not. Speculatively, there may be an innate recognizer keyed to some combination of animacy, agency, reproduction, and growth - which does usefully capture all the living things knowable in the macroscale world that our ancestors inhabited for millions of years.

  6. Part of the problem with asking this question is that there is no definition of "alive". Molecular biologists got bored with this question very quickly because it didn't advance any hypotheses. (Think of it as the "how many angels can dance on the head of a pin" question for this field; or, if you're given to Eastern thought, "since a stone is both hard and white, what is the logical relationship between these two qualities?" That is, a problem which only seems to be a problem because of other assumptions which turned out to be wrong or unnecessary, and even if the question were meaningful, answering it turned out to be uninformative and arbitrary.) The most common definition used - again, not necessary for any experiment - is independent metabolism. You might say that an organic virus is not alive because it has no independent metabolism - this is the usual cutoff. What about Chlamydia? This is an actual genus of bacteria which is obligately parasitic on host ATP. (A medically relevant genus no less, because it causes disease in humans.) Yes, it uses ATP. So do viruses once they're inside cells. So instead of "alive" why wouldn't we just say "independently ATP-generating"?

  7. And yet, it does seem very unsatisfying to learn that "alive" - an apparently important distinction between the types of objects I see when I look out my window - is actually arbitrary. That's because I don't see anything that the term doesn't seem to work for. I see on one hand rocks, clouds, the roof of my porch, and on the other, flowers, birds and grass. Naked-eye observers of the natural world are the Newtonians of biology. Looking out your window, you can't encounter anything where your instinct of "alive" and the better category of "replicator" don't line up...

  8. ...but as soon as you see viruses or viroids or prions, your assumptions are falsified and these traits no longer overlap. Another place where the same debate happened, interestingly also outside the realm of everyday experience, was the nineteenth-century attack on the idea of vitalism, where a supposed distinction between living and non-living materials was shown empirically not to exist. So to stretch the analogy, Woehler was molecular biology's Planck, and instead of the ultraviolet catastrophe, he demonstrated the urea epiphany.

Saturday, April 18, 2020

Number of COVID-19 Cases Correlates With Population Density

It seems fairly obvious that density should correlate with how fast a virus spreads. Comparing across countries or even states is difficult due to time of introduction as well as many other variables. This should be less of a problem (but certainly not zero problem) for a study of cases by county within a single state. Therefore I looked at the relationship between density and cases. Keep in mind this is an ongoing pandemic, so time of introduction will still make a difference, and for that matter there is no effort to control for other variables (e.g., differences in testing frequency by county). Both axes are log10, mostly to group points together. As you can see from the R^2, there's quite a close association.
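For readers who want to replicate this kind of fit, it is just an ordinary least-squares line on log-transformed values. A minimal sketch (the county figures below are invented placeholders, not the real data set):

```python
# Regress log10(cases) on log10(population density) and report R^2.
import numpy as np

# (density in people per square mile, confirmed cases) - illustrative only
counties = [(10, 3), (50, 20), (120, 55), (400, 210), (1500, 900), (7000, 5200)]

x = np.log10([d for d, _ in counties])
y = np.log10([c for _, c in counties])

slope, intercept = np.polyfit(x, y, 1)   # least-squares line in log-log space
y_hat = slope * x + intercept
ss_res = np.sum((y - y_hat) ** 2)        # residual sum of squares
ss_tot = np.sum((y - np.mean(y)) ** 2)   # total sum of squares
r_squared = 1 - ss_res / ss_tot

print(f"log-log slope: {slope:.2f}, R^2: {r_squared:.3f}")
```

With real per-county data you would substitute the actual (density, cases) pairs; everything else stays the same.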

The next and less obvious question is whether density also correlates with deaths: if viral load (the total number of virus particles an infected person was exposed to) correlates with illness severity, you would expect it to. There are even more variables that come into play with deaths - the age and health of the population, which definitely differ by county, as well as access to medical care and ICU beds. So I did the same thing for deaths; I'm not showing it since I found an R^2 of only 0.0845. I predict that a month from now that R^2 will be higher.

Friday, March 27, 2020

Why We Failed to React to the Pandemic

Given how the pandemic dominates all other news, an appropriate warning about it should have done the same. Yet in the West I was aware of no such warning, including from the rationalist community.

  1. We can't call it a failure to predict. I think few people in the rationalist community would have argued that a pandemic could NOT happen, before NYE 2019. It's a failure to react, even once we saw THIS non-hypothetical pandemic coming. Am I missing people who were sounding the alarm? If not, it seems rationalists are no better at spotting information important to survival than anyone else.

    (Side lesson: most cognitive skills are not as generalizable as we would like to think. Being good at thinking critically about software does not necessarily mean you're good at thinking critically about epidemiology. I suspect this is because understanding the relevant variables is mostly about memorized, instinctive system 1 associations and weightings that come from experience.)

  2. Very few people saw this coming - "this" meaning "a possibility of a pandemic we must plan for". Including rationalists. Including superforecasters. People in epidemiology knew it was possible, but it's hard to evaluate their claims of danger over those of any other profession that predicts low-probability, high-consequence events in a way connected to professional success (they're always thinking about pandemics, appropriately). Bill Gates and a few other smart people outside the epidemiology world tried to raise consciousness about the possibility prior to this particular event. Was there a way to pull their signal out above all the other constantly broadcast jeremiads at the time? And it wasn't like an earthquake, where one second it isn't there, the next it is, and so far as we know there's no way to spot it early; this pandemic had been there since December, and the large majority of us in the US, including rationalists, did not care much until early March. This was in no way a black swan. We knew it could happen, it had happened several times before, and we had weeks of growing warnings. It was a white swan, walking slowly toward us from the horizon, just like the last few white swans did.

    [Added later: Nassim Nicholas Taleb uses exactly the same language in this Bloomberg interview. And read more here about why it was so hard to raise the alarm.]
  3. Most depressingly, all this occurred after we (in the rationalist community, and in parts of the psychology, media, and data worlds) had for years pointed out the failures of predictors and tried explicitly to improve. It's depressing because it raises the question of what else we're missing, and indeed whether we ever can NOT miss things like this. Again: not even a failure to predict. A failure to react. Why? Denial? Fear of social censure by others not on board? Bounded rationality, i.e., most of us are too stupid to extract important signals and extrapolate?

  4. As a result, I am now particularly concerned about the likelihood of Carrington events and nuclear war - see here and here for near-misses (never mind their intentional use, which is also possible - indeed, that's why nuclear weapons were built and why they continue to be maintained). The 1983 event is particularly chilling and came down to the career-risking, intuitive, principled judgment of ONE MAN. Petrov should be a name repeated with reverence around the world, since arguably it's because of him that there still IS a world. Our overconfidence that it can't happen operates on a similar time scale to our forgetting of the Asian flu of 1957-58, which resulted in school closures and an economic downturn, though not on the scale we're seeing with COVID-19.

  5. We have never seen runaway AI. We have seen nuclear weapons used in war. I wouldn't argue against the possibility of a hard AI takeoff, but you canNOT argue against the possibility of nuclear weapons used in war, because it has already happened once. Interestingly, of all the stupid denialisms out there, I have never run into Hiroshima-Nagasaki denialists.

    Another white swan on the horizon that rationalists should spend more time stopping.

Added later:

Here's Scott Alexander's review of the book written by Toby Ord, which besides AI lists pandemics and nuclear war. Before you're too thrilled that he gives lower numbers for nuclear war than AI, note that those numbers are for TOTAL EXTINCTION OF THE HUMAN RACE, not the chance of the event happening at all. There's a lot of space between "extinct" and "a lot of the people you love will die and all of you will suffer horribly", just like there's space between "okay" and "needs intubation" with COVID-19, so don't think mild to moderate means okay. Yet another time we survived by dumb luck:
...even when people seem to care about distant risks, it can feel like a half-hearted effort. During a Berkeley meeting of the Manhattan Project, Edward Teller brought up the basic idea behind the hydrogen bomb. You would use a nuclear bomb to ignite a self-sustaining fusion reaction in some other substance, which would produce a bigger explosion than the nuke itself. The scientists got to work figuring out what substances could support such reactions, and found that they couldn’t rule out nitrogen-14. The air is 79% nitrogen-14. If a nuclear bomb produced nitrogen-14 fusion, it would ignite the atmosphere and turn the Earth into a miniature sun, killing everyone. They hurriedly convened a task force to work on the problem, and it reported back that neither nitrogen-14 nor a second candidate isotope, lithium-7, could support a self-sustaining fusion reaction.

They seem to have been moderately confident in these calculations. But there was enough uncertainty that, when the Trinity test produced a brighter fireball than expected, Manhattan Project administrator James Conant was “overcome with dread”, believing that atmospheric ignition had happened after all and the Earth had only seconds left. And later, the US detonated a bomb whose fuel was contaminated with lithium-7, the explosion was much bigger than expected, and some bystanders were killed. It turned out atomic bombs could initiate lithium-7 fusion after all! [my emphasis] As Ord puts it, “of the two major thermonuclear calculations made that summer at Berkeley, they got one right and one wrong”. This doesn’t really seem like the kind of crazy anecdote you could tell in a civilization that was taking existential risk seriously enough.

Added still later: depressing results showing that cognitive biases are extremely difficult to avoid even with explicit, high-stakes incentives.

Sunday, December 29, 2019

The Lack of Real-World Money Pumps: How Intransitive Preferences Do and Do Not Distort Behavior

In economics and other areas of applied rationality, the problem with having intransitive preferences - where you prefer x to y, and y to z, but z to x - is that you can supposedly be made into a money pump, by taking advantage of these irrational preferences. Indeed I'd seen claims online of people who were subjects of psychology or economics experiments actually knowing that this was happening to them, but being unable to stop themselves. This seemed both intuitively far-fetched, as well as something which would be constantly exploited, especially if it was a trait that existed differentially in the population.

I had encountered this idea some years ago in the rationalist canon, but I had never been able to think of examples where it really happened. Imagine my joy when I thought I had finally run across it when, after dinner one night, my (irrational) toddler demonstrated intransitive preferences while eating M&M's and trading with me. She prefers green over orange, orange over brown, and brown over green. Here it is, I thought. A money pump! But exactly how could I benefit from this?

As it turned out, there was no way to money-pump her, and there might not be a way to ever meaningfully money-pump anybody. But to illustrate the point I'll give you a hypothetical example of how it could work. This hypothetical example alters her real behavior considerably - to see what I changed and why it could not work in the real world, skip ahead to "Where Are the Real Money Pumps?" below.

Say we both start out with 12 green, 12 orange, and 12 brown, and assume the following preference rules (individual exchange rates):

I have transitive preferences: I like green twice as much as orange, which I like twice as much as brown. (1 green = 2 orange = 4 brown)

She has intransitive preferences: she likes green twice as much as orange, which she likes twice as much as brown, which she likes twice as much as green, and so on around the cycle. In fact, I would argue that she has a subtype of intransitive preferences, cyclic preferences: with merely intransitive preferences you can also just decline to put a value on something, so it can't be used in trade. (Though irrational by Von Neumann-Morgenstern and other standards, this is in fact how normal human beings behave; in contrast, when someone will put a price on anything, that person is called a psychopath.)
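In graph terms, "cyclic preferences" means the strict-preference relation contains a directed cycle, and that cycle is exactly what a money pump exploits. A small illustrative check (the function and data names are mine, just for this sketch):

```python
# Detect whether a set of "a is strictly preferred to b" pairs contains a
# cycle - the formal condition that makes someone money-pumpable in theory.

def find_cycle(prefers):
    """Return a preference cycle as a list of items, or None if transitive-safe."""
    graph = {}
    for a, b in prefers:
        graph.setdefault(a, []).append(b)

    def dfs(node, path, seen):
        if node in path:                       # revisited a node on this path: cycle
            return path[path.index(node):]
        if node in seen:                       # fully explored earlier, no cycle via it
            return None
        seen.add(node)
        for nxt in graph.get(node, []):
            hit = dfs(nxt, path + [node], seen)
            if hit:
                return hit
        return None

    seen = set()
    for start in graph:
        hit = dfs(start, [], seen)
        if hit:
            return hit
    return None

# Her M&M preferences: green > orange > brown > green - cyclic.
print(find_cycle([("green", "orange"), ("orange", "brown"), ("brown", "green")]))
# Mine: green > orange > brown - transitive, so no cycle.
print(find_cycle([("green", "orange"), ("orange", "brown")]))
```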

It helps to see the specifics of how you can take advantage of someone with cyclic preferences, but it's dry and boring, so I'll include it in a supplement at the end if you're interested and don't just want to take my word for it. Suffice it to say that after at most 7 trades, I would have all the M&M's except for two brown ones.


Where Are the Real Money Pumps?

I went looking for cases in the real world where people get money-pumped, and found:
a) none, and
b) I'm not the only one who has noticed this gap.

In fact money-pumping seems to be an entirely theoretical risk, predicted deductively from rationality models. So what's going on? Likely some combination of:
  1. People are irrational and their preferences are a mess, but they aren't neatly cyclical like this. In fact we should expect that most intransitive preferences are just that - they have no relation to any other preferences - therefore precluding a cyclic system of intransitive preferences. That is, sets of cyclic preferences are a subset of intransitive preferences, but because of the nature of intransitive preferences (basically they're a mess with little relation to each other or even consistency on short time scales), cyclic preferences are very rare or nonexistent in the real world.

  2. Humans have many heuristics that can cause reasoning errors, but that also must have been on net beneficial to our ancestors, and (sometimes) they only make sense in the context of their environment. For example, we like to punish wrongdoers, to the point where we will spend more resources to do it, even when the damage has already been done. This seems irrational, unless you realize that there are not many limited-round games in life, and if there was someone doing bad things in your tribe fifty thousand years ago, it made sense to invest those punishment-resources now to deter further wrongdoing later. A more germane example is the endowment effect, a heuristic that clearly has the effect of keeping people from getting taken advantage of in markets with asymmetric information. I expect that with cyclic preferences, there is a meta-preference that usually acts like a circuit-breaker for this sort of thing, for example just thinking about the goods in dollar figures. Of course, you could rightly object that if someone can do that, they don't really have cyclic preferences, since the dollar is like a color of M&M that doesn't admit of intransitive valuation - and you're right! Which is why people don't actually get money-pumped! In the same way, I expect that even my toddler would notice her overall number of M&M's shrinking and mine growing, and at some point say she's not playing this stupid game anymore.

  3. There actually are some cyclic preference-sets revealed by people engaging in repetitive behavior that makes their life consistently worse. This includes compulsive gambling, junk food, substance abuse, and staying in abusive relationships (or seeking out new ones.) It's interesting that these aren't about trading with currency, or at least don't centrally involve explicit trade or currency exchange, which are relatively new things in terms of evolutionary psychology, and something where along with learning them explicitly, we have developed learned defenses against being taken advantage of (as noted above.) Even in those cases where there appear to be cyclic preferences, these are better understood as predictably shifting preferences (due to things like future discounting), but this is a semantic distinction since they have the same outcome.


Clashes between the system of transitive preferences - speaking broadly, the economy - and intransitive preferences are somewhat rare, but they occur, even when they are not cyclic. You don't think of people repeating abusive relationships as part of the economy, but that's a great example of intransitive preferences. Gambling and substance use are part of the economy, but at the fringes. (It's interesting that societies usually have prohibitions or regulations about trading the same sorts of things - things which involve strongly affective parts of our cognition and behavior, like gambling, sex, drugs, and firearms.)

Some psychopaths recognize the intransitive nature of most humans' valuation of other human life (it's "priceless"), and take hostages whom they threaten to kill unless they are given money or some other objective is met. In those cases, many humans magically overcome our intransitivity and kill the hostage-takers, or allow the hostage-takers to kill their victims in order to avoid having to negotiate in the future.

A fortunately more common but unfortunately probably more intractable problem is healthcare: about 70% of the healthcare dollars spent on a given person in the U.S. are spent in the last 6 months of that person's life. We could spend more and more getting additional minutes at the end of life. Unless we're going to ruin ourselves in this way, there has to be a rule, regardless of whether we're in a public or private healthcare system. This is something we don't like thinking about.

Finally, whenever people try to create a system of transitive preferences outside of the mother-system (the economy), the gravity of the currency economy inevitably connects to it and sucks it in, whether we're talking about Ithaca-dollars, or the charade of "no currency" at Burning Man.


Supplement: The Trades

Starting out we have:

Me: 12 green, 12 orange, 12 brown
Her: 12 green, 12 orange, 12 brown

Round 1. I offer her 6 of my orange for all 12 of her brown. Now we have:

Me: 12 green, 6 orange, 24 brown
Her: 12 green, 18 orange, 0 brown

Round 2. I offer her 6 of my brown for all 12 of her green. Now we have:

Me: 24 green, 6 orange, 18 brown
Her: 0 green, 18 orange, 6 brown

Round 3. I offer her 3 of my orange for all of her brown. Now we have:

Me: 24 green, 3 orange, 24 brown
Her: 0 green, 21 orange, 0 brown

Round 4. I offer her 11 of my green for all 21 of her orange. (Give her a good exchange rate and round up. She's irrational, I'll get it back!) Now we have:

Me: 13 green, 24 orange, 24 brown
Her: 11 green, 0 orange, 0 brown

Round 5. I offer her 6 of my brown for all 11 of her green (rounding up again.) Now we have:

Me: 24 green, 24 orange, 18 brown
Her: 0 green, 0 orange, 6 brown

Round 6. I offer her 3 of my green for all 6 of her brown. Now we have:

Me: 21 green, 24 orange, 24 brown
Her: 3 green, 0 orange, 0 brown

Round 7. I offer her 2 of my brown for all 3 of her green (rounding up.) Now we have:

Me: 24 green, 24 orange, 22 brown
Her: 0 green, 0 orange, 2 brown

...you get the picture.

As I was calculating this out I actually found it quite hard to think about the irrational player's decisions. There is value exchange symmetry in rational trading, which is to say, it doesn't matter if I am getting higher-value units or lower-value units. Whereas I would be tempted to say to the irrational player in round 7 above, "Look, why are we going through all this? Why don't you just give me those last 2 brown because they're worth less than themselves!" (Also more than themselves. But I want them, so I wouldn't say that.)

Originally I couldn't see how to benefit from this, even hypothetically - I thought "I could keep the trade going indefinitely but not accumulate anything". The errors were a) I assumed I had no preference for one color over another, and b) I assumed her preference for the "better" one in any pair was arbitrarily small (e.g., she just barely liked orange better than brown), and you can't subdivide M&M's (so with rounding, either you could never arrive at having almost entirely fleeced the other party, or it would take too long). On the other hand, if you benefit from the trade itself and have no preferences, and you CAN usefully subdivide, you could still benefit. But I wasn't charging M&M commissions.

There is a total wealth (by my measurement, in units of "browns") of 168 in the game, with each side (by my measurement) starting with 84 brown-units. At the end of each round, with my trades, the value I hold is 84, 126, 126, 124, 162, 156, 166. You can't even meaningfully talk about the other player's total value - what unit would you use to measure it? If we held differing but rational valuations - as people do in the real world - say my daughter valued brown twice as much as orange, and orange twice as much as green - we'd quickly both wind up with her holding all the brown and me holding all the green. And that would be fine. In fact there have been cases in history where people became worried about the problems that could arise when preferences differed - Isaac Newton noted that because the English and Chinese relative valuations of gold and silver were different, in a simple system one country would eventually end up with all the silver and the other with all the gold, and trade would grind to a halt. But of course the system isn't that simple, and in any event, as long as preferences are rational - not circular - it doesn't matter.
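As a sanity check, the seven trades from the supplement can be replayed mechanically, valuing my holdings in brown-units (green = 4, orange = 2, brown = 1); doing so reproduces the value sequence quoted above.

```python
# Replay the supplement's trades and value my holdings in brown-units.

VALUE = {"green": 4, "orange": 2, "brown": 1}   # my (transitive) exchange rates

me  = {"green": 12, "orange": 12, "brown": 12}
her = {"green": 12, "orange": 12, "brown": 12}

# Each trade: (color I give, amount, color I receive, amount), rounds 1-7.
trades = [
    ("orange", 6, "brown", 12),
    ("brown", 6, "green", 12),
    ("orange", 3, "brown", 6),
    ("green", 11, "orange", 21),
    ("brown", 6, "green", 11),
    ("green", 3, "brown", 6),
    ("brown", 2, "green", 3),
]

def worth(holdings):
    """Value a set of holdings in brown-units."""
    return sum(VALUE[color] * n for color, n in holdings.items())

values = []
for give, n_give, take, n_take in trades:
    me[give] -= n_give;  her[give] += n_give
    me[take] += n_take;  her[take] -= n_take
    values.append(worth(me))

print(values)   # [84, 126, 126, 124, 162, 156, 166]
```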

Sunday, November 3, 2019

Editorial Clickbait about Psychiatry in the New England Journal of Medicine

I'm really disappointed in NEJM for publishing this piece by Gardner and Kleinman (G&K). Overall the article is neither helpful nor actionable. There is a cottage industry of psychiatrists writing hit pieces on our own specialty, and often they make coherent and actionable points that improve the specialty and ultimately patient outcomes. But of the many valid criticisms of psychiatry, this article bizarrely focuses on two problems that pervade most of medicine, and implies that they are uniquely problems for psychiatry. The thesis seems to be that psychiatry has been damaged by reliance on a biological approach, which has stunted its ability to treat patients and damaged our interactions with them by decreasing the quantity and quality of our interaction.

First: these two have apparently not been talking to many of their colleagues, inside and outside of psychiatry. How many physicians do you know, especially in cognitive specialties with lots of patient contact, who say "No, I don't have inappropriate time pressures on my patient interactions, and what pressures there are, are not worse than they were thirty years ago"? Most psychiatrists would love to spend more time with patients. When we don't, it's not because we've already gone through the checklist so we don't want to waste time forming rapport - it's due to the moral hazard introduced by the financial and administrative structure of modern medicine. The same argument obviously applies to many specialties outside of psychiatry.

The second part of their argument is that over-reliance on a biological approach is what has distorted psychiatry and prevented us from adequately treating patients. In case they haven't noticed, we do have psychiatric medications that work, which we didn't have a few decades ago. (They somehow fail to comment on the existence of SSRIs and second-generation antipsychotics, for example.) How is this a failure of the biological approach? It is trivially true that biological approaches to psychiatry have not yet been as fruitful as we would all like. The genomics revolution (for example) has also not benefited most branches of medicine to the degree hyped - yet. It's premature to argue that because biological approaches like genomics have not yet benefited psychiatry, they never will. They have essentially not benefited any branch of clinical medicine besides hem/onc - because it's easier to kill or poison certain cells (especially ones that are suspended structurelessly in fluid, rather than connected in a specific network, neural or otherwise) than it is to make them work better. We should expect that oncology would be the first to benefit. In this, G&K are rather like engineers in 1900 saying "we haven't achieved powered flight yet, therefore it can't be achieved ever." (Which, by the way, some engineers at the time did.)

It's unclear what G&K's solution is. Perhaps most tellingly, the voices I've seen online defending this article seem to have great difficulty understanding the definition of "syndrome", or the idea that treating empirically - before the biology of a specific case, or even of the disease itself, is clear - is quite often the best approach (and again, this is not specific to psychiatry). For instance, many psychotherapies have an impressive evidence base at this point, and if we don't understand psychopharmacology as well as we would like at the biological level, we certainly don't have anything like a fully articulated biological theory of psychotherapy either. If you have a treatment that can help - pharmacologically or otherwise - it's immoral to withhold it just because the science behind the treatment mechanism or pathophysiology is not settled. And as near as I can tell, that's exactly what G&K are proposing.