Cognition and Evolution: Consciousness and how it got to be that way. By Michael Caton.
<br/><br/>
<b>Some Medical Hypotheses</b> (2021-02-15)
<br/><br/>
Many people in medical fields accumulate points of curiosity that are outside their specialization or that they otherwise never have time to follow up on. Here are several. As always, nothing here should be taken as medical advice.
<ol>
<li>The decline in pulmonary diffusing capacity with age, as measured by DLCO (about 1% a year), results partly from the gradual accumulation of small subclinical pulmonary emboli. This predicts that people on blood thinners should show a slower decline. Also, part of the increased all-cause mortality seen in people who sit a lot, relative to those who don't, is the result of such emboli, in the lungs and elsewhere, suggesting people who sit less should also show a slower rate of lung function loss. (This second part of the hypothesis is appealing because you can't undo sitting mortality by adding exercise, just as you can't undo the PE from your flight to JFK by hitting the gym after you get off the plane. Also, frequent sitting is evolutionarily recent, and our ancestors most likely had more exsanguinating traumas than we do, so even without sitting, the balance in our current environment is still tilted too far toward clotting.) Of note, capillary microthrombi do account for some of the dysfunction in COVID hypoxemia, though I am unaware to what extent this mechanism accounts for persistent hypoxemia in recovered COVID patients (Dhont et al. 2020).<br/><br/>
<li>One of the functions of a four-chambered heart is to prevent clots from reaching end organs. In the brains of animals with less dependence on complex behavior and/or without small capillaries, this is less of a problem. True cold-blooded modern reptiles do not have small capillaries and have three-chambered hearts. Dinosaurs, birds, and mammals are all warm-blooded (homeothermic) and have four-chambered hearts. A four-chambered heart provides an additional aperture that thrombi have to pass through, and for developmental reasons may make patent foramen ovale-type defects less likely. <br/><br/>
<li>Mammalian red blood cells are enucleate. The prevailing theory is that mammals have the tiniest capillaries of all vertebrate classes (even more so than birds, consistent with being warm-blooded). The new hypothesis here is that mammals' on-average more communal living makes them more susceptible to viruses. (Yes, birds live communally, but in terms of physical contact, mammals on average spend more time directly touching.) The majority of cells in blood are RBCs. A virus adapted for infecting nucleate RBCs could do quite a bit of damage in animals that had them; in mammals, such viruses would only enter an empty shell. Note that the next most common type of blood cell, the neutrophil, is programmed to self-destruct within about 24 hours, and indeed ejects its DNA as a defense, frustrating any pathogen that needs time to do its work and that, on top of that, would have to be adapted to both intra- and extracellular conditions. I am not aware of any virus which infects and reproduces using the translational machinery of avian red blood cells, but the existence of such viruses would support this theory. In fact, RBCs in non-mammalian vertebrates do have active adaptive immunity functions (Nombela and Ortega-Villaizan 2018). That the maturation of red blood cells directly depends on ejection of the nucleus suggests this is an important pathway (Testa 2004), which may also be an adaptation for cancer resistance in long-lived species that thus far only mammals have taken advantage of. Of course many viruses interact with RBCs in mammals, but do not (cannot!) use them to reproduce.
</ol>
<br/><br/>
<b>REFERENCES</b><br/>
Dhont, S., Derom, E., Van Braeckel, E. et al. <a href="https://respiratory-research.biomedcentral.com/articles/10.1186/s12931-020-01462-5" target="_blank">The pathophysiology of ‘happy’ hypoxemia in COVID-19.</a> Respir Res 21, 198 (2020). https://doi.org/10.1186/s12931-020-01462-5
<br/><br/>
Nombela I. and Ortega-Villaizan MdM. <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5919432/" target="_blank">Nucleated red blood cells: Immune cell mediators of the antiviral response. </a> PLoS Pathog. 2018 Apr; 14(4): e1006910. Published online 2018 Apr 26. doi: 10.1371/journal.ppat.1006910
<br/><br/>
Testa, U. <a href="https://www.nature.com/articles/2403383" target="_blank">Apoptotic mechanisms in the control of erythropoiesis</a>. Leukemia 18, 1176–1199 (2004). https://doi.org/10.1038/sj.leu.2403383
<br/><br/>
<b>Parasite Burdens and the Flynn Effect</b> (2020-06-07)
<br/><br/>
The <a target=_blank href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4152423/">Flynn Effect</a> is the real, not-test-artifact increase in IQ seen in first-world countries: about 3 IQ points a decade. In the last couple of decades the effect has leveled off in much of the developed world, and there's a lot of discussion over why this should be.
<br/><br/>
One obvious candidate is <a target=_blank href="https://cognitionandevolution.blogspot.com/2013/12/other-non-g-explanations-for-flynn.html">parasite burden</a>. As countries develop, public sanitation gets better, and public health improves. If the driver is public health (pathogens plus nutrition), together with offering standardized schooling to all, you would expect an eventual plateau in developed countries, with developing countries then following the same trend.
<br/><br/>
Any parasite which directly damages the brain is an obvious candidate as one causative agent. This is especially interesting when you read that up to <i>one-third of people in, e.g., Peru have radiographic evidence of neurocysticercosis - tapeworm damage in the brain.</i> <a target=_blank href="https://pubmed.ncbi.nlm.nih.gov/29687740/">This study</a> shows that of people with evidence of the disease, 18.2% have IQ < 70 in childhood. Connecting the dots, we can make a rough estimate of the IQ improvement from eradication of neurocysticercosis alone.
<br/><br/>
<ul>
<li>Let's assume that (as the Peruvian study showed) 33% of people have neurocysticercosis.
<br/><br/>
<li>Let's assume that of the people with neurocysticercosis, the 18.2% (about 6% of the total population) who have IQ < 70 have a mean IQ of 69. This is obviously simplistic and actually quite conservative, but the higher we make the mean for this subgroup, the more modest the effect of eradicating neurocysticercosis.
<br/><br/>
<li>Let's also assume that the 70-and-above-IQ folks are evenly distributed between 70 and 100. Also a simplification, but I doubt neurocysticercosis makes many people smarter.
<br/><br/>
<li><b>With those assumptions, a 3-point IQ increase in the general population could be brought about by a one-third decrease in NC cases.</b>
<br/><br/>
<li>Of course, the 3-point-per-decade trend goes on for more than three decades - past the point at which all three-thirds of would-be neurocysticercosis patients would have been prevented from getting it - so it can't just be that.
</ul>
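The back-of-the-envelope arithmetic above can be checked mechanically. This is a minimal sketch under one hypothetical reading of the assumptions: the IQ < 70 subgroup sits at 69, the remaining cases are uniform on 70-100 (mean 85), non-cases average 100, and a cured case rejoins the population at the non-case mean. Under that reading, full eradication is worth about 6 points, so each one-third reduction is worth about 2 - in the same ballpark as the 3-point figure, with the exact number depending on what mean you assign the cured group.

```python
# Hypothetical model of population mean IQ with and without neurocysticercosis (NC).
NC_PREV = 0.33        # fraction of population with NC (per the Peruvian study)
LOW_IQ_FRAC = 0.182   # fraction of NC cases with IQ < 70
LOW_IQ_MEAN = 69      # assumed mean IQ of that subgroup
MID_IQ_MEAN = 85      # mean of uniform(70, 100) for the remaining NC cases
BASE_MEAN = 100       # assumed mean for people without NC

def population_mean(nc_prev):
    """Mean IQ if a fraction nc_prev of the population has NC."""
    nc_mean = LOW_IQ_FRAC * LOW_IQ_MEAN + (1 - LOW_IQ_FRAC) * MID_IQ_MEAN
    return nc_prev * nc_mean + (1 - nc_prev) * BASE_MEAN

gain_full = population_mean(0) - population_mean(NC_PREV)
gain_third = population_mean(NC_PREV * 2 / 3) - population_mean(NC_PREV)
print(round(gain_full, 2), round(gain_third, 2))
```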
<br/>
To test the hypothesis, we could look at average IQ increases going forward in <a target=_blank href="https://www.givewell.org/charities/deworm-world-initiative">developing countries currently getting de-wormed</a>. You could also take the existing Flynn Effect curves for the developed world and compare them against the percentage of the population on clean public water supplies. Of course it's hardly controversial that lower parasite burden would correlate with better outcomes, and indeed the de-worming projects have already shown an improvement in school attendance in participating areas. And while parasite diseases cause massive human suffering, this is still interesting for purely pragmatic reasons: <a target=_blank href="https://en.wikipedia.org/wiki/IQ_and_the_Wealth_of_Nations">a country's economic well-being is linked to its average IQ</a>.
<br/><br/>
<b>Is a Virus Alive?</b> (2020-04-27)
<br/><br/>
The pandemic has brought this question much more public attention than usual. It seems to be an interesting question - but on scrutiny, the problem evaporates.
<br/><br/>
Viruses are replicators. The question of whether they (or anything else) are alive is not a useful one.
<br/><br/>
It boils down to this: in most of these discussions, what we're really asking when we ask if COVID-19 is "alive" is whether it can make us sick. If it can replicate, it can make us sick, and we know that viruses can replicate. Categorizing things as "alive" turns out to be an arbitrary exercise that neither organizes our knowledge nor adds information - it's much like asking if a submarine swims. What this exposes is that we have no definition of "alive" to begin with. "Alive" is just the English label for an intuitive category in our animal brains having to do with animacy or agency, and at the molecular level, or with strange, non-intuitive entities like viruses (or slime molds, or jellyfish), these intuitions fail us.
<br/><br/>
More explanation:
<br/>
<ol>
<li>Specific to COVID-19, most of the time people ask "Is it alive?" when we're talking about the virus "remaining alive" for certain lengths of time on surfaces. Of course what we really care about is whether it can make you sick. Poison oak oil (urushiol) can cause a Type IV allergic reaction after decades. Is it alive?
<br/><br/>
<li>"Make you sick" corresponds to "reproduction". Fire, stalagmites, and black holes (if you follow Lee Smolin's argument) all grow and/or reproduce. Why aren't those alive?
<br/><br/>
<li>You might have rolled your eyes when I mentioned fire, never having wondered whether it is a living thing. We instinctively recognize there's a distinction, but it's worth spending time on. There IS something qualitatively different between a virus and fire. Viruses are discrete entities that are alike - with elements ordered in a certain way - despite having been made from those elements when they were NOT so ordered. Fire does not carry historical information in this way. That is to say, if two coworkers get infected with COVID-19, despite being genetically different people with different cells, they will produce essentially identical viruses, and you can tell those viruses came from other coronaviruses. In contrast, if you light two identical sticks, one from a campfire and the other from a cigarette lighter, it doesn't matter: they will burn the same way. You can't tell where that fire is "descended" from.
<br/><br/>
<li>To be more specific, viruses and people are both <i>replicators</i>. That is a useful category which encodes a qualitative difference. Fire is not a replicator; viruses are. And while fire might not be an interesting boundary case, transposons, prions, and computer viruses might be. Viroids shouldn't really be considered a boundary case, since they're really just naked viruses that take advantage of intercellular junctions in plants, but somehow people seem to think viroids are less alive than viruses.
<br/><br/>
<li>Interestingly, we don't have to be explicitly taught which things are alive and which are not. Speculatively, there may be a central pattern generator that detects some combination of animacy, agency, reproduction, and growth - which does usefully capture all the living things knowable in the macroscale world our ancestors inhabited for millions of years.
<br/><br/>
<li>Part of the problem with asking this question is that there is no definition of "alive". Molecular biologists got bored with it very quickly because it didn't advance any hypotheses. (Think of it as this field's "how many angels can dance on the head of a pin"; or, if you're given to Eastern thought, the School of Names' "hard and white" puzzle about whether a stone's hardness and whiteness are one thing or two. That is, a problem which only seems to be a problem because of other assumptions which turned out to be wrong or unnecessary; and even where the question was meaningful, answering it turned out to be uninformative and arbitrary.) The most common definition used - again, not necessary for any experiment - is independent metabolism. You might say that a virus is not alive because it has no independent metabolism; this is the usual cutoff. What about <i>Chlamydia</i>? This is an actual genus of bacteria which is obligately parasitic on host ATP. (A medically relevant genus, no less, because it causes disease in humans.) Yes, it uses host ATP; so do viruses once they're inside cells. So instead of "alive", why wouldn't we just say "independently ATP-generating"?
<br/><br/>
<li>And yet, it does seem very unsatisfying to learn that "alive" - an apparently important distinction between the types of objects I see when I look out my window - is actually arbitrary. That's because I don't see anything the term doesn't seem to work for. I see on one hand rocks, clouds, and the roof of my porch, and on the other, flowers, birds, and grass. Naked-eye observers of the natural world are the Newtonians of biology. Looking out your window, you can't encounter anything where your instinct of "alive" and the better category of "replicator" don't line up...
<br/><br/>
<li>...but as soon as you see viruses or viroids or prions, your assumptions are falsified and these traits no longer overlap. Another place where the same debate happened - interestingly, also outside the realm of everyday experience - was the nineteenth-century attack on the idea of vitalism, where a supposed distinction between living and non-living materials was shown empirically not to exist. So, to stretch the analogy, Wöhler was molecular biology's Planck: instead of the <i>ultraviolet catastrophe</i>, he demonstrated the <i>urea epiphany</i>.
</ol>
<br/><br/>
<b>Number of COVID-19 Cases Correlates With Population Density</b> (2020-04-18)
<br/><br/>
It seems fairly obvious that density should correlate with how fast a virus spreads. Comparing across countries or even states is difficult due to time of introduction as well as many other variables. This should be less of a problem (but certainly not zero problem) for a study of cases by county within a single state. Therefore I looked at the relationship between density and cases. Keep in mind this is an ongoing pandemic, so time of introduction will still make a difference, and for that matter there is no effort to control for other variables (e.g., differences in testing frequency by county). Both axes are log-10, mostly to group points together. As you can see from the R^2, there's quite a close association.
<br/><br/>
<center>
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgQ4hMScILkp2-Pzi_sL2cUYxwG2bFSMRboeshm12e0p3CEfydV5RVM6yxlgR1HGFh_4LNtpbjG21ktE8RQ5QMuU2TgIpOroPxem8DelTDHolUe6tZMFPofzCCnMI-a4pYfIVnP9NFZK35T/s1600/COVID19PA.png" width="99%" height="99%" />
</center>
<br/>
The next, less obvious question: if viral load (the total number of virions an infected person was exposed to) correlates with illness severity, you would expect density to also correlate with deaths. Even more variables come into play with deaths - the age and health of the population, which definitely differ, as well as access to medical care and ICU beds. So I did the same thing for deaths; I'm not showing it, since I found an R^2 of only 0.0845. I predict that a month from now that R^2 will be higher.
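For readers who want to reproduce this kind of fit, here is a minimal sketch of the log-log regression described above. The county data here are made up for illustration; the R^2 comes from an ordinary least-squares line on the log-10 values.

```python
# Ordinary least-squares R^2 for log10(cases) vs log10(density), by county.
import math

def r_squared(xs, ys):
    """Coefficient of determination for a least-squares line of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# Hypothetical (people per square mile, confirmed cases) pairs, one per county.
counties = [(50, 12), (120, 40), (300, 95), (900, 400), (2800, 1500), (11000, 7000)]
log_density = [math.log10(d) for d, c in counties]
log_cases = [math.log10(c) for d, c in counties]
print(round(r_squared(log_density, log_cases), 3))
```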
<br/><br/>Michael Catonhttp://www.blogger.com/profile/01017910055699348111noreply@blogger.com0tag:blogger.com,1999:blog-4724592643224262209.post-78381136656381251972020-03-27T09:19:00.000-07:002024-02-08T23:32:12.652-08:00Why We Failed to React to the PandemicGiven how the pandemic dominates all other news, an appropriate warning about it should have done the same. Yet in the West there was no such thing that I was aware of, including in the rationalist community.<br><br/>
<ol>
<li>We can't call it a failure to predict. I think few people in the rationalist community would have argued, before NYE 2019, that a pandemic could NOT happen. It's a <i>failure to react</i>, even once we saw THIS non-hypothetical pandemic coming. Am I missing people who were sounding the alarm? If not, it seems rationalists are no better at spotting information important to survival than anyone else.
<br><br/>
(Side lesson: most cognitive skills are not as generalizable as we would like to think. Being good at thinking critically about software does not necessarily mean you're good at thinking critically about epidemiology. I suspect this is because understanding the relevant variables is mostly about memorized, instinctive System 1 associations and weightings that come from experience.)
<br><br/>
<li>Very few people saw this coming - "this" meaning "a possibility of a pandemic we must plan for". Including rationalists. Including superforecasters. People in epidemiology knew it was possible, but since they're always thinking about pandemics (appropriately), it's hard to evaluate their claims of danger over those of any other profession that predicts low-probability, high-consequence events in a way connected to professional success. Bill Gates and a few other smart people outside the epidemiology world tried to raise consciousness about the possibility prior to this particular event. Was there a way to pull their signal out from all the other constantly-broadcast jeremiads at the time? And it wasn't like an earthquake, where one second it wasn't there and the next it was, with no known way to spot it early: it had been there since December, and the large majority of us in the US, including rationalists, did not care much until early March. <b>This was in no way a black swan.</b> We knew it could happen, it had happened several times before, and we had weeks of growing warnings. It was a white swan, walking slowly toward us from the horizon, just like the last few white swans did.<br><br/>
[Added later: Nassim Nicholas Taleb uses <a target=_blank href="https://www.youtube.com/watch?v=lBjVTm7F1lQ">exactly the same language</a> in this Bloomberg interview. And read more here about <a href="https://nymag.com/intelligencer/2020/03/why-was-it-so-hard-to-raise-the-alarm-on-coronavirus.html">why it was so hard to raise the alarm</a>.]
<li>Most depressingly, all this occurred after we (in the rationalist community, and in parts of the psychology, media, and data worlds) had for years pointed out the failures of predictors and tried explicitly to improve. It's depressing because it raises the question of what else we're missing, and indeed whether we can ever NOT miss things like this. Again: not even a failure to predict. <i>A failure to react</i>. Why? Denial? Fear of social censure by others not on board? Bounded rationality, i.e., most of us are too stupid to extract important signals and extrapolate?<br><br/>
<li>As a result, I am now particularly concerned about the likelihood of Carrington events and nuclear war - see <a target=_blank href="https://blog.ucsusa.org/david-wright/the-moon-and-nuclear-war-904">here</a> and <a target=_blank href="https://en.wikipedia.org/wiki/1983_Soviet_nuclear_false_alarm_incident">here</a> for near-misses (never mind their intentional use, which is also possible - indeed, that's why they were built and <b>why they continue to be maintained</b>). The 1983 event is particularly chilling and came down to the career-risking, intuitive, principled judgment of <b>ONE MAN</b>. Petrov should be a name repeated with reverence around the world, since arguably it's because of him that there still IS a world. Our overconfidence that it can't happen tracks the fading memory of the Asian flu of 1957-58, which resulted in school closures and an economic downturn, though not on the scale we're seeing with COVID-19.<br><br/>
<li>We have never seen runaway AI. We <i>have</i> seen nuclear weapons used in war. I wouldn't argue against the possibility of a hard AI takeoff, but you can<i>NOT</i> argue against the possibility of nuclear weapons used in war, because <i>it has already happened once</i>. Interestingly, of all the stupid denialisms out there, I have never run into Hiroshima-Nagasaki denialists.<br><br/> Another white swan on the horizon that rationalists should spend more time stopping.
</ol>
<br/><br/><br/>
Added later:
<br/><br/>
Here's <a target=_blank href="https://slatestarcodex.com/2020/04/01/book-review-the-precipice/">Scott Alexander's review of Toby Ord's book</a>, which besides AI lists pandemics and nuclear war. Before you're too thrilled that he gives lower numbers for nuclear war than for AI, note that those numbers are for TOTAL EXTINCTION OF THE HUMAN RACE, not the chance of a war happening at all. There's a lot of space between "extinct" and "a lot of the people you love will die and all of you will suffer horribly", just like there's space between "okay" and "needs intubation" with COVID-19, so don't think mild-to-moderate means okay. Yet another time we survived by dumb luck:
<blockquote>...even when people seem to care about distant risks, it can feel like a half-hearted effort. During a Berkeley meeting of the Manhattan Project, Edward Teller brought up the basic idea behind the hydrogen bomb. You would use a nuclear bomb to ignite a self-sustaining fusion reaction in some other substance, which would produce a bigger explosion than the nuke itself. The scientists got to work figuring out what substances could support such reactions, and found that they couldn’t rule out nitrogen-14. The air is 79% nitrogen-14. If a nuclear bomb produced nitrogen-14 fusion, it would ignite the atmosphere and turn the Earth into a miniature sun, killing everyone. They hurriedly convened a task force to work on the problem, and it reported back that neither nitrogen-14 nor a second candidate isotope, lithium-7, could support a self-sustaining fusion reaction.<br/><br/>
They seem to have been moderately confident in these calculations. But there was enough uncertainty that, when the Trinity test produced a brighter fireball than expected, Manhattan Project administrator James Conant was “overcome with dread”, believing that atmospheric ignition had happened after all and the Earth had only seconds left. And later, the US detonated a bomb whose fuel was contaminated with lithium-7, the explosion was much bigger than expected, and some bystanders were killed. It turned out atomic bombs <i>could</i> initiate lithium-7 fusion after all! [my emphasis] As Ord puts it, “of the two major thermonuclear calculations made that summer at Berkeley, they got one right and one wrong”. This doesn’t really seem like the kind of crazy anecdote you could tell in a civilization that was taking existential risk seriously enough.</blockquote>
<br/>
Added still later: depressing results showing that cognitive biases are extremely difficult to avoid <a target=_blank href="https://marginalrevolution.com/marginalrevolution/2020/04/do-better-incentives-limit-cognitive-biases.html">even with explicit high-stakes incentives</a>.
<br/><br/>Michael Catonhttp://www.blogger.com/profile/01017910055699348111noreply@blogger.com0tag:blogger.com,1999:blog-4724592643224262209.post-56425625894183989312019-12-29T16:09:00.001-08:002024-02-08T23:37:04.584-08:00The Lack of Real-World Money Pumps: How Intransitive Preferences Do and Do Not Distort BehaviorIn economics and other areas of applied rationality, the problem with having <a target=_blank href="https://www.oxfordreference.com/view/10.1093/oi/authority.20110803100008999">intransitive preferences</a> - where you prefer x to y, and y to z, but z to x - is that you can supposedly be made into a money pump, by taking advantage of these irrational preferences. Indeed I'd seen claims online of people who were subjects of psychology or economics experiments actually knowing that this was happening to them, but being unable to stop themselves. This seemed both intuitively far-fetched, as well as something which would be constantly exploited, especially if it was a trait that existed differentially in the population.
<br/><br/>
I had encountered this idea some years ago in the rationalist canon, but I had never been able to think of examples where it really happened. Imagine my joy when I thought I had finally run across one: after dinner one night, my (irrational) toddler demonstrated intransitive preferences while eating M&M's and trading with me. She prefers green over orange, orange over brown, and brown over green. Here it is, I thought. A money pump! But exactly how could I benefit from this?
<br/><br/>
As it turned out, there was no way to money-pump her, and there might not be a way to ever meaningfully money-pump anybody. But to illustrate the point, I'll give you a hypothetical example of how it could work. This hypothetical example alters her real behavior considerably - to see what I changed and why it could not work in the real world, skip ahead to "Then Where Are These Money Pumps?" in bold below.
<br/><br/>
Say we both start out with 12 green, 12 orange, and 12 brown, and assume the following preference rules (individual exchange rates):
<br/><br/>
I have transitive preferences: I like green twice as much as orange, which I like twice as much as brown. (1 green = 2 orange = 4 brown)
<br/><br/>
She has <i>in</i>transitive preferences: she likes green twice as much as orange, which she likes twice as much as brown, which she likes twice as much as green, and so on. In fact, I would argue that she has a subtype of intransitive preferences, cyclic preferences: with merely intransitive preferences, you can also simply decline to put a value on something, so that it can't be used in trade at all. (Though irrational by <a target=_blank href="https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem">Von Neumann-Morgenstern</a> and other standards, this is in fact how normal human beings behave; in contrast, when someone will put a price on <i>anything</i>, that person is called a <a target=_blank href="https://cognitionandevolution.blogspot.com/2013/08/utility-calculations-are-not-allowed.html">psychopath</a>.)
<br/><br/>
It helps to see the specifics of how you can take advantage of someone with cyclic preferences, but it's dry and boring, so I'll include it in a "supplement" at the end if you're interested and don't just want to take my word for it. Suffice it to say that after at most 7 trades, I would have all the M&M's except for two brown ones.
<br/><br/>
<br/>
<b>THEN WHERE ARE THESE MONEY PUMPS? I.E., WHERE ARE THE SYSTEMS OF CYCLIC PREFERENCES THAT ARE ECONOMICALLY TAKEN ADVANTAGE OF?</b>
<br/><br/>
I went looking for cases in the real world where people get money-pumped, and found:<br/>
a) none, and <br/>
b) I'm not the only one who has noticed this gap. <br/>
<br/>
In fact money-pumping seems to be an entirely theoretical risk, predicted deductively from rationality models. So what's going on? Likely some combination of:
<br/>
<ol>
<li>People are irrational and their preferences are a mess, but they aren't neatly cyclical like this. In fact, we should expect that most intransitive preferences are just that - a mess, with no relation to any other preferences - thereby precluding a cyclic system of intransitive preferences. That is, sets of cyclic preferences are a subset of intransitive preferences, but because of the nature of intransitive preferences (they have little relation to each other, or even consistency on short time scales), cyclic preferences are very rare or nonexistent in the real world.
<br/><br/>
<li>Humans have many heuristics that can cause reasoning errors but that must have been on net beneficial to our ancestors, and (sometimes) they only make sense in the context of their environment. For example, we like to punish wrongdoers, to the point where we will spend extra resources to do it even when the damage has already been done. This seems irrational until you realize that there are not many limited-round games in life: if someone was doing bad things in your tribe fifty thousand years ago, it made sense to invest those punishment-resources now to deter further wrongdoing later. A more germane example is the <a target=_blank href="https://thelateenlightenment.blogspot.com/2014/06/the-endowment-effect-as-rational.html">endowment effect</a>, a heuristic that clearly has the effect of keeping people from getting taken advantage of in markets with asymmetric information. I expect that with cyclic preferences, there is a meta-preference that usually acts like a circuit-breaker for this sort of thing - for example, just thinking about the goods in dollar figures. Of course, you could rightly object that if someone can do that, they don't really have cyclic preferences, since the dollar is like a color of M&M that doesn't admit of intransitive valuation - and you're right! Which is why people don't actually get money-pumped. In the same way, I expect that even my toddler would notice her overall number of M&M's shrinking and mine growing, and at some point say she's not playing this stupid game anymore.
<br/><br/>
<li>There actually are some cyclic preference-sets revealed by people engaging in repetitive behavior that makes their lives consistently worse. This includes compulsive gambling, junk food, substance abuse, and staying in abusive relationships (or seeking out new ones). It's interesting that these aren't about trading with currency, or at least don't centrally involve explicit trade or currency exchange, which are relatively new things in terms of evolutionary psychology, and something where, along with learning them explicitly, we have developed learned defenses against being taken advantage of (as noted above). Even in those cases where there appear to be cyclic preferences, they are better understood as predictably shifting preferences (due to things like future discounting), though this is a semantic distinction, since the outcome is the same.
</ol>
<br/><br/>
<b>SO WHERE DO WE SEE IRRATIONAL BEHAVIOR DUE TO INTRANSITIVE PREFERENCES?</b>
<br/><br/>
Clashes between the system of transitive preferences - speaking broadly, the economy - and intransitive preferences are somewhat rare, but they occur, even when the intransitive preferences are not cyclic. You don't think of people repeating abusive relationships as part of the economy, but that's a great example of intransitive preferences.
Gambling and substance use are part of the economy, but at its fringes. (It's interesting that societies usually have prohibitions or regulations about trading the same sorts of things: things which involve strongly affective parts of our cognition and behavior, like gambling, sex, drugs, and firearms.)
<br/><br/>
Some psychopaths recognize the intransitive nature of most humans' valuation of other human life (it's "priceless"), and take hostages whom they will kill unless they are given money or some other objective. In those cases, many humans magically overcome our intransitivity and kill the hostage-takers, or allow the hostage-takers to kill their victims rather than negotiate, to avoid incentivizing hostage-taking in the future.
<br/><br/>
A fortunately more common, but unfortunately probably more intractable, problem is healthcare: about 70% of the healthcare dollars spent on a given person in the U.S. are spent in the last 6 months of that person's life. We could spend more and more getting additional minutes at the end of life. Unless we're going to ruin ourselves this way, there has to be a rule, regardless of whether we're in a public or private healthcare system. This is something we don't like thinking about.
<br/><br/>
Finally, whenever people try to create a system of transitive preferences outside of the mother-system (the economy), the gravity of the currency economy inevitably connects to it and sucks it in, whether we're talking about Ithaca-dollars, or the charade of "no currency" at Burning Man.
<br/><br/><br/>
<b>SUPPLEMENT - AN EXAMPLE OF HOW A CYCLIC-PREFERENCE MONEY PUMP WOULD WORK, IF IT EVER ACTUALLY HAPPENED</b>
<br/><br/>
Starting out we have:
<br/><br/>
Me: 12 green, 12 orange, 12 brown
<br/>Her: 12 green, 12 orange, 12 brown
<br/><br/><br/>
Round 1. I offer her 6 of my orange for all 12 of her brown. Now we have:
<br/><br/>
Me: 12 green, 6 orange, 24 brown
<br/>Her: 12 green, 18 orange, 0 brown
<br/><br/><br/>
Round 2. I offer her 6 of my brown for all 12 of her green. Now we have:
<br/><br/>
Me: 24 green, 6 orange, 18 brown
<br/>Her: 0 green, 18 orange, 6 brown
<br/><br/><br/>
Round 3. I offer her 3 of my orange for all of her brown. Now we have:
<br/><br/>
Me: 24 green, 3 orange, 24 brown
<br/>Her: 0 green, 21 orange, 0 brown
<br/><br/><br/>
Round 4. I offer her 11 of my green for all 21 of her orange. (Give her a good exchange rate and round up. She's irrational, I'll get it back!) Now we have:
<br/><br/>
Me: 13 green, 24 orange, 24 brown
<br/>Her: 11 green, 0 orange, 0 brown
<br/><br/><br/>
Round 5. I offer her 6 of my brown for all 11 of her green (rounding up again.) Now we have:
<br/><br/>
Me: 24 green, 24 orange, 18 brown
<br/>Her: 0 green, 0 orange, 6 brown
<br/><br/><br/>
Round 6. I offer her 3 of my green for all 6 of her brown. Now we have:
<br/><br/>
Me: 21 green, 24 orange, 24 brown
<br/>Her: 3 green, 0 orange, 0 brown
<br/><br/><br/>
Round 7. I offer her 2 of my brown for all 3 of her green (rounding up.) Now we have:
<br/><br/>
Me: 24 green, 24 orange, 22 brown
<br/>Her: 0 green, 0 orange, 2 brown
<br/><br/>
...you get the picture.
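The seven rounds above can be simulated directly. Here is a minimal sketch (the function and variable names are mine, purely illustrative): each trade hands over a less-preferred color for more of a more-preferred one, following her cyclic ranking orange > brown > green > orange, and the inventories come out exactly as in Round 7.

```python
def run_pump(trades):
    """Apply each trade (i_give_color, i_give_n, i_get_color, i_get_n) to both inventories."""
    me = {"green": 12, "orange": 12, "brown": 12}
    her = {"green": 12, "orange": 12, "brown": 12}
    for give_color, give_n, get_color, get_n in trades:
        me[give_color] -= give_n; her[give_color] += give_n
        me[get_color] += get_n;  her[get_color] -= get_n
    return me, her

# The seven rounds from the supplement; each exploits one edge of her cycle.
rounds = [
    ("orange", 6, "brown", 12),   # she prefers orange to brown
    ("brown",  6, "green", 12),   # ...brown to green
    ("orange", 3, "brown",  6),   # orange > brown again
    ("green", 11, "orange", 21),  # green > orange (closing the cycle)
    ("brown",  6, "green", 11),
    ("green",  3, "brown",  6),
    ("brown",  2, "green",  3),
]
me, her = run_pump(rounds)
print(me)   # {'green': 24, 'orange': 24, 'brown': 22}
print(her)  # {'green': 0, 'orange': 0, 'brown': 2}
```

Note that the total number of M&M's (72) is conserved throughout; the pump only redistributes them, one preference-cycle edge at a time.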
<br/><br/>
As I was calculating this out I actually found it quite hard to think about the irrational player's decisions. There is value exchange symmetry in rational trading, which is to say, it doesn't matter if I am getting higher-value units or lower-value units. Whereas I would be tempted to say to the irrational player in round 7 above, "Look, why are we going through all this? Why don't you just give me those last 2 brown because they're worth less than themselves!" (Also more than themselves. But I want them, so I wouldn't say that.)
<br/><br/>
Originally I couldn't see how to benefit from this, even hypothetically - I thought "I could keep the trade going indefinitely but not accumulate anything." The errors were that (a) I actually had no preference for one color over another, and (b) her preference for the "better" color in any pair was arbitrarily small (e.g., she just barely liked orange better than brown), and you can't subdivide M&M's - so with rounding, either you could never arrive at having almost entirely fleeced the other party, or it would take too long. On the other hand, if you benefit from the trade itself and have no preferences, and you CAN usefully subdivide, you could still benefit. But I wasn't charging M&M commissions.
<br/><br/>
There is a total wealth (by my measurement, in units of "browns") of 168 in the game, with each side (by my measurement) starting with 84 brown-units. At the end of each round, with my trades, the value I hold is 84, 126, 126, 124, 162, 156, 166. You actually can't even talk about the other player's total value because what unit do you use to measure it? If we held differing but rational valuations - as people do in the real world - say my daughter values brown twice as much as orange, and orange twice as much as green - we'd quickly wind up with her holding all the brown and me holding all the green. And that would be fine. In fact there have been cases in history where people became worried about the problems that could arise when preferences were different - Isaac Newton noted that because the English and Chinese relative valuations for gold and silver were different, in a simple system eventually one would end up with all the silver and the other with all the gold, and trade would grind to a halt. But of course the system isn't that simple, and in any event as long as preferences are rational - not circular - it wouldn't matter. Michael Catonhttp://www.blogger.com/profile/01017910055699348111noreply@blogger.com0tag:blogger.com,1999:blog-4724592643224262209.post-87040982830644312012019-11-03T13:22:00.001-08:002020-03-29T19:46:00.527-07:00Editorial Clickbait about Psychiatry in the New England Journal of MedicineI'm really disappointed in NEJM for publishing <a target=_blank href="https://www.nejm.org/doi/abs/10.1056/NEJMp1910603">this piece</a> by Gardner and Kleinman (G&K.) Overall this article is not helpful or useful. There is a cottage industry of psychiatrists writing hit pieces on our own specialty, and often they make coherent and actionable points that improve the specialty and ultimately patient outcomes.
But of the many valid criticisms of psychiatry, this article bizarrely focuses on two problems that pervade most of medicine, and implies that they are uniquely problems for psychiatry. The thesis seems to be that psychiatry has been damaged by reliance on a biological approach, which has stunted its ability to treat patients and degraded our interactions with them by decreasing their quantity and quality.
<br/><br/>
First: these two have apparently not been talking to many of their colleagues, inside and outside of psychiatry. How many physicians do you know, especially in cognitive specialties with lots of patient contact, who say "No, I don't have inappropriate time pressures on my patient interactions, and what pressures there are, are not worse than they were thirty years ago"? Most psychiatrists would love to spend more time with patients. When we don't, it's not because we've already gone through the checklist so we don't want to waste time forming rapport - it's due to the moral hazard introduced by the financial and administrative structure of modern medicine. The same argument obviously applies to many specialties outside of psychiatry.
<br/><br/>
The second part of their argument is that over-reliance on a biological approach is what has distorted psychiatry and prevented us from adequately treating patients. In case they haven't noticed, we <i>do</i> have psychiatric medications that work, which we didn't have a few decades ago. (They somehow fail to comment on the existence of SSRIs and second-generation antipsychotics, for example.) How is this the failure of a biological approach? It is trivially true that biological approaches to psychiatry have not <i>yet</i> been as fruitful as we would all like. The genomics revolution (for example) has also not benefited most branches of medicine to the degree hyped - <i>yet</i>. It's a bit premature to say that because biological approaches like genomics have not yet benefited psychiatry, they will never benefit it. They have essentially not benefited any other branch of clinical medicine besides hem/onc - because it's easier to kill or poison certain cells (especially ones that are suspended structurelessly in fluid, rather than connected in a specific network, neural or otherwise) than it is to make them work better. We should expect that oncology would have been the first to benefit. In this, G&K are rather like engineers in 1900 saying "we haven't achieved powered flight yet, therefore it can never be achieved." (Which, by the way, some engineers at the time did say.)
<br/><br/>
It's unclear what G&K's solution is. Perhaps most tellingly, the voices I've seen online defending this article seem to have great difficulty understanding the definition of "syndrome", or the idea that treating empirically, before the biology of a specific case or even of the disease itself is clear, is quite often the best approach (and again, this is not specific to psychiatry.) For instance, many psychotherapies have an impressive evidence base at this point, and if we don't understand psychopharmacology as well as we would like at the biological level, we certainly don't have anything like a fully articulated biological theory of psychotherapy either. If you have a treatment that can help - pharmacologically or otherwise - it's immoral to withhold it just because the science behind the treatment mechanism or pathophysiology is not settled. And as near as I can tell, that's exactly what G&K are proposing.Michael Catonhttp://www.blogger.com/profile/01017910055699348111noreply@blogger.com0tag:blogger.com,1999:blog-4724592643224262209.post-74682607857230442142019-05-04T16:15:00.001-07:002019-05-04T16:15:44.819-07:00All Perching Birds Descend From an Australian AncestorWhen I visited Australia, I remember walking around my first day seeing lorikeets and other very tropical-looking birds and thinking "Huh, I guess I'm in Gondwanaland now." Little did I know, all perching birds, even my own North American ones, are actually Australian! This, from <a target=_blank href="https://www.pnas.org/content/116/16/7916">a new PNAS study</a> showing that the last common ancestor lived in Australia 47 million years ago. Immediate evolutionary just-so thoughts: Australia has a strange and (relative to other continents) sparse mammal population, which may have allowed for such a radiation within Australia.
Neighboring New Zealand had no land mammals at all (other than bats) until seven centuries ago, and is famous for its (sadly threatened) bird diversity, as well as its birds filling many roles usually taken by mammals (hence, now being threatened.) As the Earth went through pulses of cooling and drying and Australia moved north and became drier, less forested, and less hospitable for perching birds, this may have given a diverse bird population both the opportunity and the incentive to spread to other continents, where birds hadn't had similar opportunities to get a head start on mammals. The paper focuses more on known global climatic shifts, such as the cooling at the Oligocene-Miocene transition: "Three rate shifts [i.e. in rate of diversification] appear to have occurred almost simultaneously during the Oligocene-Miocene transition in three different oscine clades on three different continents...[obviously, after the most recent common ancestor had left Australia.]"
<br/><br/>
<i>Oliveros CH, Field DJ, Ksepka DT, Barker K, Aleixo A, Andersen MJ, Alström P, Benz BW, Braun EL, Braun MJ, Bravo GA, Brumfield RT, Chesser RT, Claramunt S, Cracraft J, Cuervo AM, Derryberry EP, Glenn TC, Harvey MG, Hosner PA, Joseph L, Kimball RT, Mack AL, Miskelly CM, Peterson AT, Robbins MB, Sheldon FH, Silveira LF, Smith BT, White ND, Moyle RG, Faircloth BC. <a target=_blank href="https://www.pnas.org/content/116/16/7916">Earth history and the passerine superradiation</a>. PNAS April 16, 2019 116 (16) 7916-7925 </i>Michael Catonhttp://www.blogger.com/profile/01017910055699348111noreply@blogger.com0tag:blogger.com,1999:blog-4724592643224262209.post-42660154732656827992019-03-22T13:43:00.001-07:002019-03-22T13:43:12.756-07:00How Delusions in the Real World Disappointed My ExpectationsDelusions have long been of interest to me and they're fascinating for many people. Why do people see the same thing as everyone else, but arrive at a very different conclusion, and become unable to change their mind about it? I've been fortunate to be able to do basic research into this phenomenon, and in my daily practice I see and treat them frequently.
<br/><br/>
(You should note that delusions represent a <b>small, pathologic subset of false beliefs</b>, really a disturbed belief process distorted by different anatomy. We all have false beliefs, but hopefully we can update them when we get new information. Even when people don't update their beliefs based on relevant information - usually identity-forming or socially important beliefs - frustrating though that is, it is still different from a delusion. So, no, your most un-favorite religion or political party adherents are not delusional, even if they're wrong.)
<br/><br/>
There are a number of misconceptions, or more accurately, misexpectations, that I had about delusions when I went into this business, which will be glaringly basic and obvious to any psychiatrist, but will probably not be so obvious to other people. In no particular order:
<ul>
<li>If and when delusions resolve, there is only rarely a "eureka" moment where the patient realizes the belief is false, or even a significant enough increase in insight to gradually look back and sheepishly say "Yeah, I guess that wasn't true." Rather than updating the belief, people just stop being so motivated by it. That is to say, in the large majority of people, rather than the belief changing, the <i>centrality</i> of the belief changes. I find this very unsatisfying. "Yeah, I still think drones are probably following me everywhere but I don't worry about it that much." This isn't all that much different from belief in healthy people - confirmation bias is all-pervasive, and recall that science advances one funeral at a time.
<li>Related: you can't talk someone out of a delusion. Ever. (As the rationalist proverb goes, you can't reason someone out of a position that they didn't reason themselves into.) At best, you will waste your and their time, and at worst, you will anger them and damage your therapeutic alliance. And if the psychiatrist who gives in to this urge is completely honest, it's partly informed by a need to "win" the discussion. Even if you know this intellectually, early in your career it's very difficult to avoid engaging a delusional patient in this way (partly because the patient will not infrequently challenge you to do exactly that.) At this point I'm proud to say I can mostly resist the temptation.
<li>Though delusions sometimes do appear in isolation, they rarely occur without other neuropsychiatric symptoms. Even delusional disorder (where the patient has ONLY delusions) is often a misdiagnosis that evolves to something else - like dementia, especially when appearing in middle age or later, with the delusion merely as the earliest symptom. So very often, the person with a delusion is quite psychiatrically ill in many ways that make having a coherent discussion about the delusion (to hear a coherent set of delusional beliefs) very unlikely; e.g., severe paranoia that keeps them from talking to you about the details of the delusion, and/or constant hallucinations which distract them and to which they respond, or merely an inability to speak in a way that makes sense at all.
<li>This one was most disappointing to me: delusions are rarely coherent, in contrast to how they are often presented in the lay media - for example, K-PAX, or the analysand in the essay <a target=_blank href="https://harpers.org/archive/1954/12/the-jet-propelled-couch/">The Jet-Propelled Couch</a> (who supposedly was in reality the science fiction writer Cordwainer Smith.) They are sometimes completely bizarre and incomprehensible, and even after giving the patient a chance to explain, you still have no idea what they mean. (This is one subtle feature of thought and speech in psychosis: though each sentence might be grammatical and seem meaningful, strung together you can't make sense of what they're saying, or even clearly remember it ten minutes later - much like, I think not coincidentally, we struggle to remember an early morning dream even until lunchtime.) Even when delusions are "about" something comprehensible, they are only peripherally about discrete objective facts; delusions are based on affect and "primitive" themes of the sort that color nightmares[1] - pursuit, certain people being morally bad, looming organizations with sinister intent, an overwhelming sense of contamination, etc.
<li>It is often striking how incurious delusional people are about their predicament - after years of, say, harassment by a sinister government agency, when one asks "Do you know why they're doing this? And where they get all these resources? And how their technology operates?" people often do little more than shrug.[2] They are also usually obviously and badly internally inconsistent, again unlike the cleverly constructed delusions in fiction. If the psychiatrist in the Terminator thought the future-warrior's tale was a delusion, he was right to be impressed by it. People will tell you (for example) that they were victimized for many years by their persecutors, until they developed their special powers at age 23 that made them immune; then in the next sentence, tell you how they were victimized at 26. Rather than becoming upset when such continuity problems are pointed out, they generally just wave it off as irrelevant and keep going.</ul>
<br/>
Delusions are hard to treat; even so, medications can and do help people. But if you get into this business to hear fully elaborated, articulate, consistent delusions about time travel, space empires, or sinister (but interesting) experiments that shadowy government agencies are doing on us - you're going to be disappointed.<br/>
<br/><br/>
[1] Even in delusional "shadow syndromes" like physics crackpots or various denialists that do seem to be focused on external objective cold facts, invariably there is ranting against the Establishment and paranoia about people stealing their work, and this takes up much of the time that might otherwise in a more rational person be devoted to research or making their case.
<br/><br/>
[2] Regarding this incuriosity: delusions are not the only neuropsychiatric symptom where this feature appears. I'm agnostic as to whether this incuriosity is actually part of these diseases, or is just (unfortunately) the natural state of most humans. For example, <a target=_blank href="https://en.wikipedia.org/wiki/Hemispatial_neglect">hemi-neglect</a> is a symptom usually seen after strokes, where the patient loses one half of space. I don't mean merely that they can't sense what's going on on one side of them; they literally can't understand that that side of the universe exists, exactly like you or I can't perceive the fourth dimension.
<br/><br/>
To illustrate: these people lose not only the use of one half of their bodies, but the awareness that that half exists. So they will deny that they have a left arm. And if you hold their (genuinely paralyzed) left arm up in front of them, they often confabulate ridiculously: "That's my sister's arm. She's hiding under the table." Now, if my doctor told me I had four arms, I would tell her she was a goof. But if she could consistently keep holding two extra arms up in front of me, with roughly the shape and skin tone of my other arms, in the middle of a room where there was no chance of a trick, I would eventually have to concede that I was having perceptual difficulties and that I indeed had four arms, even if I couldn't tell how they attached to me. Probably a more common situation is that a hospitalized patient will demand to speak to the doctor at dinner, and, as the doc enters their room, say angrily "They keep telling me this is a full-sized dinner, but look at this thing!" And they gesture to their plate, exactly one half of which is eaten. So, you turn the plate 180 degrees, and they grunt, and finish the other half of their dinner, now that it exists. Now, if tonight I complained to my wife that she only gave me half a serving of dinner, and she glared at me and reached over and did something I didn't understand and suddenly my plate was full again as if it had <a target=_blank href="https://demonstrations.wolfram.com/ASphereVisitsFlatland/">passed partway through my dimension like a sphere in Flatland</a>, I <i>think</i> I would say "Whoa! You just magically produced food out of the fourth dimension! I don't understand how you did it, but could you do it again?" But that's not how people usually react, which implies there's a loss of insight or ability to update associated with this condition.
It should not be missed that most neglect is left neglect (meaning, a right-sided lesion), and that one theory of delusion holds that somatic delusions can be caused by right frontal lesions, and that some sort of functional right hypofrontality is required for the lack of insight inherent to all delusions, somatic or otherwise.
<br/><br/>
Michael Catonhttp://www.blogger.com/profile/01017910055699348111noreply@blogger.com0tag:blogger.com,1999:blog-4724592643224262209.post-91257564320775995212019-03-11T10:45:00.002-07:002019-03-11T10:45:56.290-07:00In Medicine, Rounding WorksRounding is a time-honored tradition where doctors meet to talk about cases, either in a meeting room (the "rounding room" was named after a specific room at Hopkins) or at/near the bedside. Most often associated with inpatient medicine teams, especially in training environments, the treating physician will present the case and discuss it with her colleagues. Not only is it thought that medical decision-making benefits from collective intelligence in this way, but also that the anxiety provoked by immediate criticism (especially in trainees) sharpens one's thinking. A <a target=_blank href="https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2726709">study in JAMA Network Open</a> supports this. Teams here were internal medicine teams composed of members at multiple levels of training, from med students up to attendings. I don't think the findings would be too domain-specific, but at a guess, I imagine the benefit would be even greater for psychiatry than for internal medicine, as psychiatry's diagnoses are fuzzier and more subjective.
<br/><br/>
Groups don't <i>always</i> arrive at better decisions than individuals - especially groups of non-expert individuals with no feedback - but teams of experts who do get feedback benefit from collective intelligence, and do better than individuals alone. So qualitatively this isn't surprising; but a problem in medicine is lack of quantitative thinking, especially in my specialty, psychiatry, where studies are constantly coming out showing that medical or psychiatric illness X increases the risk of psychiatric illness Y. No kidding! <b>By how much</b> is what we want to know. So what's the actual benefit of rounding?
<br/><br/>
For groups of 9, on average you need to treat about 4 people before you make a diagnosis that an individual would have missed (i.e. NNT is about 4.)
<br/><br/>
For groups of 5, NNT = 6.
<br/><br/>
For groups of 2, NNT = 8.
<br/><br/>
The simple plot below shows the % accuracy improvement per person based on group size, and again not surprisingly, there's a diminishing marginal return for adding more people. (Where does it go to zero? Nine is already on the big side for a rounding team.)
<br/><br/>
<center>
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg0WnnghXdF85cge-Yl8QFc4-SBP-PQURterOS5Y8TUXg5v62oq0ps3isV3xh0Fn6bpTqV_qTdDepTwK3bEePZjNVKwcVSe1c60OqUmE62f-Ny2FAIWdgjoFtPutP1bLpAPLj1Rc7KcQNsS/s1600/rounding+works.png" width=50% height=50% /></img>
</center>
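To put rough numbers on those diminishing returns, here is a back-of-envelope sketch (my own reconstruction, not the paper's analysis): treat 1/NNT as the absolute accuracy gain over a lone diagnostician, and divide by the number of added team members to get the per-person gain.

```python
# NNT figures quoted above: group size -> number needed to treat
# before the group catches a diagnosis an individual would have missed.
nnt_by_group_size = {2: 8, 5: 6, 9: 4}

gains = {}
for size, nnt in sorted(nnt_by_group_size.items()):
    total_gain = 1 / nnt                  # absolute improvement vs. one doctor
    per_person = total_gain / (size - 1)  # spread over the added team members
    gains[size] = per_person
    print(f"group of {size}: {total_gain:.1%} total gain, {per_person:.1%} per added person")
# group of 2: 12.5% total gain, 12.5% per added person
# group of 5: 16.7% total gain, 4.2% per added person
# group of 9: 25.0% total gain, 3.1% per added person
```

On this crude accounting, the ninth team member buys about a quarter of the per-person accuracy that the second one does - the same diminishing-returns curve as the plot.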
<br/>
This of course doesn't take into account rounding <i>time</i>, which is a real consideration, and big teams are <i>slow</i>. Maybe the % improvement per minute drops at a certain point.
<br/><br/>
Therefore, don't hesitate to curbside-consult your colleague: just by talking to one other person, you're making one additional accurate diagnosis for every eight patients.
<br/><br/>
<i>Barnett ML, Boddupalli D, Nundy S, Bates DW, et al. <a target=_blank href="https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2726709">Comparative Accuracy of Diagnosis by Collective Intelligence of Multiple Physicians vs Individual Physicians.</a> JAMA Netw Open. 2019;2(3):e190096. doi:10.1001/jamanetworkopen.2019.0096
</i><br/><br/>Michael Catonhttp://www.blogger.com/profile/01017910055699348111noreply@blogger.com0tag:blogger.com,1999:blog-4724592643224262209.post-68554124074584230032019-02-15T22:20:00.001-08:002019-02-15T22:20:50.601-08:00An Obvious Healthcare Cost-Savings Proposal, That Doctors and Patients Will Obviously ResistArnold Kling <a target=_blank href="http://www.arnoldkling.com/blog/a-provocative-health-care-proposal/">draws attention</a> to a <a target=_blank href="https://market-ticker.org/akcs-www?post=231949">proposal by Karl Denninger</a>, which includes the following:
<br/>
<blockquote><i>No government funded program or government billed invoice will be paid for medical treatment where a lifestyle change will provide a substantially equivalent or superior benefit that the customer refuses to implement. The poster child for this is Type II diabetes, where cessation of eating carbohydrates and PUFA oils, with the exception of moderate amounts of whole green vegetables (such as broccoli) will immediately, in nearly all sufferers, return their blood sugar to near normal or normal levels...This one change alone will cut somewhere between $350 and $400 billion a year out of Federal Spending and, if implemented by private health plans as well, likely at least as much in the private sector.</i></blockquote>
The tone gets even more pointed, and more accurate, further on.
<br/><br/>
Denninger further points out the core values-disconnect that makes talking about healthcare so difficult. That disconnect is that we are trading dollars and human suffering back and forth, and there's no way around this brute fact, ever, except to hide it from both buyers and sellers. This makes the system nauseatingly inefficient, whether we're talking about centralized planning or a free market.
<br/>
<blockquote><i>Americans, and especially health care providers, do not want to think of health care as a commodity. The providers want to be paid, but they do not want to think of themselves as selling their services, so the payment comes from third parties and the price is hidden to consumers...All surgical providers of any sort must publish de-identified procedure counts and account for all complications and outcomes, updated no less often than monthly. Consumers must be able to shop not only on price, but also on outcomes.</i></blockquote>
This will be unpopular, as both patients and doctors want to avoid responsibility for bad choices - but now that we're all paying for people to keep eating McDonald's, and for surgeons performing poorly-evidence-supported surgeries so they can buy a vacation home, we will have to make some hard choices.<br/><br/>
Michael Catonhttp://www.blogger.com/profile/01017910055699348111noreply@blogger.com0tag:blogger.com,1999:blog-4724592643224262209.post-41555726118235545472019-02-11T09:33:00.000-08:002019-02-11T09:33:11.429-08:00A Picture from a Recent Trip to Vienna I Somehow Forgot to Post...before my residency's required psychoanalytic training ended and grades were in. This is the Freud Museum in Vienna, which used to be his flat and office, and is still a regular (nice) apartment building. (A hand-written sign taped to a current resident's door on the ground floor by the entrance explained in unsubtly annoyed German that the Freud Museum was upstairs.) I do have to admit a grudging admiration for Freud's self-promotion. At this point, I invite you to say pseudo-profound things in an Austrian accent about how my early life experiences led me to struggle against authority, I want to kill my father, etc. I <a target=_blank href="https://mdk10outside.blogspot.com/2017/07/eastern-europe-july-2017.html">loved Vienna and Central Europe generally</a>, even including this museum. You will note I did grow a beard for the occasion, but at no point had a cigar.
<br/><br/>
<center>
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgjp019xYk7iP5KUZeXY0vdD4TrzVFC0Tz91QihG-vW8am5mBg18NcuKRBm47UhdVGWdix2uPQmjfRLNFT2IM1b0VILqZkjqr0WCb_bIi3POd0gEEpU4ciVqrSsKLQNTTSzb76-Q9OI6QA3/s1600/2017-07-20+13.48.29.jpg" width=50% height=50% /></img></center><br/>
The most interesting thing in the museum was the microtome on display, which he used to make brain sections. I imagine him thinking, "You can never get <i>anywhere</i> doing it this way...maybe I'll convince people I have the power to do the same thing by talking to people!" There was also a picture of him with fellow Viennese intellectual socialites, one of whom was an immediately striking and intense woman who turned out to be Ludwig Wittgenstein's sister. Also, in German the parts of the subconscious are just rendered in German (not Latin), as "Das Ich Und Das Es" ("The Ego and the Id") which somehow takes away some of the authoritative punch.<br/><br/>Michael Catonhttp://www.blogger.com/profile/01017910055699348111noreply@blogger.com0tag:blogger.com,1999:blog-4724592643224262209.post-34706739672897818012019-02-10T23:08:00.001-08:002019-02-10T23:08:36.544-08:00American English: Examples of Within-Our-Lifetime Language ChangeI once saw a translation of Beowulf from the early twentieth century, which used "throve" as the past tense of "thrive." Interestingly, this means that even the "modern" translation is now outdated, since during the twentieth century "thrived" replaced "throve." Language waits for no one.
<br/><br/>
I have noticed one particular shift in American English: "buck naked" has become "butt naked." No doubt readers younger than mid-30s will have never heard "buck naked" and wonder what I’m talking about. The explanation for the shift is that in some varieties of Black American English, the "k" sound at the end of buck becomes a glottal stop, which is then heard and reproduced as a t in most accents of American English. In conclusion - it’s BUCK, not butt, and you kids get off my lawn.
<br/><br/>
Also, many Californians pronounce the "-ing" verb suffix as "-een"; it's more prominent the further south you go in the state (think Blink 182 - they’re from San Diego), which suggests it's from language contact with Spanish. When asked about this, some people who clearly produce the morpheme this way have insisted to me that they say it the same as -ing; this is common for people who speak non-received dialects that differ subtly, since they actually cannot hear the difference between the two phones. I once saw a young student learning to spell write out a verb phonetically that way, i.e. "bildeen" for "building". I mention this because as California becomes more prominent in American culture, I expect its dialect to become more prestigious, and people elsewhere will start imitating it - so by mid-century, people in e.g. the Midwest may be saying -een instead of -ing.
Michael Catonhttp://www.blogger.com/profile/01017910055699348111noreply@blogger.com0tag:blogger.com,1999:blog-4724592643224262209.post-88724695469416060512019-02-10T11:20:00.001-08:002019-02-10T11:20:25.506-08:00Serious Parfitians Should Strongly Support the Genesis ProjectGenesis <a target=_blank href="https://itp.uni-frankfurt.de/~gros/publications.php#GenesisProject">is a project to seed bacterial life elsewhere in the universe</a>, founded by Claudius Gros. The aim is to build spacecraft that would, over geological time, deliberately seed nearby exoplanets with bacteria, kick-starting life in the universe. This gives those planets a head start on eventually evolving something like intelligence. This would seem not only to multiply the number of beings capable of having lives worth living, but to dramatically increase the chances of an intelligence (a species with a life most worth living) that escapes its home system to avoid black swan apocalypses, singularities, etc. and spreads indefinitely, filling the universe with happiness.
<br/><br/>
A <a target=_blank href="https://en.wikipedia.org/wiki/Mere_addition_paradox">Parfitian</a> could well respond that it's anything but certain that such a project would produce a universe-filling life form capable of happiness, and she would be right. But choosing an action with limited information is the problem with all action selection. On the spectrum of uncertainty, seeding the universe with life > friendly singularity > avoiding biological disaster > avoiding nuclear war > electing the right people in your country > cleaning your room. The problem is that the consequences for total happiness follow the same order.
<br/><br/>
So what's the argument for pursuing a friendly singularity over seeding exoplanets? Or for prioritizing singularity issues over cleaning your room?
<br/><br/>
I submit that we don't really know. That's not to say that the singularity or bio-apocalypse can't happen. Unfortunately I'm worried that as we become more powerful but not better at estimating uncertainty, something like this will eventually be fatal for us, and Drake's omega variable will be a little clearer.
<br/><br/>
See also GENU, the <a target=_blank href="https://cognitionandevolution.blogspot.com/2019/01/valence-vs-priority-disagreements-in.html">Gambit of Extreme Negative Utility</a>.
<br/><br/>Michael Catonhttp://www.blogger.com/profile/01017910055699348111noreply@blogger.com3tag:blogger.com,1999:blog-4724592643224262209.post-57569247675042412112019-01-20T11:27:00.000-08:002019-01-20T11:27:26.931-08:00Decision-making for Profound Life Choices Isn't Much Like Decision-MakingThere's a great article in the New Yorker about how <a target=_blank href="https://www.newyorker.com/magazine/2019/01/21/the-art-of-decision-making">life's biggest decisions seem to...just kind of happen.</a> There's a short round-up of books and articles considering the paradox, some explaining it in terms of not knowing what the thing you want will actually be like, some examining how aspiration is really meta-wanting (wanting to want something), and some pointing out that our values and goals change over time in ways we could not have predicted ahead of time. On this last point I would superimpose a mechanism of biological imperatives; you don't decide to start wanting kids any more than you decide to get a carb craving.
<br/><br/>
I've noticed a similar experience with not having access to my own calculations, more for my decision to become a physician than to become a dad, although even with fatherhood, why now, here, with this person, is similarly hazy. Michael Catonhttp://www.blogger.com/profile/01017910055699348111noreply@blogger.com0tag:blogger.com,1999:blog-4724592643224262209.post-70965445115099526862019-01-17T22:47:00.000-08:002019-06-12T12:18:41.237-07:00Valence vs Priority Disagreements in Public Debate; Plus, The Gambit of Extreme Negative Utility1. In public discourse, usually, most people do not have valence differences when they disagree, but rather differences of priority. This is especially true in politics. Concrete example: most people agree that racism is bad. The disagreement tends to be about <i>how important it is to eliminate racism right now</i>, relative to other problems. A valence disagreement occurs in this case when a white supremacist says no, actually, racism is good. Valence disagreements are of course much more intractable. Online discourse over the past few years has raised disagreements about priority to the same level of intractability, by turning priority disagreements into valence disagreements, e.g., "You shouldn't be talking about anything else right now but keeping out illegal immigrants. I don't care if you say you're against illegal immigration, if fighting illegal immigration isn't your number one priority, then you're actually <i>for</i> illegal immigration."
<br/><br/>
2. One trick to make your argument demand top priority is to claim that there is a massively, or even infinitely negative consequence for not adjusting one's actions in the ways required of a rational actor if the argument is true - or claiming that there is such a consequence <i>even for ignoring the argument.</i> Organized religion makes the most famous such claims, but the outcomes feared by certain political ideologies (if they don't get their way) can approach the same severity. This is the <b>Gambit of Extreme Negative Utility (GENU.)</b> Related to this, a counterargument to Pascal's wager is that you don't know which version of which religion is the right one, but as I saw a very clever Christian argue online, you have to assume infinite religions to choose from for the expected utility to work out in favor of ignoring Pascal's claim (although this still doesn't tell us which one to choose.) So what do we do? Do we set an arbitrary cut-off for utility beyond which we assume someone is lying to get our attention? This seems very dangerous, unless we think there are no rare events with far greater negative utility than we expect beforehand. So if we throw out Hell, Orwell's boot on a human face forever, and white supremacists' fear of a world overwhelmed by non-white barbarism, don't we also have to stop worrying about the <a target=_blank href="https://en.wikipedia.org/wiki/Great_Filter">Great Filter</a>, the technological singularity, and CRISPR-derived biological weapons made by suicide cults? [Added later: turns out that <a target=_blank href="https://en.wikipedia.org/wiki/Pascal%27s_mugging">Pascal's Mugging</a>, a concept in the rationality world, captures the essence of this - any risk/benefit scenario can be overwhelmed by claimed arbitrarily large or infinite negative utility, which Christians like Pascal call "Hell".]
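The GENU/Pascal's-mugging arithmetic can be sketched in a few lines; every probability and utility below is invented purely for illustration:

```python
# A sketch of the Gambit of Extreme Negative Utility (Pascal's mugging).
# All numbers here are illustrative assumptions, not real estimates.

def expected_utility(p_true, utility_if_true, utility_if_false=0.0):
    """Expected utility of ignoring a claim we assign probability p_true."""
    return p_true * utility_if_true + (1 - p_true) * utility_if_false

# We think the claim ("ignore me and suffer Hell") is almost certainly false...
p = 1e-9

# ...but the claimant can always name a disutility large enough that the
# product still dominates any mundane concern.
claimed_disutility = -1e15

ignore_claim = expected_utility(p, claimed_disutility)
clean_your_room = -1.0  # a small, certain, ordinary cost

# The claimed catastrophe swamps the mundane choice no matter how small p is.
print(ignore_claim < clean_your_room)  # prints True
```

The point of the sketch is that no arbitrary utility cut-off falls out of the math itself; the mugger can always outbid whatever threshold you pick.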
<br/><br/>
3. It matters <i>which arguments we spend time and attention considering</i>, because although humans seem to have an implicit assumption of infinite time and attention to consider arguments, of course we do not.<br/><br/> Hence, most authorities use some form of agenda-setting and distraction to influence discourse.Michael Catonhttp://www.blogger.com/profile/01017910055699348111noreply@blogger.com0tag:blogger.com,1999:blog-4724592643224262209.post-12334181385657595582019-01-07T09:52:00.001-08:002019-01-07T09:52:17.742-08:00Biomass on Earth, by TaxonFrom <a target=_blank href="https://www.pnas.org/content/115/25/6506">Bar-On, Phillips, and Milo</a> in PNAS, 2018. Immediate stats that come to mind which are maybe more applicable to astrobiology: how many calories (or molecules of glucose) produced per unit energy of sunlight, per unit mass of plants? How about per overall energy produced by the sun (decreased by distance from sun and cloud cover on Earth)? How about base pairs per unit mass of taxon? Viruses probably win that one handily. And finally - an animation over time, where we see the branches appear, mass extinctions, and finally at the end a blossoming of birds and mammals which is really just livestock (or DOES livestock actually out-mass the pre-Holocene wild mammal and bird mass?)
<br/><br/>
<center>
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh5xHvWGNh1SfI6sK7x8VivooaIWRZgEhcjnSzPUERUwOh7HqwJ2UKLtgsSN8W1ORV3kqoeU0ZSd-XrfyqnFF_o7M445pGcC2z_jwsP88gpEEGieRDqyMNyvvQa6nrxP64R-DMW3hwPAn-J/s1600/biomass+on+earth.jpg" width=99% height=99% />
</center>
<br/><br/>
The only things here that really surprise me are that molluscs account for more mass than I expected, and nematodes for less.Michael Catonhttp://www.blogger.com/profile/01017910055699348111noreply@blogger.com0tag:blogger.com,1999:blog-4724592643224262209.post-60732485597249193272018-10-19T02:14:00.002-07:002024-02-08T23:54:45.224-08:00Lying and Intention<blockquote>
Some years ago, I went to see a movie with a friend who has since passed away. (This is actually one of my favorite memories of her.) Relevant: the movie was Blair Witch Project. My friend was badly scared by horror movies (why did she go? I don't know, but she's a grown up, not my problem) and when I took her back to her house, she was still quite worked up.
<br/><br/>
I should add that it had been a very hot day, and her house didn't have A/C. It was still sweltering, even after midnight, so she knew if she wanted to sleep she would have to open all the windows, which she did. This is also relevant, because a) her room was a very small addition to the house, with windows on both sides of, and behind, her bed, in fact <i>so close</i> to the bed that a cruel person who likes scaring his friends could actually reach in from outside and grab her; and b) I am in fact the kind of person who would do something like that, and lie about my intentions, and had done such things many times before. (I'm quirky that way.)
<br/><br/>
"Can't you please just stay until my roommates get home?" she implored.
<br/><br/>
"No, I have to go home and go to bed."
<br/><br/>
A look of horror crept across her face and her eyes widened. "I know what you're going to do! You're going to drive two blocks away like you're going home, then park, and silently walk back, and wait outside the window until you see me nodding off, then grab me and scare the crap out of me!"
<br/><br/>
"No, I would <i>never</i> do that!"
<br/><br/>
"Yes! Yes you will! I know that's what you're going to do no matter what you say!"
<br/><br/>
"No. No, I am definitely going to go home, and go to bed." Despite her pleas, I walked out. I then got in my car, drove home, and went to bed. I slept very well.
<br/><br/>
The next morning around 6 a.m. - probably not coincidentally, around the time the sun rose - I was awakened by my phone ringing. It was my friend. "What," I mumbled as I picked it up.
<br/><br/>
"You <i>asshole.</i>"
<br/><br/>
"What?" I said. "Me asshole? <i>You</i> asshole. You're waking me up at six in the morning."
<br/><br/>
"You bet I am! I've been sitting here on tenterhooks all night waiting for you to reach in the window and didn't sleep at all and you actually went home and went to bed!"
<br/><br/>
I said nothing, but I smirked.
<br/><br/>
"I can hear you smirking! This is exactly what you planned isn't it?"
<br/><br/>
"Listen," I said, "I did exactly what I said I would do. I <i>told the truth</i>. I did the morally correct thing, and you chose not to believe me, even though I was telling you my true intentions, and then acted on those intentions. So that's your problem. Now if you'll excuse me I have to get some more sleep." I hung up and turned off the phone. I slept very well.
</blockquote>
<br/><br/>
One frequently discussed problem in the analysis of what constitutes moral behavior is that of the contribution of an actor's intention, if any, to the morality of the act.
<br/><br/>
If someone hits me with a car, and I am a consequentialist, I have no grounds to say that the act was more or less moral based on the intent of the person. If someone hits me with their car at 35 mph and breaks my femur, a consequentialist shouldn't care whether the person did it intentionally and was pleased by this outcome, or accidentally and was horrified by it.
<br/><br/>
This is correct - but only if the definition of "consequentialist" is narrow, and really means "near-sighted consequentialist" - someone who just cares about single, isolated acts, which in the confines of a thought experiment, is often the (unintentional) implicit assumption. But of course this isn't the case, and it violates our moral intuitions and (if the two are separable) the actual reactions we have in such situations, or even just on hearing about such situations. Even setting aside the egocentric anger and desire for revenge likely to be incurred by someone who you know hit you intentionally and enjoyed it, you would be right to be concerned that this person is out there running around loose where they can hurt someone else - and a monster like that is unlikely to limit themselves to cars in such endeavors. That is - their <b>intention predicts future actions,</b> which is why it matters, even (especially!) to a consequentialist. The person who is horrified is less likely to do such things again, although even in that situation, if their horror is misaligned with choices they keep making (they were texting, they were under the influence, etc.) then this also figures into our evaluation - because it <b>predicts future actions.</b> A stronger statement is that <b>without <i>intention</i> as a predictor and link to future actions, to talk about the morality of an isolated act is meaningless.</b>
<br/><br/>
The law in most OECD countries actually gets this right, at least for murder, where it differentiates by degree. The difference in intent between accident and non-accident is obvious enough, but the difference between first and second degree murder is also important. There is something very different and more threatening about someone who murders after planning it out, rather than by unchecked impulse. If someone had a bizarre neurological disorder causing them to helplessly pick up long objects and swing them at everyone around them, you wouldn't want them walking around loose, but you would see that they were horrified themselves at this tragic illness, so you also would recognize that this is not a person who intends harm and whose other actions are suspect as well. (If someone you know to be unfortunately afflicted as a neurological-disease-stick-swinger calls you - from far away, hopefully - and asks for a donation to a charity, you're much more likely to think the charity is legitimate than if you get a call from someone you know to be an intentional, actually-enjoying-hitting-people stick-swinger.)
<br/><br/>
This is the same problem which makes certain human behavioral patterns appear irrational in the context of a necessarily limited, close-ended experiment, when in fact they are not. For example, it's a well-studied result in game theory that humans are willing (in fact, eager!) to punish cheaters even if the damage is done, and enacting the punishment has a non-zero cost. Yes, in a true one-round game, the rational thing to do is to stop one's losses and walk away - think of not getting in a pissing match with someone who cuts you off on the freeway - but this is a rare circumstance. The small bands we've lived in throughout most of history, where you were around the same people all the time and kept score on each other, would predispose exactly such a behavior to emerge - and in game theory experiments or one-time encounters in large populations, we may not be able to override our programming. Granted, in those relatively rare encounters, it is irrational not to override it - but again, these encounters are rare. And tellingly, <i>unless you're planning to be the cheater,</i> you likely minimize your time around, and interactions with, complete strangers. I've come to refer to these kinds of situations (either game theory experiments or in real-life, like each day on the freeway) as <b>GOOTs - Games Of One Turn.</b> Many finite-round games are known to strongly affect decision-making. For example, if you're playing a certain number of rounds of prisoner's dilemma, you know that there is no more revenge possible after the last round, so you plan to defect the last round. And your frenemy in the game knows it, and you know they know it, etc. So you defect one round earlier...et cetera, until the rational player who is optimizing payout and playing a finite game defects immediately on the first round.
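The backward-induction unraveling described above can be sketched as a toy model (an illustration of the reasoning only, not a full game solver; the function name and payoff-free framing are my own):

```python
# Toy sketch of backward induction in a finite iterated prisoner's dilemma.
# With a known last round, defection unravels all the way back to round 1.

def best_move(round_number, total_rounds):
    """A payoff-optimizing player's reasoning about a given round."""
    if round_number == total_rounds:
        # No future retaliation is possible, so defect on the last round.
        return "defect"
    # If both players will defect in every later round anyway, cooperating
    # now buys no future goodwill, so defect in this round too.
    if best_move(round_number + 1, total_rounds) == "defect":
        return "defect"
    # With a known final round this line is never reached - that's the point.
    return "cooperate"

# In a 10-round game, the unraveling reaches the very first round:
print([best_move(r, 10) for r in range(1, 11)])  # every entry is 'defect'
```

Note what the model leaves out: in an indefinitely repeated game (a GOOT's opposite, where no round is known to be last), the unraveling never gets started, which is why reputation and punishment can be stable in small bands.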
<br/><br/><br/>
<b>HOW DOES THIS APPLY TO LYING?</b>
<br/><br/>
There are truth-telling absolutists (Kant is the obvious example, but a modern defender is Sam Harris) who have difficulty ever justifying an intentional mistruth. In Harris's case this is especially interesting as he (correctly) defends the role of intention in moral acts generally. The morally justified lie in the murderer-at-the-door thought experiment that challenged Kant was most famously and tragically realized <a target=_blank href="https://onlinelibrary.wiley.com/doi/full/10.1111/j.1467-9833.2010.01507.x">in the example of Anne Frank</a> (Varden 2010), and in comparing Kant's argument to this specific event it often takes rather more argument than we might hope it should to justify why it was acceptable to deceive the murderous fascist occupiers.
<br/><br/>
The claim "lying is always wrong" - hereafter referred to as the naive theory of lying - fails because three related assumptions are clearly falsified, all of which are present in the true account I provided above.
<ol>
<li>Bad Assumption #1 - <b>identical agency</b>: Humans are all equally capable of identifying and acting on truth; that is, we all have an identical set of beliefs about the world, are equally able to reason about them, and will therefore respond to the same information in the same way. (The Golden Rule, and Kant's categorical imperative, fail for this reason as well.) In reality, this assumption fails because of false beliefs, biases, or any other departure from rationality that leads to suboptimal computing of beliefs. (Sometimes an immoral person will deliberately create those false beliefs ahead of time and then deceive by telling the literal truth, as I did.)
<li>Bad Assumption #2 - <b>moral isolation of speech from other actions</b>: Speech is capable of communicating unfiltered truth mind-to-mind, and the moral weight of a statement comes from how true it is, rather than from <b>the effect that you are intending to have</b> with your statement. In this, speech is qualitatively different from other actions. (My intention was to use my speech as an act to deprive my friend of sleep and keep her up all night. That I told the truth should not disqualify me from being, as she correctly identified me, an asshole.)
<li>Bad Assumption #3: <b>cooperation-independence:</b> Truth-telling applies to all humans, even if they are not cooperating with even your most basic interests (e.g., preservation of life and avoidance of needless suffering.) This alone justifies lying to the Nazi at Anne Frank's door. I would agree that it is correct that intentionally creating a false impression in someone else's mind is immoral because it's a form of harming them, but a) if someone intends to harm you, harming them in an attempt to stop this is not immoral and b) <b>one can certainly create a false impression by telling the truth</b>, as will be explained below. (In my true story, if my friend had said "Okay, go home, I'll be right here" and then gone to someone else's house to spend the night, would that have been immoral? That is, would I have been justified in getting mad at her if I had acted on her lie, come to the window to scare her, and found she was gone? How dare she deceive me like that!)
</ol>
Because these bad assumptions are false, applying the naive theory of lying to behavior produces inconsistent results with respect to harm we cause by speaking to people. This makes it an inadequate moral rule. (If your theory of morality does not have a place for harm minimization in general, the argument we would have to have is at a much more profound level.)
<br/><br/>
It is telling and under-appreciated that children as they develop a theory of mind often test the (widely accepted) naive theory of lying against the principle of harm minimization. Right now you can probably think of a smirking child who told you something that was technically true, but intended to deceive you or cause you some other problem. Cute at first, but if they keep doing it through to adulthood, you realize you're dealing with an immoral person.
<br/><br/><br/>
<b>FALSIFYING THE NAIVE THEORY OF LYING - ABSTRACT AND CONCRETE</b>
<br/><br/>
There are infinite scenarios that illustrate this, but in the abstract, the most common scenario is this. (Concrete examples follow each abstract description.)
<br/><br/><br/>
<table border="1"><tr><td>
<i>Person P believes fact X.
<br/><br/>
Q believes not-X.
<br/><br/>
In reality, not-X is true. (That is to say, P violates naive theory of lying Bad Assumption #1 - identical agency.)
<br/><br/>
Not only is Q quite confident that not-X is true, but they quite clearly understand that P believes X, and for bad reasons.
<br/><br/>
Q tells P fact Y (which is true!), knowing full well that this will cause P, based on P's false belief of X, to perform action A, which harms P.
(Or, Q just makes no effort to convince P that not-X, allowing the false belief to stand.) (Bad Assumption #2 violated - moral isolation of speech from other actions.)</i></td></tr></table>
<br/><br/><br/>
Notice that nowhere did Q lie. Q identified a false belief of P that warped P's judgment, and told P something true that will make P do something harmful to himself in the context of P's warped judgment, <b>intending</b> for his true statement to make P harm himself. Q did not lie, but rather used a true statement to create a false conclusion in P's mind that harmed P. Q is immoral.
<br/><br/>
There are many, many concrete examples in the world of financial transactions, where person P either has a false idea about the value or quality of an item, or the movement of a market - which Q does not correct because they benefit from the transaction.[1]
<br/><br/>
Let's say you're at the base of a cliff you've climbed before, and you know the view from the top is beautiful. As a local, you know that what appears to be the most obvious route is actually quite dangerous, because the rock is crumbly and anchors can pop out, risking that the climbers will fall and die - in fact this has happened many times, especially to outsiders who won't listen to the locals. A couple of tourist climbers show up, and you overhear them talking about how this rock face looks like sturdy solid granite, and they're planning to go up the obvious (but unknown to them, most dangerous) route. If you say nothing (to warn them about the crumbly rock), you're immoral. If you say, "The view at the top is great!" you're falsifying Bad Assumption #2, and you're <i>really</i> immoral.
<br/><br/>
The obvious objection is that you are, in a sense, lying by omission. Of course you should tell them it's dangerous, and you certainly shouldn't induce them to try it! (Yes, obviously - but again, by the naive theory of lying, you would have done nothing wrong.)
<br/><br/><br/>
<b>LYING TO CONVINCE A DISTORTED BRAIN OF THE TRUTH</b>
<br/><br/>
So, let's make things more interesting, returning to the abstract formulation and introducing a very real problem, that of differing beliefs causing people to make different decisions (thus violating Bad Assumption #1, identical agency.)
<br/><br/><br/>
<table border="1"><tr><td>
<i>
Q (who in this scenario is a better person) does try to convince P that not-X.
<br/><br/>
P refuses to believe Q despite Q giving good reasons.
<br/><br/>
Q recognizes P's false belief structure, and understands that by <b>telling untrue fact Z to P, P will make a choice in their best interest</b>, owing to their distorted beliefs.
<br/><br/>
Q tells fact Z to P - that is, <b>Q lies to P</b> - and P makes a better decision than if Q had told P the truth.</i>
</td></tr>
</table>
<br/><br/><br/>
Back to the rock climbers. If you're <i>not</i> a jerk, you go over to visitors and say, "I heard you talking about your route. I have to tell you, this whole cliff is kind of crumbly but the obvious route is really dangerous and people have died on it. I really think you shouldn't try it."
<br/><br/>
As they sort their gear, the visitors scoff. "Ha. I don't think so. These local yokels might be scared of it. Or maybe they just don't want outsiders climbing their route. People told us the locals here are liars. If you're a local I'm not going to believe anything you say. Are you?"
<br/><br/>
If you tell them the truth, you will harm them. If you <i>lie</i> and say "Nah, I'm visiting for the weekend from L.A. and a friend of mine there knows someone who died on that route," then maybe they'll listen. If you are scrupulous and stick to the truth, and they dismiss you as a local, climb, then fall and die, you would be pretty immoral to say "Well, their false belief put me in a bad situation, and by refusing to lie - which left them in more danger and led to their deaths - I made the right decision."
<br/><br/><br/>
<b>LYING TO PEOPLE TRYING TO HARM YOU</b>
<br/><br/>
A final abstraction, for Bad Assumption #3, cooperation independence. Here, Q is again a bad person; maybe even a Nazi at Anne Frank's door.
<br/><br/><br/>
<table border="1"><tr><td>
<i>P knows that Q intends to harm them.
<br/><br/>
Q's harming them requires information about P, provided by P.
<br/><br/>
P tells C to Q, knowing that not-C is actually correct. P lies to Q with the intention of protecting themselves or others.</i></td></tr>
</table>
<br/><br/><br/>
One way to think about this for the naive-theory-of-lying crowd: if lying is on the spectrum of violence, then when someone intends to commit violence against you, lying to them is a form of self-defense, and in fact a much better one than the physical violence one might otherwise have to employ. This stands independently of the rest of the argument and is consistent with the naive theory of lying.
<br/><br/>
Back to the cliff. You, the local, are back to being a jerk again. In fact, you've repeatedly gotten visitors to try to climb the most dangerous route by telling them about the view, so that when they fall and die, you can collect their gear. ("Hey, I'm not lying to them! Not my problem if they come to the cliff and are careless about the rock quality on the route!") But the authorities have started to suspect someone is doing this intentionally, so now in your pre-climb conversations with your victims, along with inducing them to climb by extolling the view, you wheedle out of them whether they have any connection to law enforcement. Of course this time, the pair of climbers that comes is indeed law enforcement, but undercover. When you ask, they say "No." They're trying to stop you from doing this to other people, and by identifying themselves, they couldn't do that. They are behaving morally by responding to your violence with defensive, much less severe violence, and trying to stop you from harming others. (Naive-theory-of-lying people: would it matter if instead they technically didn't really lie and instead said "Hey, do we look like law enforcement?" To a five-year-old who doesn't understand theory of mind, possibly.)
<br/><br/><br/>
<b>THE MORALITY OF LYING AND INTENTION</b>
<br/><br/>
An improvement on the naive theory of truth-telling is this. <b>If we intend to help people and intend to avoid harm, we should say things that will accomplish these things.</b> Intention is quite important, because as with other actions, it predicts the speaker's future actions. The default assumption should be that this is almost always accomplished by telling the truth. However, once you have evidence that telling someone the truth will not help them or even harm them, and/or that lying in an extremely limited way will help (or that you're dealing with someone with bad intentions, i.e. who's not cooperating with even your basic safety), it is acceptable to say untrue things. We can call this theory of intention and lying <b>helping by intentionally creating an accurate model - HICAM.</b>
<br/><br/>
In those rare instances where we lie to benefit someone, we can call those pro-social lies. (Intentionally differentiated from white lies - more on that below.) But we can also categorize the ways of causing harm by speaking.
<ol>
<li><b>Active lying or <a target=_blank href="https://ieet.org/index.php/IEET2/more/Messerly20170206">bullshitting</a>.</b> Actively creating a false impression without the intention of helping someone.
<li><b>Letting weeds grow.</b> Allowing false beliefs to persist with the intention of harming someone.
<li><b>Manipulation WITH the truth. </b> Telling the truth in a way that one intends to create a false impression, often by using pre-existing false beliefs.
</ol>
Notice that this definition does not justify "white lies", nor have I used the term. A working definition of a white lie is a lie that spares people's feelings and otherwise has no effect. I might seem to be siding with the naive-theory-of-lying people when I say that white lies are quite dangerous, for the reason that emotional impact absolutely is an important effect, and psychologically, it's a bit too easy to avoid difficult conversations by telling ourselves we're just telling white lies.
<br/><br/>
Moral thought experiments (including this one) often use exotic examples, although I bet more people were rock-climbing today than switching trolleys between tracks. But examples in your own life likely abound. That said, if you have kids, you have very likely told a few half-truths or outright whoppers to motivate them, keep them out of trouble, or otherwise improve an outcome when their little brains would likely not have responded as well to a carefully marshaled rational argument. Why? Because children's agency is poorly formed (Bad Assumption #1.)
<br/><br/>
The same is likely true if you have family members or clients with some neuropsychiatric illness who would otherwise not be cooperating with you or otherwise make horrendous, unintentionally self-harming decisions; for example dementia. Some years ago, a relative of mine with dementia eventually progressed to having no short-term memory. This person was quite attached to her family home. She lived in a nursing home for about two years but believed the whole time that she had just arrived only a few days prior, and would shortly be returning home. Of course the home (which had fallen into disrepair as she deteriorated) had quickly been sold. Most of her visitors avoided the question of how long she had been there or commenting on the house, but at one point a well-meaning person, realizing this lady thought she just arrived and that she would shortly be returning home, told her the truth. This resulted in an emotional meltdown and suicide attempt. The next day of course, my relative had no memory and again thought she had just arrived and would soon be returning home. Was her well-meaning visitor a moral person? Would she have been moral to repeat this episode?
<br/><br/>
There are many milder but more common versions of this. Someone adheres doggedly to a certain authority. Someone dismisses you because you do not, or you're the wrong religion, ethnicity, political affiliation, etc. These are themselves not very pleasant reasons to be disbelieved, but what is your responsibility here? Say you own an auto shop, and a customer you don't like brings their car in. Inspecting it, you notice that the brakes are about to fail. Because you don't like this customer and you know he's a racist, when he comes to pick up the car to find out what work needs to be done, you intentionally send a black employee (who's in on it) to tell him he needs his brakes replaced, expecting full well the customer will refuse because a black person is telling him this, drive off, and crash when his brakes fail. Yes, he's no prince charming, but if someone told me that story, with that clear <i>intention</i>, I would worry about the shop owner's character and not want to be around him. (In point of fact, when someone is worried that a message will be ignored or taken the wrong way because of something like this, they do often use a messenger more likely to be taken seriously and accurately.)
<br/><br/>
And finally, ask any physician how they motivate their patients with low motivation, cultural barriers, or poor health literacy to do things that will keep them alive. It's hard enough in primary care. Try psychiatry! The temptation to severely spin the truth to improve outcomes for your patient arises frequently, and sometimes wins.
<br/><br/><br/>
<b>PROBLEMS AND FURTHER OBSERVATIONS ABOUT HICAM</b>
<br/><br/>
The obvious (and correct) objection to any non-absolutist model of truth-telling is that it creates a very slippery slope. When you free yourself from a commitment to absolute truth, it becomes maybe a little too easy to justify fibbing in what you've convinced yourself is someone else's actual best interest. Consequently, following these rules, you should still expect opportunities for pro-social lies to come up quite INfrequently, and you also have to commit to real honesty with yourself about your motivations for telling what you believe is a pro-social lie. You have to accept that when you tell a lie, even when you think you've found a pro-social motivation, you're probably deceiving yourself. The analogy here is to uber-empiricist Hume's statement that (paraphrasing) despite his identification of the problem of induction, still, if you think you've found a violation of the laws of the universe, you've probably just made a mistake.
<br/><br/>
<center>
<img width=75% height=75% src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgaMcRjnP7Ee8ij5ghskM1ykYmugin8Qrk_Fg99L0U91QUq1rqiBqyhiALhKGaFHmNgUdiTis4IViPqsAJqn4VNaaBSmkuWj49qQVOh9VWdoAfh3iJ_r0ao1piM9wnZQlQbYpJ9eALZ6gFM/s1600/rationality+types.png" />
<br/><br/>
<i>Above: Left, how most of us think of the relationship between epistemic and instrumental rationality. Right, a more accurate scheme.</i></center>
<br/><br/>
However, HICAM does relate to some distinctions made in epistemology and to observations from the psychology of mood and rationality. Epistemic rationality is what we usually think of as rationality: making a valid argument. Instrumental rationality is action that increases utility, with no semantic component. I am being epistemically rational when I can describe and predict a thrown object's course mathematically; I am being instrumentally rational when I catch it without thinking of that (and so is a dog.) People tend to think of the two rationalities as separate domains or "two sides of the same coin", but a better argument is that epistemic rationality is a subset, a special case, of instrumental rationality. This shouldn't be controversial: speech and thought are actions. Consequently there will be times - rarely - when the actual outcome of computing someone else's statement will be different from what would have occurred if it were computed and acted upon rationally (as with a person who holds false beliefs - Bad Assumption #1, identical agency.) Saying something false or invalid to get someone to do something good for them is one of the rare times that speech is ONLY in the realm of instrumental rather than epistemic rationality (see below.)
<br/><br/>
<center>
<img width=35% height=35% src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgmzlrXLOvClcxpPJs5tXiWNqNYao8sBx4wmop4wty_EGh8vhVb7v5fBRfLn2-tjIMAxc3LyBO-Y8hv1X3QXxtlh6y1HZuUC-QcDkNyFH3gp9Aqae_7traxfNvTHsQ3XxP8_v_ENBa8nGiS/s1600/prosocial+lies.png" />
</center>
<br/>
What's more, there's an interesting finding in psychology described as <a target=_blank href="http://en.wikipedia.org/wiki/Depressive_realism">depressive realism</a>, where depressed people actually make better predictions about their own performance than non-depressed people. In seeming conflict with this is the robust finding that <a target=_blank href="https://link.springer.com/article/10.1007/s10677-018-9894-6">optimism predicts success</a> (Bortolotti, 2018.) It's as if we have to choose between seeing reality as it is and being depressed, or being delusional and happy - and, most perplexing, successful. Fellow psychiatrist-blogger Scott Alexander uses the analogy of <a target=_blank href="http://slatestarcodex.com/2017/09/12/toward-a-predictive-theory-of-depression/">mood as being like an advisor</a> who motivates a client or pulls them back based on historical performance. Depressed mood is like the advisor of someone who always fails, telling them never to try anything, because history predicts they will fail again; contrast this with the advisor of someone who always succeeds (happiness, optimism.) Here we see another domain where optimizing more for instrumental rationality (the less rational but more motivating optimistic beliefs)[2] produces better outcomes than optimizing for epistemic rationality (the glum, accurate beliefs.)[3] All this is to say, the occasional leaking of speech out of the epistemic domain into the purely instrumental domain - prosocial lies - is entirely compatible with what we know about human behavior. We can think of the distorted beliefs held by optimistic people as prosocial lies we tell ourselves.
<br/><br/>
All this justification of making false statements as long as they "work" is likely to make rationalists squirm, and indeed the alert reader with supernatural religious convictions might say: "Even if you think religion is false, don't HICAM, and especially your argument in favor of prosocial lies, justify believing it? Isn't religion actually the best example of an instrumentally helpful, though epistemically irrational, belief?" Indeed I wrote about exactly this problem some years ago, arguing that <a target=_blank href="https://luckyatheist.blogspot.com/2009/04/thought-experiment-fake-safety-net.html">as a set of untrue statements - lies - religion is immoral, even if it sometimes inspires good acts that otherwise would not have occurred.</a> (That's one of many reasons.) How can I make such a claim, but defend HICAM? Again - we should expect pro-social lies to be very rare, and to require strong justification. Contamination by selfish, anti-social motives is always a danger. Therefore the argument is really a quantitative one - we might expect to tell a pro-social lie a couple of times a year, rather than fill an entire book with them. Much more than that, and it's overwhelmingly likely that most of them are actually someone telling plain old lies, <i>anti-</i>socially. So if you ever catch me intentionally building a whole delusional world around someone that I claim is in their best interest, I would certainly be acting out of immoral intentions.
<br/><br/>
Finally: while I've written a lot about the harm that can come from saying true things to a person whose thought process is distorted by false beliefs (thus leading that person to actually make bad decisions, despite having been told the truth), this is less often a problem than it otherwise might be. The reason is that people who verbally claim false beliefs often find reasons not to act on those claimed beliefs. Example: someone says to you "my favorite football team will definitely 100% win the game tomorrow, since the previously injured quarterback is back in the line-up." But it turns out that two minutes ago it was announced that actually, the quarterback will NOT be playing tomorrow. As an unethical person, you rush to lock down the bet while your interlocutor holds a false belief (letting weeds grow.) But suddenly they get cold feet, often with a disingenuous "Gambling is immoral" or "I don't want my fandom to be polluted with money", etc. In fact humans have lots of "speed bump" heuristics to keep false beliefs from propagating too far and to keep us from overcommitting, even though epistemically we can't really explain them that way (see <a target=_blank href="https://thelateenlightenment.blogspot.com/2017/10/endowment-effect-as-rational-strategy.html">the endowment effect</a> for one such example.) It's interesting to note that it's often the newly converted, who don't yet have the speed bumps specific to a new set of beliefs, who get into trouble. On the other hand, there are people with severe psychiatric illness, whose brains are physically different from most other humans'. They really believe their delusional beliefs, judging by how they endorse them with action, with no speed bumps.
<br/><br/><br/>
<b>REFERENCES</b>
<br/><br/>
Bortolotti L. Optimism, Agency, and Success. Ethic Theory Moral Prac (2018). https://doi.org/10.1007/s10677-018-9894-6
<br/><br/>
Varden H. Kant and Lying to the Murderer at the Door...One More Time: Kant's Legal Philosophy and Lies to Murderers and Nazis. J Soc Philos, Vol 41(4) Dec 1 2010.
<br/><br/><br/>
<b>FOOTNOTES</b>
<br/><br/>
[1] To be clear, this is not a claim that all transactions are immoral. When we trade, we necessarily hold different valuations of the things being exchanged; otherwise the trade would be irrational. As real objects will inevitably have different values to different people in different situations, this is not an obstacle to moral, rational trading. However, if one is trading a more abstract entity that only holds value in terms of its tradable utility, or predicting an outcome that is only connected through arbitrary agreement, and especially in zero-sum scenarios, then objectively incorrect valuations by one party are likely to play a larger role in the trade. Case in point: bets, commodities, or stock options.
<br/><br/>
[2] Assuming that our delusional (but success-producing) optimism has been selected for by evolution, I often amuse myself by wondering whether, if <i>Homo erectus</i> could understand psychiatric nosology, they would view their descendants (us) in horror, running around manic all the time as we might appear to them.
<br/><br/>
[3] Of course different levels of optimism or pessimism are rational for different risk:benefit scenarios; just as in game theory, penalty and payout determine the most rational strategy. Case in point: one of the things that cognitive behavior therapy for social anxiety aims to do is make people re-evaluate the actual risk of social interactions. So what if someone doesn't like you or won't say yes to a date? Does it physically harm you? The payout, while unlikely, is high, and the risk (once you get past your anxiety) is almost zero. On the other hand, this would be a terrible approach for rock-climbing. Antonio Gramsci (quoted by Steve Hsu on his blog) expresses this nicely: "Pessimism of the intellect, optimism of the will."
<br/><br/><br/>
<b>Procrastination Variants: Narcissistic Subtype</b> (2018-09-02)
<br/><br/>
<b>This post is research only and should not be taken as medical advice or treatment recommendations.</b>
<br/><br/><br/>
Many things in psychology are multicausal and/or have subtypes. Initially this causes difficulties in trying to study them. Lumping together different illnesses has obscured the truth many times in the history of neuroscience and mental health, and is certainly still doing so now. (In the early twentieth century most physicians would have considered schizophrenia, autism, intellectual disability, and dementia the same thing; now most educated Westerners have some idea that these are at least different conditions, even if they couldn't name the symptoms.)
<br/><br/>
Procrastination is a problem for a lot of people that gets surprisingly little attention in the psychology literature, relative to its prevalence and the amount of suffering it engenders. A simple model relies on executive dysfunction affecting set-switching, and it works like this. You want to accomplish B, but you have to do A first to get to B. A is unpleasant and merely an instrumental goal. If you have poor executive function, you can't get yourself to start doing A; you "put it off". Or, you're already doing X, which although unrelated is much more fun in the moment than A would be, so you REALLY can't get yourself to start.
<br/><br/>
No doubt this model usefully describes many people's experience, and an executive function deficit probably plays some part in just about every chronic procrastinator, even those best described by the model I'll advance below. But the pattern many people describe has several inconsistencies suggesting that what's really motivating the procrastination is avoiding the threat of ego injury - especially in narcissists, to whom any damage to self-worth from being less than perfect in a core value is destructive and terrifying.
<br/><br/>
I've made a number of observations - from scouring the limited literature, as well as introspection, observation of patients, and reading others' introspection - that suggest to me that for a large subpopulation of procrastinators at least, the problem is driven mostly by character rather than executive dysfunction. Even before looking at the literature, based entirely on my observations in the clinic, I noticed a commonality in the patients who would complain of procrastination difficulties. They're usually male, and middle-aged or younger. They often display a degree of alexithymia, or even more interestingly, very specific alexithymia toward anxiety only - they either never notice that they feel anxious, cannot name it when they do, or actively deny feeling anxious. I often suspect that this is motivated by anxiety being an ego-dystonic emotion in male narcissists (to be anxious is to be weak, which is unacceptable.) In treating these patients, I've measured their symptoms and progress with the <a target=_blank href="https://surveyyourself.files.wordpress.com/2018/01/irrational-procrastination-scale.pdf">Irrational Procrastination Scale</a> (hereafter IPS), though you can also find the Pure Procrastination Scale (Steel 2010) and two comparisons of the instruments (Svartdal et al 2016, Svartdal and Steel 2017.) (Steel has <a target=_blank href="https://procrastinus.com/">his own site here</a> with more information.) I've never taken objective data on narcissistic personality, though the instrument most commonly used is the Narcissistic Personality Inventory (the NPI.)
<br/><br/>
Literature review shows two things: a small literature investigating possible procrastination subtypes, and a tiny but intriguing signal about a narcissism-procrastination connection. There are a few more papers indexed by procrastination and compulsive personality, among them Primac's paper showing the success of a brief therapeutic intervention in compulsive personality in decreasing both narcissism and procrastination. Of the three procrastination subtypes noted in the literature (avoidant, arousal, and decisional), narcissistic procrastinators as I describe them below would most closely match the avoidant subtype. Some studies have found differences between the subtypes, for example in their activity at different times of day (Díaz-Morales et al 2008.) However, Steel in 2010 performed a meta-analysis concluding that there is no evidence for the subtypes as distinct entities. Lyons and Rice (2014) reported on the avoidance and arousal procrastination subtypes specifically and found relationships with secondary psychopathy and the Entitlement/Exploitativeness facet of the NPI. In contrast, Nawaz et al (2018) did not find a correlation between the IPS and the NPI. Shame is known to be at the core of pathology in narcissism, and Fee and Tangney (2000) found correlations in procrastinators with shame, but not guilt. Wohl et al (2010) found that students who forgave themselves for procrastinating while studying were better able to overcome study procrastination in the future, again suggesting a role for shame in the behavior. Mann (2004) noted avoidance effects proportionate to narcissistic injury in undergraduates. There is a slightly stronger signal for procrastination and obsessive personality - suggesting a common thread of perceived poor self-efficacy. A study comparing procrastinators versus non-procrastinators did not find differences in the cognitive abilities measured, but did conclude that "Further research must provide evidence for persistent procrastination as a personality disorder that includes anxiety, avoidance, and a fear of evaluation of ability" (Ferrari 1991.)
<br/><br/>
What all this strongly suggests is that narcissism plays a role in procrastination, if not in all impacted procrastinators, then in a significant subpopulation. In addition to the literature cited here, here are the observations I've made of patients that support a narcissistic subtype of procrastinator and a mechanism for the behavior.
<br/><br/>
<b>CLINICAL OBSERVATIONS</b>
<br/>
<ol>
<li><b>Some procrastinators have reported that being sick or sleep deprived makes it easier for them NOT to procrastinate.</b> This flies in the face of the executive dysfunction hypothesis. They say, basically, "I'm already miserable, so why not just do the thing I don't want to do." This suggests that what they're avoiding with procrastination is something that makes them feel generally bad, and when they already feel that way, there's no point in avoiding the task.
<li><b>This subtype of procrastinator doesn't just forget about the task.</b> They don't simply forget to do it, or remember it but not feel like it and put it out of their minds. It's actually continually on their minds while they're avoiding it. This is also very unlike executive dysfunction.
<li><b>Many procrastinators have the experience of having TWO things they're procrastinating about, and they switch which one they're avoiding. This pattern is possibly the most instructive of all of them here, because of how little sense it makes without this model.</b> For example, someone is supposed to do A all day, but avoids it. Then a deadline approaches for B (say, they're supposed to start getting ready to leave for an important meeting.) Then they actually do start doing A! If they're concerned about being injured by not performing perfectly at these activities, at some point anticipation of B (which they think they will do badly at) builds so much that they need some distraction. Now, the prospect of failing at A is further away and therefore not as painful, and they'll be partly distracted from A by impending B anyway - but more importantly, they'll be distracted from thinking about failing at B by doing A half-assed (and in narcissism, there is often constant activity to avoid feelings of worthlessness through superficial productivity.) Tim Urban's <a target=_blank href="https://waitbutwhy.com/2015/07/why-im-always-late.html">description of this phenomenon is here</a> in cartoon form. (Briefly: it's time for him to leave for an appointment - Task B - when the Procrastination Monkey says "that work you were trying to do all day, I've changed my mind and suddenly I'm into it.") Note that strong focus on another task is not what you would expect from impulsivity either.
<li><b>Sometimes, once a procrastinator finally begins working on the avoided task, they explode with anger if they have to move on to something else.</b> At first glance this would appear to be a perfect example of poor set-switching and therefore eminently explainable by the executive dysfunction model (autism spectrum people do this too), but you can actually differentiate based on the nature of the new task: the narcissistic procrastinator would resist less if the switch is to a self-worth-supporting activity, while to the autistic person the nature of the new task would not matter. In fact, the more fun-for-its-own-sake the new task is, the more the narcissistic procrastinator would resist (don't you dare ask them to play video games once they finally get started on the previously-avoided task! But asking them to switch to a boring, important tax document might be alright.)
<li><b>Procrastinators often use words to describe the way they feel like "worthless" or "useless", classic for wounded narcissists.</b> They absolutely consider their procrastination a huge problem. Contrast with ADHD patients who avoid work, and for whom the avoidance is often fairly ego-syntonic.
<li><b>Many procrastinators describe doing pointless, un-fun busywork while worrying about the thing they're supposed to be doing.</b> Tim Urban has a related idea called the "dark playground", but on the dark playground you can do fun things (while feeling guilty about them); I'll call this domain "busywork purgatory". Tellingly, it's always meaningless busywork. It's never something fun (well I'm not doing it, might as well play video games); it's not something else important. It's trivial, and it's usually something continuously attention-occupying that can be completed that day (for a burst of that feeling of accomplishment.) Many people have experienced the urge to clean the dorm room instead of studying, but dorm room cleaning is actually more useful than most busywork purgatory activities.
<li><b>A culturally-influenced aspect: in modern America there is a premium on productivity and success over most other characteristics.</b> In a culture where (for example) loyalty to religion or family is most prized, I would expect that instead of busywork purgatory, the procrastinator gets stuck in prayer purgatory or doing-things-for-your-family purgatory, to prop up their self-worth.
<li><b>Probably the most disabling impact of this procrastination subtype: people <i>reverse prioritize</i>, spending more time on unimportant activities and starting them earlier and more easily.</b> I've heard procrastinators say that they can tell how important they think something is by how easily they work on it, or how relaxed and creative they can be about it (see this Tweet by <a target=_blank href="https://twitter.com/vgr/status/1029560680305704961">someone who appears to be admitting to reverse prioritization.</a>) With executive dysfunction alone, you would expect a random ordering of work with respect to actual priorities, as opposed to a reverse ordering. <b>Procrastinators are therefore often able to be quite productive at something that is not important to them. Paradoxically, if their productivity and success lead that thing to become a central part of how they measure their self-worth, they will start procrastinating at it.</b> People describe starting to feel "trapped", and that's when they start to procrastinate. I would argue the behavior is not reactance but rather avoidance of ego-threat. Again, pure executive dysfunction would predict a random ordering or a tendency to always do "shiny" fun things, not a reverse prioritization favoring whatever is deemed unimportant.
<li><b>Some procrastinators suggest that recent successes make them less likely to procrastinate,</b> possibly because suddenly they have an expectation of a more positive outcome as a result of their efforts, rather than only negative outcomes (and this thought distortion is actually <i>reinforced by reality</i> in previous instances; part of the problem is that their outcomes <i>really are</i> negative, and they've taught themselves this quite effectively.) This also suggests that not only do narcissistic procrastinators envision a negative outcome in the end, they also get no positive reinforcement from the intermediate steps along the way, because there's no feedback in the form of external praise. Thinking about it this way, there's literally no reason to start the task, because it will be at best neutral while you're doing it, and then bad when you finish.
<li><b>A bizarre compensation behavior I've heard from multiple procrastinators is the pattern of performing an otherwise important task out of context, after the fact, and alone (where it has no value to anyone.)</b> Bizarrely they will act like their completion of the task is exactly equivalent to having done it in the normal manner and time, all the while knowing exactly how childish and strange it is. I know of one person who was going to run a marathon, panicked because he thought he would do badly and didn't get up in the morning to go - but then showed up to the deserted starting line four hours late, ran the course, then actually emailed the organizers to yell at them - "What kind of a race is this? No aid stations?" (because they had long been taken down) "No one to hand me a finisher's medal at the end?" (Yes, because the race was over and everyone had gone home.) This person actually followed up for a while with angry phone calls and emails, <i>fully aware how ridiculous it must sound</i> but feeling compelled to do so; he said if he <i>hadn't</i> done this he would've felt "weak", classic for a male narcissist. Another example of this bizarre behavior was one procrastinator who routinely waited until after customer service lines shut down for the day to call (his bank, to change his password, etc.) He wasn't aware of doing it intentionally, but repeatedly noticed that it was 5:02pm, and it was time to call his bank. He would leave angry messages if there were voicemails, post tirades on companies' social media feeds, etc. He said he noticed he felt a strange satisfaction and even comfort in getting angry, and admitted to being oddly disappointed on those occasions when he called and got a live person who could help him. He stated the task was usually one where he wasn't sure if he would be successful or would know how to "navigate the system" to a successful outcome.
<li><b>The types of tasks that this subtype procrastinates on are rarely solo activities, or tasks with certain outcomes.</b> Going for a solo hike, even one which involves complex planning, does not threaten to ego-injure the person with possible failure, nor does it provide an opportunity to fail in front of anyone else.
<li><b>If the person is angry at someone, especially an authority figure telling them to do the task (less so a competing peer), the procrastinator will engage in very thinly-veiled passive aggression by doing a previously-avoided task well and on time,</b> often fantasizing that they are frustrating the expectations of the authority who expects them to be late. Anger at authority cannot be the sole explanation, since the person knows that they are doing exactly what the authority wants. That they are less likely to behave this way toward peers is better explained by fear of ego-injury from credible critics: low confidence in success in front of peer competitors who are credible critics is less tolerable than in front of, say, a boss the person is angry at and no longer considers credible. While sympathetic activation could be responsible for the focus (again, arguing for a simple executive dysfunction model), anger is known to focus narcissists.
</ol>
<br/>
<b>Synthesizing these observations: the narcissistic subtype of procrastination is not the same as the avoidant subtype, but of the three traditionally considered subtypes, avoidant is the most similar. The model for the narcissistic or ego-threat subtype of procrastination is as follows.</b> A person with some traits of a fragile narcissist, likely in the context of some executive dysfunction, encounters a task they have to do. This task is part of a series of actions leading to a goal they intellectually want, which they consider quite important to their core identity - something that reflects on who they are and want to be, in front of other people, especially those who can credibly criticize them. Because they have poor confidence and/or unrealistically high standards, they feel they are likely not to succeed. Given their character structure of a fragile self-worth that must be propped up with perfect external achievements, this is a profound ego-threat, and they feel anxiety contemplating the outcome of the task. Consequently they avoid doing it, but not thinking about it. They substitute either activities which distract them with continuous activity and certain near-term positive outcomes (no matter how trivial), or another otherwise-avoided important activity, but one whose deadline is farther in the future (and therefore whose ego-threat is farther off as well.) They finally undertake the avoided task when the time remaining is so short, and the threat looming so immediately, that their awareness of the damage they're doing to themselves overwhelms the comfort they get from distracting themselves. If there is some way for the person to feel they can tell others they completed the task but WITHOUT exposing themselves to criticism and ego-threat, they'll do that, sometimes even if it's patently ridiculous; e.g. doing the task when no one else sees them and after it no longer matters. If the person already feels bad in general (from physical illness) or angry, even at an authority figure telling them to do the task, they are paradoxically more able to complete the task.
<br/><br/><br/>
<b>TREATMENT</b>
<br/><br/>
My complaints about the paucity of procrastination research are partly driven by having treated it in my own practice, and having to round up what little evidence there is and then use "clinical judgment" for the rest. To round up the pharmacotherapy options: there are basically none, and in particular, there are none for the subtype I propose here. There is very indirect evidence for amphetamines (in one paper, college students abusing amphetamines reported less procrastination), but again, if these are all types of procrastinators mixed together, such an indistinct, smeared-out result is exactly what you would expect to see. The following is <b>not a treatment recommendation</b> - but I tried propranolol with a patient who had comorbid non-pathological social anxiety; he used it a couple of times and thought it helped, but he was much more successful with CBT (more on this shortly.) There is no evidence on Pubmed for other stimulants, benzodiazepines, beta blockers, or the SSRIs and SNRIs available on the US market. I've had people report that caffeine makes them work faster and focus, and lifts their mood, but after it wears off they realize that caffeine just helped them do more tasks in "busywork purgatory" - it didn't help them focus on the true high-value tasks.
<br/><br/>
The best evidence for successful treatment is for psychotherapy, specifically CBT, which is also what has far-and-away worked the best in my own experience. Rozental et al have two studies showing, among other things, that in-person CBT with a therapist is equivalent at end of treatment to internet-based self-guided CBT, but the in-person patients maintain their improvements better over time. Improvement was over a full standard deviation from the control (!), but only about a third of participants improved - also consistent with my own experience that it doesn't help everyone, but the ones that get it, really get it.
<br/><br/>
These studies do not differentiate by subtype or provide information that would let us infer the relative benefit for narcissistic vs. other mechanisms of procrastination. So what would I expect to be the most successful approaches in CBT for narcissistic procrastination? (These therapeutic maneuvers are inferred from the model of narcissistic-subtype procrastination above, but should be tested empirically in controlled studies and therefore remain speculative.)
<br/>
<ul>
<li><b>Exposure therapy for failure and criticism of your core attributes.</b> As a therapist - have the patient make a list of the things they consider core important attributes, skills, and values they offer, and people who are qualified to evaluate them against those standards. Perform role-play or imaginal exposure.
<li><b>Learn to identify the anxiety that comes up when you start to avoid something - name it and develop a counter-habit, like working on the task for five minutes.</b> As a therapist - have the patient tell you tasks that they procrastinate on. "Ambush" them during therapy by mentioning one of them out of the blue, then hit "pause" and ask the patient what it made them think of and how it made them feel. Keep a journal of the times they successfully fought back against the feeling outside of therapy and worked for at least five minutes (track only the successes, not the failures.)
<li><b>Enlist a significant other, roommate, family member etc. to check up on you and give positive support when you finish tasks.</b> (And don't avoid asking them out of shame, worrying you'll appear weak, etc. which is why this usually doesn't happen.)
<li> <b>Develop a habit of remembering that the individual steps do have value, even if you have to imagine others praising you for completing them.</b> Envision a realistic positive outcome and how it will feel. Break things down into very very small steps, remind yourself this is how successful people do it (don't minimize by saying that means you're weak) and then pay attention to how you knock out these tasks - a success spiral.
<li><b>Radical acceptance and forgiveness - we have certain abilities, we're going to screw up sometimes, and we're fine the way we are.</b> Be consciously aware that castigating yourself mentally is not going to help you change, and in fact will do the opposite.
</ul>
<br/>
<b>AFTERWORD: Why is Procrastination Seemingly So Much More Prevalent Now?</b>
<br/><br/>
Procrastination is certainly not new in this or the last century. What <i>does</i> seem to be new is the number of people affected by it, and there's probably an easy answer for why that would be so. When you're stuck in the Malthusian grind like most of our ancestors were until about a century ago, your life is a series of constant emergencies, and we should expect that our brains are adapted to focus in this way, on near-term impending disasters with short time horizons and only a few concrete elements. (Starvation, fights, etc.) And indeed procrastinators often do quite well under pressure - they often report this as an excuse early on in their lives for why they always work up to the deadline, until they're honest with themselves about how out of control their behavior really is. (This is also borne out in the literature on the arousal subtype of procrastination.) It's interesting that Stoicism as a coherent philosophy of classical antiquity was largely a philosophy of patricians, and its texts are full of subtle status signals, their authors complaining that it was hard for them not to waste their time, i.e. not to procrastinate with trivia. This might have been a problem for an emperor or senator, but a subsistence farmer in a Roman province rarely had the luxury of stretches of time without highly activating direct threats to survival. Today we all live better than senators did in that era, which is to say we all have stretches of unstructured time and no threats to our survival - although notice that the things that finally do motivate us, even in procrastination, are all perceived threats.
<br/><br/>
There is also speculation that narcissism has become more prevalent as time has marched on. This is less than a settled point and I won't go into the debate here, but if that's the case, and narcissism does contribute to procrastination, you would expect to see more procrastination.
<br/><br/>
A third possibility is the cognitive parallel to the hygiene hypothesis. Immune systems, when not challenged sufficiently by invading pathogens, get very paranoid, and are more likely to mount autoimmune attacks. In the comparatively sterile modern environments where we now live, this is a problem. In the same way, in the absence of constant emergencies, the human threat detection system has more false alarms, and the one threat that does still exist is the threat of criticism, disapproval, and being perceived as weak (especially if you're male.) While such disapproval in the paleolithic could result in your death if you were thrown out of the tribe, today it seldom means anything of the sort. Un-learning our exaggerated social threat responses will likely be one of the central mental health tasks of the twenty-first century.
<br/><br/><br/>
<b>REFERENCES</b>
<br/><br/>
Díaz-Morales JF, Ferrari JR, Cohen JR. <a target=_blank href="https://www.researchgate.net/publication/51423113_Indecision_and_Avoidant_Procrastination_The_Role_of_Morningness-Eveningness_and_Time_Perspective_in_Chronic_Delay_Lifestyles">Indecision and avoidant procrastination: the role of morningness-eveningness and time perspective in chronic delay lifestyles.</a> J Gen Psychol. 2008 Jul;135(3):228-40. doi: 10.3200/GENP.135.3.228-240.
<br/><br/>
Fee RL., Tangney JP. <a target=_blank href="http://psycnet.apa.org/record/2002-10572-013">Procrastination: a means of avoiding shame or guilt?</a> J Soc Behav Personal. (Special issue: Procrastination: current issues and new directions). 2000;15:167–184.
<br/><br/>
Ferrari JR. <a target=_blank href="http://journals.sagepub.com/doi/abs/10.2466/pr0.1991.68.2.455?url_ver=Z39.88-2003&rfr_id=ori:rid:crossref.org&rfr_dat=cr_pub%3dpubmed">Compulsive procrastination: some self-reported characteristics.</a> Psychol Rep. 1991 Apr;68(2):455-8.
<br/><br/>
Lyons M, Rice H. <a target=_blank href="http://www.academia.edu/5845216/Thieves_of_time_Procrastination_and_the_Dark_Triad_of_personality">Thieves of Time: Procrastination and the Dark Triad of Personality</a>. Personality and Individual Differences Volumes 61–62, April–May 2014, p. 34-37
<br/><br/>
Mann MP. <a target=_blank href="https://www.sciencedirect.com/science/article/pii/S0191886903003015">The adverse influence of narcissistic injury and perfectionism on college students' institutional attachment</a>. Pers Individ Dif. 2004;36:1797–1806.
<br/><br/>
Nawaz H, Shah SIA, Mumtaz A, Sohail Chughtai A. (2018). <a target=_blank href="https://www.researchgate.net/publication/326668500_Alarming_trend_of_procrastination_and_narcissism_among_medical_undergraduates">Alarming trend of procrastination and narcissism among medical undergraduates</a>. From Researchgate.
<br/><br/>
Primac DW. <a target=_blank href="http://journals.sagepub.com/doi/abs/10.2466/pr0.1993.72.1.309?url_ver=Z39.88-2003&rfr_id=ori:rid:crossref.org&rfr_dat=cr_pub%3dpubmed">Measuring change in a brief therapy of a compulsive personality.</a> Psychol Rep. 1993 Feb;72(1):309-10.
<br/><br/>
Rozental A, Forsell E, Svensson A, Andersson G, Carlbring P. <a target=_blank href="http://psycnet.apa.org/record/2015-19938-001">Internet-based cognitive-behavior therapy for procrastination: A randomized controlled trial.</a> J Consult Clin Psychol. 2015 Aug;83(4):808-24. doi: 10.1037/ccp0000023. Epub 2015 May 4.
<br/><br/>
Rozental A, Forsström D, Lindner P, Nilsson S, Mårtensson L, Rizzo A, Andersson G, Carlbring P. <a target=_blank href="https://www.sciencedirect.com/science/article/abs/pii/S0005789417300898">Treating Procrastination Using Cognitive Behavior Therapy: A Pragmatic Randomized Controlled Trial Comparing Treatment Delivered via the Internet or in Groups.</a> Behav Ther. 2018 Mar;49(2):180-197. doi: 10.1016/j.beth.2017.08.002. Epub 2017 Aug 5.
<br/><br/>
Steel, P. (2002). <a target=_blank href="https://surveyyourself.files.wordpress.com/2018/01/irrational-procrastination-scale.pdf">The Irrational Procrastination Scale</a>. PhD Thesis, unpublished.
<br/><br/>
Steel, P. (2010). <a href="https://www.sciencedirect.com/science/article/pii/S0191886910000930?via%3Dihub">Arousal, avoidant and decisional procrastinators: do they exist?</a> Pers. Individ. Dif. 48, 926–934. doi: 10.1016/j.paid.2010.02.025
<br/><br/>
Svartdal F, Pfuhl G, Nordby K, Foschi G, Klingsieck KB, Rozental A, Carlbring P, Lindblom-Ylänne S, Rębkowska K. <a target=_blank href="https://www.frontiersin.org/articles/10.3389/fpsyg.2016.01307/full#B47">On the Measurement of Procrastination: Comparing Two Scales in Six European Countries.</a> Front Psychol. 2016 Aug 31;7:1307. doi: 10.3389/fpsyg.2016.01307. eCollection 2016.
<br/><br/>
Svartdal F, Steel P. <a target=_blank href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5676095/">Irrational Delay Revisited: Examining Five Procrastination Scales in a Global Sample</a>. Front Psychol. 2017; 8: 1927. Published online 2017 Nov 3. doi: 10.3389/fpsyg.2017.01927
<br/><br/>
Wohl MJA, Pychyl TA, Bennett SH. <a target=_blank href="https://www.sciencedirect.com/science/article/pii/S0191886910000930?via%3Dihub">I forgive myself, now I can study: How self-forgiveness for procrastinating can reduce future procrastination.</a> Personality and Individual Differences. Volume 48, Issue 8, June 2010, p. 926-934<br/>Michael Catonhttp://www.blogger.com/profile/01017910055699348111noreply@blogger.com3tag:blogger.com,1999:blog-4724592643224262209.post-72929386352635866172018-04-29T01:16:00.000-07:002018-04-29T01:16:04.815-07:00Psychiatrists Per Capita in the US, in Outpatient Practice Terms<center>
<img width=65% height=65% src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhpwljKponiq9J1W0UITUvfuHHsGM6xVCPsru0Ya-GhIXTw5NtGxFUo_-0ry4t0uH_C-aIHAA03t3AgUoGT1jLf8dgHb-vVPjfR0obNKjz5H5_eNBUV5AvQSGhmwdfCIZ8JZG9np3tYhtix/s1600/psychiatrists+per+capita.PNG" /></img>
</center>
The number is psychiatrists per 100,000. Data from <a target=_blank href="http://www.dartmouthatlas.org/data/table.aspx?ind=144&ch=&sort=asc&sortcol=1&loc=80,124,172,226,229,338,63,219,263,335,323,331,313,201,182,144,147,104,113,55,72,203,227,292,293,294,295,296,297,298,299,300,274,284,287,256,235,250,322,341,270,86,108,121,141,151,154,88,239,163,352,344,321,327,328,170,178,193,221,251,252,244,257,259,301,106,142,143,148,87,56,68,69,111,302,282,207,167,173,84,82,83,70,103,130,135,166,181,216,314,281,260,340,330,310,232,233,190,174,136,149,145,125,97,122,118,175,213,307,345,348,333,255,253,204,199,115,59,75,191,192,168,228,220,240,306,309,266,74,73,288,326,316,355,85,64,123,120,139,152,283,285,265,261,234,211,195,184,318,334,317,320,223,279,126,159,254,218,164,177,337,319,185,183,315,109,95,153,78,225,200,249,275,196,209,198,277,258,241,231,238,160,93,58,57,79,89,91,131,128,280,210,324,347,351,354,165,158,271,127&loct=3&tf=32&fmt=169">Dartmouth Health Atlas</a>. If you make simplifying assumptions, you can get an idea of what that means. 1 in 6 people lives with mental illness (current, not lifetime prevalence.) So if your city has 10 psychiatrists per 100,000 people, that means 1 psychiatrist per 10,000 people, and 1 psychiatrist for every 1,667 people with mental illness. How long would it take to see them all? The most under- and over-served areas are Oxford, Mississippi with 3.4 psychiatrists per 100,000 and San Luis Obispo, California with 36.5 psychiatrists per 100,000 (although I'll wager the latter is counting psychiatrists at Atascadero State Hospital.) If you assume all these people are seen on an outpatient basis, by psychiatrists working 48 weeks a year, 5 days a week, with sixteen 30-minute appointment slots per day about 2/3 full, then in San Luis Obispo a psychiatrist could see their whole share in a little over 2 months (that is, the average follow-up interval would be two months.) In Oxford it would be just under <b>two years</b>.
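The back-of-envelope arithmetic above can be sketched in a few lines. This is only a rough sketch using the simplifying assumptions stated in the text (1-in-6 prevalence; 48 weeks, 5 days, sixteen 30-minute slots per day, 2/3 full):

```python
# Rough follow-up-interval estimate from psychiatrists per capita.
# Assumptions (from the post, not real scheduling data):
#   1 in 6 people has current mental illness; psychiatrists work
#   48 weeks/year, 5 days/week, sixteen 30-minute slots/day, ~2/3 full.

def followup_months(psychiatrists_per_100k):
    patients_per_psychiatrist = (100_000 / psychiatrists_per_100k) / 6
    visits_per_year = 48 * 5 * 16 * (2 / 3)  # ~2,560 appointments/year
    return patients_per_psychiatrist / visits_per_year * 12

print(round(followup_months(36.5), 1))  # San Luis Obispo: ~2.1 months
print(round(followup_months(3.4), 1))   # Oxford, MS: ~23 months, just under two years
```

Under these assumptions the two endpoints of the Dartmouth data reproduce the numbers in the text: a bit over two months in San Luis Obispo, just under two years in Oxford.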
<br/><br/>
You'll note the long tail, which begins right around 15 per 100,000, and those locations are: Morristown NJ, Alameda County (Bay Area) CA, New Orleans LA, Honolulu HI, Springfield MA, Durham NC, Santa Cruz CA, Ridgewood NJ, Hartford CT, Portland ME, Hackensack NJ, Lebanon NH, East Long Island NY, Pueblo CO, Evanston IL, Baltimore MD, Washington DC, Bridgeport CT, New Haven CT, Bronx NY, San Mateo County (Bay Area) CA, Boston MA, Manhattan NY, San Francisco CA, White Plains NY, Napa CA, and San Luis Obispo CA. There's an obvious bias toward cities with academic centers and/or places where white collar workers like to live, although the last two locations (at least) also have large state psychiatric hospitals.Michael Catonhttp://www.blogger.com/profile/01017910055699348111noreply@blogger.com0tag:blogger.com,1999:blog-4724592643224262209.post-79096951313916573452018-01-07T16:50:00.002-08:002018-04-28T20:23:50.034-07:00How Steep is Your Empathy Curve?<blockquote><i>Let us suppose that the great empire of China, with all its myriads of inhabitants, was suddenly swallowed up by an earthquake, and let us consider how a man of humanity in Europe, who had no sort of connection with that part of the world, would be affected upon receiving intelligence of this dreadful calamity. He would, I imagine, first of all, express very strongly his sorrow for the misfortune of that unhappy people, he would make many melancholy reflections upon the precariousness of human life, and the vanity of all the labours of man, which could thus be annihilated in a moment. He would too, perhaps, if he was a man of speculation, enter into many reasonings concerning the effects which this disaster might produce upon the commerce of Europe, and the trade and business of the world in general. 
And when all this fine philosophy was over, when all these humane sentiments had been once fairly expressed, he would pursue his business or his pleasure, take his repose or his diversion, with the same ease and tranquillity, as if no such accident had happened. The most frivolous disaster which could befall himself would occasion a more real disturbance. If he was to lose his little finger to-morrow, he would not sleep to-night; but, provided he never saw them, he will snore with the most profound security over the ruin of a hundred millions of his brethren, and the destruction of that immense multitude seems plainly an object less interesting to him, than this paltry misfortune of his own. To prevent, therefore, this paltry misfortune to himself, would a man of humanity be willing to sacrifice the lives of a hundred millions of his brethren, provided he had never seen them? Human nature startles with horror at the thought, and the world, in its greatest depravity and corruption, never produced such a villain as could be capable of entertaining it.</i>
</blockquote>
- Part III, the Theory of Moral Sentiments, Adam Smith
<br/><br/>
If we are honest, we all must admit that there are some people in the world we care about more than others. While we're horrified (at least in our culture) about the idea of having a favorite child, almost everyone is pretty quick to say they'd rather save their child than a stranger that they've never met. But how about ten strangers? Or a hundred "millions"?
<br/><br/>
Of course people differ. Someone with no empathy for anyone including him or herself (autistic; badly depressed narcissist) would look like this:<br/><br/><center>
<img width=60% height=60% src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgLz5fG-4NsG3Adyi8mHCpgq3KgDePwtQrD0ECR1Ygzu7hnbjF046tGUu4nP1quZ6n2oFAWBjvDGglZSXbSKjFb4AN0UwlhXsp8mmaw-I4xrGiXFoXxn5WL4HHAd17oOAgaEmaGylLOWGxS/s1600/no+empathy+even+for+self.png" data-original-width="819" data-original-height="460" /></img></center>
<br/><br/>
A narcissist, psychopath or very young child would look like this.
<br/><br/><center>
<img width=60% height=60% src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgW3XYyFWQ4-JOnR8fsy5sue459NZuoPBJf7hVC6rdzMmlhS2j4PBzUc41daAx39h4LsYn2yqkgih59latVxSeUqvZieMUiJb7_l0fHv3z8dDpfWqQg9DV8cmtXEgJ6nmoaehj1642l50C8/s1600/no+empathy+for+others.png" data-original-width="819" data-original-height="460" /></img></center>
<br/><br/>
This is what most of us look like.
<br/><br/><center>
<img width=60% height=60% src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiCMi8Anadgfj_176SsvPoJir5cyvQfwWChy58RoaNQOX2CY5aK18-ZmKuKsStSx7PCFqWcDQcIBUXUAphV-88lCQmS6clad3ctFCNfV3r1YzLpSA1qlXJ_t_kOUP0ahaFiZ4_1CSZYK_Fb/s1600/empathy+family.png" data-original-width="819" data-original-height="490" /></img></center>
<br/><br/>
This is what many people want to look like, but probably don't. Scott Alexander noted that people on the left often make a point of showing empathy for people more unlike them. (Is this a stable strategy?)
<br/><br/><center>
<img width=60% height=60% src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEggkN2PDyD2M5lnOODbVaw5LMFDlw4VtnUbD3BzM-26JeeNE01rdc64IuCo3Ku5TFTx8MXqxz_KS97UVES9qHnIKBzF94h-3bekPIkcVP1rG-Jr9hKXhYm1Wg25mv55papu8b2xLzvLn0Mu/s1600/empathy+progressive.png" data-original-width="819" data-original-height="490" /></img></center>
<br/><br/>
And finally, this is what the Buddha demonstrates, and how some progressive people claim to act - equal empathy for all beings (on the far right presumably are nonhuman primates, other animals, plants, etc.)
<br/><br/><center>
<img width=60% height=60% src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiL0TwmkVWLroD8sOXBX1EWV0hnei64t6FVFf-Zc-znN2QHygTpMKfbZqJ8ut22n4mT3a-FLRQ-bxW0ZioF3XXzSM35xOgijUiuKDyj3QzlaJcB_QABqoacuyGgvEUxMXzktoPzsXT3PLkp/s1600/empathy+buddha.png" data-original-width="819" data-original-height="490" /></img></center>
<br/><br/>
Of course these are all quick-and-dirty qualitative graphs to give you an idea, but they illustrate the hallmark of empathy curves. For most of us, empathy curves are <i>sigmoidal</i> - there is an inflection point at some level of non-self. Questions raised by considering this relationship are:
<br/><br/>
a) Before you get all excited that the graphs above are showing some progression toward a desirable goal - is it <i>sustainable</i> to show the same empathy to all others - either zero (narcissists) or full empathy (the Buddha)? Or even to show empathy in <i>inverse</i> proportion to how much like you someone else is? Empathy has real world impacts and there are obvious sociobiological reasons why most people's curves look like the third graph, but from a purely practical perspective, if your empathic behavior leads to your rapid extinction, it doesn't seem to be effecting much good in the world. The steepness of the empathy curve also produces a lot of the current political divide in the West - i.e., the less able to abstract a principle beyond their ingroup, the more contentious a faction.
<br/><br/>
b) Empathy curves can change over time for an individual, and finding what else is different about individuals who undergo change (neuroanatomically, psychologically) may be informative. For example, in psychiatry, there is the concept of the "burned out" antisocial, the person who commits vicious crimes indicating low empathy when he (usually he and not she) is younger. Then after about age 40, the same person is much less likely to commit further violent crimes. My speculation is that these people are not burned out but rather finally "grown in", i.e. their orbitofrontal cortex has finally produced enough synapses to affect their behavior, in the same way that ADHD symptoms often fade into and through adulthood as the cortex matures (again, more often in males.) Many of us can think of anecdotal examples of a male who in his youth was a hell-raiser only concerned with himself, then transforms into a devoted family man - but he still has a very steep empathy curve that drops off once you move outside the family. (That guy who dotes on his daughter but was in a biker gang when he was younger? He might actually be a very good father - but he <i>still</i> doesn't have much concern about anyone's pain but that of his wife and kids, and if you're past his steep sigmoidal drop-off, you definitely don't want to test that.)
<br/><br/>
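For concreteness, the sigmoidal shape these graphs gesture at can be written as a logistic function of "social distance" from the self. All parameter names and values below are invented for illustration - e.g. the steep family-only curve is just a larger steepness with a smaller inflection point:

```python
# Illustrative sketch of a sigmoidal empathy curve (logistic form).
# x is "social distance" from self (0 = self); empathy falls off
# past the inflection point x0. All parameters are made up.
import math

def empathy(x, ceiling=1.0, x0=2.0, steepness=3.0):
    """Empathy felt for someone at social distance x from the self."""
    return ceiling / (1 + math.exp(steepness * (x - x0)))

print(round(empathy(0.0), 2))  # self/family: near the ceiling, ~1.0
print(round(empathy(2.0), 2))  # at the inflection point: 0.5
print(round(empathy(5.0), 2))  # distant strangers: ~0.0
```

The Buddha curve would be a flat line at the ceiling (steepness near zero); the narcissist curve drops to zero just past the self (small x0, huge steepness).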
I've looked for evidence of differing oxytocin levels or even ADH/vasopressin (or receptor mutations) in the literature about psychopaths and antisocial PD. If you accept crime as a proxy for low empathy, there's a small literature on testosterone's role, but it's not nearly as clear as you would think. For one thing, there's actually a meta-analysis that undermines the argument that testosterone is behind the pattern of a crime spike and then decrease after early adulthood (<a target=_blank href="https://www.sciencedirect.com/science/article/pii/S135917890400014X?via%3Dihub">Archer et al 2005</a>), and <a target=_blank href="https://www.sagepub.com/sites/default/files/upm-binaries/60294_Chapter_23.pdf">Ulmer and Steffensmeier (2014)</a> point out that though testosterone does not drop precipitously after adolescence and very early adulthood, crime typically does.
There is a vague signal about aggression dropping as testosterone declines with age (including in women - <a target=_blank href="https://www.ncbi.nlm.nih.gov/pubmed/9316179">Dabbs and Harbison 1997</a>) but neither this nor any of the previous studies track testosterone and crime in the same individuals, which would be most informative. Overall the research on decreasing crime is scant - it seems once these people stop committing crimes, we're less interested in studying them.
<br/><br/><br/>
<b>References</b><br/><br/>
Archer J, Graham-Kevan N, Davies M. <a target=_blank href="https://www.sciencedirect.com/science/article/pii/S135917890400014X?via%3Dihub">Testosterone and aggression: A reanalysis of Book, Starzyk, and Quinsey's (2001) study</a>. Aggression and Violent Behavior. Volume 10, Issue 2, January–February 2005, Pages 241-261
<br/><br/>
Ulmer JT, Steffensmeier D. <a target=_blank href="https://www.sagepub.com/sites/default/files/upm-binaries/60294_Chapter_23.pdf">The age and crime relationship: Social variation, social explanations</a>. In The Nurture Versus Biosocial Debate in Criminology: On the Origins of Criminal Behavior and Criminality (pp. 377-396). SAGE Publications Inc.. 2014.
<br/><br/>
Dabbs JM Jr, Hargrove MF. <a target=_blank href="https://www.ncbi.nlm.nih.gov/pubmed/9316179">Age, testosterone, and behavior among female prison inmates</a>. Psychosom Med. 1997 Sep-Oct;59(5):477-80.
<br/><br/>
Michael Catonhttp://www.blogger.com/profile/01017910055699348111noreply@blogger.com0tag:blogger.com,1999:blog-4724592643224262209.post-57834733258908591922018-01-07T16:11:00.000-08:002018-03-27T17:39:48.093-07:00Why Do People Remain Loyal to a Losing Team?<i>Cross-posted to the <a target=_blank href="http://mdk10outside.blogspot.com">MDK10Outside</a> and <a target=_blank href="http://thelateenlightenment.blogspot.com">the Late Enlightenment</a>.</i>
<br/><br/>
<b>tl;dr Sports fan behavior is explained by a combination of constant identity-forming team loyalty which is an end in itself, and status signaling by association which is modulated by team performance. These two factors differ between individuals and are associated with different cognitive styles, with constant loyalty more associated with moral foundations and intransitive preferences.</b>
<br/><br/>
It's been observed that you can tell who a team's true fans are by noticing who remains loyal to the team even when that team is losing. I think this is meaningful, but it raises the question: what are those fans getting out of it?[1] Of course any speculation about this must mention the very real example of the Cleveland Browns, who over the past 2 years have a 1-31 record, and this year after going 0-16 they were on the receiving end of a sarcastic "perfect season" parade.
<br/><br/>
Humans get utility from associating with others with high status. Much of the happiness that a sports fan gets from their emotional connection to their team derives from this, and many observations are consistent with what a status-by-association theory would predict: fans are happier when their teams win because they feel high status and can <i>signal</i> higher status, they engage in extreme dominance displays when their teams win important contests (i.e., people acting like idiots as they come out of a championship game if their team won, yelling, jumping on cars, setting off fireworks) but not if they didn't win, they attend games more when the team is winning and less when the team is losing, and they wear branded gear to identify themselves with the team and otherwise let others know of their association.[2]
<br/><br/>
But this theory falls short of explaining why, for example, there is any such thing as a team's consistent fanbase. By this model, everyone should just cheer for the best team, game by game (or even play by play!) It especially doesn't explain why the Cleveland Browns have any fans left at all; supposedly they're a football team but I've seen a number of convincing arguments against that, for instance, every game of the 2017 season. During an 0-16 season you would expect that if fandom is about fully rational people maximizing utility by associating with high status teams, the fans would stop posting on forums, they would put their gear away and deny to others that they were fans, and the stadium would not just have lower attendance, it would be completely empty. Yet this is not what happened.
<br/><br/>
I think the answer here very likely has to do with the gap we see between two types of beliefs/behaviors that often produce apparent impasses in other domains of life, especially religion and politics, the intensity of which differs between individuals. This gap in rational and more instinctual behavior will seem very familiar to readers of books like Jonathan Haidt's <a target=_blank href="https://www.amazon.com/Righteous-Mind-Divided-Politics-Religion/dp/0307455777">Righteous Mind</a>, or Simler and Hanson's <a target=_blank href="http://elephantinthebrain.com">Elephant in the Brain</a>. Humans demonstrate some domains in their cognition which are inflexible and impervious to reason - to use Haidt's categories, <i>harm, fairness, loyalty, authority,</i> and <i>purity.</i> By "inflexible" I mean "not open to discussion, or conversion into money or other goods/services." For example, you likely do not believe that murdering children is morally acceptable. Are you interested in hearing arguments about why it <i>might</i> be morally acceptable? If you would never consider such a thing, and you're uncomfortable that I would even suggest it in a thought experiment, you're showing inflexibility in discussing it. Okay - would you kill an adult for $50,000? I see that also upset you, I'm sorry to have opened with such a low offer! $75,000 then? You're being inflexible (I hope!) in reacting by thinking "It's not about the number!" Okay, what's the conversion rate between adults and children? Forget murder, how about urinating on a picture of your family for money? etc., you get the point. "Inflexible" means it can't even be suggested as open for discussion, which includes not being allowed to convert between moral-foundation-violating acts and money, or between different types of immoral acts. (A favorite of action movies and dramas to demonstrate the extreme evil of an antagonist is to have them force someone to declare the relative value of immoral acts, e.g. Sophie's Choice.) 
To connect back to the abstract, the philosophical term for having values that cannot be negotiated, and for which there is no relative value like this, is that they are <i>intransitive.</i>
<br/><br/>
I took you on this little tour of moral darkness to illustrate that morally normal humans do not adhere to consistent rationality, and the ones that actually do are psychopaths.[3] (You may be interested to know that Haidt found that when he surveyed the business students he was teaching, they scored low on every single moral dimension, taught as they are that everything is negotiable.) So what does all this have to do with the Cleveland Browns? Many of us have noticed that "hardcore" sports fans - the ones who stick around with long faces even when the Browns are losing, and falsify the first model above - tend to have certain personality and cultural characteristics that fit well with some of these inflexible moral foundations: they tend to be more religious, more nationalistic, more conservative and more valuing of loyalty and authority.[4] Sports fans rarely become hardcore about a team <i>after</i> entering adulthood, and very often there is a family lineage of fandom - and these are exactly the <a target=_blank href="https://www.goodreads.com/quotes/809630-give-me-the-child-for-the-first-seven-years-and">times and ways in which characteristics of core identity are formed</a>. Also telling, while there were about 3,000 people who showed up for the Cleveland Browns parade, there were many fans who were quite angry about it - but online objections were mostly that it was "embarrassing". (No mention of the 0-16 record that inspired the parade.)
<br/><br/>
Before I put into words what might be motivating them and make predictions, here's a summary of the two kinds of beliefs, producing two kinds of motivation. While these beliefs exist in everyone, there is going to be a distribution in the population, with one category of beliefs dominating the fandom-related cognition of some fans, and the other category dominating that of others.
<br/><br/>
<table border=1>
<tr><td><b>HARDCORE FAN</b></td><td> <b>CASUAL FAN</b></td></tr>
<tr><td>motivated by moral foundations </td><td> by utility calculations</td></tr>
<tr><td>end in themselves </td><td>deliberate, external goal-oriented</td></tr>
<tr><td>higher value on loyalty </td><td> lower value on loyalty</td></tr>
<tr><td>adopted in childhood, maybe from family</td><td>adopted voluntarily in adulthood</td></tr>
<tr><td>not negotiable </td><td>negotiable</td></tr>
<tr><td>central to identity </td><td> not central to identity </td></tr>
<tr><td>unwilling or unable to verbalize </td><td> position clearly verbalized</td></tr>
<tr><td>more often encountered in person </td><td> more often encountered online</td></tr>
<tr><td>sees casual fans as untrustworthy, sleazy</td><td>sees hardcore fans as stupid, gullible</td></tr>
</table>
<br/><br/>
Of course it's a spectrum, and every fan is somewhere on this spectrum, but many of us clearly lean toward one or the other end. (If you're reading this, you're more likely in the right column than the left.)
<br/><br/>
To summarize the hardcore fan: he is motivated by more basic, instinctual moral drives, especially loyalty. Being a good fan is an end in itself, and an offer to burn a team jersey, to cheer for the other team, etc. in exchange for money is likely to not only be immediately refused but to provoke active offense. These fans consider their fandom a crucial part of their identity, to the extent of including team-related themes in their weddings or <a target=_blank href="https://247sports.com/nfl/cleveland-browns/Bolt/Cleveland-Browns-fan-blames-organization-in-obituary-112862330">mentioning it in obituaries</a> ("he lives and dies by the Browns"; "a Browns fan to the core.") He can get uncomfortable when the business aspects of a professional sport are discussed and overshadow the games on the field. Asking him to explain his fandom will be met with puzzlement, anger, or a jumbled set of team cheers and slogans, in the same manner as a person asked to explain why they are patriotic or follow a certain religion - "If I have to explain it to you, you'll never understand." And finally, because tribal loyalty sentiments are more warning-barks or team cheers than any kind of actionable proposition, you're more likely to hear such sentiments when talking to him in person, where the nonverbal (affect-laden and irrational) part of communication dominates. He will be a fan for life. When the bandwagon people disappear during losing seasons the hardcore fan says "Good riddance, good-time Charlie."
<br/><br/>
To summarize the casual fan: he is motivated by utility calculations about external goals (this team might win this year so I'll cheer for them, maybe I can make friends this way, maybe I'll look successful if I follow a good team.) He doesn't see what's impressive about staying loyal to losers, and really doesn't understand why making fun of your team when they lose is shameful or embarrassing. He probably picked up his fandom after college, maybe when he moved to a new city. He probably doesn't care either way about the business dealings of the team. If someone offered him money to stay home from a game or burn team logos, he would seriously consider the offer. He doesn't introduce himself to strangers as a fan, and five years from now he might not be following the team, or might not be following the sport at all. He can give clear reasons why he started following the team, and you're more likely to hear from people like him online. He shakes his head at the hardcores who keep shelling out cash for losing teams' jerseys.
<br/><br/>
Both the hardcores and non-hardcores gain utility in proportion to the team's performance. A team's performance can be negative, causing you to lose utility by associating with them.[5] But there must be another source of utility for the hardcores, who somehow gain utility from the association no matter the team's performance - and that source of utility is a constant ability to demonstrate loyalty, period, to others as well as to themselves to reinforce their own identity. And this signal is most informative when your side is losing.[6] Speaking quantitatively, in the utility equation for this model, there are two terms: loyalty (a constant for each individual, hardcore or not), plus the product of team performance times associative utility. Associative utility is how much your utility changes per unit of team winningness. Both loyalty and associative utility vary by individual, and team performance of course is determined by the team. The equation looks like this:
<br/><br/>
<center><b>Total utility = Loyalty-based utility + (Team performance * associative utility)</b></center>
<br/><br/>
Team performance can be positive or negative. For the hardcores, loyalty is such a large term that it doesn't matter how negative team performance is; loyalty will always be greater and the total utility will always be positive (this could be the definition of "hardcore", "rain or shine", etc.) Further toward the other end of the spectrum, the value of loyalty signaling decreases and team performance makes more of a difference in whether people keep following the team. It's also worth pointing out that this explains people who don't care about sports at all: they have zero loyalty and zero associative utility - that is, it doesn't matter how much the team wins, they still won't care.
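<br/><br/>
The equation above is simple enough to sketch in a few lines of code. This is a toy illustration only; the parameter values are invented, chosen just to show why the hardcore's total utility stays positive through an 0-16 season while the casual fan's goes negative:

```python
def total_utility(loyalty, associative_utility, team_performance):
    """Fan-utility model: a constant loyalty term plus utility
    proportional to team performance."""
    return loyalty + team_performance * associative_utility

# Hypothetical fans (all numbers invented for illustration).
# Performance of -8 stands in for a disastrous losing season.
hardcore = total_utility(loyalty=10.0, associative_utility=1.0, team_performance=-8)
casual = total_utility(loyalty=0.5, associative_utility=1.0, team_performance=-8)
nonfan = total_utility(loyalty=0.0, associative_utility=0.0, team_performance=100)

print(hardcore)  # still positive: keeps showing up rain or shine
print(casual)    # negative: drifts away from the team
print(nonfan)    # zero no matter how much the team wins
```

The non-fan case falls out for free: with both terms at zero, no amount of winning moves the needle.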
<br/><br/>
<br/>
<b>PREDICTIONS</b>
<br/><br/>
Many of these predictions seem trivial, but the point is to relate them to specific components of the hardcore fan's motivation structure as noted in the table above, which is more informative.
<ul>
<li>While utility is hard to measure directly, there are good proxies for it, like revenues, attendance, or Nielsen ratings. Given that there will be a distribution of hardcore to non-hardcore fans, there will be a non-zero floor to revenues, so even 0-16 teams don't go to zero, as we observed. If we graph all of the teams on performance vs. utility proxy, I would expect a mostly linear-looking scatter plot with an increase in the slope at the good end, for those teams with some expectation of a national championship, and possibly a flattening at the bottom. This may depend more on expected utility (whether fans are pleasantly surprised by a win, or expect their team always to win.) I plan to collect some kind of utility-proxy data and see if this is in fact the case.
<li>In general a sport will be more successful in inspiring loyalty, the more similar it is to tribal warfare (always a reliable revenue stream for every team); maybe this is why football has eclipsed baseball as the national pastime.
<li>The more hardcore, the more they will pay attention to the outside charity activities of their own team, and the more outraged they will be by disloyalty-demonstrating acts, e.g. <b>kneeling during the national anthem</b>. They will also be more interested in the moral failings of opposing teams, especially rivals.
<li>The more hardcore, the less they will be interested in statistics, especially of other teams, even ones their teams are playing in important games.
<li>The more hardcore, the greater the difference in their interest in a player when he is on their team vs. after he is traded. That is, hardcores think each of their players is a great person on and off the field - when he plays for their team - and any suggestion that they'll stop caring about him the second he is traded is likely to be met with hostility, but in fact this is the behavior they demonstrate. (They will also be annoyed when asked why, or when Seinfeld is cited - "Essentially you're cheering for clothing.")
<li>The more hardcore, the more they will feel sad or angry after a loss, and the more likely they are to attend or watch the next game despite having been very sad or angry at the last game's outcome.
<li>The more hardcore, the less tolerant they will be of fans behaving negatively toward the team, even when the team loses (very concrete and contra expectations here: you might expect hardcore fans to support a parade showing anger against the people making their Browns lose, but it seems to be exactly the opposite. Parallels to gay marriage here too: how exactly does the 0-16 parade degrade your fandom, when you didn't attend?)
<li>The more hardcore, the more they will confuse the team with a government agency or public good (e.g., demanding that the city finance a new stadium.)[7] More recent teams in cities with highly educated and/or mobile populations (e.g., the Pacific coast) will therefore find that they can't get what they want from those cities, because the voters don't care (Seattle, San Francisco, San Diego), whereas other cities filled with less mobile, less educated people would crucify their mayor for allowing a team to leave on their watch.
<li>It's often been noted that the Midwest with its brutal early winters has far more rabid sports fans than the mild West Coast. One possibility is that the loyalty-demonstration value of attending every game is diminished when all of those games are 70°F and sunny, vs. some of them being freezing cold. (Think of the people who still wait in line in the cold and dark on Black Friday morning to buy things for their families. They do know that Amazon exists. So what do you think they're really doing?) Of course there could be a climate-independent cultural difference between the East and West Coasts, but the model's prediction would be that Miami has equally low loyalty.
<li>The more hardcore, the more they will be upset if <a target=_blank href="http://mdk10outside.blogspot.com/2010/07/awww-lebron-james-is-leaving-cleveland.html">a star player leaves for another franchise</a>, or <a target=_blank href="http://mdk10outside.blogspot.com/2012/02/seattle-sonics-moved-what-happened-then.html">the whole team moves to another city</a>, and they use words like "betrayal."[7]
<li>The more hardcore, the less tolerant they will be of long-term, off-field strategies, especially ones that alter play and result in on-field losses. (Both the 2008 Detroit Lions and 2017 Cleveland Browns had 4-0 preseasons, then went 0-16. Tanking (<a target=_blank href="https://scout.com/nfl/browns/Board/105323/Contents/Are-the-Browns-tanking--109431153">here</a> and <a href="https://www.theringer.com/2017/6/16/16077336/nfl-tanking-nightmare-new-york-jets-cleveland-browns-competitive-balance-6c9f3937811b">here</a>) and/or salary cap manipulation? Difficult to explain as mere incompetence. And if it were confirmed that this is what is happening, the hardcore fans would be angry; casual fans might say "Huh, that's kind of clever, although it means you've been putting a bad product on the field." "My team is not a 'product'!" the hardcore fan says.)
<li>I'm not sure what to predict about the impact of hardcoreness on betting. The hardcores' loyalty may make them overconfident in their team's performance. On the other hand, moral foundations-related beliefs are often kept carefully separate from anything affecting real-world decision-making. By that I mean: sacred beliefs are often more tribal chant than actionable proposition, and in general, people desperately avoid any bet that touches their moral foundations (next time someone makes a verifiable statement about religion or politics that you disagree with, offer to bet them, and see what happens. Typically they backtrack to a non-verifiable version of what they said, and/or get very offended that you would "cheapen" such an important matter by betting on it - which are all moves to avoid testing their belief.) Then again, the hardcore fans presumably know more about their team than most others, which means they should be more confident in their predictions, and be more willing to bet. Consequently they may be less willing to bet, proportional to their claimed confidence, than a casual fan with equal knowledge of the team would be. In my one test of this during March Madness, I found that <a target=_blank href="http://mdk10outside.blogspot.com/2012/03/how-wise-is-crowd-in-basketball.html">self-identified fans did more accurately predict the outcome of a game involving their team than non-fans</a>, but I collected no information on willingness to bet.
</ul>
<b>Footnotes</b>
<br/><br/>
[1] This very article is diagnostic. By trying to dissect loyalty, instead of taking it as an obvious good and discussing it in the context of a specific team, I mark myself as someone with a small loyalty term in my equation - whereas people whose sports utility equation is dominated by loyalty would not understand, and/or be actively offended by, a question like "What do you get out of being a fan of your team?"
<br/><br/>
[2] One might argue that a purely rational human being would ignore sports altogether - what does a bunch of guys chasing a ball on a field somewhere else in my city have to do with me, I've never even met them! - and I'm sympathetic to that argument.
<br/><br/>
[3] I hope no one read the paragraph about the price of murder and thought, "Hmmm...What <i>is</i> my price to kill someone?" In the case of <a target=_blank href="https://en.wikipedia.org/wiki/Richard_Kuklinski">exemplar psychopath Richard Kuklinski</a>, he got positive utility from harming people so he kept doing it even after he ran out of work.
<br/><br/>
[4] When people do not have VNM-consistent rationality (that is, they have these inflexible, non-negotiable, non-fungible beliefs - i.e., intransitive preferences) - they can be <a target=_blank href="https://www.lesserwrong.com/posts/kHyJaixCdiZyFRo66/real-world-examples-of-money-pumping">turned into money pumps</a>, by observant and unscrupulous characters who can carve their motivation structure at the joints, i.e. focusing on the inconsistencies. While this has been reproduced now in artificial settings, not only salespeople but politicians have been doing it since the dawn of civilization. <b>The NFL and in particular the Cleveland Browns are doing exactly this to the fans by exploiting the intransitive preference of loyalty,</b> and I would be very surprised if their marketing does not already have a model of their fans and spending patterns similar to what I've described here. Another follow-up is to look for literature on whether psychopathy allows one to see these disconnects more easily, or (hopefully) the ability to see them and the willingness to act on them are unrelated and therefore form a mercifully narrower sliver on a Venn diagram of the population.
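<br/><br/>
The money pump mechanics are easy to demonstrate concretely. Here is a toy sketch (items, preferences, and the fee are all invented): an agent prefers A over B, B over C, and C over A. An unscrupulous trader can offer a "preferred" item at each step for a small upgrade fee, and because the preferences cycle, the agent keeps paying forever while ending up where it started:

```python
# Intransitive preference cycle: (x, y) means "prefers x over y".
PREFERS = {("A", "B"), ("B", "C"), ("C", "A")}

def pump(start_item, swaps, fee=1.0):
    """Repeatedly offer the agent an item it prefers to its current
    one, charging a fee per swap; return the total money extracted."""
    items = ["A", "B", "C"]
    current, extracted = start_item, 0.0
    for _ in range(swaps):
        # Find something the agent prefers to what it holds now.
        offer = next(x for x in items if (x, current) in PREFERS)
        current = offer           # agent "upgrades" ...
        extracted += fee          # ... and pays for the privilege
    return extracted

print(pump("C", 6))  # six upgrades, six fees, agent is back at C
```

Each loop of three swaps returns the agent to its starting item, minus three fees - which is the sense in which a stable intransitive preference (like unconditional loyalty) is exploitable.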
<br/><br/>
[5] There's probably a Markovian/hedonic treadmill effect here too, where the utility multiplier from a team's win is not constant but rather influenced by expectations based on the team's record. Next year if the Patriots go 9-3, fans leaving a game after a win won't be as happy as Browns fans if the Browns have the same record.
<br/><br/>
[6] Remember Karl Rove dragging out the 2012 election night broadcast and refusing to accept the outcome, seeming a little nuts? But simultaneously advertising to ten million Republicans watching <i>that he never ever gives up.</i> Say what you will about Karl Rove, but "bad strategic thinker" was not among the many epithets hurled at him.
<br/><br/>
[7] When the Baltimore Colts were about to move to Indianapolis in 1984, the city actually tried to pass an eminent domain act (!) to take over the team, but the Colts escaped with the team's property under cover of darkness the night before. Other teams like the Chargers have found <a target=_blank href="http://mdk10outside.blogspot.com/2015/07/the-chargers-stadium-suddenly-they-want.html">a much more lukewarm reaction on threatening to leave</a>, and <a target=_blank href="https://www.youtube.com/watch?v=U8GaWQslt8c">found themselves</a> without <a target=_blank href="http://rtznews.info/greeny-astounded-by-chargers-being-booed-at-home-mike-mike-espn/">many fans</a>.
<br/><br/>
[8] While I wrote this post I was wearing a <a target=_blank href="https://www.youtube.com/watch?v=2fraSdN-PG8">Garfunkel and Oates sportsball</a> T-shirt, so you can guess which end of the spectrum I'm near.
<br/><br/>
<b>Evil Gandhis and Poor Executive Function: How the World Looks if You Have Poor Impulse Control</b>
<br/><br/>
<i>[There is <a target=_blank href="http://www.reddit.com/r/slatestarcodex/comments/8qlsrd/evil_gandhis_and_poor_executive_function_how_the/">a great discussion about this post at the Slate Star Codex subreddit</a>, with valid criticisms, which is worth checking out.]</i>
<br/><br/>
<i>Cross-posted to <a href="http://thelateenlightenment.blogspot.com">the Late Enlightenment</a>.</i>
<br/><br/>
Imagine that in some distant, cloudy mountain hideaway there is a city of evil Gandhis - or just unempathic monks - who spend all their waking hours meditating. As a result of the self-control they've created in this manner, their executive function is superhuman - after all, extensive meditation builds not just cognitive discipline but EEG-measurable physical changes in the brain. When you finally scale the last soaring frozen wall and scramble over the edge onto the floor of their lookout points, you have arrived in this storied, isolated monastery-city. You are greeted by intellects vast, cool, and unsympathetic, studying you from their great central plaza with piercing eyes. You find that you are the first visitor from your country. Suddenly a horrific pain erupts from the back of your neck, and you turn to see one of the monks withdrawing a red hot brand that he has just poked you with.
<br/><br/>
Obviously you demand to know why you deserved that. As they are merely dispassionately interested in collecting knowledge, this one calmly explains that they would like to see if your skin burns in the same way theirs does. You turn to see several more of them calmly approaching you with various glowing metal rods; behind them, in the fire at the center of the plaza, someone is handing out more metal rods. You tell them to stop, but they ignore you. Finally, you turn to the closest one approaching you, and punch him in the face. Your punch lays him flat out and his metal rod clangs to the ground.
<br/><br/>
"That's assault," one of the monks says. "We're going to have to lock you up now."
<br/><br/>
"Assault?" you shout. "What was I supposed to do? You made me assault you!"
<br/><br/>
The monk rolls his eyes. Only then do you notice various burns, knife and whip scars all over his face and arms. "You're like a child. It's not our problem if your self-control is so poor that you can't stand being burned a few times."
<br/><br/>
[Reddit user <a target=_blank href="http://www.reddit.com/user/davidmanheim">davidmanheim</a> at the SSC subreddit suggested that this thought experiment would work better if instead of just burning our protagonist, the monks capture him and set up food for him that he is supposed to cross hot coals to obtain. When he goes around the coals and takes it anyway, he is locked up for theft. I agree that this would make the point better.]
<br/><br/>
To a person with a Cluster B personality disorder - including narcissistic PD or especially borderline - the world must seem to be filled with such evil cold-blooded monks. If I have BPD, then these people just can't see that when they withhold affection, that's so intolerable - it's just the same as a hot iron - that they're making me attack them to protect myself. (I have heard a severe narcissist in a psychiatric hospital, fighting while being restrained by staff after being refused special treatment, literally say "Look what you're making me do! You're making me do this!" The resemblance to what a five year-old might say is not coincidental.)
<br/><br/>
But this is more than just an interesting perspective - it's relevant to a critical assumption that we make in liberal democracies. Namely, that people have agency, and this agency allows them to be responsible for themselves, and to some degree others. While (so far as I know) pain-tolerating monks do not exist, people with severe borderline and narcissistic personality disorder - with poor executive function and low distress tolerance - do exist. And we do lock them up.
<br/><br/>
It turns out that "agency" has buried within it many components, which vary quite a bit across the population, and which profoundly affect people's ability to run their own lives and live with others. The one case where we're comfortable saying that humans don't have agency is children - but even that is somewhat arbitrary and coarse-grained (many of us can think of a sixteen year old more capable of running her own life than a twenty-eight year old.) The monks would lock you or me up because we're at the extreme bad end of their distribution, just like we lock up people in jails or long-term care facilities, but we wait for someone to commit an act, of the sort that they are guaranteed to commit at some point, if they're at the extreme end of the distribution. As society becomes more complex, more and more people will commit such acts, and we'll have to get more honest and clear about exactly how we deal with them.
<b>High Altitude Psychosis: A New Medical Entity?</b>
<br/><br/>
<i>Cross-posted at <a target=_blank href="http://mdk10outside.blogspot.com/">the MDK10Outside running and climbing blog</a>.</i>
<br/><br/>
A new paper breaks down cases of acute mountain sickness into several categories, including isolated psychosis with no evidence of cerebral edema (a quarter of cases!) Psychosis was more associated with accidents than the other subgroups, not surprising in retrospect. These cases were all taken from above 3500 m (about 11,500'). It's always interesting that humans, and life on Earth generally, can tolerate some amazing extremes, but when the partial pressure of O2 drops a little bit, everything breaks.
<br/><br/>
Hüfner K, Brugger H, Kuster E, Dünsser F, Stawinoga AE, Turner R, Tomazin I, Sperner-Unterweger B. <a target=_blank href="https://www.cambridge.org/core/journals/psychological-medicine/article/isolated-psychosis-during-exposure-to-very-high-and-extreme-altitude-characterisation-of-a-new-medical-entity/C2BCDEDCB0C6415B16531008857D730C">Isolated psychosis during exposure to very high and extreme altitude – characterisation of a new medical entity.</a> Psychol Med. 2017 Dec 5:1-8. doi: 10.1017/S0033291717003397
<br/><br/>
<b>Whistled Languages are Not Distinct Languages</b>
<br/><br/>
<i>[Added later: the Bora in the Amazon <a target=_blank href="https://www.scientificamerican.com/podcast/episode/drumming-beats-speech-for-distant-communication/">use drums to communicate over long distances</a> - interestingly, using a "compressed" pitch and rhythm system very similar to the whistled system of the Canaries. <b>It's still not its own language,</b> but it's indisputably an outstanding piece of human cultural innovation, and comparison to how Silbo compresses information would be very useful.]</i>
<br/><br/>
An article in the New Yorker <a target=_blank href="https://www.newyorker.com/tech/elements/the-whistled-language-of-northern-turkey">re-romanticizes the idea of "whistled languages",</a> moving from the more famous Silbo of the Canary Islands to a whistled version of Turkish found in northern Turkey. I wrote about <a target=_blank href="http://cognitionandevolution.blogspot.com/2017/04/why-silbo-is-not-language.html">Silbo previously,</a> a whistled version of (now) Spanish. (And that's what it should be called - whistled Spanish. In fact Silbo means "whistle.")
<br/><br/>
As before, I emphasize here that I'm not trying to diminish the significance of these innovations in human culture, just argue against the misconception that these whistled versions of languages are in fact languages in themselves. I'm thrilled that people have caught on to the cultural value and worked to preserve them, and furthermore whistled language provides fertile testing grounds for hypotheses about the comprehension of language, like the <a target=_blank href="http://www.cell.com/current-biology/abstract/S0960-9822(15)00794-0">lateralization study in Current Biology</a> conducted by Güntürkün, Güntürkün and Hahn cited in the New Yorker. And there are multiple places in the world where a whistled version of a language has developed, for obvious adaptive advantage (hard-to-cross mountains and/or thick forests.) <b>But whistled languages are not distinct languages.</b>
<br/><br/>
Why not?
<br/><br/>
1. Whistled communication is never used as the primary method. People use Silbo to call across canyons to each other (effective and clever) but there was no report of the ear-splitting sound of story-tellers around the campfire (because then they were speaking normally.) I place this claim first because reports of whistled Turkish state that it was used in non-work settings, which on its face would seem to undermine my argument. But this loses sight of why it matters whether people use whistled Turkish in more than a few restricted settings - <b> because there is still no such thing as a first-language whistle-language speaker, <i>and certainly there never has been a monolingual</i>.</b> At best, this makes it a pidgin, like <a target=_blank href="https://www.washington.edu/uwired/outreach/cspn/Website/Classroom%20Materials/Curriculum%20Packets/Treaties%20&%20Reservations/Documents/Chinook_Dictionary_Abridged.pdf">Chinook Jargon</a> or <a target=_blank href="https://en.wikipedia.org/wiki/Russenorsk">Russenorsk.</a>
<br/><br/>
2. Whistled communication is a whistled version of a primary, "normal" spoken language. If a whistled language is distinct, then when we whisper in English next to each other, that would count as a separate language as well. And of course it doesn't - it's a restricted, rarely-used, context-specific form of communication which makes some phonetic sacrifices due to its usefulness in that setting.
<br/><br/>
3. Can you learn the Turkish whistled language without learning Turkish? No. Granted, a first-language Turkish speaker might not understand it at first, much like you didn't understand Pig Latin the first time you heard it. But you can learn Pig Latin - only if you know English first.
<br/><br/><br/>
No comment on how this bears on my (partly tongue-in-cheek and now mostly abandoned) cultural affinity argument about <a target=_blank href="http://cognitionandevolution.blogspot.com/2017/09/in-favor-of-broad-altaic-hypothesis.html">whistling in other Altaic languages</a>.