Consciousness and how it got to be that way

Friday, October 19, 2018

Lying and Intention

One frequently discussed problem in the analysis of what constitutes moral behavior is that of the contribution of an actor's intention, if any, to the morality of the act.

If someone hits me with a car, then as a consequentialist I have no grounds to say that the act was more or less moral based on the intent of the driver. If someone hits me with their car at 35 mph and breaks my femur, a consequentialist shouldn't care whether the person did it intentionally and was pleased by this outcome, or accidentally and was horrified by it.

This is correct only if the consequentialist cares about this single, isolated act - which, within the confines of a thought experiment, is often the (unintentional) implicit assumption. But of course this isn't the case, and it violates our moral intuitions and (if the two are separable) the actual reactions we have in such situations, or even just on hearing about them. Even setting aside the egocentric anger and desire for revenge likely to be incurred by someone who you know hit you intentionally and enjoyed it, you would be right to be concerned that this person is out there running around loose where they can hurt someone else - and a monster like that is unlikely to limit themselves to cars in such endeavors. That is, their intention predicts future actions, which is why it matters, even (especially!) to a consequentialist. The person who is horrified is less likely to do such things again - although even then, if their horror is misaligned with choices they keep making (they were texting, they were under the influence, etc.), this also figures into our evaluation, because it predicts future behavior. A stronger statement: without intention as a predictor of and link to future behavior, talk about the morality of an isolated act is meaningless.

The law in most OECD countries actually gets this right, at least for murder, where it differentiates by degree. The difference in intent between accident and non-accident is obvious enough, but the difference between first and second degree murder is also important. There is something very different and more threatening about someone who murders after planning it out, rather than by unchecked impulse. If someone had a bizarre neurological disorder causing them to helplessly pick up long objects and swing them at everyone around them, you wouldn't want them walking around loose, but you would see that they were horrified themselves at this tragic illness, so you would also recognize that this is not a person who intends harm, and therefore not one whose other actions are suspect as well. (If someone you know to be unfortunately afflicted as a neurological-disease-stick-swinger calls you (from far away, hopefully) and asks for a donation to a charity, you're much more likely to think the charity is legitimate than if you get a call from someone you know to be an intentional, actually-enjoying-hitting-people stick-swinger.)

This is the same problem that makes certain human behavioral patterns appear irrational when in fact they are not. For example, it's a well-studied result in game theory that humans are willing (in fact eager!) to punish cheaters even if the damage is done and enacting the punishment has a non-zero cost. Yes, in a true one-round game, the rational thing to do is to stop one's losses and walk away - think of not getting in a pissing match with someone who cuts you off on the freeway - but this is a rare circumstance. The small bands we've lived in throughout most of history, where you were around the same people all the time and kept score on each other, would predispose exactly such a behavior to emerge - and in game theory experiments or one-time encounters in large populations, we may not be able to override our programming. Granted, in those relatively rare encounters, it is irrational not to override it - but again, these encounters are rare. Unless you're planning to be the cheater, you likely minimize your time around, and interactions with, complete strangers. I've come to refer to these kinds of situations (either game theory experiments or real life, like each day on the freeway) as GOOTs - Games Of One Turn.
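The logic of why costly punishment pays off in repeated encounters but not in GOOTs can be sketched in a few lines of code. This is a toy model - the payoff numbers and the assumption that a single punishment fully deters a cheater are inventions for illustration, not anything from the game theory literature:

```python
def victim_payoff(rounds, punishes, damage=3.0, punish_cost=1.0):
    """Total payoff to a victim repeatedly paired with the same cheater.

    Each round the cheater cheats, costing the victim `damage`, unless
    deterred. Punishing costs the victim `punish_cost` but (in this toy
    model) deters the cheater for all remaining rounds.
    """
    total = 0.0
    deterred = False
    for _ in range(rounds):
        if not deterred:
            total -= damage           # cheated this round
            if punishes:
                total -= punish_cost  # pay the cost of punishing
                deterred = True       # cheater leaves this victim alone
    return total

# GOOT (one round): punishing only deepens the loss
print(victim_payoff(1, punishes=False))   # -3.0
print(victim_payoff(1, punishes=True))    # -4.0

# Small-band life (20 rounds with the same cheater): punishing pays
print(victim_payoff(20, punishes=False))  # -60.0
print(victim_payoff(20, punishes=True))   # -4.0
```

In the one-turn game, punishment is pure loss; over twenty rounds with the same cheater, one costly punishment is by far the better strategy - which is the environment our punishment instincts presumably evolved in.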


There are truth-telling absolutists (Kant is the obvious example, but a modern defender is Sam Harris) who have difficulty ever justifying an intentional mistruth. In Harris's case this is especially interesting as he (correctly) defends the role of intention in moral acts generally. The morally justified lie in the murderer-at-the-door thought experiment was most famously and tragically realized in the example of Anne Frank (Varden 2010), and in comparing Kant's argument to this specific event it often takes rather more argument than we might hope it should to justify why trying to deceive the fascist occupiers was acceptable.

The claim "lying is always wrong" - hereafter referred to as the naive theory of lying - fails because three related assumptions are clearly falsified.
  1. Bad Assumption #1 - identical agency: Humans are all equally capable of identifying and acting on truth; that is, we all have an identical set of beliefs about the world, are equally able to reason about them, and will therefore respond to the same information in the same way. (Kant's categorical imperative fails for this reason as well.) Departures from identical agency can occur because of false beliefs, biases, or any other failure of rationality that leads to suboptimal computation of beliefs.
  2. Bad Assumption #2 - moral isolation of speech from other actions: Speech is capable of communicating unfiltered truth mind-to-mind, and the moral weight of a statement comes from how true it is, rather than from the effect that you intend to have with your statement. In this, speech is qualitatively different from other actions.
  3. Bad Assumption #3 - cooperation-independence: Truth-telling applies to all humans, even if they are not cooperating with even your most basic interests (e.g., preservation of life and avoidance of needless suffering.) This alone justifies lying to the Nazi at Anne Frank's door. I would agree that it is correct that intentionally creating a false impression in someone else's mind is immoral because it's a form of harming them, but a) if someone intends to harm you, harming them in an attempt to stop this is not immoral, and b) one can certainly create a false impression by telling the truth, as will be explained below.
Because these bad assumptions are false, applying the naive theory of lying to behavior produces inconsistent results with respect to harm we cause by speaking to people. This makes it an inadequate moral rule. (If your theory of morality does not have a place for harm minimization in general, the argument we would have to have is at a much more profound level.)

It is telling and under-appreciated that children as they develop a theory of mind often test the (widely accepted) naive theory of lying against the principle of harm minimization. Right now you can probably think of a smirking child who told you something that was technically true, but intended to deceive you or cause you some other problem. Cute at first, but if they keep doing it through to adulthood, you realize you're dealing with an immoral person.


There are infinite scenarios that illustrate this, but in the abstract, the most common scenario is this. (Concrete examples follow each abstract description.)

Person P believes fact X.

Q believes not-X.

In reality, not-X is true. (That is to say, this scenario violates the naive theory of lying's Bad Assumption #1 - identical agency.)

Not only is Q quite confident that not-X is true, but they quite clearly understand that P believes X, and for bad reasons.

Q tells P fact Y (which is true!), knowing full well that this will cause P, based on P's false belief of X, to perform action A, which harms P. (Or, Q just makes no effort to convince P that not-X, allowing the false belief to stand.) (Bad Assumption #2 violated - moral isolation of speech from other actions.)

Notice that nowhere did Q lie. Q identified a false belief of P that warped P's judgment, and told P something true that will make P do something harmful to himself in the context of P's warped judgment, intending for his true statement to make P harm himself. Q did not lie, but rather used a true statement to create a false conclusion in P's mind that harmed P. Q is immoral.

There are many, many concrete examples in the world of financial transactions, where person P either has a false idea about the value or quality of an item, or the movement of a market - which Q does not correct because they benefit from the transaction.[1]

Let's say you're at the base of a cliff you've climbed before, and you know the view from the top is beautiful. As a local, you know that what appears to be the most obvious route is actually quite dangerous, because the rock is crumbly and anchors can pop out, risking that the climbers will fall and die - in fact this has happened many times, especially to outsiders who won't listen to the locals. A couple of tourist climbers show up, and you overhear them talking about how this rock face looks like sturdy solid granite, and they're planning to go up the obvious (but unknown to them, most dangerous) route. If you say nothing (to warn them about the crumbly rock), you're immoral. If you say, "The view at the top is great!" then you're really immoral.

The obvious objection is that you are, in a sense, lying by omission. You should tell the person it's dangerous, and you certainly shouldn't induce them to try it! (Yes, obviously - but again, by the naive theory of lying, you would have done nothing wrong.)


So, let's make things more interesting, returning to the abstract formulation and introducing a very real problem, that of differing beliefs causing people to make different decisions (thus violating Bad Assumption #1, identical agency.)

Q (who in this scenario is a better person) does try to convince P that not-X.

P refuses to believe Q despite Q giving good reasons.

Q recognizes P's false belief structure, and understands that by telling untrue fact Z to P, P will make a choice in their best interest, owing to their distorted beliefs.

Q tells fact Z to P - that is, Q lies to P - and P makes a better decision than if Q had told P the truth.

Back to the rock climbers. You're not a jerk, so you go over to the visitors and say, "I heard you talking about your route. I have to tell you, this whole cliff is kind of crumbly, but the obvious route is really dangerous and people have died on it. I really think you shouldn't try it."

As they sort their gear, the visitors scoff. "Ha. I don't think so. These local yokels might be scared of it. Or maybe they just don't want outsiders climbing their route. People told us the locals here are liars. If you're a local I'm not going to believe anything you say. Are you?"

If you tell them the truth, you will harm them. If you lie and say "Nah, I'm visiting for the weekend from L.A. and a friend of mine there knows someone who died on that route," then maybe they'll listen. If instead you are scrupulous and try to change their minds, and they dismiss you as a local, climb, then fall and die, you would be pretty immoral to say "Well, their false belief put me in a bad position, and by choosing the option that put them in more danger, which led to their deaths, I made the right decision."


A final abstraction, for Bad Assumption #3, cooperation independence. Here, Q is again a bad person; maybe even a Nazi at Anne Frank's door.

P knows that Q intends to harm them.

Q's harming them requires information about P, provided by P.

P tells C to Q, knowing that not-C is actually correct. P lies to Q with the intention of protecting themselves or others.

One way to think about this for the naive-theory-of-lying crowd: if lying is on the spectrum of violence, then when someone intends to commit violence against you, lying to them is a form of self-defense, and in fact much better than the physical violence one might otherwise have to employ. This stands independently of the rest of the argument and is consistent with the naive theory of lying.

Back to the cliff. You, the local, are again a jerk. In fact, you've repeatedly gotten visitors to try to climb the most dangerous route by telling them about the view, so that when they fall and die, you can collect their gear. ("Hey, I'm not lying to them! Not my problem if they come to the cliff and are careless about the rock quality on the route!") But the authorities have started to suspect someone is doing this intentionally, so now in your pre-climb conversations with your victims, along with inducing them to climb by extolling the view, you wheedle out of them whether they have any connection to law enforcement. Of course this time, the pair of climbers that comes is indeed law enforcement, but undercover. When you ask, they say "No." They're trying to stop you from doing this to other people, and if they identified themselves, they couldn't do that. They are behaving morally by responding to your violence with defensive, much less severe violence, and trying to stop you from harming others. (Naive-theory-of-lying people: would it matter if instead they didn't really lie and instead said "Hey, do we look like law enforcement?")


An improvement on the naive theory of truth-telling is this. If we intend to help people and intend to avoid harm, we should say things that will accomplish these things. Intention is quite important, because as with other actions, it predicts the speaker's future actions. The default assumption should be that this is almost always accomplished by telling the truth. However, once you have evidence that telling someone the truth will not help them or will even harm them, and/or that lying in an extremely limited way will help (or that you're dealing with someone with bad intentions, i.e. who's not cooperating with even your basic safety), it is acceptable to say untrue things. We can call this theory of intention and lying "helping by intentionally creating an accurate model" - HICAM.

In those rare instances where we lie to benefit someone, we can call those pro-social lies. (Intentionally differentiated from white lies - more on that below.) But we can also categorize the ways of causing harm by speaking.
  1. Active lying or bullshitting. Actively creating a false impression without the intention of helping someone.
  2. Letting weeds grow. Allowing false beliefs to persist with the intention of harming someone.
  3. Manipulation with the truth. Telling the truth in a way that one intends to create a false impression, often by using pre-existing false beliefs.
Notice that this definition does not justify "white lies", nor have I used the term. A working definition of a white lie is a lie that spares people's feelings and otherwise has no effect. I might seem to be siding with the severity of the naive-theory-of-lying camp when I say that white lies are quite dangerous, for the reason that emotional impact absolutely is an important effect, and psychologically, it's a bit too easy to avoid difficult conversations by telling ourselves we're just telling white lies.

Moral thought experiments (including this one) often use exotic examples, although I bet more people were rock-climbing today than switching trolleys between tracks. But examples in your own life likely abound. For instance, if you have kids, you have very likely told a few half-truths or outright whoppers to motivate them, keep them out of trouble, or otherwise improve an outcome when their little brains would likely not have responded as well to a carefully marshaled rational argument. Why? Because children's agency is poorly formed (Bad Assumption #1.)

The same goes for people with dementia. A relative of mine progressed to having no short-term memory. This person was quite attached to her family home. She lived in a nursing home for about two years but believed the whole time that she had just arrived within days, and would shortly be returning home. Of course the home (which had fallen into disrepair as she deteriorated) had quickly been sold. Most of her visitors avoided the question of how long she had been there or commenting on the house, but at one point a well-meaning person, realizing my relative thought she had just arrived and that she would shortly be returning home, told this woman the truth. This resulted in an emotional meltdown and suicide attempt. The next day, of course, my relative had no memory of it and again thought she had just arrived and would soon be returning home. Was her well-meaning visitor a moral person? Would she have been moral to repeat this episode?

There are many milder but more common versions of this. Someone adheres doggedly to a certain authority. Someone dismisses you because you do not, or because you're the wrong religion, ethnicity, political affiliation, etc. These are themselves not very pleasant reasons to be disbelieved, but what is your responsibility here? Say you own an auto shop, and a customer you don't like brings their car in. Inspecting it, you notice that the brakes are about to fail. Because you don't like this customer and you know he's a racist, when he comes to pick up the car and find out what work needs to be done, you intentionally send a black employee (who's in on it) to tell him he needs his brakes replaced, expecting full well that the customer will refuse because a black person is telling him this, drive off, and crash when his brakes fail. Yes, he's no prince charming, but if someone told me that story, with that clear intention, I would worry about the shop owner's character and not want to be around him. (In point of fact, when someone is worried that a message will be ignored or taken the wrong way because of something like this, they do often use a messenger more likely to be taken seriously and accurately.)

And finally, ask any physician how they motivate their patients with low motivation, cultural barriers, or poor health literacy to do things that will keep them alive. It's hard enough in primary care. Try psychiatry! The temptation to severely spin the truth to improve outcomes for your patient arises frequently, and sometimes wins.


The obvious (and correct) objection to any non-absolutist model of truth-telling is that it presents a very slippery slope. When you free yourself from a commitment to absolute truth, it becomes maybe a little too easy to justify fibbing in what you've convinced yourself is someone else's actual best interest. Consequently, following these rules, you should still expect opportunities for pro-social lies to come up quite INfrequently, and you also have to commit to real honesty about your motivations for telling what you believe is a pro-social lie. You have to accept that when you tell a lie, even when you think you've found a pro-social motivation, you're probably deceiving yourself. The analogy here is to uber-empiricist Hume's statement that (paraphrasing) despite his identification of the problem of induction, still, if you think you've found a violation of the laws of the universe, you've probably just made a mistake.

Above: Left, how we implicitly think of the relationship between epistemic and instrumental rationality. Right, a more accurate scheme.

However, HICAM does relate to some distinctions made in epistemology and observations from the psychology of mood and rationality. Epistemic rationality is what we usually think of as rationality: being able to make a valid argument. Instrumental rationality is action that increases utility, with no semantic component. I am being epistemically rational when I can describe and predict a thrown object's course mathematically; I am being instrumentally rational when I catch it without thinking of that (and so is a dog.) People tend to think of the two rationalities as separate domains or "two sides of the same coin", but a better argument is that epistemic rationality is a subset, a special case, of instrumental rationality. This is not controversial; speech and thought are actions. Consequently there will be times - rarely - that the actual outcome of someone computing a statement will be different from what would have occurred if it were computed and acted upon rationally (as with a person who holds false beliefs - Bad Assumption #1, identical agency.) Saying something false or invalid to get someone to do something good for them is one of the rare times that speech is ONLY in the realm of instrumental rather than epistemic rationality (see below.)

What's more, there's an interesting finding in psychology described as depressive realism, where depressed people actually make better predictions about their own performance than non-depressed people. In seeming conflict with this is the robust finding that optimism predicts success (Bortolotti, 2018.) It's as if we have to choose between seeing reality as it is and being depressed, or being delusional and happy - and, most perplexing, successful. Fellow psychiatrist-blogger Scott Alexander uses the analogy of mood as an advisor who motivates their client or pulls them back based on historical performance. Depressed mood is like the advisor of someone who always fails, telling them never to try anything, because history predicts they will fail again; contrast this with the advisor of someone who always succeeds (happiness, optimism.) Here we see another domain where optimizing more for instrumental rationality (the less rational but more motivating optimistic beliefs)[2] produces better outcomes than optimizing for epistemic rationality (the glum, accurate beliefs.)[3] All this is to say, the occasional leaking of speech out of the epistemic domain into the purely instrumental domain - prosocial lies - is entirely compatible with what we know about human behavior. We can think of the distorted beliefs held by optimistic people as prosocial lies we tell ourselves.

All this justification of making false statements as long as they "work" is likely to make rationalists squirm, and indeed the alert reader with supernatural religious convictions might say: "Even if you think religion is false, don't HICAM and especially your argument in favor of prosocial lies justify believing it? Isn't religion actually the best example of an instrumentally helpful, though epistemically irrational, belief?" Indeed I wrote about exactly this problem some years ago, arguing that as a set of untrue statements - lies - religion is immoral, even if it sometimes inspires good acts that otherwise would not have occurred. (That's one of many reasons.) How can I make such a claim, yet defend HICAM? Again - we would expect pro-social lies to be very rare, and to require strong justification. Contamination by selfish, anti-social motives is always a danger. Therefore the argument is really a quantitative one - we might expect to tell a pro-social lie a couple of times a year, rather than fill an entire book with them. Much more than that, and it's overwhelmingly likely that most of them are actually someone telling plain old lies, anti-socially. So if ever you catch me intentionally building a whole delusional world around someone that I claim is in their best interest, I would certainly be acting out of immoral intentions.

Finally: while I've written a lot about the harm that can come about from saying true things to a person whose thought process is distorted by false beliefs (thus leading that person to make bad decisions, despite having been told the truth), this is less often a problem than it otherwise might be. The reason is that people who verbally claim false beliefs often find reasons not to act on those claimed beliefs. Example: someone says to you "my favorite football team will definitely 100% win the game tomorrow, since the previously injured quarterback is back in the line-up." But it turns out that two minutes ago it was announced that actually, the quarterback will NOT be playing tomorrow. As an unethical person, you rush to lock down the bet while your interlocutor holds a false belief (letting weeds grow.) But suddenly they get cold feet, often with a disingenuous "Gambling is immoral" or "I don't want my fandom to be polluted with money," etc. In fact humans have lots of "speed bump" heuristics to keep false beliefs from propagating too far and to keep us from overcommitting, even though epistemically we can't really explain them that way (see the endowment effect for one such example.) It's interesting to note that it's often the newly converted, who don't yet have the speed bumps specific to a new set of beliefs, who get into trouble. On the other hand, consider people with severe psychiatric illness, whose brains are physically different from most other humans'. They really believe their delusional beliefs, judging by how they endorse them with action.


Bortolotti L. Optimism, Agency, and Success. Ethic Theory Moral Prac (2018).

Varden H. Kant and Lying to the Murderer at the Door...One More Time: Kant's Legal Philosophy and Lies to Murderers and Nazis. J Soc Philos, Vol 41(4) Dec 1 2010.


[1] To be clear, this is not a claim that all transactions are immoral. When we trade, we necessarily hold different valuations of the things being exchanged; otherwise the trade would be irrational. As real objects will inevitably have different values to different people in different situations, this is not an obstacle to moral, rational trading. However, if one is trading a more abstract entity that only holds value in terms of its tradable utility, or predicting an outcome that is only connected through arbitrary agreement, and especially in zero-sum scenarios, then objectively incorrect valuations by one party are likely to play a larger role in the trade. Case in point: bets, commodities, or stock options.

[2] Assuming that our delusional (but success-producing) optimism has been selected for by evolution, I often amuse myself by wondering whether, if Homo erectus could understand psychiatric nosology, they would view their descendants (us) in horror, running around manic all the time as we might appear to them.

[3] Of course different levels of optimism or pessimism are rational for different risk:benefit scenarios, just as in game theory, where penalty and payout determine the most rational strategy. Case in point: one of the things that cognitive behavioral therapy for social anxiety aims to do is make people re-evaluate the actual risk of social interactions. So what if someone doesn't like you or won't say yes to a date? Does it physically harm you? The payout, while unlikely, is high, and the risk (once you get past your anxiety) is almost zero. On the other hand, this would be a terrible approach to rock-climbing. Antonio Gramsci (quoted by Steve Hsu on his blog) expresses this nicely: "Pessimism of the intellect, optimism of the will."
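The risk:benefit point in this footnote is ultimately just expected-value arithmetic; a minimal sketch, with every number invented purely for illustration:

```python
def expected_value(p_success, payoff, cost_of_failure):
    """Expected payoff of an attempt that succeeds with probability
    p_success, pays `payoff` on success, and costs `cost_of_failure`
    on failure."""
    return p_success * payoff - (1 - p_success) * cost_of_failure

# Asking someone out (toy numbers): long odds, high payoff, trivial downside
print(expected_value(0.10, 100, 1))    # positive: worth trying

# Climbing the crumbly route: likely success, catastrophic downside
print(expected_value(0.95, 10, 1000))  # sharply negative: don't
```

The date has a positive expected value despite long odds, while the climb is strongly negative despite likely success - which is why different optimism settings are rational for each.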

Sunday, September 2, 2018

Procrastination Variants: Narcissistic Subtype

This post is research only and should not be taken as medical advice or treatment recommendations.

Many things in psychology are multicausal, and/or have subtypes. Initially this causes difficulties in trying to study them. Lumping together different illnesses has obscured the truth many times in the history of neuroscience and mental health, and is certainly still doing so now. (In the early twentieth century most physicians would have considered schizophrenia, autism, intellectual disability and dementia the same thing, and now most educated Westerners have some idea that at least these are different conditions, if not what the symptoms are.)

Procrastination is a problem for a lot of people that gets surprisingly little attention in the psychology literature, relative to its prevalence and the amount of suffering it engenders. A simple model relies on executive dysfunction affecting set-switching, and it works like this. You want to accomplish B, but you have to do A first to get to B. A is unpleasant and merely an instrumental goal. If you have poor executive function, you can't get yourself to start doing A; you "put it off". Or, you're already doing X, which although unrelated is much more fun in the moment than A would be, so you REALLY can't get yourself to start.

No doubt this model does usefully describe many people's experience, and an executive function deficit probably plays at least some part in just about every chronic procrastinator, even those best described by the model I'll advance below. But the pattern many people describe has several inconsistencies suggesting that what's really motivating the procrastination is avoiding the threat of ego injury - especially in narcissists, to whom any damage to self-worth from being less than perfect in a core value is destructive and terrifying.

I've made a number of observations - from scouring the limited literature, as well as introspection, observation of patients, and reading others' introspection - that suggest to me that for a large subpopulation of procrastinators at least, the problem is driven mostly by character rather than executive dysfunction. Even before looking at the literature, based entirely on my observations in the clinic, I noticed a commonality in the patients who would complain of procrastination difficulties. They're usually male and middle-aged or younger. They often display a degree of alexithymia, or even more interestingly, very specific alexithymia toward anxiety only - they either never notice that they feel anxious, cannot name it when they do, or actively deny feeling anxious. I often suspect that this is motivated by anxiety being an ego-dystonic emotion in male narcissists (to be anxious is to be weak, which is unacceptable.) In treating these patients, I've measured their symptoms and progress with the Irrational Procrastination Scale (hereafter IPS) in my practice, though you can also find the Pure Procrastination Scale (Steel 2010), and two comparisons (Svartdal et al 2016, Svartdal and Steel 2017.) (Steel has his own site here with more information.) I've never taken objective data on narcissistic personality, though the instrument most commonly used is the Narcissistic Personality Inventory (the NPI.)

Literature review shows two things: a small literature investigating possible procrastination subtypes, and a tiny but intriguing signal about a narcissism-procrastination connection. There are a few more papers indexed by procrastination and compulsive personality, among them Primac's paper showing the success of a brief therapeutic intervention in compulsive personality at decreasing both narcissism and procrastination. Of the three procrastination subtypes noted in the literature (avoidant, arousal, and decisional procrastinators), narcissistic procrastinators as I describe them below would most closely match the avoidant subtype. Some studies have found differences in the subtypes, for example in their activity at different times of day (Díaz-Morales et al 2008.) However, Steel in 2010 performed a meta-analysis concluding that there is no evidence for the subtypes as distinct entities. Lyons and Rice (2014) reported on avoidance and arousal procrastination subtypes specifically and found relationships with secondary psychopathy and the Entitlement/Exploitativeness facets of the NPI. In contrast, Nawaz et al (2018) did not find a correlation between the IPS and the NPI. Shame is known to be at the core of pathology in narcissism, and Fee and Tangney (2000) found that procrastination correlated with shame, but not guilt. Wohl et al (2010) found that students who forgave themselves for procrastinating while studying were better able to overcome study procrastination in the future, again suggesting a role for shame in the behavior. Mann (2004) noted avoidance effects proportionate to narcissistic injury in undergraduates. There is a slightly stronger signal for procrastination and obsessive personality - suggesting a common thread of perceived poor self-efficacy.
A study comparing procrastinators versus non-procrastinators did not find differences in the cognitive abilities they measured, but did conclude that "Further research must provide evidence for persistent procrastination as a personality disorder that includes anxiety, avoidance, and a fear of evaluation of ability" (Ferrari 1991.)

What all this strongly suggests is that narcissism plays a role in procrastination, if not in all affected procrastinators, then in a significant subpopulation. In addition to the literature cited here, here are the observations I've made of patients that support a narcissistic subtype of procrastinator and suggest a mechanism for the behavior.

  1. Some procrastinators have reported that being sick or sleep deprived makes it easier for them NOT to procrastinate. This flies in the face of the executive dysfunction hypothesis. They say, basically, "I'm already miserable, so why not just do the thing I don't want to do." This suggests that what they're avoiding with procrastination is something that makes them feel generally bad, and when they already feel that way, there's no point in avoiding the task.
  2. This subtype of procrastinator doesn't just forget about the task. They don't simply forget to do it, or remember it, not feel like it, and put it out of their minds. The task is actually continually on their minds while they're avoiding it. This is also very unlike executive dysfunction.
  3. Many procrastinators have the experience of having TWO things they're procrastinating about, and they switch which one they're avoiding. This pattern is possibly the most instructive of all of them here, because of how irrational it otherwise is without this model. For example, someone is supposed to do A all day, but avoids it. Then a deadline approaches for B (say, they're supposed to start getting ready to leave for an important meeting.) Then they actually do start doing A! If they're worried about the ego-injury of performing imperfectly at these activities, at some point anticipation of B (which they think they will do badly at) builds so much that they need some distraction. Now, the prospect of failing at A is further away and therefore not as painful, and they'll be partly distracted from A by impending B anyway - but more importantly, they'll be distracted from thinking about failing at B by doing A half-assed (and in narcissism, there is often constant activity to avoid feelings of worthlessness with superficial productivity.) Tim Urban describes this phenomenon here in cartoon form: briefly, it's time for him to leave for an appointment - Task B - when the Procrastination Monkey says "that work you were trying to do all day, I've changed my mind and suddenly I'm into it." Note that strong focus on another task is not what you would expect from impulsivity either.
  4. Sometimes, once a procrastinator begins working on the avoided task, they explode with anger if they have to move on to something else. At first glance this would appear to be a perfect example of poor set-switching and therefore eminently explainable by the executive dysfunction model (autism spectrum people do this too), but you can actually differentiate based on the nature of the new task - if it's a self-worth-supporting activity, the narcissistic procrastinator would resist the switch less, while to the autistic person the nature of the new task would not matter. In fact, the more fun-for-its-own-sake the new task is, the more the narcissistic procrastinator would resist (don't you dare ask them to play video games once they finally get started on the previously-avoided task! But asking them to work on a boring, important tax document might be alright.)
  5. Procrastinators often use words to describe the way they feel like "worthless" or "useless", classic for wounded narcissists. They absolutely consider their procrastination a huge problem. Contrast with ADHD patients who avoid work, for whom the avoidance is often fairly ego-syntonic.
  6. Many procrastinators describe doing pointless, un-fun busywork while worrying about the thing they're supposed to be doing. Tim Urban has a related idea called the "dark playground", but on the dark playground you can do fun things (while feeling guilty about them); I'll call this domain "busywork purgatory". Tellingly, it's always meaningless busywork. It's never something fun (well I'm not doing it, might as well play video games); it's not something else important. It's trivial, and it's usually something continuously attention-occupying that can be completed that day (for a burst of that feeling of accomplishment.) Many people have experienced the urge to clean the dorm room instead of studying, but dorm room cleaning is actually more useful than most busywork purgatory activities.
  7. A culturally-influenced aspect: in modern America there is a premium on productivity and success over most other characteristics. In a culture where (for example) loyalty to religion or family is most prized, I would expect that instead of busywork purgatory, the procrastinator gets stuck in prayer purgatory or doing-things-for-your-family purgatory, to prop up their self-worth.
  8. Probably the most disabling impact of this procrastination subtype: people reverse-prioritize, spending more time on unimportant activities and starting them earlier and more easily. I've heard procrastinators say that they can tell how important they think something is by how easily they work on it or how relaxed and creative they can be about it (see this Tweet by someone who appears to be admitting to reverse prioritization.) With executive dysfunction alone, you would expect a random ordering of work with respect to actual priorities, as opposed to a reverse ordering. Procrastinators are therefore often able to be quite productive at something that is not important to them. Paradoxically, if their productivity and success lead that thing to become a central part of how they measure their self-worth, they will start procrastinating at it. People describe starting to feel "trapped", and that's when they start to procrastinate. I would argue the behavior is not reactance but rather avoidance of ego-threat. Again, pure executive dysfunction would predict no success at anything, not reverse-prioritized success at things deemed unimportant.
  9. Some procrastinators suggest that recent successes make them less likely to procrastinate, possibly because they have an expectation of a positive outcome as a result of their efforts, rather than only negative outcomes (and this thought distortion is actually reinforced by reality in previous instances; part of the problem is that their outcomes really are negative, and they've taught themselves this quite effectively.) This also suggests a twofold problem: not only do narcissistic procrastinators envision a negative outcome in the end, they also get no positive feedback from the intermediate steps along the way, because there's no feedback in the form of external praise. Thinking about it this way, there's literally no reason to start the task, because it will be at best neutral while you're doing it, and then bad when you finish.
  10. A bizarre compensation behavior I've heard from multiple procrastinators is the pattern of performing an otherwise important task out of context, after the fact, and alone (where it has no value to anyone.) Bizarrely they will act like their completion of the task is exactly equivalent to having done it in the normal manner and time, all the while knowing exactly how childish and strange it is. I know of one person who was going to run a marathon, then showed up to the deserted starting line four hours late, ran the course, then actually emailed the organizers to yell at them - "What kind of a race is this? No aid stations?" (because they had long been taken down) "No one to hand me a finisher's medal at the end?" (Yes, because the race was over and everyone had gone home.) This person actually followed up for a while with angry phone calls and emails, fully aware how ridiculous it must sound but feeling compelled to do so; he said if he hadn't done this he would've felt "weak", classic for a male narcissist. Another example of this bizarre behavior was one procrastinator who routinely waited until after customer service lines shut down for the day to call (his bank, to change his password, etc.) He wasn't aware of doing it intentionally, but repeatedly noticed that it was 5:02pm, and it was time to call his bank. He would leave angry messages if there were voicemails, post tirades on companies' social media feeds, etc. He said he noticed he felt a strange satisfaction and even comfort in getting angry, and admitted to being oddly disappointed on those occasions when he called and got a live person who could help him. He stated the task was usually one where he wasn't sure if he would be successful or would know how to "navigate the system" to a successful outcome.
  11. The types of tasks that this subtype procrastinates on are rarely solo activities, or tasks with certain outcomes. Going for a solo hike, even one which involves complex planning, does not threaten to ego-injure the person with possible failure, nor does it provide an opportunity to fail in front of anyone else.
  12. If the person is angry at someone, especially at an authority figure telling them to do the task (less likely a competing peer), the procrastinator will engage in very thinly-veiled passive aggression by doing a previously-avoided task well and on-time, often fantasizing that they are frustrating the expectations of the authority who expects them to be late. Anger at authority cannot be the sole explanation, since the person knows that they are doing what the authority wants. Being less likely to behave this way toward peers is not as good an explanation as fear of ego-injury from credible critics (low confidence in success in front of peer competitors who are credible critics is less tolerable than say, a boss who the person is angry at and no longer considers credible.) While the sympathetic activation could be responsible for the focus (again, arguing for a simple executive dysfunction model), anger is known to focus narcissists.

Synthesizing these observations: the narcissistic subtype of procrastination is not the same as the avoidant subtype, but of the three traditionally considered subtypes, avoidant is the most similar. The model for the narcissistic or ego-threat subtype of procrastination is as follows. A person with some traits of a fragile narcissist, likely in the context of some executive dysfunction, encounters a task they have to do. This task is part of a series of actions leading to a goal that they intellectually want and consider quite important to their core identity - a way that reflects on who they are and want to be, in front of other people, especially those who can credibly criticize them. Because they have poor confidence and/or unrealistically high standards, they feel they are likely not to succeed. Given their character structure of a fragile self-worth that must be propped up with perfect external achievements, this is a profound ego-threat, and they feel anxiety contemplating the outcome of the task. Consequently they avoid doing it, but not thinking about it. They substitute either activities which distract them with continuous activity and certain near-term positive outcomes (no matter how trivial), or another otherwise-avoided important activity, but one which is farther in the future (and therefore, the ego-threat is farther off as well.) They finally undertake the avoided task when the remaining time is so short and the threat looming so immediately that their awareness of the damage they're doing to themselves overwhelms the comfort they get by distracting themselves. If there is some way for the person to feel they can say to others they completed the task but WITHOUT exposing themselves to criticism and ego-threat, they'll do that, sometimes even if it's patently ridiculous; e.g. doing the task when no one else sees them and after it no longer matters.
If the person already feels bad in general (from physical illness) or angry, even at an authority figure telling them to do the task, they are paradoxically more able to complete the task.


My complaints about the paucity of procrastination research are partly driven by having treated it in my own practice, and having to round up what little evidence there is and then use "clinical judgment" for the rest. To round up the pharmacotherapy options: there are basically none, and in particular, there are none for the subtype I propose here. There is very indirect evidence for amphetamines (in one paper, college students abusing amphetamines reported less procrastination), but again, if these are all procrastinators mixed together, that's exactly what you would expect to see. I tried propranolol with a patient who had comorbid non-pathological social anxiety; he used it a couple of times and thought it helped, but he was much more successful with CBT (more on this shortly.) There is no evidence on Pubmed for other stimulants, benzodiazepines, beta blockers, or the SSRIs and SNRIs available on the US market. I've had people report that caffeine makes them work faster, helps them focus, and lifts their mood, but after it wears off they realize that caffeine just helped them do more tasks in "busywork purgatory" - it didn't help them focus on the true high-value tasks.

The best evidence for successful treatment is from psychotherapy, specifically CBT, which is also what has far-and-away worked the best in my own experience. Rozental et al have two studies which show, among other things, that in-person CBT with a therapist produces the same outcomes at end of treatment as internet-based self-guided CBT, but the in-person patients maintain their improvements better over time. Improvement was over a full standard deviation from the control (!), but only about a third of participants improved - also consistent with my own experience that it doesn't help everyone, but the ones that get it, really get it.

This treatment does not differentiate by subtype or provide information that would let us infer about the relative benefit for narcissistic vs other mechanisms of procrastination. So what would I expect would be most successful approaches in CBT for narcissistic procrastination? (These therapeutic maneuvers are inferred from the model of narcissistic subtype procrastination above, but should be tested empirically in placebo-controlled studies and therefore remain speculative.)
  • Exposure therapy for failure and criticism of your core attributes. As a therapist - have the patient make a list of the things they consider core important attributes, skills, and values they offer, and people who are qualified to evaluate them against those standards. Perform role-play or imaginal exposure.
  • Learn to identify the anxiety that comes up when you start to avoid something - name it and develop a counter-habit, like working on the task for five minutes. As a therapist - have the patient tell you tasks that they procrastinate on. "Ambush" them during therapy, mentioning one of them out of the blue, then hit "pause" and ask the patient what it made them think of and how it made them feel. Have them keep a journal of the times they successfully fought back against the feeling outside of therapy and worked for at least five minutes (only track the successes, not the failures.)
  • Enlist a significant other, roommate, family member etc. to check up on you and give positive support when you finish tasks. (And don't avoid asking them out of shame, worrying you'll appear weak, etc. which is why this usually doesn't happen.)
  • Develop a habit of remembering that the individual steps do have value, even if you have to imagine others praising you for completing them. Envision a realistic positive outcome and how it will feel. Break things down into very very small steps, remind yourself this is how successful people do it (don't minimize by saying that means you're weak) and then pay attention to how you knock out these tasks - a success spiral.
  • Radical acceptance and forgiveness - we have certain abilities, we're going to screw up sometimes, and we're fine the way we are. Be consciously aware that castigating yourself mentally is not going to help you change, and in fact will do the opposite.

AFTERWORD: Why is Procrastination Seemingly So Much More Prevalent Now?

Procrastination is certainly not new in this or the last century. What does seem to be new is the number of people affected by it, and there's probably an easy answer for why that would be so. When you're stuck in the Malthusian grind like most of our ancestors were until about a century ago, your life is a series of constant emergencies, and we should expect that our brains are adapted to focus in this way, on near-term impending disasters with short time horizons and only a few concrete elements. (Starvation, fights, etc.) And indeed procrastinators often do quite well under pressure - they often report this as an excuse early on in their lives for why they always work up to the deadline, until they're honest with themselves about how out of control their behavior really is. (This is also borne out in the literature on the arousal subtype of procrastination.) It's interesting that stoicism as a coherent philosophy of classical antiquity was largely a philosophy of patricians, and its texts contain lots of subtle status signals in the form of complaints that it was hard for their authors not to waste their time, i.e. not to procrastinate with trivia. This might have been a problem for an emperor or senator, but a subsistence farmer in a Roman province rarely had the luxury of stretches of time without highly activating direct threats to survival. Today we all live better than senators did in that era, which is to say we all have stretches of unstructured time and no threats to our survival - although notice that the things that finally do motivate us, even in procrastination, are all perceived threats.

There is also speculation that narcissism has become more prevalent as time has marched on. This is less than a settled point and I won't go into the debate here, but if that's the case, and narcissism does contribute to procrastination, you would expect to see more procrastination.

A third possibility is the cognitive parallel to the hygiene hypothesis. Immune systems, when not challenged sufficiently by invading pathogens, get very paranoid, and are more likely to mount autoimmune attacks. In the comparatively sterile modern environments where we now live, this is a problem. In the same way, in the absence of constant emergencies, the human threat detection system has more false alarms, and the one threat that does still exist is the threat of criticism, disapproval, and being perceived as weak (especially if you're male.) While such disapproval in the paleolithic could result in your death if you were thrown out of the tribe, today it seldom means anything of the sort. Un-learning our exaggerated social threat responses will likely be the central mental health task of the twenty-first century.


Díaz-Morales JF1, Ferrari JR, Cohen JR. Indecision and avoidant procrastination: the role of morningness-eveningness and time perspective in chronic delay lifestyles. J Gen Psychol. 2008 Jul;135(3):228-40. doi: 10.3200/GENP.135.3.228-240.

Fee RL., Tangney JP. Procrastination: a means of avoiding shame or guilt? J Soc Behav Personal. (Special issue: Procrastination: current issues and new directions). 2000;15:167–184.

Ferrari JR. Compulsive procrastination: some self-reported characteristics. Psychol Rep. 1991 Apr;68(2):455-8.

Lyons M, Rice H. Thieves of Time: Procrastination and the Dark Triad of Personality. Personality and Individual Differences Volumes 61–62, April–May 2014, p. 34-37

Mann MP. The adverse influence of narcissistic injury and perfectionism on college students' institutional attachment. Personal indiv Diff. 2004;36:1797–1806.

Nawaz H, Shah SIA, Mumtaz A, Sohail Chughtai A. (2018). Alarming trend of procrastination and narcissism among medical undergraduates. From Researchgate.

Primac DW. Measuring change in a brief therapy of a compulsive personality. Psychol Rep. 1993 Feb;72(1):309-10.

Rozental A, Forsell E, Svensson A, Andersson G, Carlbring P. Internet-based cognitive-behavior therapy for procrastination: A randomized controlled trial. J Consult Clin Psychol. 2015 Aug;83(4):808-24. doi: 10.1037/ccp0000023. Epub 2015 May 4.

Rozental A, Forsström D, Lindner P, Nilsson S, Mårtensson L, Rizzo A, Andersson G, Carlbring P. Treating Procrastination Using Cognitive Behavior Therapy: A Pragmatic Randomized Controlled Trial Comparing Treatment Delivered via the Internet or in Groups. Behav Ther. 2018 Mar;49(2):180-197. doi: 10.1016/j.beth.2017.08.002. Epub 2017 Aug 5.

Steel, P. (2002). The Irrational Procrastination Scale. PhD Thesis, unpublished.

Steel, P. (2010). Arousal, avoidant and decisional procrastinators: do they exist? Pers. Individ. Dif. 48, 926–934. doi: 10.1016/j.paid.2010.02.025

Svartdal F, Pfuhl G, Nordby K, Foschi G, Klingsieck KB, Rozental A, Carlbring P, Lindblom-Ylänne S, Rębkowska K. On the Measurement of Procrastination: Comparing Two Scales in Six European Countries. Front Psychol. 2016 Aug 31;7:1307. doi: 10.3389/fpsyg.2016.01307. eCollection 2016.

Svartdal F, Steel P. Irrational Delay Revisited: Examining Five Procrastination Scales in a Global Sample. Front Psychol. 2017; 8: 1927. Published online 2017 Nov 3. doi: 10.3389/fpsyg.2017.01927

Wohl MJA, Pychyl TA, Bennett SH. I forgive myself, now I can study: How self-forgiveness for procrastinating can reduce future procrastination. Personality and Individual Differences. Volume 48, Issue 8, June 2010, p. 926-934

Sunday, April 29, 2018

Psychiatrists Per Capita in the US, in Outpatient Practice Terms

The number is psychiatrists per 100,000. Data from the Dartmouth Health Atlas. If you make simplifying assumptions, you can get an idea of what that means. 1 in 6 people lives with mental illness (currently, not lifetime prevalence.) So if in your city there are 10 psychiatrists per 100,000 people, that means 1 psychiatrist per 10,000 people, and 1 psychiatrist for 1,667 people with mental illness. How long would it take to see them all? The most under- and over-served areas are Oxford, Mississippi with 3.4 psychiatrists per 100,000 and San Luis Obispo, California with 36.5 psychiatrists per 100,000 (although I'll wager the latter is counting psychiatrists at Atascadero State Hospital.) If you assume all these people are seen on an outpatient basis, by psychiatrists working 48 weeks a year, 5 days a week, with sixteen 30-minute slots per day about 2/3 full, then in San Luis Obispo you could see your whole share in a little over 2 months (that is, the average follow-up interval would be two months.) In Oxford it would be just under two years.
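The back-of-envelope arithmetic above can be sketched in code. This is a rough model only; the prevalence figure and the slot counts are the simplifying assumptions stated in the text, not data:

```python
# Rough model: average follow-up interval implied by psychiatrist density.
# Assumptions (from the text, not data): 1 in 6 people currently has mental
# illness; psychiatrists work 48 weeks/year, 5 days/week, with sixteen
# 30-minute slots per day, about 2/3 full.

def followup_interval_years(psychiatrists_per_100k):
    people_per_psychiatrist = 100_000 / psychiatrists_per_100k
    patients = people_per_psychiatrist / 6        # current prevalence ~1 in 6
    slots_per_day = 16 * (2 / 3)                  # ~10.7 appointments per day
    workdays_needed = patients / slots_per_day    # days to see each patient once
    return workdays_needed / (48 * 5)             # 48 weeks x 5 days per year

print(f"San Luis Obispo: {followup_interval_years(36.5) * 12:.1f} months")
print(f"Oxford, MS:      {followup_interval_years(3.4):.1f} years")
```

Running this reproduces the estimates in the text: roughly two months between visits in San Luis Obispo, and just under two years in Oxford.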

You'll note the long tail, which begins right around 15 per 100,000, and those locations are: Morristown NJ, Alameda County (Bay Area) CA, New Orleans LA, Honolulu HI, Springfield MA, Durham NC, Santa Cruz CA, Ridgewood NJ, Hartford CT, Portland ME, Hackensack NJ, Lebanon NH, East Long Island NY, Pueblo CO, Evanston IL, Baltimore MD, Washington DC, Bridgeport CT, New Haven CT, Bronx NY, San Mateo County (Bay Area) CA, Boston MA, Manhattan NY, San Francisco CA, White Plains NY, Napa CA, and San Luis Obispo CA. There's an obvious bias toward cities with academic centers and/or places where white collar workers like to live, although the last two locations (at least) also have large state psychiatric hospitals.

Sunday, January 7, 2018

How Steep is Your Empathy Curve?

Let us suppose that the great empire of China, with all its myriads of inhabitants, was suddenly swallowed up by an earthquake, and let us consider how a man of humanity in Europe, who had no sort of connection with that part of the world, would be affected upon receiving intelligence of this dreadful calamity. He would, I imagine, first of all, express very strongly his sorrow for the misfortune of that unhappy people, he would make many melancholy reflections upon the precariousness of human life, and the vanity of all the labours of man, which could thus be annihilated in a moment. He would too, perhaps, if he was a man of speculation, enter into many reasonings concerning the effects which this disaster might produce upon the commerce of Europe, and the trade and business of the world in general. And when all this fine philosophy was over, when all these humane sentiments had been once fairly expressed, he would pursue his business or his pleasure, take his repose or his diversion, with the same ease and tranquillity, as if no such accident had happened. The most frivolous disaster which could befall himself would occasion a more real disturbance. If he was to lose his little finger to-morrow, he would not sleep to-night; but, provided he never saw them, he will snore with the most profound security over the ruin of a hundred millions of his brethren, and the destruction of that immense multitude seems plainly an object less interesting to him, than this paltry misfortune of his own. To prevent, therefore, this paltry misfortune to himself, would a man of humanity be willing to sacrifice the lives of a hundred millions of his brethren, provided he had never seen them? Human nature startles with horror at the thought, and the world, in its greatest depravity and corruption, never produced such a villain as could be capable of entertaining it.
- Part III, the Theory of Moral Sentiments, Adam Smith

If we are honest, we all must admit that there are some people in the world we care about more than others. While we're horrified (at least in our culture) about the idea of having a favorite child, almost everyone is pretty quick to say they'd rather save their child than a stranger that they've never met. But how about ten strangers? Or a hundred "millions"?

Of course people differ. Someone with no empathy for anyone including him or herself (autistic; badly depressed narcissist) would look like this:

A narcissist, psychopath or very young child would look like this.

This is what most of us look like.

This is what many people want to look like, but probably don't. Scott Alexander noted that people on the left often make a point of showing empathy for people more unlike them. (Is this a stable strategy?)

And finally, this is what the Buddha demonstrates, and how some progressive people claim to act - equal empathy for all beings (on the far right presumably are nonhuman primates, other animals, plants, etc.)

Of course these are all quick-and-dirty qualitative graphs to give you an idea, but they illustrate the hallmark of empathy curves. For most of us, empathy curves are sigmoidal - there is an inflection point at some level of non-self. Questions that are raised by considering this relationship are:

a) Before you get all excited that the graphs above are showing some progression toward a desirable goal - is it sustainable to show the same empathy to all others - either zero (narcissists) or full empathy (the Buddha)? Or even to show empathy in inverse proportion to how much like you someone else is? Empathy has real world impacts and there are obvious sociobiological reasons why most people's curves look like the third graph, but from a purely practical perspective, if your empathic behavior leads to your rapid extinction, it doesn't seem to be effecting much good in the world. The steepness of the empathy curve also produces a lot of the current political divide in the West - i.e., the less able to abstract a principle beyond their ingroup, the more contentious a faction.

b) Empathy curves can change over time for an individual, and finding what else is different about individuals who undergo change (neuroanatomically, psychologically) may be informative. For example, in psychiatry, there is the concept of the "burned out" antisocial, the person who commits vicious crimes indicating low empathy when he (usually he and not she) is younger. Then after about age 40, the same person is much less likely to commit further violent crimes. My speculation is that these people are not burned out but rather finally "grown in", i.e. their orbitofrontal cortex has finally produced enough synapses to affect their behavior, in the same way that ADHD symptoms often fade into and through adulthood as the cortex matures (again, more often in males.) Many of us can think of anecdotal examples of a male who in his youth was a hell-raiser only concerned with himself, then transforms into a devoted family man - but he still has a very steep empathy curve that drops off once you move outside the family. (That guy who dotes on his daughter but was in a biker gang when he was younger? He might actually be a very good father - but he still doesn't have much concern about anyone's pain but that of his wife and kids, and if you're past his steep sigmoidal drop-off, you definitely don't want to test that.)
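The sigmoidal shape described above can be made concrete with a toy function. This is purely illustrative - the functional form, the "social distance" axis, and the parameter names are my own assumptions, not from any cited work:

```python
import math

# Toy sigmoidal empathy curve. 'distance' is social distance from self
# (0 = self/family, larger = more unlike you). Steepness k and midpoint d0
# vary by person: large k means empathy drops off a cliff past the ingroup
# boundary (the reformed biker); k near 0 approximates the Buddha's flat,
# equal empathy for all beings.
def empathy(distance, k=2.0, d0=3.0):
    return 1 / (1 + math.exp(k * (distance - d0)))

for d in [0, 2, 4, 6]:
    print(f"distance {d}: steep={empathy(d, k=4):.2f}, flat={empathy(d, k=0.1):.2f}")
```

The steep curve goes from near-full empathy at distance 0 to near-zero at distance 6, while the flat curve barely changes over the same range - the difference between the third graph and the last one.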

I've looked for evidence of differing oxytocin levels or even ADH/vasopressin (or receptor mutations) in the literature about psychopaths and antisocial PD. If you accept crime as a proxy for low empathy, there's a small literature on testosterone's role, but it's not nearly as clear as you would think. For one thing, there's actually a meta-analysis that undermines the argument that testosterone is behind the pattern of a crime spike and then decrease after early adulthood (Archer et al 2005), and Ulmer and Steffensmeier (2014) point out that though testosterone does not drop precipitously after adolescence and very early adulthood, crime typically does. There is a vague signal about aggression dropping as testosterone declines with age (including in women - Dabbs and Hargrove 1997), but neither this nor any of the previous studies track testosterone and crime in the same individuals, which would be most informative. Overall the research on decreasing crime is scant - it seems once these people stop committing crimes, we're less interested in studying them.


Archer J, Graham-Kevan N, Davies M. Testosterone and aggression: A reanalysis of Book, Starzyk, and Quinsey's (2001) study. Aggression and Violent Behavior. Volume 10, Issue 2, January–February 2005, Pages 241-261

Ulmer JT, Steffensmeier D. The age and crime relationship: Social variation, social explanations. In The Nurture Versus Biosocial Debate in Criminology: On the Origins of Criminal Behavior and Criminality (pp. 377-396). SAGE Publications Inc.. 2014.

Dabbs JM Jr, Hargrove MF. Age, testosterone, and behavior among female prison inmates. Psychosom Med. 1997 Sep-Oct;59(5):477-80.

Why Do People Remain Loyal to a Losing Team?

Cross-posted to the MDK10Outside and the Late Enlightenment.

tl;dr Sports fan behavior is explained by a combination of constant identity-forming team loyalty which is an end in itself, and status signaling by association which is modulated by team performance. These two factors differ between individuals and are associated with different cognitive styles, with constant loyalty more associated with moral foundations and intransitive preferences.

It's been observed that you can tell who a team's true fans are by noticing who remains loyal to the team even when that team is losing. I think this is meaningful, but it does raise the question: what are those fans getting out of it?[1] Of course any speculation about this must mention the very real example of the Cleveland Browns, who over the past 2 years have a 1-31 record, and this year, after going 0-16, were on the receiving end of a sarcastic "perfect season" parade.

Humans get utility from associating with others with high status. Much of the happiness that a sports fan gets from their emotional connection to their team derives from this, and many observations are consistent with what a status-by-association theory would predict: fans are happier when their teams win because they feel high status and can signal higher status, they engage in extreme dominance displays when their teams win important contests (i.e., people acting like idiots as they come out of a championship game if their team won, yelling, jumping on cars, setting off fireworks) but not if they didn't win, they attend games more when the team is winning and less when the team is losing, and they wear branded gear to identify themselves with the team and otherwise let others know of their association.[2]

But this theory falls short of explaining why, for example, there is any such thing as a team's consistent fanbase. By this model, everyone should just cheer for the best team, game by game (or even play by play!) It especially doesn't explain why the Cleveland Browns have any fans left at all; supposedly they're a football team, but I've seen a number of convincing arguments against that - for instance, every game of the 2017 season. During an 0-16 season you would expect that if fandom were about fully rational people maximizing utility by associating with high-status teams, the fans would stop posting on forums, they would put their gear away and deny to others that they were fans, and the stadium would not just have lower attendance, it would be completely empty. Yet this is not what happened.

I think the answer here very likely has to do with the gap we see between two types of beliefs/behaviors that often produce apparent impasses in other domains of life, especially religion and politics, the intensity of which differs between individuals. This gap between rational and more instinctual behavior will seem very familiar to readers of books like Jonathan Haidt's Righteous Mind, or Simler and Hanson's Elephant in the Brain. Humans demonstrate some domains in their cognition which are inflexible and impervious to reason - to use Haidt's categories: harm, fairness, loyalty, authority, and purity. By "inflexible" I mean "not open to discussion, or to conversion into money or other goods/services." For example, you likely do not believe that murdering children is morally acceptable. Are you interested in hearing arguments about why it might be? If you would never consider such a thing, and you're uncomfortable that I would even suggest it in a thought experiment, you're showing inflexibility in discussing it. Okay - would you kill an adult for $50,000? I see that also upset you; I'm sorry to have opened with such a low offer! $75,000 then? You're being inflexible (I hope!) in reacting by thinking "It's not about the number!" Okay, what's the conversion rate between adults and children? Forget murder - how about urinating on a picture of your family for money? Etc.; you get the point. "Inflexible" means it can't even be suggested as open for discussion, which includes not being allowed to convert between moral-foundation-violating acts and money, or between different types of immoral acts. (A favorite device of movies and dramas to demonstrate the extreme evil of an antagonist is to have them force someone to declare the relative value of immoral acts, e.g. Sophie's Choice.) To connect back to the abstract: the philosophical term for values that cannot be negotiated, and for which there is no relative value like this, is that they are intransitive.

I took you on this little tour of moral darkness to illustrate that morally normal humans do not adhere to consistent rationality, and the ones who actually do are psychopaths.[3] (You may be interested to know that when Haidt surveyed the business students he was teaching, they scored low on every single moral dimension, taught as they are that everything is negotiable.) So what does all this have to do with the Cleveland Browns? Many of us have noticed that "hardcore" sports fans - the ones who stick around with long faces even when the Browns are losing, and falsify the first model above - tend to have certain personality and cultural characteristics that fit well with some of these inflexible moral foundations: they tend to be more religious, more nationalistic, more conservative, and more valuing of loyalty and authority.[4] Sports fans rarely become hardcore about a team after entering adulthood, and very often there is a family lineage of fandom - and these are exactly the times and ways in which characteristics of core identity are formed. Also telling: while about 3,000 people showed up for the Cleveland Browns parade, many fans were quite angry about it - but online objections were mostly that it was "embarrassing." (No mention of the 0-16 record that inspired the parade.)

Before I put into words what might be motivating them and make predictions, here's a summary of the two kinds of beliefs, producing two kinds of motivation. While these beliefs exist in everyone, there is a distribution in the population, with one category of beliefs dominating the fandom-related cognition of some fans, and the other category dominating that of others.

Hardcore fan                                | Casual fan
--------------------------------------------|-------------------------------------------
motivated by moral foundations              | motivated by utility calculations
end in themselves                           | deliberate, external goal-oriented
higher value on loyalty                     | lower value on loyalty
adopted in childhood, maybe from family     | adopted voluntarily in adulthood
not negotiable                              | negotiable
central to identity                         | not central to identity
unwilling or unable to verbalize position   | clearly verbalized
more often encountered in person            | more often encountered online
sees casual fans as untrustworthy, sleazy   | sees hardcore fans as stupid, gullible

Of course it's a spectrum, and every fan is somewhere on this spectrum, but many of us clearly lean toward one or the other end. (If you're reading this, you're more likely in the right column than the left.)[8]

To summarize the hardcore fan: he is motivated by more basic, instinctual moral drives, especially loyalty. Being a good fan is an end in itself, and an offer to burn a team jersey, to cheer for the other team, etc. in exchange for money is likely not only to be immediately refused but to provoke active offense. These fans consider their fandom a crucial part of their identity, to the extent of including team-related themes in their weddings or mentioning it in obituaries ("he lived and died by the Browns"; "a Browns fan to the core.") He can get uncomfortable when the business aspects of a professional sport are discussed and overshadow the games on the field. Asking him to explain his fandom will be met with puzzlement, anger, or a jumbled set of team cheers and slogans, in the same manner as a person asked to explain why they are patriotic or follow a certain religion - "If I have to explain it to you, you'll never understand." And finally, because tribal loyalty sentiments are more warning-barks or team cheers than any kind of actionable proposition, you're more likely to hear such sentiments when talking to him in person, where the nonverbal (affect-laden and irrational) part of communication dominates. He will be a fan for life. When the bandwagon people disappear during losing seasons, the hardcore fan says "Good riddance, good-time Charlie."

To summarize the casual fan: he is motivated by utility calculations about external goals (this team might win this year so I'll cheer for them; maybe I can make friends this way; maybe I'll look successful if I follow a good team.) He doesn't see what's impressive about staying loyal to losers, and really doesn't understand why making fun of your team when they lose is shameful or embarrassing. He probably picked up his fandom after college, maybe when he moved to a new city. He probably doesn't care either way about the business dealings of the team. If someone offered him money to stay home from a game or burn team logos, he would seriously consider the offer. He doesn't introduce himself to strangers as a fan, and five years from now he might not be following the team, or might not be following the sport at all. He can give clear reasons why he started following the team, and you're more likely to hear from people like him online. He shakes his head at the hardcores who keep shelling out cash for losing teams' jerseys.

Both the hardcores and non-hardcores gain utility in proportion to the team's performance. A team's performance can be negative, causing you to lose utility by associating with them.[5] But there must be another source of utility for the hardcores, who somehow gain utility from the association no matter the team's performance - and that source of utility is a constant ability to demonstrate loyalty, to others as well as to themselves, reinforcing their own identity. And this signal is most informative when your side is losing.[6] Speaking quantitatively, the utility equation for this model has two terms: loyalty (constant over time for a given fan, hardcore or not), plus the product of team performance and associative utility. Associative utility is how much your utility changes per unit of team winningness. Both loyalty and associative utility vary by individual, and team performance of course is determined by the team. The equation looks like this:

Total utility = Loyalty-based utility + (Team performance * associative utility)

Team performance can be positive or negative. For the hardcores, loyalty is such a large term that it doesn't matter how negative team performance is; loyalty will always be greater and the total utility will always be positive (this could be the definition of "hardcore," "rain or shine," etc.) Further toward the other end of the spectrum, the value of loyalty signaling decreases, and team performance makes more of a difference in whether people keep following the team. It's also worth pointing out that this explains people who don't care about sports at all: they have zero loyalty and zero associative utility - that is, it doesn't matter how much the team wins, they still won't care.
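The two-term model above can be sketched in a few lines of code. All the numbers here are illustrative assumptions, not fitted values; the point is only to show how a large loyalty term keeps total utility positive even through an 0-16-style season.

```python
def total_utility(loyalty, associative_utility, team_performance):
    """Fan utility = constant loyalty term + performance-scaled association term."""
    return loyalty + team_performance * associative_utility

# Illustrative fans: the hardcore fan has a large loyalty term,
# the casual fan a small one; both gain equally from associating with a winner.
hardcore = {"loyalty": 10.0, "associative_utility": 1.0}
casual = {"loyalty": 0.5, "associative_utility": 1.0}

for performance in (8.0, 0.0, -8.0):  # winning, mediocre, and losing seasons
    h = total_utility(hardcore["loyalty"], hardcore["associative_utility"], performance)
    c = total_utility(casual["loyalty"], casual["associative_utility"], performance)
    print(f"performance {performance:+5.1f}: hardcore {h:+6.1f}, casual {c:+6.1f}")

# In the losing season the hardcore fan's utility stays positive
# (10 - 8 = +2) while the casual fan's goes negative (0.5 - 8 = -7.5),
# so the casual fan stops following the team. A person with loyalty = 0
# and associative_utility = 0 gets zero utility no matter how much the
# team wins: the non-fan.
```

This also makes the floor effect in the predictions below concrete: as long as some fraction of the fanbase has a loyalty term larger than the worst plausible season, attendance and revenue never fall to zero.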


Many of these predictions seem trivial, but the point is to tie each prediction to specific components of the hardcore fan's motivation structure, as laid out in the table above, which makes them more informative.
  • While utility is hard to measure directly, there are good proxies for it, like revenues, attendance, or Nielsen ratings. Given that there will be a distribution of hardcore to non-hardcore fans, there will be a non-zero floor to revenues so even 0-16 teams don't go to zero, as we observed. If we graph all of the teams on performance vs utility proxy, I would expect a mostly linear-looking scatter plot with an increase in the slope at the good end, for those teams with some expectation of a national championship, and possibly a flattening at the bottom. This may depend more on expected utility (if fans are pleasantly surprised by a win vs. they expect their team always to win.) I plan to try to collect some kind of utility-proxy data and see if this is in fact the case.
  • In general a sport will be more successful in inspiring loyalty, the more similar it is to tribal warfare (always a reliable revenue stream for every team); maybe this is why football has eclipsed baseball as the national pastime.
  • The more hardcore, the more they will pay attention to the outside charity activities of their own team, and the more outraged they will be by disloyalty-demonstrating acts, e.g. kneeling during the national anthem. They will also be more interested in the moral failings of opposing teams, especially rivals.
  • The more hardcore, the less they will be interested in statistics, especially of other teams, even ones their teams are playing in important games.
  • The more hardcore, the greater the difference in their interest in a player when he is on their team vs. after he is traded. That is, hardcores think each of their players is a great person on and off the field - when he plays for their team - and any suggestion that they'll stop caring about him the second he is traded is likely to be met with hostility, but in fact this is the behavior they demonstrate. (They will also be annoyed when asked why, or when Seinfeld is cited - "Essentially you're cheering for clothing.")
  • The more hardcore, the more they will feel sad or angry after a loss, and the more likely they are to attend or watch the next game despite having been very sad or angry at the last game's outcome.
  • The more hardcore, the less tolerant they will be of fans behaving negatively toward the team, even when the team loses (very concrete and contra expectations here: you might expect hardcore fans to support a parade showing anger against the people making their Browns lose, but it seems to be exactly the opposite. Parallels to gay marriage here too: how exactly does the 0-16 parade degrade your fandom, when you didn't attend?)
  • The more hardcore, the more they will confuse the team with a government agency or public good (e.g., demanding that the city finance a new stadium.)[7] More recent teams in cities with highly educated and/or mobile populations (e.g., the Pacific coast) will therefore find that they can't get what they want from those cities, because the voters don't care (Seattle, San Francisco, San Diego), whereas other cities filled with less mobile, less educated people would crucify their mayor for allowing a team to leave on their watch.
  • It's often been noted that the Midwest, with its brutal early winters, has far more rabid sports fans than the mild West Coast. One possibility is that the loyalty-demonstration value of attending every game is diminished when all of those games are 70 F and sunny, vs. some of them being freezing cold. (Think of the people who still wait in line in the cold and dark on Black Friday morning to buy things for their families. They do know that Amazon exists. So what do you think they're really doing?) Of course there could be a climate-independent cultural difference between the Midwest and the West Coast, but the model's prediction would be that Miami has equally low loyalty.
  • The more hardcore, the more upset they will be if a star player leaves for another franchise, or the whole team moves to another city, and the more likely they are to use words like "betrayal."[7]
  • The more hardcore, the less tolerant they will be of long-term, off-field strategies, especially ones that alter play and result in on-field losses. (Both the 2008 Detroit Lions and 2017 Cleveland Browns had 4-0 preseasons, then went 0-16. Tanking (here and here) and/or salary cap manipulation? Difficult to explain as mere incompetence. And if it were confirmed that this is what is happening, the hardcore fans would be angry; casual fans might say "Huh, that's kind of clever, although it means you've been putting a bad product on the field." "My team is not a 'product'!" the hardcore fan says.)
  • I'm not sure what to predict about the impact of hardcoreness on betting. The hardcores' loyalty may make them overconfident in their team's performance. On the other hand, moral foundations-related beliefs are often kept carefully separate from anything affecting real-world decision-making. By that I mean: sacred beliefs are often more tribal chant than actionable proposition, and in general, people desperately avoid any bet that touches their moral foundations (next time someone makes a verifiable statement about religion or politics that you disagree with, offer to bet them, and see what happens. Typically they backtrack to a non-verifiable version of what they said, and/or get very offended that you would "cheapen" such an important matter by betting on it - which are all moves to avoid testing their belief.) Then again, the hardcore fans presumably know more about their team than most others, which means they should be more confident in their predictions, and be more willing to bet. Consequently they may be less willing to bet in proportion to their claimed confidence than a casual fan with equal knowledge of the team would be. In my one test of this during March Madness, I found that self-identified fans did more accurately predict the outcome of a game involving their team than non-fans, but I collected no information on willingness to bet.

[1] This very article is diagnostic. By trying to dissect loyalty, instead of taking it as an obvious good and discussing it in the context of a specific team, I mark myself as someone with a small loyalty term in my equation - whereas people whose sports utility equation is dominated by loyalty would not understand, and/or be actively offended by, a question like "What do you get out of being a fan of your team?"

[2] One might argue that a purely rational human being would ignore sports altogether - what do a bunch of guys chasing a ball on a field somewhere else in my city have anything to do with me, I've never even met them! - and I'm sympathetic to that argument.

[3] I hope no one read the paragraph about the price of murder and thought, "Hmmm...What is my price to kill someone?" In the case of exemplar psychopath Richard Kuklinski, he got positive utility from harming people so he kept doing it even after he ran out of work.

[4] When people do not have VNM-consistent rationality (that is, they have these inflexible, non-negotiable, non-fungible beliefs - i.e., intransitive preferences), they can be turned into money pumps by observant and unscrupulous characters who can carve their motivation structure at the joints, i.e., by focusing on the inconsistencies. While this has now been reproduced in artificial settings, not only salespeople but politicians have been doing it since the dawn of civilization. The NFL, and in particular the Cleveland Browns, are doing exactly this to their fans by exploiting the intransitive preference of loyalty, and I would be very surprised if their marketing does not already have a model of their fans and spending patterns similar to what I've described here. Another follow-up is to look for literature on whether psychopathy allows one to see these disconnects more easily, or (hopefully) whether the ability to see them and the willingness to act on them are unrelated, and therefore form a mercifully narrower sliver on a Venn diagram of the population.

[5] There's probably a Markovian/hedonic treadmill effect here too, where the utility multiplier from a team's win is not constant but rather influenced by expectations based on the team's record. Next year if the Patriots go 9-3, fans leaving a game after a win won't be as happy as Browns fans if the Browns have the same record.

[6] Remember Karl Rove dragging out the 2012 election night broadcast and refusing to accept the outcome, seeming a little nuts? But simultaneously advertising to ten million Republicans watching that he never ever gives up. Say what you will about Karl Rove, but "bad strategic thinker" was not among the many epithets hurled at him.

[7] When the Baltimore Colts were about to move to Indianapolis in 1984, the city actually tried to pass an eminent domain act (!) to take over the team, but the Colts escaped with the team's property under cover of darkness the night before. Other teams like the Chargers have met a much more lukewarm reaction when threatening to leave, and found themselves without many fans.

[8] While I wrote this post I was wearing a Garfunkel and Oates sportsball T-shirt, so you can guess which end of the spectrum I'm near.

Evil Gandhis and Poor Executive Function: How the World Looks if You Have Poor Impulse Control

[There is a great discussion about this post at the Slate Star Codex subreddit, with valid criticisms, which is worth checking out.]

Cross-posted to the Late Enlightenment.

Imagine that in some distant, cloudy mountain hideaway there is a city of evil Gandhis - or just unempathic monks - who spend all their waking hours meditating. As a result of the self-control they've developed in this manner, their executive function is superhuman - after all, extensive meditation builds not just cognitive discipline but EEG-measurable physical changes in the brain. When you finally scale the last soaring frozen wall and scramble over the edge onto the floor of their lookout points, you have arrived in this storied, isolated monastery-city. You are greeted by intellects vast, cool, and unsympathetic, studying you from their great central plaza with piercing eyes. You find that you are the first visitor from your country. Suddenly a horrific pain erupts from the back of your neck, and you turn to see one of the monks withdrawing a red-hot brand that he has just poked you with.

Obviously you demand to know why you deserved that. As they are merely dispassionately interested in collecting knowledge, this one calmly explains that they would like to see if your skin burns in the same way theirs does. You turn to see several more of them calmly approaching you with various glowing metal rods; behind them, in the fire at the center of the plaza, someone is handing out more metal rods. You tell them to stop, but they ignore you. Finally, you turn to the closest one approaching you, and punch him in the face. Your punch lays him flat out and his metal rod clangs to the ground.

"That's assault," one of the monks says. "We're going to have to lock you up now."

"Assault?" you shout. "What was I supposed to do? You made me assault you!"

The monk rolls his eyes. Only then do you notice various burns, knife and whip scars all over his face and arms. "You're like a child. It's not our problem if your self-control is so poor that you can't stand being burned a few times."

[Reddit user davidmanheim at the SSC subreddit suggested that this thought experiment would work better if instead of just burning our protagonist, the monks capture him and set up food for him that he is supposed to cross hot coals to obtain. When he goes around the coals and takes it anyway, he is locked up for theft. I agree that this would make the point better.]

To a person with a Cluster B personality disorder - including narcissistic PD or especially borderline - the world must seem to be filled with such evil, cold-blooded monks. If I have BPD, then these people just can't see that when they withhold affection, it's so intolerable - just the same as a hot iron - that they're making me attack them to protect myself. (I have heard a severe narcissist in a psychiatric hospital, fighting while being restrained by staff after being refused special treatment, literally say "Look what you're making me do! You're making me do this!" The resemblance to what a five-year-old might say is not coincidental.)

But this is more than just an interesting perspective - it's relevant to a critical assumption that we make in liberal democracies: namely, that people have agency, and this agency allows them to be responsible for themselves and, to some degree, others. While (so far as I know) pain-tolerant monks do not exist, people with severe borderline and narcissistic personality disorder - with poor executive function and low distress tolerance - do exist. And we do lock them up.

It turns out that "agency" has buried within it many components, which do vary quite a bit across the population, and which profoundly affect people's ability to run their own lives and live with others. The one case where we're comfortable saying that humans don't have agency is children - but even that is somewhat arbitrary and agranular (many of us can think of a sixteen year old more capable of running her own life than a twenty-eight year old.) The monks would lock you or me up because we're at the extreme bad end of their distribution, just like we lock up people in jails or long-term care facilities, but we wait for someone to commit an act, of the sort that they are guaranteed to commit at some point, if they're at the extreme end of the distribution. As society becomes more complex, more and more people will commit such acts, and we'll have to get more honest and clear about exactly how we deal with them.