Consciousness and how it got to be that way

Friday, October 19, 2018

Lying and Intention

Some years ago, I went to see a movie with a friend who has since passed away. (This is actually one of my favorite memories of her.) Relevant: the movie was The Blair Witch Project. My friend was badly scared by horror movies (why did she go? I don't know, but she was a grown-up, not my problem) and when I took her back to her house, she was still quite worked up.

I should add that it had been a very hot day, and her house didn't have A/C. It was still sweltering, even after midnight, so she knew if she wanted to sleep she would have to open all the windows, which she did. This is also relevant, because a) her room was a very small addition to the house, with windows on both sides of, and behind, her bed, in fact so close to the bed that a cruel person who likes scaring his friends could actually reach in from outside and grab her; and b) I am in fact the kind of person who would do something like that, and lie about my intentions, and had done such things many times before. (I'm quirky that way.)

"Can't you please just stay until my roommates get home?" she implored.

"No, I have to go home and go to bed."

A look of horror crept across her face and her eyes widened. "I know what you're going to do! You're going to drive two blocks away like you're going home, then park, and silently walk back, and wait outside the window until you see me nodding off, then grab me and scare the crap out of me!"

"No, I would never do that!"

"Yes! Yes you will! I know that's what you're going to do no matter what you say!"

"No. No, I am definitely going to go home, and go to bed." Despite her pleas, I walked out. I then got in my car, drove home, and went to bed. I slept very well.

The next morning around 6 a.m. - probably not coincidentally, around the time the sun rose - I was awakened by my phone ringing. It was my friend. "What," I mumbled as I picked it up.

"You asshole."

"What?" I said. "Me asshole? You asshole. You're waking me up at six in the morning."

"You bet I am! I've been sitting here on tenterhooks all night waiting for you to reach in the window and didn't sleep at all and you actually went home and went to bed!"

I said nothing, but I smirked.

"I can hear you smirking! This is exactly what you planned isn't it?"

"Listen," I said, "I did exactly what I said I would do. I told the truth. I did the morally correct thing, and you chose not to believe me, even though I was telling you my true intentions, and then acted on those intentions. So that's your problem. Now if you'll excuse me I have to get some more sleep." I hung up and turned off the phone. I slept very well.


One frequently discussed problem in the analysis of what constitutes moral behavior is the contribution of an actor's intention, if any, to the morality of the act.

If someone hits me with a car, and I am a consequentialist, I have no grounds to say that the act was more or less moral based on the intent of the driver. If someone hits me with their car at 35 mph and breaks my femur, a consequentialist shouldn't care whether the person did it intentionally and was pleased by this outcome, or accidentally and was horrified by it.

This is correct only if the definition of "consequentialist" is narrow and really means "near-sighted consequentialist" - someone who cares only about single, isolated acts, which, within the confines of a thought experiment, is often the (unintentional) implicit assumption. But of course this isn't the case, and it violates our moral intuitions and (if the two are separable) the actual reactions we have in such situations, or even on just hearing about such situations. Even setting aside the egocentric anger and desire for revenge likely to be provoked by someone you know hit you intentionally and enjoyed it, you would be right to be concerned that this person is out there running around loose where they can hurt someone else - and a monster like that is unlikely to limit themselves to cars in such endeavors. That is: their intention predicts future actions, which is why it matters, even (especially!) to a consequentialist. The person who is horrified is less likely to do such things again, although even then, if their horror is misaligned with choices they keep making (they were texting, they were under the influence, etc.), this also figures into our evaluation - because it predicts future actions. A stronger statement is that without intention as a predictor of and link to future actions, talking about the morality of an isolated act is meaningless.

The law in most OECD countries actually gets this right, at least for murder, where it differentiates by degree. The difference in intent between accident and non-accident is obvious enough, but the difference between first- and second-degree murder is also important. There is something very different and more threatening about someone who murders after planning it out, rather than on unchecked impulse. If someone had a bizarre neurological disorder causing them to helplessly pick up long objects and swing them at everyone around them, you wouldn't want them walking around loose, but you would see that they were themselves horrified by this tragic illness, so you would also recognize that this is not a person who intends harm and whose other actions are therefore suspect. (If someone you know to be unfortunately afflicted as a neurological-disease-stick-swinger calls you - from far away, hopefully - and asks for a donation to a charity, you're much more likely to think the charity is legitimate than if you get a call from someone you know to be an intentional, actually-enjoying-hitting-people stick-swinger.)

This is the same problem that makes certain human behavioral patterns appear irrational in the context of a necessarily limited, closed-ended experiment, when in fact they are not. For example, it's a well-studied result in game theory that humans are willing (in fact, eager!) to punish cheaters even after the damage is done and when enacting the punishment has a non-zero cost. Yes, in a true one-round game, the rational thing to do is to stop one's losses and walk away - think of not getting into a pissing match with someone who cuts you off on the freeway - but this is a rare circumstance. The small bands we've lived in throughout most of history, where you were around the same people all the time and kept score on each other, would predispose exactly such a behavior to emerge - and in game theory experiments or one-time encounters in large populations, we may not be able to override our programming. Granted, in those relatively rare encounters it is irrational not to override it - but again, these encounters are rare. And tellingly, unless you're planning to be the cheater, you likely minimize your time around, and interactions with, complete strangers. I've come to refer to these kinds of situations (either game theory experiments or real life, like each day on the freeway) as GOOTs - Games Of One Turn.

A game's known finite length also strongly affects decision-making. For example, if you're playing a set number of rounds of prisoner's dilemma, you know that there is no more revenge possible after the last round, so you plan to defect in the last round. And your frenemy in the game knows it, and you know they know it, etc. So you defect one round earlier... et cetera, until the rational player who is optimizing payout in a finite game defects immediately on the first round.
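To make the unraveling concrete, here is a minimal sketch of backward induction in a finitely repeated prisoner's dilemma. The payoff values and round count are illustrative assumptions on my part, not anything from this post; the point is only that once the final round is solved, every earlier round collapses to the same dominant move.

```python
# Backward induction in a finitely repeated prisoner's dilemma.
# Payoffs are standard illustrative values (T=5 > R=3 > P=1 > S=0),
# chosen for this sketch, not taken from the post.

ACTIONS = ("C", "D")  # cooperate, defect

# One-shot payoff to "me" given (my move, their move).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def backward_induction(rounds):
    """Solve the last round first, then work backward.

    In each round, the value of the remaining (already-solved) rounds
    is a constant that does not depend on the current move, so only
    the one-shot payoff matters - and defection strictly dominates:
    5 > 3 if they cooperate, 1 > 0 if they defect.
    """
    plan = {}
    for r in range(rounds, 0, -1):
        plan[r] = max(
            ACTIONS,
            key=lambda me: min(PAYOFF[(me, them)] for them in ACTIONS),
        )
    return plan

print(backward_induction(5))  # {5: 'D', 4: 'D', 3: 'D', 2: 'D', 1: 'D'}
```

Real humans, shaped by environments with very few GOOTs, reliably refuse to play this all-defect equilibrium in the lab - which is exactly the point above.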


HOW DOES THIS APPLY TO LYING?

There are truth-telling absolutists (Kant is the obvious example, but a modern defender is Sam Harris) who have difficulty ever justifying an intentional mistruth. In Harris's case this is especially interesting, as he (correctly) defends the role of intention in moral acts generally. The morally justified lie in the murderer-at-the-door thought experiment that challenged Kant was most famously and tragically realized in the case of Anne Frank (Varden 2010), and when Kant's argument is held up against this specific event, it takes rather more argument than we might hope it should to justify why it was acceptable to deceive the murderous fascist occupiers.

The claim "lying is always wrong" - hereafter referred to as the naive theory of lying - fails because three related assumptions are clearly falsified, all of which are present in the true account I provided above.
  1. Bad Assumption #1 - identical agency: Humans are all equally capable of identifying and acting on truth; that is, we all have an identical set of beliefs about the world, are equally able to reason about them, and will therefore respond to the same information in the same way. (The Golden Rule fails for this reason as well.) Departures from this assumption can occur because of false beliefs, biases, or any other failure of rationality that leads to suboptimal computing of beliefs. (Sometimes an immoral person will deliberately create those false beliefs ahead of time and then deceive by telling the literal truth, as I did.)
  2. Bad Assumption #2 - moral isolation of speech from other actions: Speech is capable of communicating unfiltered truth mind-to-mind, and the moral weight of a statement comes from how true it is, rather than from the effect the speaker intends it to have. In this, speech is qualitatively different from other actions. (My intention was to use my speech as an act to deprive my friend of sleep and keep her up all night. That I told the truth should not disqualify me from being, as she correctly identified me, an asshole.)
  3. Bad Assumption #3 - cooperation independence: Truth-telling applies to all humans, even if they are not cooperating with even your most basic interests (e.g., preservation of life and avoidance of needless suffering). This alone justifies lying to the Nazi at Anne Frank's door. I would agree that intentionally creating a false impression in someone else's mind is immoral because it's a form of harming them, but a) if someone intends to harm you, harming them in an attempt to stop this is not immoral, and b) one can certainly create a false impression by telling the truth, as will be explained below. (In my true story, if my friend had said "Okay, go home, I'll be right here" and then gone to someone else's house to spend the night, would that have been immoral? That is, would I have been justified in getting mad at her if I had acted on her lie, come to the window to scare her, and found she was gone? How dare she deceive me like that!)
Because these assumptions are false, applying the naive theory of lying to behavior produces inconsistent results with respect to the harm we cause by speaking to people. This makes it an inadequate moral rule. (If your theory of morality has no place for harm minimization in general, the argument we would have to have is at a much more profound level.)

It is telling and under-appreciated that children, as they develop a theory of mind, often test the (widely accepted) naive theory of lying against the principle of harm minimization. Right now you can probably think of a smirking child who told you something that was technically true, but intended to deceive you or cause you some other problem. Cute at first, but if they keep doing it into adulthood, you realize you're dealing with an immoral person.


FALSIFYING THE NAIVE THEORY OF LYING - ABSTRACT AND CONCRETE

There are infinitely many scenarios that illustrate this, but the most common, in the abstract, is the following. (A concrete example follows each abstract description.)


Person P believes fact X.

Q believes not-X.

In reality, not-X is true. (That is to say, the situation violates Bad Assumption #1 of the naive theory of lying - identical agency.)

Not only is Q quite confident that not-X is true, but Q also clearly understands that P believes X, and for bad reasons.

Q tells P fact Y (which is true!), knowing full well that this will cause P, based on P's false belief in X, to perform action A, which harms P. (Or Q simply makes no effort to convince P that not-X, allowing the false belief to stand.) (This violates Bad Assumption #2 - moral isolation of speech from other actions.)



Notice that nowhere did Q lie. Q identified a false belief that warped P's judgment, and told P something true that, in the context of that warped judgment, would make P do something harmful to himself - intending exactly that outcome. Q did not lie, but rather used a true statement to create a false conclusion in P's mind that harmed P. Q is immoral.
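For readers who prefer their abstractions executable, here is a toy model of the P/Q scenario in Python. The propositions, decision rule, and names are illustrative assumptions of mine, not part of the argument itself; the sketch just makes explicit that Q's statement can be entirely true while still being the proximate cause of P's harm.

```python
# Toy model: Q deceives P using only true statements, by exploiting
# P's pre-existing false belief. All rules here are illustrative.

GROUND_TRUTH = {"X": False, "Y": True}  # in reality: not-X, and Y

# P falsely believes X (Bad Assumption #1, identical agency, fails).
p_beliefs = {"X": True}

def tell(listener_beliefs, proposition, value):
    """A speech act just updates the listener's beliefs; its moral
    weight lies in the action it is intended to produce, not in its
    truth value alone (Bad Assumption #2 fails)."""
    listener_beliefs[proposition] = value

def p_acts(beliefs):
    # P performs the harmful action A only if P believes both X and Y.
    return "A (harms P)" if beliefs.get("X") and beliefs.get("Y") else "stays safe"

tell(p_beliefs, "Y", GROUND_TRUTH["Y"])  # Q tells the literal truth about Y...
print(p_acts(p_beliefs))                 # ...and P, reasoning from false X, harms himself
```

Note that nothing Q says disagrees with GROUND_TRUTH; the harm is carried entirely by the interaction between the true statement and the false belief Q declined to correct.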

There are many, many concrete examples in the world of financial transactions, where person P has a false idea about the value or quality of an item, or about the movement of a market - and Q does not correct it, because Q benefits from the transaction.[1]

Let's say you're at the base of a cliff you've climbed before, and you know the view from the top is beautiful. As a local, you know that what appears to be the most obvious route is actually quite dangerous, because the rock is crumbly and anchors can pop out, risking that climbers will fall and die - in fact this has happened many times, especially to outsiders who won't listen to the locals. A couple of tourist climbers show up, and you overhear them talking about how this rock face looks like sturdy, solid granite, and how they're planning to go up the obvious (but, unknown to them, most dangerous) route. If you say nothing to warn them about the crumbly rock, you're immoral. If you say "The view at the top is great!", you're demonstrating the falsity of Bad Assumption #2, and you're really immoral.

The obvious objection is that you are, in a sense, lying by omission. Of course you should tell them it's dangerous, and you certainly shouldn't induce them to try it! (Yes, obviously - but again, by the naive theory of lying, you would have done nothing wrong.)


LYING TO CONVINCE A DISTORTED BRAIN OF THE TRUTH

So, let's make things more interesting, returning to the abstract formulation and introducing a very real problem, that of differing beliefs causing people to make different decisions (thus violating Bad Assumption #1, identical agency.)


Q (who in this scenario is a better person) does try to convince P that not-X.

P refuses to believe Q despite Q giving good reasons.

Q recognizes P's false belief structure, and understands that if Q tells P untrue fact Z, P will - owing to P's distorted beliefs - make a choice in P's own best interest.

Q tells fact Z to P - that is, Q lies to P - and P makes a better decision than if Q had told P the truth.



Back to the rock climbers. If you're not a jerk, you go over to the visitors and say, "I heard you talking about your route. I have to tell you, this whole cliff is kind of crumbly, but the obvious route is really dangerous and people have died on it. I really think you shouldn't try it."

As they sort their gear, the visitors scoff. "Ha. I don't think so. These local yokels might be scared of it. Or maybe they just don't want outsiders climbing their route. People told us the locals here are liars. If you're a local I'm not going to believe anything you say. Are you?"

If you tell them the truth, you will harm them. If you lie and say, "Nah, I'm visiting for the weekend from L.A., and a friend of mine there knows someone who died on that route," then maybe they'll listen. If instead you are scrupulously honest, and they dismiss you as a local, climb, then fall and die, you would be pretty immoral to say, "Well, their false belief put me in a bad situation, and by insisting on the literal truth - which put them in more danger and led to their deaths - I made the right decision."


LYING TO PEOPLE TRYING TO HARM YOU

A final abstraction, for Bad Assumption #3 - cooperation independence. Here Q is again a bad person; maybe even a Nazi at Anne Frank's door.


P knows that Q intends to harm them.

Q's harming them requires information about P, provided by P.

P tells C to Q, knowing that not-C is actually correct. P lies to Q with the intention of protecting themselves or others.



One way to think about this for the naive-theory-of-lying crowd: if lying is on the spectrum of violence, then when someone intends to commit violence against you, lying to them is a form of self-defense - and in fact a much better one than the physical violence you might otherwise have to employ. This stands independently of the rest of the argument, and is available even to naive-theory-of-lying adherents who already grant a right to self-defense.

Back to the cliff. You, the local, are back to being a jerk again. In fact, you've repeatedly gotten visitors to try the most dangerous route by telling them about the view, so that when they fall and die, you can collect their gear. ("Hey, I'm not lying to them! Not my problem if they come to the cliff and are careless about the rock quality on the route!") But the authorities have started to suspect someone is doing this intentionally, so now, in your pre-climb conversations with your victims, along with inducing them to climb by extolling the view, you wheedle out of them whether they have any connection to law enforcement. Of course, this time the pair of climbers that shows up is indeed law enforcement, but undercover. When you ask, they say "No." They're trying to stop you from doing this to other people, and if they identified themselves, they couldn't do that. They are behaving morally by responding to your violence with defensive, much less severe violence, and trying to stop you from harming others. (Naive-theory-of-lying people: would it matter if instead they technically didn't lie and said "Hey, do we look like law enforcement?" To a five-year-old who doesn't yet have a theory of mind, possibly.)


THE MORALITY OF LYING AND INTENTION

An improvement on the naive theory of truth-telling is this. If we intend to help people and to avoid harming them, we should say things that will accomplish those ends. Intention is quite important because, as with other actions, it predicts the speaker's future actions. The default assumption should be that these ends are almost always accomplished by telling the truth. However, once you have evidence that telling someone the truth will not help them or will even harm them, and/or that lying in an extremely limited way will help (or that you're dealing with someone with bad intentions, i.e. someone who is not cooperating with even your basic safety), it is acceptable to say untrue things. We can call this theory of intention and lying Helping by Intentionally Creating an Accurate Model - HICAM.
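As a decision procedure, HICAM can be caricatured in a few lines. This is only a sketch under my own assumptions about how to operationalize the rule - the boolean inputs stand in for hard human judgments, not computable quantities - but it shows where the burden of proof sits: truth is the default, and each lie branch requires positive evidence.

```python
# A caricature of HICAM as a decision rule. The inputs are illustrative
# stand-ins for difficult judgments, not part of the original theory.

def hicam_what_to_say(
    truth_would_harm_listener: bool,  # supported by evidence, not a hunch
    limited_lie_would_help: bool,     # narrowly scoped and pro-social
    listener_is_hostile: bool,        # not cooperating with basic safety
) -> str:
    if listener_is_hostile:
        return "lie (self-defense)"
    if truth_would_harm_listener and limited_lie_would_help:
        return "pro-social lie (rare; demands honest self-scrutiny)"
    # Default: absent evidence to the contrary, the truth helps.
    return "truth (the default, almost always)"

# The visitors who distrust all locals:
print(hicam_what_to_say(True, True, False))    # pro-social lie
# The undercover officers facing the murderous gear-collector:
print(hicam_what_to_say(False, False, True))   # lie (self-defense)
# Everyone else, nearly all the time:
print(hicam_what_to_say(False, False, False))  # truth
```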

In those rare instances where we lie to benefit someone, we can call those pro-social lies. (Intentionally differentiated from white lies - more on that below.) But we can also categorize the ways of causing harm by speaking.
  1. Active lying or bullshitting. Actively creating a false impression without the intention of helping someone.
  2. Letting weeds grow. Allowing false beliefs to persist with the intention of harming someone.
  3. Manipulation WITH the truth. Telling the truth in a way that one intends to create a false impression, often by using pre-existing false beliefs.
Notice that this definition does not justify "white lies", nor have I used the term. A working definition of a white lie is a lie that spares people's feelings and otherwise has no effect. I might seem to be siding with the naive-theory-of-lying people when I say that white lies are quite dangerous, for the reason that emotional impact absolutely is an important effect, and, psychologically, it's a bit too easy to avoid difficult conversations by telling ourselves we're just telling white lies.

Moral thought experiments (including this one) often use exotic examples, although I bet more people went rock-climbing today than switched trolleys between tracks. But examples in your own life likely abound. If you have kids, you have very likely told a few half-truths or outright whoppers to motivate them, keep them out of trouble, or otherwise improve an outcome when their little brains would likely not have responded as well to a carefully marshaled rational argument. Why? Because children's agency is poorly formed (Bad Assumption #1).

The same is likely true if you have family members or clients with a neuropsychiatric illness - dementia, for example - who would otherwise not cooperate with you, or who would otherwise make horrendous, unintentionally self-harming decisions. Some years ago, a relative of mine with dementia progressed to having no short-term memory. She was quite attached to her family home. She lived in a nursing home for about two years but believed the whole time that she had arrived only a few days prior and would shortly be returning home. Of course, the house (which had fallen into disrepair as she deteriorated) had quickly been sold. Most of her visitors avoided the question of how long she had been there, and avoided commenting on the house, but at one point a well-meaning person, realizing this lady thought she had just arrived and would shortly be returning home, told her the truth. This resulted in an emotional meltdown and a suicide attempt. The next day, of course, my relative had no memory of the episode and again thought she had just arrived and would soon be returning home. Was her well-meaning visitor a moral person? Would it have been moral to repeat this episode each day?

There are many milder but more common versions of this. Someone adheres doggedly to a certain authority. Someone dismisses you because you do not, or because you're the wrong religion, ethnicity, political affiliation, etc. These are not very pleasant reasons to be disbelieved, but what is your responsibility here? Say you own an auto shop, and a customer you don't like brings his car in. Inspecting it, you notice that the brakes are about to fail. Because you don't like this customer and you know he's a racist, when he comes to pick up the car and find out what work needs to be done, you intentionally send a black employee (who's in on it) to tell him he needs his brakes replaced, expecting full well that the customer will refuse because a black person is telling him this, drive off, and crash when his brakes fail. Yes, the customer is no Prince Charming, but if someone told me that story, with that clear intention, I would worry about the shop owner's character and not want to be around him. (In point of fact, when someone is worried that a message will be ignored or taken the wrong way for reasons like this, they often do use a messenger more likely to be taken seriously and accurately.)

And finally, ask any physician how they motivate their patients with low motivation, cultural barriers, or poor health literacy to do things that will keep them alive. It's hard enough in primary care. Try psychiatry! The temptation to severely spin the truth to improve outcomes for your patient arises frequently, and sometimes wins.


PROBLEMS AND FURTHER OBSERVATIONS ABOUT HICAM

The obvious (and correct) objection to any non-absolutist model of truth-telling is that it creates a very slippery slope. When you free yourself from a commitment to absolute truth, it becomes maybe a little too easy to justify fibbing in what you've convinced yourself is someone else's actual best interest. Consequently, following these rules, you should still expect opportunities for pro-social lies to come up quite INfrequently, and you also have to commit to real honesty with yourself about your motivations for telling what you believe is a pro-social lie. You have to accept that when you think you've found a pro-social motivation for a lie, you're probably deceiving yourself. The analogy here is to uber-empiricist Hume's statement (paraphrasing) that, despite his identification of the problem of induction, if you think you've found a violation of the laws of the universe, you've probably just made a mistake.



[Figure] Left: how most of us think of the relationship between epistemic and instrumental rationality. Right: a more accurate scheme.


However, HICAM does relate to some distinctions made in epistemology, and to observations from the psychology of mood and rationality. Epistemic rationality is what we usually think of as rationality: holding beliefs for which you can make a valid argument. Instrumental rationality is action that increases utility, with no semantic component. I am being epistemically rational when I can describe and predict a thrown object's course mathematically; I am being instrumentally rational when I catch it without thinking of that (and so is a dog). People tend to think of the two rationalities as separate domains, or as "two sides of the same coin", but a better argument is that epistemic rationality is a subset - a special case - of instrumental rationality. This shouldn't be controversial; speech and thought are actions. Consequently there will be times - rarely - when the actual outcome of someone computing your statement differs from what would have occurred had it been computed and acted upon rationally (as with a person who holds false beliefs - Bad Assumption #1, identical agency). Saying something false or invalid to get someone to do something good for them is one of the rare times that speech operates ONLY in the realm of instrumental rather than epistemic rationality (see below).


What's more, there's an interesting finding in psychology described as depressive realism: depressed people actually make better predictions about their own performance than non-depressed people. In seeming conflict with this is the robust finding that optimism predicts success (Bortolotti 2018). It's as if we have to choose between seeing reality as it is and being depressed, or being delusional and happy - and, most perplexing, successful. Fellow psychiatrist-blogger Scott Alexander uses the analogy of mood as an advisor who motivates a client or pulls them back based on historical performance. Depressed mood is like the advisor of someone who always fails, telling them never to try anything, because history predicts they will fail again; contrast this with the advisor of someone who always succeeds (happiness, optimism). Here we see another domain where optimizing more for instrumental rationality (the less accurate but more motivating optimistic beliefs)[2] produces better outcomes than optimizing for epistemic rationality (the glum, accurate beliefs).[3] All this is to say, the occasional leaking of speech out of the epistemic domain into the purely instrumental domain - pro-social lies - is entirely compatible with what we know about human behavior. We can think of the distorted beliefs held by optimistic people as pro-social lies we tell ourselves.

All this justification of making false statements as long as they "work" is likely to make rationalists squirm, and indeed the alert reader with supernatural religious convictions might say: "Even if you think religion is false, don't HICAM, and especially your argument in favor of pro-social lies, justify believing it? Isn't religion actually the best example of an instrumentally helpful, though epistemically irrational, belief?" Indeed, I wrote about exactly this problem some years ago, arguing that as a set of untrue statements - lies - religion is immoral, even if it sometimes inspires good acts that otherwise would not have occurred. (That's one of many reasons.) How can I make such a claim and still defend HICAM? Again: we should expect pro-social lies to be very rare, and to require strong justification. Contamination by selfish, anti-social motives is always a danger. The argument is therefore really a quantitative one - we might expect to tell a pro-social lie a couple of times a year, not fill an entire book with them. Much more than that, and it's overwhelmingly likely that most of them are actually plain old lies, told anti-socially. So if you ever catch me intentionally building a whole delusional world around someone, claiming it's in their best interest, I would certainly be acting out of immoral intentions.

Finally: while I've written a lot about the harm that can come from saying true things to a person whose thought process is distorted by false beliefs (thus leading that person to make bad decisions despite having been told the truth), this is less often a problem than it might otherwise be. The reason is that people who verbally claim false beliefs often find reasons not to act on those claimed beliefs. Example: someone says to you, "My favorite football team will definitely, 100%, win the game tomorrow, since the previously injured quarterback is back in the line-up." But it turns out that two minutes ago it was announced that the quarterback will NOT be playing tomorrow. As an unethical person, you rush to lock in the bet while your interlocutor holds a false belief (letting weeds grow). But suddenly they get cold feet, often with a disingenuous "Gambling is immoral" or "I don't want my fandom to be polluted with money", etc. In fact, humans have lots of "speed bump" heuristics to keep false beliefs from propagating too far and to keep us from overcommitting, even though we can't really justify them epistemically (see the endowment effect for one such example). It's interesting that it's often the newly converted - who lack the speed bumps specific to their new set of beliefs - who get into trouble. On the other hand, there are people with severe psychiatric illness, whose brains are physically different from most other humans'. They really believe their delusional beliefs, judging by how they endorse them with action, with no speed bumps.


REFERENCES

Bortolotti L. Optimism, Agency, and Success. Ethic Theory Moral Prac (2018). https://doi.org/10.1007/s10677-018-9894-6

Varden H. Kant and Lying to the Murderer at the Door... One More Time: Kant's Legal Philosophy and Lies to Murderers and Nazis. J Soc Philos 41(4) (2010).


FOOTNOTES

[1] To be clear, this is not a claim that all transactions are immoral. When we trade, we necessarily hold different valuations of the things being exchanged; otherwise the trade would be irrational. Since real objects will inevitably have different values to different people in different situations, this is not an obstacle to moral, rational trading. However, if one is trading a more abstract entity that holds value only in terms of its tradable utility, or predicting an outcome that is connected only through arbitrary agreement - and especially in zero-sum scenarios - then objectively incorrect valuations by one party are likely to play a larger role in the trade. Cases in point: bets, commodities, and stock options.

[2] Assuming that our delusional (but success-producing) optimism has been selected for by evolution, I often amuse myself by wondering whether, if Homo erectus could understand psychiatric nosology, they would view their descendants (us) in horror, running around manic all the time as we might appear to them.

[3] Of course, different levels of optimism or pessimism are rational for different risk:benefit scenarios, just as in game theory, where penalty and payout determine the most rational strategy. Case in point: one of the things that cognitive behavioral therapy for social anxiety aims to do is make people re-evaluate the actual risk of social interactions. So what if someone doesn't like you or won't say yes to a date? Does it physically harm you? The payout, while unlikely, is high, and the risk (once you get past your anxiety) is almost zero. On the other hand, this would be a terrible approach to rock-climbing. Antonio Gramsci (quoted by Steve Hsu on his blog) expresses this nicely: "Pessimism of the intellect, optimism of the will."
