1. In public discourse, most disagreements are not valence disagreements but differences of priority. This is especially true in politics. Concrete example: most people agree that racism is bad; the disagreement tends to be about how important it is to eliminate racism right now, relative to other problems. A valence disagreement occurs when, say, a white supremacist says no, actually, racism is good. Valence disagreements are of course much more intractable. Online discourse over the past few years has made priority disagreements just as intractable, by converting them into valence disagreements, e.g., "You shouldn't be talking about anything else right now but keeping out illegal immigrants. I don't care if you say you're against illegal immigration; if fighting illegal immigration isn't your number one priority, then you're actually for illegal immigration."
2. One trick for making your argument demand top priority is to claim a massive, or even infinite, negative consequence for failing to adjust one's actions as the argument requires of a rational actor, or even for merely ignoring the argument. Organized religion makes the most famous such claims, but the outcomes certain political ideologies fear (if they don't get their way) can approach the same severity. Call this the Gambit of Extreme Negative Utility (GENU). Relatedly, a standard counterargument to Pascal's wager is that you don't know which version of which religion is the right one; but as I saw a very clever Christian argue online, you have to assume infinitely many religions to choose from for the expected utility to work out in favor of ignoring Pascal's claim (although this still doesn't tell us which one to choose). So what do we do? Do we set an arbitrary cut-off for utility beyond which we assume someone is lying to get our attention? This seems very dangerous, unless we think there are no rare events with far greater negative utility than we expect beforehand. If we throw out Hell, Orwell's boot stamping on a human face forever, and white supremacists' fear of a world overwhelmed by non-white barbarism, don't we also have to stop worrying about the Great Filter, the technological singularity, and CRISPR-derived biological weapons made by suicide cults? [Added later: it turns out that Pascal's Mugging, a concept from the rationality community, captures the essence of this: any risk/benefit calculation can be overwhelmed by a claimed arbitrarily large or infinite negative utility, which Christians like Pascal call "Hell".]
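To make the arithmetic behind GENU concrete, here is a minimal sketch in Python. The credence, penalty sizes, and compliance cost are all invented numbers for illustration, not anything from decision theory proper; the point is only that a large enough claimed penalty swamps an expected-value comparison no matter how little credence we assign to the claim.

```python
# A minimal sketch of the Gambit of Extreme Negative Utility (GENU):
# an expected-value comparison where one option carries a tiny
# probability of an enormous claimed penalty. All numbers are
# invented for illustration.

def expected_utilities(p_claim_true: float, penalty_if_ignored: float,
                       cost_of_complying: float) -> tuple[float, float]:
    """Return (EU of ignoring the claim, EU of complying with it)."""
    eu_ignore = p_claim_true * penalty_if_ignored  # penalty is negative
    eu_comply = -cost_of_complying                 # a fixed, modest cost
    return eu_ignore, eu_comply

# However skeptical we are (credence of one in a billion), a large
# enough claimed penalty dominates the comparison.
for penalty in (-1e6, -1e12, -1e18):
    eu_ignore, eu_comply = expected_utilities(1e-9, penalty, 100.0)
    winner = "comply" if eu_comply > eu_ignore else "ignore"
    print(f"penalty={penalty:.0e}: ignore={eu_ignore:.3f}, "
          f"comply={eu_comply:.3f} -> {winner}")
```

By the time the claimed penalty reaches 1e12, "comply" wins even at a one-in-a-billion credence. That is the mugger's whole move: the stakes can always be raised faster than our skepticism.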
3. It matters which arguments we spend time and attention considering, because although humans seem to operate on an implicit assumption of infinite time and attention for weighing arguments, we of course have neither.
Hence, most authorities use some form of agenda-setting and distraction to influence discourse.