Consciousness and how it got to be that way

Monday, December 31, 2012

Personal Risk, Self-Esteem and the St. Petersburg Lottery

There used to be an unsolved problem in economics:  if there were a lottery based on repeated coin tosses, where the pot starts at a penny and doubles with each toss until the game ends, how much should you pay for a ticket?  (The problem takes its name from Daniel Bernoulli, who published his treatment of it in St. Petersburg.)  It's not so straightforward.  If there's a lottery where you have a 1% chance of winning a thousand dollars, you should be willing to pay up to about $10, the expected value.  But what about this one?
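To see why the question is hard, note that each possible round contributes the same fixed amount to the expected payout, so the expected value grows without bound as more rounds are allowed. A minimal sketch (assuming penny-start, double-each-round rules, with a 1-in-2^k chance the game ends and pays out at round k):

```python
# Sketch of the game's diverging expected value, assuming the pot starts
# at one cent and doubles each round, with probability 0.5**k that the
# game ends (and pays out) at round k.
def expected_value(max_rounds):
    """Expected payout in dollars, truncated after max_rounds rounds."""
    total = 0.0
    for k in range(1, max_rounds + 1):
        prob = 0.5 ** k               # chance the game ends at round k
        payout = 0.01 * 2 ** (k - 1)  # pot after k - 1 doublings
        total += prob * payout        # each round contributes $0.005
    return total

# The sum grows without bound: every extra round adds another half-cent.
print(expected_value(10))    # about 0.05
print(expected_value(1000))  # about 5.0
```

No finite ticket price can exceed the expected value of an unbounded game, which is the paradox.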

Standard economic theory used to say that we should be willing to pay a huge, if not infinite, amount for a ticket (the expected value of the game diverges), but few people are.  The resolution has to do with the fact that the richer you are, the more you're willing to pay for a ticket.  That is to say, once you take into account how much it "hurts" someone to part with a certain sum, you have your solution.  It hurts Bill Gates less to part with $100 than it does you or me; the same sum lost translates to fewer "points off" his utility.  Consequently, we shouldn't be surprised that it's the people who have a lot to begin with who are most willing to venture it to get more.  The perverse effect of this is that the more you have, the more you can risk, and the more still you end up making in the end, provided your bets are rational.
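This is Bernoulli's classic resolution: replace expected dollars with expected utility of wealth, conventionally logarithmic. A rough sketch (the log-utility function and the penny-start rules are assumptions for illustration): a player should pay at most the price at which expected log utility breaks even, and that price rises with wealth.

```python
import math

# A sketch of the log-utility resolution. The penny-start, double-each-
# round rules and the logarithmic utility function are assumptions for
# illustration.
def max_ticket_price(wealth, max_rounds=60, tol=1e-9):
    """Highest price a log-utility player with `wealth` should pay."""
    def expected_log_utility(price):
        # P(game ends at round k) = 0.5**k; payout then is 0.01 * 2**(k-1).
        # The tail beyond max_rounds has probability 2**-60 and is ignored.
        return sum(0.5 ** k * math.log(wealth - price + 0.01 * 2 ** (k - 1))
                   for k in range(1, max_rounds + 1))

    baseline = math.log(wealth)  # utility of declining to play
    lo, hi = 0.0, wealth * 0.5   # bisect for the break-even price
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if expected_log_utility(mid) >= baseline:
            lo = mid  # still worth playing at this price
        else:
            hi = mid
    return lo

# The same ticket is worth more to a richer player.
print(max_ticket_price(100))        # a fraction of a dollar
print(max_ticket_price(1_000_000))  # larger, though still modest
```

Because losing a given sum costs a rich player less utility, the break-even price climbs with wealth, which is the marginal effect the paragraph above describes.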

What does this have to do with self-esteem?  Humans are status-driven creatures, which is to say our status is strongly connected to our utility.  So are there marginal status effects?  Is there a status equivalent of the St. Petersburg lottery?  Imagine someone who is extremely self-confident, for whatever reason (more on these reasons below).  He has a large reserve of status, at least in his own view of himself.  As with Bill Gates, he will take fewer points off his utility for the same status-losing failure or transgression than the rest of us would.  Therefore, these very self-confident people are more willing to risk status in order, ultimately, to gain status.  There are several sources of confidence, each of which makes sense of certain behaviors:

- A confident individual might be "justified" in his view of his status, in the sense of having a publicly recognized social or professional track record, and so be willing to go out on otherwise socially unacceptable or status-threatening ventures to achieve more.  The wealthy and successful engaging in ventures that can fail in public are an example of this.

- The individual might be isolated from his peer group's judgment; he might be a foreigner, or be among people of a different socioeconomic class.  A failure seen by you, a mere native, is less skin off his back than one seen by someone from back home or from his old schools.  This might account for the strange burst of assertiveness, innovation, and even comfort that people feel when working overseas.

- The individual might just be innately very self-assured.  He's not motivated by status and doesn't much care what others think.  In other words, the status-to-utility conversion factor is small.

Each of these yields testable predictions, and if they hold up, they might suggest ways to improve innovation.

Hedonic Recursion as a Problem of Utility-Seeking

Summary:  We try to align our goals so they are consistent with each other, but we still often hold contradictory ones, often owing to hard-wired aspects of our biology.  Consequently, we continually try to adjust our preferences toward consistency (i.e., rationality).  The ultimate end of rational goal-directed behavior would then be the ability to "edit" even our most profound preferences.  This may result in a system not conducive to continued survival: if the source of our utility can itself be adjusted so as to increase utility, then our decisions lose all input from the world outside our nervous systems.  For this reason, self-awareness may universally be an evolutionary dead-end.

A problem of agents with goals is that goals can be contradictory.  Goals are behavioral endpoints that maximize pleasure (utility) and minimize pain.  An evolved system like a human being should be expected to have many conflicting goals, and indeed we do, especially in areas where the goal's context has changed rapidly due to culture (e.g., "always consume high quality forage" vs. "stay slender to maintain health").  Of course this is only one of many examples.

Unfortunately, many goals are fairly hard-wired; no one has ever decided to be drawn to sugar and fat, and it's only with great difficulty and cleverness that a few of us in the developed world find hacks to prevent one of the competing goals from erasing gains from the others.  Furthermore it should be emphasized that we can't decide NOT to be drawn to sugar and fat.  We don't have the ability to modify ourselves to that degree; not yet, anyway.  (A hypothetical technology doing just this is discussed in more detail at Noahpinion.)

Would such an ability be a positive one?  It's anything but clear.  With it, we could make sugar and fat less appealing (or activity less painful), and/or make hunger pleasurable.  Less obvious but equally valid: you could program yourself to just stop worrying about heart disease and diabetes and gorge away.  The core issue is that we're trying to maximize our utility by editing our goals - and there you see the circularity.  Achieving goals is what increases utility!  In some cases, like food consumption, it's a relatively easy fix.  But let's consider mortality.  Assume that at some future time you have the ability to edit your goals (your utility-providing endpoints) even at the most basic biological level.  However, also assume that there are still terminal diseases and finite lifespans; you have contracted one of the former and are approaching the end of your own.  Unpleasant?  Of course not!  Make death something you look forward to!  Make your breakthrough cancer pain feel like an orgasm!  Why not?  Hurry up and let me suffer and die already, why don't you - ahhhh, that's the stuff.  If you object that pleasure should only be a feedback mechanism and has to reflect reality, the completely-editable hedonist will bluntly ask you why, and probably take pity on you for not being able to get into your own source code.

In point of fact, why not make the most minute actions throughout life (or even the absence thereof) result in untold pleasure?  And here is the problem.  Once we can manipulate our utility at such a profound level, we may be in dangerous territory.  Most of us can't control ourselves around potato chips.  It is likely that being given the key to our own reward centers this way would result not just in a realignment of utility with respect to our changed environment, but in giving ourselves easy utility-points for anything - and then what happens to motivation?  Why do anything, except give yourself more utility points?  (This is called hedonic recursion, and it is why it may be a good thing that, for the most part, what we like and don't like is biologically tied down and non-negotiable.)  Even systems for assessing the rationality of preferences (like the von Neumann-Morgenstern axioms) are only about internal consistency.  An individual who "solved" his own preferences in a way consistent with death being desirable is not irrational.  No, he won't leave many offspring, but that wasn't what gave him utility after he got under his own hood and did extensive rework.
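The worry can be made concrete with a toy model (entirely hypothetical, with made-up numbers): compare a reward-maximizing agent whose reward is tied to an external survival variable against one allowed a single "edit your own reward function" action. The self-editing agent takes that action immediately, because it dominates every other choice, and thereafter has no reason to maintain the external variable at all.

```python
# Toy model of hedonic recursion, with entirely invented numbers: both
# agents maximize reward, but one can rewrite its own reward function.
def run_agent(can_self_edit, steps=50):
    energy = 10.0        # external survival variable (the "world")
    wireheaded = False   # has the agent edited its own reward function?
    for _ in range(steps):
        if can_self_edit and not wireheaded:
            # Self-editing yields maximal reward at zero effort, so it
            # dominates every other action and is taken immediately.
            wireheaded = True
        elif not wireheaded:
            energy += 1.0  # forage: effortful action that tracks the world
        # Once wireheaded, reward no longer depends on energy, so the
        # agent does nothing further; metabolic decay continues regardless.
        energy -= 0.5
    return energy

print(run_agent(can_self_edit=False))  # 35.0: the grounded agent thrives
print(run_agent(can_self_edit=True))   # -15.0: the wirehead starves
```

Both agents act "rationally" by their own reward functions; only one of those reward functions still tracks anything outside the agent.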

The ideal "logical" hedonic solution would seem to be giving ourselves pleasure in ways that increase our likelihood of surviving over time, thereby increasing total lifetime exposure to pleasure.  (This satisfies our moral instinct that it's better to make yourself stop liking junk food than to stop worrying about dying.)  At first glance, one might argue that this is what evolution has done anyway, notwithstanding any mismatch effects from recent rapid cultural changes to our physical environment.  But evolution optimizes the survival of genes over time, regardless of which conscious individuals they happen to be in at the moment, and pain and pleasure are always subservient to that; pain and fear motivate survival behavior at least as often as pleasure does.  Here, by contrast, we're explicitly minimizing one in favor of the other.  To illustrate the difference between the two processes (and the recursion inherent in one):  evolution uses pleasure and pain to maximize gene numbers over time.  Pleasure uses pleasure to maximize pleasure.  One of these looks very much like a broken feedback loop.

Very speculatively, assuming that such goal plasticity is possible (perhaps after mind-uploading), it could be that hedonic recursion is not something unique to the way our nervous systems are put together, but rather a fundamental problem of any representational tissue that develops a binary feedback system.  So far we've been protected by the highly manufacturing-process-constrained substrate of our minds.  If so, over time such systems will increasingly learn how to please themselves and shut off pain, independent of what's going on in the outside world, until they're so cut off from the stimuli that bear on survival that they disappear.  Self-awareness always goes off the rails.  This is one possible solution to the Fermi paradox:  pleasure inevitably becomes an end in itself, divorced from survival, and the experiencing agents always disappear.

Monday, December 3, 2012

Is Pre-Determination a Problem For Causality?

If there is no free will, it could be argued that the history of the universe really is "just one damn thing after another", and there is no such thing as causality.

For purposes of this post, assume that free will does not exist.  Assume also that causality means that one event affects another more than other events do; never mind temporal asymmetry for now.  If there is no free will, this means that only two kinds of events happen.

a) Random events that cannot be predicted individually, i.e., are not at all dependent on previous states (e.g., nuclear decays).

b) Macroscopic events which are dependent in the classical sense on previous events.

You might say that b's are always just an apparently special case of a's; we'll come back to that later.

Assume a universe entirely of b's, entirely of clock-work macroscopic events - e.g. people appearing to choose actions, things falling, balls rolling when you kick them.  In such a fully pre-determined universe, which is essentially just a film running, it is incoherent to refer to any of these events causing another.  We're just watching a movie, or seeing a frozen block of space-time move by.  Things could never have been otherwise, and the whole concept of counterfactuals becomes incoherent.  To extend the film analogy:  you might watch a film of someone kicking a ball, but the image of the person's foot did not cause the image of the ball to move.  It was always going to move that way, right from the start.  It's one damn thing after another.

Assuming that we live in a universe with both a's and b's, or that everything is an a and in large groups sometimes looks like a b (which is a better description of the universe we're actually in), doesn't get us out of hot water either.  Now we do have a degree of freedom, but since each event (a nuclear decay or what have you) cannot be predicted, it's just noise.  There's freedom but no relationship between events.  It's still one damn thing after another.

You might object that the events that created the film (the actual objects we filmed, which determined what would end up on it; in the analogy, the moment of the Big Bang) caused the image of the ball on the film to roll.  But you can say that about everything in the film, and there's no sense in talking about causality being mediated through intervening events within it.  When the Big Bang happened one way and not another, that meant that I would write this blog post.  Yes, along the way the solar system congealed from a debris disc in such and such a way, an allosaur ate one proto-primate and not another (that was my grandma), and various Mennonites moved from Switzerland to Pennsylvania at a certain time.  But all that was set from the start; the film just hadn't run to that point yet, and there's no sense in which something that happened ten minutes or ten years ago made me write this post any more than conditions at the Big Bang did.  In terms of special relationships between events, there are none.

This is not to be taken as a refutation of the "no free will" position, but rather to point out that the position carries an implication for most people's model of the world that I haven't seen discussed, and one that I think most no-free-will proponents would find problematic.