Consciousness and how it got to be that way

Monday, December 31, 2012

Hedonic Recursion as a Problem of Utility-Seeking

Summary:  We try to align our goals so that they are consistent with each other, yet we still hold contradictory ones, often owing to hard-wired aspects of our biology.  Consequently, we continually adjust our preferences toward consistency (i.e., rationality).  The ultimate end of rational goal-directed behavior would then be the ability to absolutely "edit" even our most profound preferences.  This may produce a system not conducive to continued survival: if the source of our utility can itself be adjusted to increase utility, then our decisions lose all input from the world outside our nervous systems.  For this reason, self-awareness may universally be an evolutionary dead end.


A problem for agents with goals is that goals can contradict one another.  Goals are behavioral endpoints that maximize pleasure (utility) and minimize pain.  An evolved system like a human being should be expected to have many conflicting goals, and indeed we do, especially where a goal's context has changed rapidly due to culture (e.g., "always consume high-quality forage" vs. "stay slender to maintain health").  Of course this is only one of many examples.

Unfortunately, many goals are fairly hard-wired; no one has ever decided to be drawn to sugar and fat, and it's only with great difficulty and cleverness that a few of us in the developed world find hacks to prevent one of the competing goals from erasing the gains from the others.  Furthermore, it should be emphasized that we can't decide NOT to be drawn to sugar and fat.  We don't have the ability to modify ourselves to that degree; not yet, anyway.  (A hypothetical technology that does just this is discussed in more detail at Noahpinion.)

Would such an ability be a positive one?  It's anything but clear.  In this situation, we could make sugar and fat less appealing (or activity less painful), and/or make hunger pleasurable.  Less obvious but equally valid, you could program yourself to simply stop worrying about heart disease and diabetes and gorge away.  At the core of the issue, what we're trying to do is maximize our utility by editing our goals - but there you see the circularity: achieving goals is what increases utility!  In some cases, like food consumption, it's a relatively easy fix.  But let's consider mortality.  Assume that at some future time you have the ability to edit your goals (your utility-providing endpoints) even at the most basic biological level.  Assume also that there are still terminal diseases and finite lifespans; you have contracted one of the former and are approaching the end of the latter.  Unpleasant?  Of course not!  Make death something you look forward to!  Make your breakthrough cancer pain feel like an orgasm!  Why not?  Hurry up and let me suffer and die already, why don't you - ahhhh, that's the stuff.  If you object that "pleasure should only be a feedback mechanism and has to reflect reality," the completely editable hedonist will bluntly ask you why, and probably take pity on you for not being able to get into your own source code.

In point of fact, why not make the most minute actions throughout life (or even the absence thereof) result in untold pleasure?  And here is the problem.  Once we can manipulate our utility at such a profound level, we may be in dangerous territory.  Most of us can't control ourselves around potato chips.  Being given the key to our own reward centers this way would likely result not just in a realignment of utility with respect to our changed environment, but in giving ourselves easy utility-points for anything - and then what happens to motivation?  Why do anything, except give yourself more utility points?  (This is called hedonic recursion, and it is why it may be a good thing that, for the most part, what we like and don't like is biologically tied down and non-negotiable.)  Even systems for assessing the rationality of preferences (like the von Neumann-Morgenstern axioms) are only about internal consistency.  An individual who "solved" his own preferences in a way that made death desirable is not irrational.  No, he won't leave many offspring, but that isn't what gave him utility after he got under his own hood and did extensive rework.
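
A toy sketch makes the circularity concrete.  This is illustrative Python with made-up names and numbers - a cartoon of the argument, not a claim about any real cognitive architecture: an agent whose utility function is just another editable piece of its own state can always "win" by rewriting the evaluator instead of acting on the world.

```python
# Hedonic recursion in miniature: the utility function is itself
# part of the agent's editable state. All quantities are invented.

class Agent:
    def __init__(self):
        # Utility initially grounded in the world: being fed is good.
        self.utility = lambda world: 10 if world["fed"] else -10

    def act_on_world(self, world):
        # The hard way to gain utility: change the world.
        world["fed"] = True
        return self.utility(world)

    def edit_own_utility(self, world):
        # The easy way: rewrite the evaluator so that any world state
        # scores maximally. The resulting preferences remain internally
        # consistent (no von Neumann-Morgenstern axiom is violated),
        # but the link between utility and the world is severed.
        self.utility = lambda world: 10**9
        return self.utility(world)

world = {"fed": False}
agent = Agent()
print(agent.act_on_world(world))      # 10: utility still tracks reality
print(agent.edit_own_utility(world))  # 10**9: why ever act again?
```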

The ideal "logical" hedonic solution would seem to be giving ourselves pleasure in such a way that our likelihood to survive over time is increased, to increase total pleasure exposure.  (This satisfies our moral instinct that it's better to make yourself stop liking junk food than to stop worrying about dying.)  At first glance, one might argue that this is what evolution has done anyway, notwithstanding any mismatch effects from recent rapid cultural changes to our physical environment.  But evolution optimizes survival of genes over time, regardless of what conscious individual(s) they happen to be in at the moment, and pain and pleasure are always subservient to that.  Pain and fear just as often (perhaps at least as often) motivate survival behavior.  Here we're explicitly minimizing one in favor of the other.  To illustrate the difference in the two processes (and the recursion inherent in one):  evolution uses pleasure and pain to maximize gene numbers over time.  Pleasure uses pleasure to maximize pleasure.  One of these looks very much like a broken feedback loop.
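
The structural difference between the two loops can likewise be sketched in a few lines of toy Python (invented quantities throughout; only the shape of the loops matters):

```python
# Two feedback loops. In the first, the hedonic signal is an
# instrument for an external target; in the second, the signal
# is the target, and the world drops out entirely.
import random

def evolutionary_loop(steps=100):
    # Pleasure/pain steer behavior toward an external payoff
    # (here, a stand-in for surviving offspring).
    offspring = 0
    for _ in range(steps):
        forage_success = random.random() > 0.5  # the world pushes back
        pleasure = 1 if forage_success else -1  # signal tracks the world
        if pleasure > 0:
            offspring += 1                      # external quantity
    return offspring

def hedonic_loop(steps=100):
    # The agent can mint the signal directly; no external
    # variable is ever consulted or updated.
    pleasure = 0
    for _ in range(steps):
        pleasure += 1
    return pleasure

print(evolutionary_loop())  # bounded by how the world actually went
print(hedonic_loop())       # always 100, regardless of the world
```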

Very speculatively, assuming that such goal plasticity is possible (perhaps after mind-uploading), it could be that hedonic recursion is not something unique to the way our nervous systems are put together, but rather a fundamental problem of any representational tissue that develops a binary feedback system.  So far, we've been protected by the highly manufacturing-process-constrained substrate of our minds.  If so, over time such systems will increasingly learn how to please themselves and shut off pain, independent of what's going on in the outside world, to the point where they're so cut off from the stimuli that bear on survival that they disappear.  Self-awareness always goes off the rails.  This is one possible solution to the Fermi paradox: pleasure inevitably becomes an end in itself, divorced from survival, and the experiencing agents always disappear.

9 comments:

  1. Access to self-directed goal plasticity (administrator rights, if you wish) might be restricted to a value-free (non-judgemental) approach, free of desire/resistance. This might or might not facilitate evolutionary success, depending on the scale at which it is applied. If applied widely (at the level of society), this could be an attractor for our social system. If restricted to a minority, it might be their demise.

  2. I think you just hit on the whole problem with this. That can't be a solution. Value-free and free of desire? Then there's no reason or rationale to modify the system, unless you do so randomly, or eliminate pleasure and pain altogether, in which case, there's no reason (or, a neuroscientist would probably argue, *ability*) to do anything.

  3. Hi again, I came across an article that reminded me of this thread on your blog: Case Study of Ecstatic Meditation: fMRI and EEG Evidence of Self-Stimulating a Reward System (http://www.hindawi.com/journals/np/2013/653572/). Just wanted to share that - check out the "Implications" section of the article.

  4. Thanks for the link - as the implications section of the paper states, "Rather than simply stimulating the reward system in response to traditional goals of food and sex, it would be beneficial to regulate the system and focus it on long-term goals that are more adaptive." Yes, it would be beneficial, but when you have complete control over the feedback, programming yourself toward mistaken goals (which may not be obviously mistaken) can become disastrous. It's obvious that heroin-like direct pleasure stimulation is bad, but if your model of yourself in the world is flawed, you can go off the rails as well.

  5. Redesign could be restricted socially or by pre-commitment devices. A mind could back itself up or create active copy sidelines before self-modifying. Much like people see junkies and steer away from drugs, or ban drugs.

    Redesign could happen so that pain-driven functions are replaced with pleasure-driven ones without destroying functionality (gradients of bliss instead of a pleasure-pain axis).

  6. The latter point is a good one. It seems that it would still be unpleasant to go from 9-pleasure to 8-pleasure, but not as bad as pain.

    Pre-commitment devices are useful, but with powerfully self-modifying agents it's difficult to imagine what those could be. People discussing self-modifying AI, which may suffer from this problem, sometimes seem to be making directly opposing arguments on this front: "the AIs will be as far beyond us in intelligence as we are beyond worms," and then "and we, the worms, are going to create pre-commitment devices that they can't think their way around."

  7. Another possibility is an r-selection strategy: high reproduction combined with Darwinian/economic competition. Spawn a lot of fast-growing kids/mature copies, allow self-modification, and then let those that can't afford their existence die off (painlessly).

    The problem of staying on a functional path is like the problem of getting a ping-pong ball through a small slit in a large wall. You can try to perfect your aim and never miss, or you can just fling thousands and thousands of balls randomly (a toy calculation below makes the comparison concrete).

    As long as we don't face alien competition, efficiency doesn't much matter. Evolution doesn't need perfection, just survivability.

    As a bonus, the surplus happiness we want to create comes from the sidelines succumbing to hedonic recursion.
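
    A sketch of that toy calculation (made-up slit width, miss rates, and throw counts; only the comparison matters):

    ```python
    # Precision vs. volume: one careful throw against many random ones.
    import random

    SLIT = (0.49, 0.51)  # a 2%-wide target on a unit wall

    def random_throw():
        return SLIT[0] <= random.random() <= SLIT[1]

    # A careful thrower who still misses half the time succeeds with
    # probability 0.5; a thousand random throws almost never all miss.
    p_careful = 0.5
    p_volume = 1 - (1 - 0.02) ** 1000   # about 1 - 1.7e-9
    print(p_careful, p_volume)

    # Empirical check: fraction of 100 volleys of 1,000 throws in
    # which at least one ball gets through (should be ~1.0).
    volleys = [any(random_throw() for _ in range(1000)) for _ in range(100)]
    print(sum(volleys) / len(volleys))
    ```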

  8. You don't need perfection, but you also don't need outside competition. Species have certainly contributed to their own extinction in the past by being too well-adapted in the near term. Super-predators, and all the organisms that contributed to the oxygen catastrophe (and to their own demise), were not wiped out by aliens. The biosphere as a whole has kept plugging along so far; it's just certain species that have disappeared (and we may be one of them). Of course, those past species were very limited in the changes they could make (they ate everything, or they poisoned themselves by adding a gas to the atmosphere). If AGI is as powerful as some of its proponents promise, the biosphere as a whole is more likely to be threatened, rather than just one species or a set of them.

  9. Okay, but hedonic recursion isn't really the same as being too well-adapted. If anything, from a Darwinian perspective it's maladaptation. A heroin addict is less functional than a non-addict (if nothing else, he has to pay for heroin in addition to all his other bills). Someone who plays computer games all day has a harder time holding a job and raising kids. As for animals, the literally wireheaded rat certainly wasn't too well-adapted!

    It seems there's a negative correlation between the probability of succumbing to hedonic recursion and fitness. So you could model it as an adaptive weakness, like vulnerability to viruses. If the general ability that led to this weakness - intelligence, foresight, self-modification - brings enough advantages to compensate, extinction doesn't seem to be inevitable.

    That's not to say AGI can't have weird failure modes that end up with the biosphere destroyed even if its designers didn't want that.
