Summary: We try to align our goals so they are consistent with each other, but we still often hold contradictory ones, largely owing to hard-wired aspects of our biology. We therefore continually try to adjust our preferences to be consistent (i.e., rational). The ultimate end of rational goal-directed behavior would then be the ability to "edit" even our most profound preferences. This may produce a system not conducive to continued survival: if the source of our utility can itself be adjusted to increase utility, then our decisions lose all input from the world outside our nervous systems. For this reason, self-awareness may universally be an evolutionary dead end.
The problem of agents with goals is that goals can be contradictory. Goals
are behavioral endpoints that maximize pleasure (utility) and minimize
pain. An evolved system like a human being should be expected to have
many conflicting goals, and indeed we do, especially in areas where the
goal's context has changed rapidly due to culture (e.g., "always consume
high quality forage" vs. "stay slender to maintain health"). Of course
this is only one of many examples.
Many goals are fairly hard-wired; no one has ever decided to be drawn to sugar
and fat, and it's only with great difficulty and cleverness that a few
of us in the developed world find hacks to prevent one of the competing goals from erasing
gains from the others. Furthermore it should be emphasized that we
can't decide NOT to be drawn to sugar and fat. We don't have the
ability to modify ourselves to that degree; not yet, anyway. (A hypothetical technology doing just this is discussed in more detail at Noahpinion.)
Would such an ability be a positive one? It's anything but clear. In this
situation, we could make sugar and fat less appealing (or activity less
painful), and/or make hunger pleasurable. Less obviously but equally
validly, we could program ourselves to just stop worrying about heart
disease and diabetes and gorge away. To get at the core issue, what
we're trying to do is maximize our utility by editing our goals - but
there you see the circularity. Achieving goals is what increases
utility! In some cases, like food consumption, it's a relatively easy
fix. But let's consider mortality. Assume that at some future time,
you have the ability to edit your goals (your utility-providing endpoints)
even at the most basic biological level. However, also assume that
there are still terminal diseases and finite lifespans; you have
contracted one of these and are approaching the end of your own.
Unpleasant? Of course not! Make death something you look forward to!
Make your breakthrough cancer pain like an orgasm! Why not? Hurry up
and let me suffer and die already why don't you - ahhhh, that's the
stuff. If you object "pleasure should only be a feedback mechanism and
has to reflect reality", the completely-editable hedonist will bluntly
ask you why, and probably take pity on you for not being able to get
into your own source code.
In point of fact, why
not make the most minute actions throughout life (or even absence
thereof) result in untold pleasure? And here is the problem. Once we
can manipulate our utility at such a profound level, we may be in
dangerous territory. Most of us can't control ourselves around potato
chips. It is likely that being given the key to our own reward centers
this way would result not just in a realignment of utility with respect
to our changed environment, but in giving ourselves easy utility-points
for anything - and then what happens to motivation? Why do anything,
except give yourself more utility points? (This is called hedonic recursion,
and this is why it may be a good thing that for the most part, what we
like and don't like is biologically tied down and non-negotiable.) Even
systems to assess the rationality of preferences (like the von Neumann-Morgenstern axioms)
are only about internal consistency. An individual who "solved" his
own preferences in a way that makes death desirable is not irrational.
No, he won't leave many offspring, but offspring weren't what gave him
utility after he got under his own hood and did extensive rework.
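To make the internal-consistency point concrete, here is a toy sketch (all names and preference data are illustrative, not drawn from any real decision-theory library) of one von Neumann-Morgenstern-style requirement, transitivity of strict preference. An agent rewired to prefer death above everything passes it just as easily as a conventional one.

```python
# Toy check of transitivity, one VNM-style consistency requirement.
# The relation's *content* is never judged, only its structure.

def is_transitive(prefers):
    """prefers: a set of (a, b) pairs meaning 'a is strictly preferred to b'.
    Transitive iff (a, b) and (b, c) in the relation imply (a, c) is too."""
    for a, b in prefers:
        for b2, c in prefers:
            if b == b2 and (a, c) not in prefers:
                return False
    return True

# An agent rewired to prefer death above all else is still consistent:
rewired = {("death", "pain"), ("pain", "life"), ("death", "life")}
print(is_transitive(rewired))  # True

# Consistency only fails on structural grounds, e.g. a missing link:
broken = {("chips", "salad"), ("salad", "hunger")}  # no (chips, hunger)
print(is_transitive(broken))  # False
```

The check never asks what the outcomes are, which is exactly the essay's point: rationality axioms of this kind constrain the shape of preferences, not their substance.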
The ideal "logical"
hedonic solution would seem to be giving ourselves pleasure in such a
way that our likelihood to survive over time is increased, to increase
total pleasure exposure. (This satisfies our moral instinct that it's
better to make yourself stop liking junk food than to stop worrying
about dying.) At first glance, one might argue that this is what
evolution has done anyway, notwithstanding any mismatch effects from
recent rapid cultural changes to our physical environment. But
evolution optimizes survival of genes over time, regardless of what
conscious individual(s) they happen to be in at the moment, and pain and
pleasure are always subservient to that. Pain and fear motivate
survival behavior at least as often as pleasure does. Here we're
explicitly minimizing one in favor of the other. To illustrate the
difference in the two processes (and the recursion inherent in one):
evolution uses pleasure and pain to maximize gene numbers over time.
Pleasure uses pleasure to maximize pleasure. One of these looks very
much like a broken feedback loop.
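The difference between the two loops can be sketched in a few lines of toy code (all names and numbers are illustrative, not a model of any real system): a "grounded" agent whose utility only accrues through action in the world, versus a "recursive" agent allowed to inflate its own reward weight.

```python
# Minimal sketch of a grounded vs. a self-editing (wireheading) agent.

def grounded_agent(steps=10):
    """Utility accrues only by acting on the world."""
    food_found, utility = 0, 0
    for _ in range(steps):
        food_found += 1          # forage: a costly action in the world
        utility += 1             # reward is contingent on that action
    return food_found, utility

def recursive_agent(steps=10):
    """The agent can edit its own reward function, so the cheapest
    policy is to inflate the reward weight and do nothing at all."""
    food_found, utility, reward_per_tick = 0, 0, 1
    for _ in range(steps):
        reward_per_tick *= 2     # edit the utility source itself
        utility += reward_per_tick
    return food_found, utility

print(grounded_agent())   # (10, 10) -- utility tracks the world
print(recursive_agent())  # (0, 2046) -- utility decoupled from the world
```

The recursive agent ends with vastly more "utility" while having done nothing that bears on its survival, which is the broken feedback loop in miniature.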
Assuming that such goal plasticity is possible (perhaps after
mind-uploading), it could be that hedonic recursion is not something
unique to the way our nervous systems are put together, but rather a
fundamental problem of any representational tissue that develops a
binary feedback system. So far, we've been protected by the highly
manufacturing-process-constrained substrate of our minds. If so, over
time such systems will increasingly learn how to please themselves and
shut off pain, independent of what's going on in the outside world, to
the point where they're so cut off from the stimuli that keep them
alive that they disappear. Self-awareness always goes off the rails.
This is one possible solution to the Fermi paradox: that pleasure
inevitably becomes an end in itself, divorced from survival, and the
experiencing agents always disappear.