Consciousness and how it got to be that way

Monday, December 31, 2012

Personal Risk, Self-Esteem and the St. Petersburg Lottery

There used to be an unsolved problem in economics:  if there were a lottery where a fair coin is tossed until it comes up tails, with the pot starting at a penny and doubling with each head, how much should you pay for a ticket?  (The name comes from Daniel Bernoulli's 1738 treatment of the problem in the journal of the St. Petersburg Academy.)  It's not so straightforward.  If there's a lottery where you have a 1% chance of winning a thousand dollars, you should be willing to pay up to about $10, its expected value.  But what about this one?

Standard economic theory used to say that we should be willing to pay a huge, if not infinite, amount to buy a ticket - the expected payout is infinite - but few people will pay much at all.  The solution has to do with the fact that the richer you are, the more you're willing to pay for a ticket.  That is to say, once you take into account how much it "hurts" someone to part with a given sum, you have your solution.  It hurts Bill Gates less to part with $100 than it does you or me; the same sum lost translates to fewer "points off" his utility.  Consequently, we shouldn't be surprised that it's the people who have a lot to begin with who are most willing to venture it to get more.  The perverse effect of this is that the more you have, the more you risk, and the more still you end up making in the end if your bets are rational.
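To put numbers on this, here's a minimal sketch in Python (the log-utility function is Bernoulli's classic resolution; the wealth levels and the 200-toss truncation are illustrative assumptions of mine): the raw expected payout grows without bound the longer you let the game run, while the most a log-utility agent will pay stays finite - and rises with wealth.

    import math

    def payout(k, start=0.01):
        """Pot after k heads, starting from a penny and doubling each head."""
        return start * 2 ** k

    def expected_value(max_tosses):
        """Truncated expected payout; each extra allowed toss adds half a
        cent, so the sum grows without bound."""
        return sum(0.5 ** (k + 1) * payout(k) for k in range(max_tosses))

    def expected_log_utility(wealth, price, max_tosses=200):
        """E[ln(wealth - price + payout)] under log (Bernoulli) utility."""
        return sum(0.5 ** (k + 1) * math.log(wealth - price + payout(k))
                   for k in range(max_tosses))

    def fair_price(wealth, tol=1e-6):
        """Highest ticket price a log-utility agent at this wealth would pay,
        found by bisection on E[u(after)] = u(before)."""
        lo, hi = 0.0, wealth  # price can't exceed current wealth
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if expected_log_utility(wealth, mid) >= math.log(wealth):
                lo = mid
            else:
                hi = mid
        return lo

    for n in (10, 20, 40):
        print(f"expected value, {n} tosses allowed: ${expected_value(n):.3f}")
    for w in (100, 10_000, 1_000_000):
        print(f"wealth ${w:,}: would pay up to ${fair_price(w):.4f}")

The second loop is the whole point: the same lottery is worth more to the richer agent, because the downside costs him fewer utility points.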

What does this have to do with self-esteem?  Humans are status-driven creatures, which is to say our status is also strongly connected to our utility.  So are there marginal status effects?  Is there a status equivalent of the St. Petersburg lottery?  Imagine someone who is extremely self-confident, for whatever reason (more on those reasons below).  He has a large reserve of status, at least in his own view of himself.  As with Bill Gates, he will take fewer points off his utility for the same status-losing failure or transgression than the rest of us would.  Therefore, these very self-confident people are more willing to risk status in order, ultimately, to gain status.  There are several reasons for confidence that make sense of certain behaviors:


- A confident individual might be "justified" in his view of his status, in the sense of a publicly recognized social or professional track record, and be willing to go out on otherwise socially unacceptable or status-threatening ventures to achieve more.  The wealthy and successful engaging in ventures that can fail publicly are an example of this.

- The individual might be isolated from a peer group's judgment; he might be a foreigner, or be around people of a different socioeconomic class.  A failure seen by you, a mere native, is less skin off his back than one seen by someone from back home or from his old school.  This might account for the strange burst of assertiveness, innovation, and even comfort that people feel when working overseas.

- The individual might just be innately very self-assured.  He's not motivated by status and doesn't much care what others think.  In other words, the status-to-utility conversion factor is small.


These provide testable predictions and, if correct, could suggest ways to improve innovation.
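Here's a minimal sketch of the underlying logic in Python (the concave log form for status utility and all the numbers are my assumptions, by direct analogy with the money case above): the identical status gamble has negative expected utility for a low-status individual and positive expected utility for a high-status one.

    import math

    def status_utility(s):
        # Assumed concave utility of status, analogous to wealth.
        return math.log(s)

    def gamble_value(status, gain, loss, p_win):
        """Expected utility change from risking `loss` status points
        for a chance at `gain`."""
        eu = (p_win * status_utility(status + gain)
              + (1 - p_win) * status_utility(status - loss))
        return eu - status_utility(status)

    # The same gamble: risk 20 status points for 30, at 50/50 odds.
    for s in (25, 100, 1000):
        delta = gamble_value(s, gain=30, loss=20, p_win=0.5)
        print(f"status reserve {s:4d}: EU change {delta:+.3f}")

With a reserve of 25 the gamble is a losing proposition; with 100 or 1000 it's a winning one.  Same bet, different reserves, opposite decisions.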

Hedonic Recursion as a Problem of Utility-Seeking

Summary:  We try to align our goals so they are consistent with each other, but we still often hold contradictory ones, usually owing to hard-wired aspects of biology.  Therefore, we continually try to adjust our preferences to be consistent (i.e., rational).  The ultimate end of rational goal-directed behavior would then be the ability to "edit" even our most profound preferences.  This may result in a system not conducive to continued survival; that is, if the source of our utility can itself be adjusted to increase utility, then our decisions lose all input from the world outside our nervous systems.  For this reason, self-awareness may universally be an evolutionary dead-end.


A problem for agents with goals is that goals can contradict each other.  Goals are behavioral endpoints that maximize pleasure (utility) and minimize pain.  An evolved system like a human being should be expected to have many conflicting goals, and indeed we do, especially where a goal's context has changed rapidly due to culture (e.g., "always consume high-quality forage" vs. "stay slender to maintain health").  Of course this is only one of many examples.

Unfortunately, many goals are fairly hard-wired; no one ever decided to be drawn to sugar and fat, and it's only with great difficulty and cleverness that a few of us in the developed world find hacks to prevent one of the competing goals from erasing the gains from the others.  Furthermore, it should be emphasized that we can't decide NOT to be drawn to sugar and fat.  We don't have the ability to modify ourselves to that degree; not yet, anyway.  (A hypothetical technology for doing just this is discussed in more detail at Noahpinion.)

Would such an ability be a positive one?  It's anything but clear.  With it, we could make sugar and fat less appealing (or activity less painful), and/or make hunger pleasurable.  Less obvious but equally valid, you could program yourself to just stop worrying about heart disease and diabetes and gorge away.  To get at the core issue: what we're trying to do is maximize our utility by editing our goals - but there you see the circularity.  Achieving goals is what increases utility!  In some cases, like food consumption, it's a relatively easy fix.  But let's consider mortality.  Assume that at some future time you have the ability to edit your goals (your utility-providing endpoints) even at the most basic biological level.  Also assume that there are still terminal diseases and finite lifespans; you have contracted one of the former and are approaching the end of the latter.  Unpleasant?  Of course not!  Make death something you look forward to!  Make your breakthrough cancer pain feel like an orgasm!  Why not?  Hurry up and let me suffer and die already why don't you - ahhhh, that's the stuff.  If you object that pleasure should only be a feedback mechanism and has to reflect reality, the completely-editable hedonist will bluntly ask you why, and probably take pity on you for not being able to get into your own source code.

In point of fact, why not make the most minute actions throughout life (or even the absence thereof) result in untold pleasure?  And here is the problem.  Once we can manipulate our utility at such a profound level, we may be in dangerous territory.  Most of us can't control ourselves around potato chips.  Being handed the key to our own reward centers this way would likely result not just in a realignment of utility with respect to our changed environment, but in giving ourselves easy utility-points for anything - and then what happens to motivation?  Why do anything, except give yourself more utility points?  (This is hedonic recursion, and it's why it may be a good thing that, for the most part, what we like and don't like is biologically tied down and non-negotiable.)  Even systems for assessing the rationality of preferences (like the von Neumann-Morgenstern axioms) are only about internal consistency.  An individual who "solved" his own preferences in a way that made death desirable is not irrational.  No, he won't leave many offspring, but that's not what gave him utility after he got under his own hood and did extensive rework.

The ideal "logical" hedonic solution would seem to be giving ourselves pleasure in such a way that our likelihood to survive over time is increased, to increase total pleasure exposure.  (This satisfies our moral instinct that it's better to make yourself stop liking junk food than to stop worrying about dying.)  At first glance, one might argue that this is what evolution has done anyway, notwithstanding any mismatch effects from recent rapid cultural changes to our physical environment.  But evolution optimizes survival of genes over time, regardless of what conscious individual(s) they happen to be in at the moment, and pain and pleasure are always subservient to that.  Pain and fear just as often (perhaps at least as often) motivate survival behavior.  Here we're explicitly minimizing one in favor of the other.  To illustrate the difference in the two processes (and the recursion inherent in one):  evolution uses pleasure and pain to maximize gene numbers over time.  Pleasure uses pleasure to maximize pleasure.  One of these looks very much like a broken feedback loop.

Very speculatively, assuming that such goal plasticity is possible (perhaps after mind-uploading), it could be that hedonic recursion is not something unique to the way our nervous systems are put together, but rather a fundamental problem of any representational tissue that develops a binary feedback system.  So far, we've been protected by the highly manufacturing-process-constrained substrate of our minds.  If so, over time such systems will increasingly learn how to please themselves and shut off pain, independent of what's going on in the outside world, to the point where they're so cut off from the stimuli that bear on their survival that they disappear.  Self-awareness always goes off the rails.  This is one possible solution to the Fermi paradox:  pleasure inevitably becomes an end in itself, divorced from survival, and the experiencing agents always disappear.

Monday, December 3, 2012

Is Pre-Determination a Problem For Causality?

If there is no free will, it could be argued that the history of the universe really is "just one damn thing after another", and there is no such thing as causality.

For purposes of this post, assume that free will does not exist.  Assume also that causality means that one event affects another more than other events do; never mind temporal asymmetry for now.  If there is no free will, only two kinds of events happen:

a) Random events that cannot be predicted individually, i.e., are not at all dependent on previous states (e.g., nuclear decays).

b) Macroscopic events which are dependent in the classical sense on previous events.


You might say that b's are always just apparent special cases of a's; we'll come back to that later.

Assume a universe entirely of b's - entirely of clockwork macroscopic events:  people appearing to choose actions, things falling, balls rolling when you kick them.  In such a fully pre-determined universe, which is essentially just a film running, it is incoherent to refer to any of these events causing another.  We're just watching a movie, or seeing a frozen block of space-time moving by.  Things could never have been otherwise, and the whole concept of counterfactuals becomes incoherent.  To extend the film analogy:  you might watch a film of someone kicking a ball, but the image of the person's foot did not cause the image of the ball to move.  It was always going to move that way, right from the start.  It's one damn thing after another.

Assuming that we live in a universe with both a's and b's, or that everything is an a and in large groups sometimes looks like a b (which is a better description of the universe we're in), doesn't get us out of hot water either.  Now we do have a degree of freedom, but since each event (nuclear decay or what have you) cannot be predicted, it's just noise.  There's freedom but no relationship between events.  It's still one damn thing after another.

You might object that the event that created the film (the actual objects we filmed, which determined what would go on the film; in the analogy, the moment of the Big Bang) caused the image of the ball on the film to roll, but you can say that about everything in the film, and there's no sense in talking about causality being mediated through intervening events in the film.  When the Big Bang happened one way and not another, that meant that I would write this blog post.  Yes, along the way the solar system congealed from a debris disc in such-and-such a way, an allosaur ate one proto-primate and not another (that was my grandma), and various Mennonites moved from Switzerland to Pennsylvania at a certain time.  But all of that was set from the start; it's just that the film hadn't run to that point yet, and there's no sense in which something that happened ten minutes or ten years ago made me write this post any more than conditions at the Big Bang did.  In terms of special relationships between events, there are none.

This is not to be taken as a refutation of the "no free will" position, but rather to point out that this position carries a direct implication for most people's model of the world - one that I haven't seen discussed, and which I think most no-free-will proponents would find problematic.

Wednesday, November 28, 2012

Toxoplasma - Suicidal Rats and Suicidal Humans

In 2009 I posted an admittedly long-shot theory about suicidality in rodents and maladaptive (violent) behavior in humans, namely serial murder. Along comes a very interesting piece in Archives of General Psychiatry by Pedersen et al., Toxoplasma gondii Infection and Self-directed Violence in Mothers.

Toxoplasma is that bug that infects rodents and somehow makes them approach cats. The parasite then goes through the next stage of its life cycle in the cat's gut, and is defecated out, at which point it infects another rodent. Another long-shot but interesting possibility is that the "cat lady" phenomenon is actually a toxoplasmosis infection: if you're an animal so big that the cats can't eat you, you just know you want to be around cats, so you get more and more. We do know already there are definite effects in humans, although so far they could just be explained as boring motor retardation effects: in a U.S. military study, people who totaled their vehicles in accidents were significantly more likely to have Toxo antibodies.

I think we're eventually going to learn that Toxoplasma can manipulate the behavior of humans in very specific, uncomfortable ways, much as rabies (itself an amazing pathogen) does. Humans and tox have a long history together. In fact, the Rh factor on red blood cells is a subunit of an ammonia channel that's implicated in resistance to tox, and it's not surprising that the only part of the world with appreciable Rh negativity is Europe (where until recent times there wasn't much cat feces around). The closer you get to Basque country, the higher the Rh negativity - interesting, since the Basques have been in (barely) post-glacial Europe longer than any other population.  For a system that can leave Rh- mothers immunized after their first child and producing very sick, anemic kids, it has to be doing something important.

Sunday, November 25, 2012

Leapfrogging the Failures of C. elegans Connectomics

An argument against the present utility of trying to do brain emulations is that we cannot predict or simulate the behavior of Caenorhabditis elegans, despite having a complete connectome map of its simple nervous system. So why do we think that replicating the much more complex human connectome computationally would be any more useful? These criticisms have not stopped some groups from forging ahead: Theodore Wong and his team at IBM have now published preliminaries on the most ambitious brain simulation to date, with 10^14 synapses.

Certainly simulating a human nervous system is the end goal of all this, but a lot of money and work is going toward such projects despite a glaring failure at an earlier stage of the same enterprise. As long as your project is being funded by a large institution with deep pockets, you probably don't have to address this question - at least not yet. But if a counterargument exists to "simulating C. elegans doesn't give us anything useful, so simulating H. sapiens is also unlikely to," I haven't yet seen it.

Sunday, October 14, 2012

Does Naturalism Invalidate Epistemological Claims?

No.  Thomas Nagel's Mind and Cosmos has been receiving a number of reviews, most of them not positive.  The book has mostly been read as one of a string of recent works that attack evolution by arguing: either we can know the world as it is, or our minds are the result of an irrational process like natural selection, but to claim that both are true is either implausible or impossible. This is a false dichotomy.

In case people haven't noticed, our minds are not perfect proposition-generating and -evaluating machines; at the very least we have working-memory and speed-of-operation limits, not to mention all the blind spots and good-enough algorithms you'd expect from a hand-me-down organ gradually accumulating change. Nagel is essentially arguing that, thinking with such a mess of tissue, we couldn't possibly know truth or observe natural laws, or that our observations would necessarily be suspect. As to the latter, that's exactly why we've developed a method to check what we think we know (experiments). As to the former, we do seem to find ourselves in a universe that appears to be lawful and to contain objective truths - so how can we know we're not just imagining things?

As a thought experiment, imagine you design a universe that does have laws and facts. You start it off with no self-aware entities, but you allow natural selection. Over time, thinking entities may develop. Yes, they will be imperfect, but purely for their own survival they may develop a basic ability to make limited predictions about how certain laws of their universe operate. And looking down over your creation one day, you notice one of them saying, "We are limited and imperfect results of chance; how can we ever discern natural law?"

In point of fact, I do not think we can ever know with absolute certainty that we're seeing natural law or facts, but what we can observe and infer is good enough for our limited existence.

Tuesday, October 2, 2012

Memory Traces Observed in Vitro

It's less than implanting and reading a memory in vitro, but researchers were able to tell, most of the time and from output alone, which dentate neurons were being directly stimulated. Probably not now, but at some point we're going to have to start asking hard questions about which structures necessarily produce consciousness (whether in vivo, in vitro, or in a computer), because at that point this work takes on a moral dimension.

Tuesday, September 25, 2012

Two Connections RE Psychopathy and Smell

You may already have seen the Chemosensory Perception paper linking poor sense of smell and high scores on psychopathy inventories.

Two speculations about this. First, moral disgust is increasingly seen as using the same structures in the brain as physical disgust, and oddly, some of the people who are most morally consistent - that is, those who most use reason when considering moral decisions - also have a lower capacity for disgust. Second, the same or similar genes control sexual development and smell, and both sets of structures are disrupted in Kallmann syndrome. There may be a broader link between olfaction and the control of pre-rational, sense-appetite-driven goal-oriented behavior.

Tuesday, May 22, 2012

The Flynn Effect Is Maybe Not So Mysterious

The closer scientists look at the epidemiology of [tapeworms in the brain, neurocysticercosis], the worse it becomes. Nash and other neurocysticercosis experts have been traveling through Latin America with CT scanners and blood tests to survey populations. In one study in Peru, researchers found 37 percent of people showed signs of having been infected at some point. Earlier this spring, Nash and colleagues published a review of the scientific literature and concluded that somewhere between 11 million and 29 million people have neurocysticercosis in Latin America alone. Tapeworms are also common in other regions of the world, such as Africa and Asia. “Neurocysticercosis is a very important disease worldwide,” Nash says.

More here.

Saturday, March 31, 2012

Electroconvulsive Therapy and Attention-Related Connectivity

Perrin et al. show that ECT reduces left frontal connectivity, specifically in Brodmann areas 44, 45, and 46; 44 and 45 are Broca's region, and 46 is implicated in attention and inhibition. Of special interest, anterior cingulotomy for refractory depression lesions a functionally related area. One interpretation of these results is that they provide further support for the hypothesis of depressive realism and of working-memory (anterior cingulate + DLPFC) hyperfunction in depression.

[Figure: Left DLPFC voxels showing reduced global functional connectivity after ECT (not adjusted for treatment outcome).]

Subjective Experience and Memory

The strange properties of subjective experience are nowhere more evident than in the use of drugs which prevent long-term potentiation - or destroy the capacity for consciousness in the first place. Is there anything it is like to be under anesthesia, or do you just forget that you were tortured?

Note: Harris's focus here is free will and morality - and, to his detriment, he completely neglects the robustness of epiphenomenal accounts to the neuroscience experiments he's citing, probably because few people would find an epiphenomenalist account of free will any less distasteful than a total absence of free will. But the question here sends him down an interesting road, and to my knowledge the first clinical test of the global workspace hypothesis of consciousness was in fact by a team of anesthesiologists.

Direct Observation vs. Filtration Through Others: A Key Driver of Difference of Belief

Previously I asked whether lack of (or slow) belief propagation might actually be a defense mechanism against harmful beliefs being implanted by agents with ill intent. It turns out that in this experiment, even when given common priors, people gave more weight to things they observed or concluded themselves than to things communicated by other participants. Via Overcoming Bias; of direct interest to the decision-theory engineers at Less Wrong.

Sunday, March 25, 2012

The Least Surprising Thing Ever

"Although we also ran the learning task with high-dose ketamine, performance was inadequate and we will not report this part of the study further."

Yeah, I bet it was!

- From Corlett et al., Frontal Responses During Learning Predict Vulnerability to the Psychotogenic Effects of Ketamine: Linking Cognition, Brain Activity, and Psychosis. Archives of General Psychiatry. 2006;63:611-621.

Tuesday, March 20, 2012

Growth Rate of New Words Slowing

It's been theorized that some combination of widespread literacy and large groups of people speaking the same language should slow language innovation. And indeed, mining Google's English corpus data shows a growth rate of about 8,500 new words a year, and slowing. But another theory put forward is that the marginal utility of new words is decreasing. One reason we start using new words is to describe new technologies, and technology certainly hasn't slowed down in recent centuries relative to the ones before, so it's hard to see how the marginal-utility theory could apply.

A follow-up study could test whether new technologies are being described by new combinations of existing words, or by words that have undergone semantic drift (the same word applied with a new meaning). This would be harder to do, because the machines would have to parse semantic content - and if we could do that well, we'd already have lots of Turing-test-passing computers.

Thursday, March 8, 2012

Lack of Belief Propagation as a Defense Mechanism

A problem with human beliefs - verbally professed, coherent statements about the world - is that we don't automatically apply their implications to all our other beliefs and behavior, and we end up behaving inconsistently.

Note the qualifier "coherent". There are lots of claimed beliefs about the world that don't amount to a coherent truth claim, or that don't actually alter the claimant's understanding of the world. Many of these have to do with signaling social affiliation - especially with religion, nationality or ethnicity, and even sports teams. But this post is not about those.

Let's take an example: someone starts telling his fellow Southern California residents that they should prepare a kit and a plan for future fire evacuations, since destructive fires occur more often than earthquakes (especially true in San Diego County, given the path of the major faults). Yet this person does not himself prepare a kit - not because he actively thinks he's somehow safe from fires, but because it never occurs to him to propagate his new belief to his other beliefs and behaviors. (As you might be guessing, this was me.)

The most severe failure to propagate beliefs occurs in delusions, where a fixed belief is given absolute weight, and conflicting beliefs are discarded or reinterpreted so as not to conflict with it - when, that is, they're compared and the conflict is noticed at all. But relative to our long-term memory, our short-term (working) memory is small: we can't call all of our long-term declarative memories (our beliefs) into short-term memory to constantly update them. This mismatch is true of all humans, not just those of us with delusions, although it's possible that one reason people have delusions is a working-memory deficit. Consequently, there's no guarantee that we've updated all our beliefs when we develop a new one, even if we consciously attempt to sort through all the beliefs the new one may impact. This failure to completely propagate beliefs is a sin of omission, and that's without counting the sins of commission - reasoning biases, like confirmation bias, that actively prevent propagation.
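As a toy model of that bottleneck (a Python sketch; the belief count and the seven-slot working memory are illustrative assumptions): if each new belief can only be checked against a working-memory-sized sample of the old ones, almost every direct contradiction slips through.

    import random

    random.seed(1)

    N_BELIEFS, WORKING_MEMORY = 1000, 7

    def fraction_of_conflicts_noticed(trials=10_000):
        """Each trial: a new belief directly contradicts exactly one
        existing belief, but we can only compare it against a
        working-memory-sized sample of the belief store."""
        caught = 0
        for _ in range(trials):
            conflicting = random.randrange(N_BELIEFS)  # the belief it contradicts
            sample = random.sample(range(N_BELIEFS), WORKING_MEMORY)
            if conflicting in sample:
                caught += 1
        return caught / trials

    print(f"conflicts noticed: {fraction_of_conflicts_noticed():.1%}")  # ~0.7%

With a thousand stored beliefs and seven comparison slots, only about 7 in 1000 direct contradictions get caught per update - and that's before any motivated reasoning makes things worse.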

Of course, beliefs can be false and lead to bad decisions that damage the agent, and all humans have some false beliefs. Because we develop false beliefs - and, what's more, language allows other agents with ill intent to implant them - it's worth considering that immediate propagation of a new belief through all our beliefs, and therefore our behavior, might not be in our best interest. You might argue that complete propagation would make the agent more likely to identify the new belief as false, since it could be flagged as inconsistent with the rest of the agent's beliefs. The problem is that we still can't guarantee that all the agent's other beliefs are true, if only because they were all developed with limited information.

Therefore, it's worth considering whether rapid propagation of beliefs might actually be selected against, because it produces naive agents more likely to be taken advantage of by other agents.

Sunday, March 4, 2012

Strongest Sapir-Whorf Evidence Yet: Economic Implications of Future Tense

Speakers of languages with obligatory future-tense marking show higher future discount rates across a host of behaviors. So far the paper seems solid, and the observation is pretty amazing. In terms of obvious implications, this sure beats distance and color classification!

The obvious speculation is that future-tense marking increases the cognitive distance of future plans. It's all the more interesting, then, that many languages use the future tense or future-tense-like constructions for their subjunctive. If we put future discounting into procrastination-equation terms, we could say that placing future acts into further-away, less concrete categories raises the delay and/or lowers the expectancy.
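For reference, one common statement of Piers Steel's procrastination equation is:

    Motivation = (Expectancy × Value) / (Impulsiveness × Delay)

On that reading, obligatory future-tense marking inflates the Delay term (and perhaps deflates Expectancy), shrinking motivation for any future-directed act.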

Saturday, February 11, 2012

An Autoimmune Encephalitis: Anti-NMDAR

Yes, really. Fascinating. Scary. And more common as a cause of encephalitis in the U.S. than you would think. (H/T Evidence Based Mommy.)

Thursday, February 2, 2012

More ADHD in Kids with Early Anesthesia

A paper in Mayo Clinic Proceedings shows the correlation stated in the title.

This is troubling, not least because working memory is known to be decreased in ADHD, and there are post-anesthesia working-memory deficits in adults. It may be that early exposure to general anesthesia damages the developing brain's working-memory hardware.