Consciousness and how it got to be that way

Wednesday, July 22, 2009

Consciousness, Reduction, and Memory

A tool that I think we underutilize in the hard question of consciousness is the idea that if some entities are conscious, and some are not, then there is a boundary between the two categories. My suspicion so far is that any close examination of this boundary inevitably becomes a reductio ad absurdum, and the boundary evaporates, regardless of the examiner's initial intentions; and once the boundary has evaporated, we're left with the unintuitive non-assertion that there is no reason to think everything doesn't have some rudimentary consciousness - or the non-starter that nothing is conscious. The first assertion led Chalmers to his famous and misinterpreted statement about conscious thermostats.

You don't think thermostats are conscious? Fine. What about dogs? That's a slippery slope; as a furry, warm-blooded vertebrate primate, you're subject to some pretty powerful biases about what the signposts for self-awareness are. Roger Penrose once (half-jokingly) conceded insects to the world of unconscious Turing machines, and Dan Dennett immediately challenged him: why? In other words, if dogs are conscious, why not octopuses, crabs, E. coli, and the Melissa virus? Where's the line, and what is it? A 40 Hz wave? Unsatisfying, to say the least.

The point is this: if you believe that consciousness only occurs in what we on Earth call living things, then you must also believe that at one point it did not exist. Fair enough: so where and when did the first glimmer happen? It's not obviously a meaningless question to ask whether the first consciousness appeared in a trilobite scuttling about in a seabed on the piece of continental plate that is now Turkmenistan, a minute before sunrise on 28 June, 539,640,122 BCE (I think it was a Tuesday). It's tempting to dismiss point-of-origin stories like this, but the alternative is either to accept some other, equally provincial point of origin somewhere since the Big Bang, or to accept that consciousness is a spectrum, in which case we're back to the thermostat (or to none of us being conscious). Both alternatives are counterintuitive, but modern science is littered with such choices; not surprising given how far we are out of our depth, relative to the tasks we evolved for: collecting food, finding mates, and running from predators in East Africa.

So far I have not explicitly stated the materialist assumption that consciousness is related only to the matter of the entity in question, and how that matter is arranged - but there is the stickier question of within what limits that matter can vary and remain conscious. By that I mean, my brain is physically different from yours, and from a monolingual Greenlandic woman's. You could rob each of these three entities of their current level of consciousness by making physical changes to them, but all three were different to start with, so how do you know they were all conscious? Another not obviously meaningless question is to ask whether a capacity for consciousness must necessarily permeate an entire species. Why assume the conscious/non-conscious boundary follows species boundaries? Maybe on that fateful June 28 in the early Cambrian, there was just one single conscious trilobite, surrounded by zombie trilobites. And maybe some humans are conscious and some aren't.

This may seem to point the way to a reductive program, to test the boundaries of what can be conscious. We can't go looking for the boundary with a time machine to see where it all began, and even if we could, there remains the central challenge of the hard question: we can only access first-person experience through third-person reports. We can't build a consciousness meter to wave at trilobites, and they can't tell us that the sunrise was pretty. And even if they did, that wouldn't prove they experienced joy at seeing it; the whole problem is the inviolable subjective first-personness of it. But since we are assuming that consciousness relies on matter and arrangement (i.e. nervous systems) in a reproducible way, in a pattern that at least some humans can understand, we can still reductively investigate alterations of the material state of the basis of consciousness using human first-person accounts, in ways that don't veer off into other problems of behavior as such investigations often do. This still won't answer the hard question, but it will at least show us to what things the hard question can apply.

We can't quite cut out brain tissue and ask people whether they're conscious (the idea is to restore the previous state, when you know they were conscious, as near as that can be known by a third party). But what we can do, and have done, is study cognitively abnormal humans who can communicate their experience. These break down into 1) people with some disorder, whether through trauma or congenital condition, and 2) people who change their brain chemistry, either through some activity (meditation, extreme exertion) or by consuming mind-altering compounds. With #1, people usually remain in the same state. With #2, these occasions occur under very uncontrolled conditions, and we have very limited options ethically - "go run a hundred miles then meditate for a month and tell me what it's like" - and scientifically - there are only so many agonists for receptor X, and the brain doesn't cooperate in how they're distributed.

If we have anything to say about it, hopefully the number of people in category #1 will drop. If, as time passes, our ability to reversibly under- or over-stimulate parts of the brain increases, as I hope it will, then future neuroscientists will be able to pick from a suite of compounds that block specific tissues in the brain (not just receptors) from interacting with the rest. Of course, this program might not be able to tell the difference between basic requirements of consciousness and the provincial arrangements of our own brains - or primate, or mammalian, or vertebrate brains. (Note: I am not advocating the kidnapping of and experimenting on aliens, though if you have one, call me). It seems to me the two components of consciousness in our normal cognition that would be of immediate interest, and that are relatively isolable in anatomical terms, are memory (sensory, short- and long-term) and goal-oriented behavior, specifically with regard to pain and pleasure.

Regarding reductive investigations of memory: is it possible to remove consciousness in a way that is reportable later? In other words, say you find a molecule that shuts off only the here-and-now experience, but not anything else, including memory. While the subject is under the influence, she's the same as she ever was: lucid, talking, responding - a classic philosophical zombie. You give the wash-out. She reports that she now remembers the conversation, remembers what happened while she was "under", but somehow didn't experience it at the time. Can this even be meaningful?

Science fiction thought experiments of memory implants come to mind: in the movie Blade Runner, androids that live only four years are given childhood memories so they don't realize they're androids; did they experience those childhoods? The reverse situation - that is, experience, but no memory, known as anterograde amnesia - is the subject of the movie Memento (category #1, abnormal human), and does occur in the real world, but there are also abundant real-world examples in category #2, as anyone who consumes alcohol can learn. A personal account illustrates this. At a friend's birthday party I overindulged. Among the many escapades that evening which charmed and delighted my fellow party-goers was the following groan I emitted while sitting in the hot tub: "Oh god, I'm going to be sick...what's the point of blacking out if you have to experience it anyway." At which point they wisely shooed me from the tub, and my prophecy was realized.

I tell this story not so you'll worry more about my liver than my brain, but because the interesting part is that I don't remember it. I did black out - I know this only from (effectively) third-hand accounts. Thanks to my ethanol-clogged NMDA receptors, I have no memory at all of that event (or many others that evening). Did I experience nausea? Where did the experience "go"? What evidence do I have that at that moment I was not a zombie, even to myself? One solution is that I'm silly to worry about where the experience goes - that at least in human brains, experience requires only sensory memory (in us, a second or so), or that it requires sensory and short-term memory (in us, five to ten minutes). But is there a drug, even in principle, like the one in the experiment above, that could have saved me from the experience of nausea but preserved the memory? That's not an entirely dispassionate question, because I would make that trade in a second.

The second area of investigation - goal-seeking behavior - raises questions about whether it is meaningful to talk about consciousness in the absence of pain or pleasure. I'm not talking about full-body analgesia; I'm talking about not experiencing psychological pleasure or discomfort in response to thoughts, like looking forward to seeing your kids at the end of the day or worrying about your mortgage. Granted, that's a little more involved than questions about memory.

I think a continuing focus on specific parts of the brain in a reductive search for absolute boundaries of consciousness - if indeed there are any - is wise for more than just theoretical reasons. Any research program that ignores its sources of financial support is one that won't move along very quickly. The hard problem of consciousness, while I consider it the central question of philosophy and science, is not one that promises any immediate application that can return effort invested in it. I'm obviously sympathetic to philosophy and science for its own sake, or I wouldn't be writing this out of personal passion. But we all want to see progress on this question, and being able to sell the research based on applications to Alzheimer's, ADHD, and schizophrenia would go a long way toward obtaining support and public awareness. This work will require technology that we don't yet have, which will take money to develop - or technology that we do have, which takes money to obtain and use. And in the end, I can't think of a better outcome anyway than that this research could end up helping human beings suffering with cognitive disorders.
