The problem of understanding consciousness is traditionally broken into the easy problem (how the brain works as a computer) and the hard problem (how the brain creates subjective experience). Indeed, the hard problem seems so intractable, progress toward a solution is so tricky to measure, and it's so unclear what kind of answer would even count as an explanation (we really don't know how to start getting there from here) that it's been called more of a mystery than a problem.
Here are possible reasons the hard problem still seems more like a mystery than a problem:
- It's an incredibly difficult problem: our science so far is not nearly sufficient to explain it, and/or our brains have difficulty with these explanations. (Whether this is a feature of brains in general or of human brains right now is another question.)
- There are bad ideas obstructing our understanding. This is an active failure of explanation, rather than a passive failure as above. We have a folk theory or theories (à la Churchland) about subjective experience that we don't know we have and/or are not ready to discard, and this is complicating our explanations. Our account is like trying to explain chemistry with vitalist doctrine, or the solar system with the Earth at the center (probably worse than the latter, which can be done; it's just pointlessly difficult and messy).
- The first-person-ness of experience is a red herring. This may be one specific instance of the bad ideas above. When you're explaining an object in free fall, no one worries that you yourself are not experiencing acceleration. The explanation works regardless of where the explainer is.
- Some non-materialist, irrational truths hold in the universe. If that's "true," I don't know how we could ever know it.
Explaining things necessarily involves trying to build a bridge from what we already know to the not-yet-understood thing; but so far this endeavor has the flavor of checking the internal consistency of what we already know about nervous systems. It seems that we're mostly motivated to explain consciousness because we're bothered by the resistance of this idea to explanation. (I certainly am.) But if we don't know where we're going yet, this kind of approach might not get us very far. One obscure but interesting example of obsession with the internal consistency of theories comes from pre-Common Era China, where Taoist logicians agonized over the relationships between the properties described by yin and by yang. Yang things are both hard and white. So what, then, is the logical relationship between hard and white? They're both yang, they reasoned, so there must be such a relationship.
Of course we wonder where the Taoists thought that kind of tail-spinning would get them, if they believed they were answering pretty deep questions. And so we should turn the same question on ourselves: why do we care about consciousness? What impact will a theory of it have once we understand it?
The obvious answer is that it's ultimately a moral question. It's not clear whether the affective components of experience (at base, pleasure and pain) are necessarily a part of consciousness, but they are certainly possible in conscious beings, because I experience them. And even though saying this makes verificationists mad, I extend credit to the apparent first-person viewpoints of other living things (e.g. other humans, dogs, cats) and grant that they experience them too.
Consequently, if neuroscientists of the future who build nervous systems, and AI engineers (if those turn out to be two different professions), believe that their lab subjects can experience consciousness, then it becomes incumbent on them to understand what those subjects are experiencing. If we're capable of creating things that are conscious, we have to avoid creating ones that are predisposed to suffer. Indeed, with such an explanation we may take notice of other structures in the universe that can suffer but that we didn't even realize were conscious before.
Once we understand the material basis of conscious awareness (if there is such an explanation), we can start asking some heavily Singularitarian questions: whether mind uploading, the transporter problem, etc. really involve extensions of a self or mere copying, and whether there are meaningful differences between those alternatives.
Finally, understanding the basis of consciousness may allow us to alter the structure of conscious beings in a way that decreases their suffering and expands their happiness, from first principles. Currently we're restricted to what I hope will one day seem like very limited, clumsy manipulations of nervous systems to decrease suffering: taking consciousness away with anesthetics, blunting suffering with NSAIDs, antipsychotic medicines, and talk therapies, and, beyond medicine, all the behaviors we engage in minute to minute to enhance flourishing and decrease suffering in ourselves and the beings around us.