Consciousness and how it got to be that way

Monday, September 9, 2013

Churchland's Critique of the Mysterians - Plus Lazy, Middling and Rigorous Mysterian Positions

The mysterian position is often unclear as to exactly what assertion is being made, as well as the domains to which it applies. Boiled down, it is this - "The mind cannot understand itself" - although sometimes the claimed unknowability extends further. Often when mysterians throw up their hands, it's in attempts to explain the basis of conscious experience (e.g. Colin McGinn), but sometimes the concept is applied to other domains (language and logic, like Chomsky) or more broadly to all knowledge itself. Patricia Churchland is probably foremost among the mysterians' critics and is having none of this, asking (paraphrasing) "How can mysterians know what we can't know?" (Summary here.) But there are good examples of formally rigorous mysterianism that we already have access to; they're just more limited than what the more aggressive mysterians are implying.

Churchland regards this surrender to ignorance as no less than anti-Enlightenment, fightin' words if ever there were. While I certainly side with her in terms of approaching the universe with epistemological optimism, the problem is that there are some formal proofs of unknowability in some domains. She is surely aware of these, but I'm not aware of any arguments she's made about them. Because it's generally unclear what argument mysterians are making (and their critics are attacking) and what domains it applies to, here is a set of useful distinctions between positions.

Lazy mysterianism. "There is a problem which seems intractable and on which we don't seem to have made much progress; that's because we can't make progress" (classically, conscious experience). I think it's clear to all that this kind of giving up is not only un-rigorous, it's needlessly pessimistic.

Middling mysterianism, or human-specific provincial mysterianism, or practical mysterianism. This is the argument that there are things which humans cannot know. But this argument is making a provincial claim about the limitations of the human brain, not about the universe in general. Without a doubt, the hardware limitations of human brains place constraints on working memory and network size that limit the thoughts we can think, so it's a fair point that if we grant that my cat Maximus cannot understand relativity, then there are things that humans cannot understand as well. (Maximus is limited even for a cat, but that's not the topic of the post.) The much more controversial part of this brand of mysterianism is when the argument is made not from "commodity" limitations (not enough of something like working memory; you can't think a two-million-word sentence all at once) but rather from limitations in the structure of human thought, such that there are logically valid structures it cannot contain. I'm not going to pursue that possibility, formally or by analogy, but it's worth remembering: an animal with a nervous system that evolved to mate, avoid predators, and find fruit on the African savannah is now insisting that it is a perfect proposition-evaluating machine that can understand everything there is to be understood in the universe. (At the other extreme are people like Nagel, who say that the lowly origin of our brains means we can't know anything.) Especially if other kinds of nervous systems on the planet cannot understand everything, the burden is strongly on those who would argue that everything is suddenly within the reach of humans. More simply: if you don't think you could teach the alphabet to Maximus the stupid cat, the burden of proof is on you to explain why everything can be taught to a human. What is so fundamentally different about the two?


[Cartoon from Gary Larson's Far Side.]

The key question here is whether it's possible to tell random noise in the universe from something that is comprehensible but beyond us - for example, an advanced self-programming computer or a superintelligent alien trying to explain some theory to us. You might even say that something is unknowable relative to current knowledge, because understanding requires prior learning that changes your brain state - think how impossible it would be for the otherwise-bright Aristotle to understand the ultraviolet catastrophe without the intervening math and physics - but obviously this didn't mean it couldn't be understood, ever, period. And now we're back to lazy mysterianism, because eventually the UV catastrophe was understood. So here is a useful criterion: the difference between lazy and provincial mysterianism is whether you can modify your brain state with experience in a way that makes you understand, as Planck and the people after him did.

Some would object here and pursue an angle that argues "if a superintelligent alien tells us something that's 'beyond us', but we can differentiate nonsense from things that we just don't understand, then we actually do understand it" - which brings us to our next point.

Rigorous mysterianism. Rigorous mysterianism is true. We have very good reason to think that there are some things we can't in principle know. Turing's halting problem is about as good an example of this as any. Here we have a well-studied formal proof that no general procedure can decide whether an arbitrary program will ever halt on a given input. Formal mysterianism! It certainly seems that we have good reason to believe this is an insoluble problem, because we have a rigorous demonstration of it. An over-optimist might say, "Actually the halting problem is just lazy mysterianism. Right now in 2013 it seems like we can't solve the halting problem, but that's just a limitation of modern knowledge." Let's be consistent then; by such an argument, all things we don't know will eventually be knowable, including how to go faster than light, how to know both the position and momentum of a particle, and any number of other things. Attacking mysterianism cannot mean rejecting even formal, positive limitations on knowledge, or else quodlibet sequitur - anything follows.
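For readers who want the flavor of the argument, here is a minimal sketch of the standard diagonalization proof in Python (the textbook proof, not anything specific to this post). The `halts` oracle is hypothetical - the whole point is that no such general procedure can exist:

```python
# Sketch of Turing's diagonalization argument. Assume, hypothetically,
# an oracle halts(program, argument) that always correctly answers
# whether program(argument) eventually finishes.

def halts(program, argument):
    """Hypothetical oracle: True if program(argument) eventually halts."""
    raise NotImplementedError("no general algorithm can do this")

def contrarian(program):
    """Do the opposite of whatever the oracle predicts about a program
    run on its own source."""
    if halts(program, program):
        while True:   # oracle says "halts" -> loop forever
            pass
    else:
        return        # oracle says "loops" -> halt immediately

# Now ask the oracle about contrarian(contrarian):
#   - If it answers True (halts), contrarian loops forever: contradiction.
#   - If it answers False (loops), contrarian halts at once: contradiction.
# Either way the assumed oracle is wrong about some input, so a fully
# general halts() cannot exist.
```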


Commodity Limitations on Human Cognition and What They Mean For "Understanding"

The interesting points are to be found in the middling, provincial position, because it forces us to define what it means to know or understand something. I may be conflating two terms here: is conscious understanding required for knowledge? Is understanding even a real thing? Think of all the times you thought you understood something, really had a genuine, honest sense of it, only to have reality eventually show you otherwise. This doesn't mean that understanding is just a sense experience, and always a delusional one at that - though that's one possibility - but thinking you understand something is not a solid guide to whether you are correct. Only external reality is, hence the empirical scientific method. This difference, between understanding something and having knowledge of something (or the possibility of knowledge about it), may be key to clarifying mysterian positions. And this approach - examining the cognitive processes involved in knowledge, and the physical structures that underlie them - should appeal to neurophilosophers like Churchland and help us clarify what we're talking about.

Here's a thought experiment illustrating a commodity limitation on human cognition. An alien lands outside your door and demonstrates some physically amazing thing (teleportation), and says, "Now I'm going to explain it to you and you will understand it. The shortest way to explain it to a human is a twelve-billion-word sentence." (Which takes roughly 200 years to listen to if you sleep 8 hours a day and listen to the alien for the rest.) That means you will never understand teleportation, period. That may be different from humans in general being unable to understand it.

So you say, "I'll be dead by then, and you might have a nice voice for an alien, but listening to you sixteen hours a day for the rest of my life would be unpleasant anyway. Can I get a large group of people to help me?" And the alien is nice, so it says "Sure." So you start. Because some people are on night shift, you can do it in about 130 years. So in 2143 they get to the end of the sentence, build a machine, and bang-o, teleportation.
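As an aside, the back-of-the-envelope numbers above come out roughly right if you assume a brisk speaking rate of around 170 words per minute - that rate is my assumption for illustration, not anything the alien specified:

```python
# Rough check of the listening-time estimates, assuming ~170 spoken
# words per minute (an assumption; the figures in the post are
# order-of-magnitude anyway).

WORDS = 12_000_000_000        # the twelve-billion-word sentence
WPM = 170                     # assumed speaking rate

def years_to_listen(hours_per_day):
    words_per_day = WPM * 60 * hours_per_day
    return WORDS / words_per_day / 365.25

print(round(years_to_listen(16)))   # one listener, sleeping 8 hours a day: ~201 years
print(round(years_to_listen(24)))   # shifts covering the whole day: ~134 years
```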

In this case: does anyone still "know" how the teleporter works? Obviously yes, you say - they built it! But everyone heard only their own three-month subordinate clause and has their own piece of the machine that plugs into the rest; no one has all the knowledge. They know that when the parts are put together in such-and-such a way and you put Maximus the stupid cat in this box and press this button, Maximus the stupid cat comes out on the other end - but Maximus is even lazier than he is stupid, so soon enough even he figures out that when you press the red button, you can get to the other room where the food dish is without having to walk all the way there. Unrealistic? Some dogs are smart enough to take the subway in Moscow. They just haven't built their own yet, and no school has been able to teach them. Similarly, it's not a teleporter, but this cat has certainly figured out how to amuse itself with a toilet. Do they understand the subway and the toilet? Do you understand how your phone works? Does any one person at a smartphone manufacturer know how their product works?

Let's go back to the initial alien landing. Now let's say the alien isn't such a chatty Cathy, and the sentence only takes forty years to say, but this alien is fixated on having the first person it meets, and just that person, as an audience. For the sake of advancing human technology, that person agrees to become a transcription-monk, giving his life to writing down the alien's long sentence. At the end, voila, the monk builds a teleportation device. Now does he know how it works? Does he understand it? "Now this time it's a home run," you say. "He clearly understood the whole sentence and he built the thing all on his own!" Let's come back to this in a bit.

Of course you see the trick here. The long alien sentence is the history of science and engineering, standing on the shoulders of giants. The trick that humans have and that animals don't - or at the very least don't have as well-developed as we do - is language, which allows us to overcome those commodity limitations on knowledge, and even our own mortality, in order to cooperate with others and advance our ability to predict the behavior of the universe and choose actions accordingly.

The best analogy to the alien sentences is mathematical proofs. Andrew Wiles solved Fermat's last theorem; the proof is over a hundred pages. The vast majority of humans do not understand it (and, I submit, could never have understood it, even with the same schooling as Wiles). But does Wiles understand it? Don't roll your eyes! I'm not asking whether he bumbled about scribbling randomly until he said "Oh dear, I finally seem to have solved it, but don't ask me how." But Wiles, brilliant though he is, still had to write it down (of course), because he understands - that is, holds in working memory - the steps one or a few at a time, not the whole proof. His network density is what differentiates him from subway dogs and from me. Similarly, the people who make smartphones don't understand everything about solid-state physics or even their own product, but individuals understand the pieces (because of their networks) and how to plug them together with other pieces they don't understand (like dogs on the subway).

The knowledge of how to make a smartphone clearly exists in the world, yet no one person understands it (not all of it; to claim otherwise is to claim that I actually understand Wiles's solution because I can understand one page of the proof and believe mathematicians when they tell me it fits with the rest). There's too much information, and even Wiles can't grasp nearly all of it simultaneously. For that matter, even multiplication of large numbers cannot be understood in one piece, but we can still tell it's correct. Understanding as it's usually described, and knowledge - knowledge verified by behavior, by experiment, by knowing how to do something - are two different things. There are two arguments buried there that I'll unpack.
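To make the multiplication point concrete, here's a toy sketch (the numbers are arbitrary examples of mine): the product is verified one small, checkable step at a time, and confidence in the answer comes from the chain of steps rather than from holding the whole computation in mind at once.

```python
# Toy illustration of verifying a large multiplication piece by piece.
# Each partial product is a small step anyone can check; no single step
# requires grasping the whole computation at once.

def verify_by_partial_products(a, b, claimed_product):
    """Check a * b == claimed_product one decimal digit of b at a time."""
    total = 0
    for place, digit in enumerate(reversed(str(b))):
        total += a * int(digit) * 10 ** place   # one small, checkable step
    return total == claimed_product

a, b = 982_451_653, 472_882_049                  # arbitrary large numbers
print(verify_by_partial_products(a, b, a * b))       # True
print(verify_by_partial_products(a, b, a * b + 1))   # False
```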


Conclusion

First, whatever we mean by knowledge that it is possible for humans to have, quantitative commodity limitations cannot rule it out. That is to say, your limited working memory (which language helps us overcome) does not determine what is outside of possible human knowledge. Time-to-understanding doesn't count either; even if we can't understand something now, people might understand it later, building on what we've already learned. To cling to "understanding" as meaning we can think all the thoughts together, we must argue either that there are people who hold in their heads, at one point in time, the entire workings of a smartphone (there are not), OR we must consider smartphones mysterious, which is stupid, because we make the things. Whatever provincial (human-specific) limitations exist on possible knowledge (if any), they must be based on network density and quality. This is what allows profoundly improved language in humans, and it is why cats can't do multiplication, and why almost no humans can understand the solution to Fermat's last theorem, but a few can.

Second, and more controversially, it may be more useful to define knowledge as information that affects decisions and behavior so as to consistently produce an expected result, regardless of the subjective experience of that information, i.e. the sense of understanding. Brains and computers both use representations that allow them to behave in ways that interact with the rest of the universe predictably; this is knowledge, even if it's incomplete or imperfect. So Wiles, Moscow dogs, and I all have knowledge of how to ride the subway. Wiles and I have knowledge of smartphones. Only Wiles has knowledge of how to solve Fermat's last theorem. Even Wiles does not understand it in full, only in tiny pieces. It seems a short jump to say that it would not be difficult to build a machine that can navigate the first two problems; and we already have systems that can check the last one (though not solve it in the first place). That is to say, computers have representations that let them solve those problems, and therefore have knowledge. But computers don't understand things (have a subjective experience of logical perception), whereas humans can have this sense - but it's notoriously unreliable, certainly not grounds for claiming knowledge to others, and limited to small pieces of most trains of reasoning.
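As a toy illustration of that last point - a proof checker verifying each inference mechanically, with no felt sense of understanding anywhere in the loop - here is a trivial machine-checked statement, written in Lean purely as an example of such a system:

```lean
-- A proof assistant checks each step mechanically; nothing in the loop
-- has a subjective sense of understanding, yet the result counts as
-- verified knowledge in the behavioral sense described above.
example : 2 + 2 = 4 := rfl

example (n : Nat) : n + 0 = n := Nat.add_zero n
```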
