Consciousness and how it got to be that way

Saturday, August 21, 2010

Thoughts on Newcomb

I'm currently reading Robert Nozick's Socratic Puzzles. It contains two essays about Newcomb's Problem. If you've not encountered Newcomb before, a brief description follows, and if you want more, the most extensive discussion I've seen anywhere is at Less Wrong. I can sum up this post thusly: how can Newcomb be a hard problem?

Imagine a superintelligent being (a god, or an alien grad student as Nozick imagines, or more plausibly a UCSD medical student; it's up to you). This superintelligent being says that it can predict your actions perfectly. It shows you two boxes, Box #1 and Box #2, into which it will place money according to rules that I will shortly give. As for you, you have two options: either open both boxes and take the money from both if there is any, or open only Box #2 and take the money from just Box #2. Now here are the rules, and the kicker. Since the being can predict your actions perfectly, it does the following trick. If it predicts that you're going to take just Box #2, it will place a thousand dollars in Box #1 and a million dollars in Box #2. So in this instance you will get a million dollars, but you'll miss out on the thousand in Box #1. On the other hand, if it predicts that you will take both boxes, the being will place a thousand dollars in Box #1 but nothing in Box #2. In that case, you end up with just a thousand dollars. In other words: the being always puts a thousand dollars in Box #1, whereas in Box #2 there's either a million or nothing.

So, now the superintelligent being has gone back to its home planet of La Jolla, and you are left wondering what to do. Assuming you want the most money possible, which option do you pick and why?

Figure 1. Decision table for Newcomb's Problem.
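For concreteness, here is that decision table written out as a minimal Python sketch (the dollar amounts are the ones given in the problem statement above; nothing else is assumed):

```python
# Newcomb decision table: (being's prediction, your actual choice) -> dollars you walk away with.
# "one" = open only Box #2; "both" = open both boxes.
PAYOFFS = {
    ("one",  "one"):  1_000_000,  # predicted one-boxing, you one-box: the million in Box #2
    ("one",  "both"): 1_001_000,  # predicted one-boxing, you take both: the prediction was wrong
    ("both", "one"):          0,  # predicted two-boxing, you one-box: Box #2 is empty
    ("both", "both"):     1_000,  # predicted two-boxing, you take both: just the grand in Box #1
}

for (prediction, choice), payout in PAYOFFS.items():
    print(f"being predicts {prediction}-boxing, you choose {choice}: ${payout:,}")
```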


There's been a lot of discussion about Newcomb's Box, and not all of the responses adhere to the standard one-or-both answer. But I take the point of this particular logic-koan to be that we're to decide based on the givens of the problem which of the options we would take, so cute answers about trying to cheat, making side bets, etc. are wasting our time. If we're going to introduce those kinds of non-systematic "real world" options into this exercise, then we're going to need a lot more context than we currently have to make a decision. In fact after ten years living in Berkeley I'm surprised that I haven't yet met someone on a street corner claiming to be an alien with a million dollars for me, but if I did I would walk away and not play at all. (Come to think of it, I frequently get similar spontaneous offers of a million dollars or more in my spam folder which I ignore at my peril.)

My own answer is to take only Box #2, expecting to get a million dollars. Why? Because I want a million dollars, and the superintelligent alien is apparently smart enough to know that I'll gladly cooperate and not try to make myself unpredictable (more on this in a moment). Why try to be a smart-ass about it? (It's both to your disadvantage and, per the terms of the problem, not even possible.) The being told you where it would put the million dollars (or not) based on your actions, and it's a given in the problem that the being is perfect at predicting those actions. This is what gives the both-boxers fits. They say one-boxers are idiots because the being might have gotten my choice wrong: if it thought I would choose both, it put nothing in Box #2, and then if I open only Box #2 I get zero (because the alien thought I was going to take both and at least walk away with a grand, but it was having an off day).

I will be beating the following dead horse a lot here: the problem states you have a reliable predictor. Why does Figure 1 above even have a right-side column? If you assume the being is fallible, then you're not thinking about Newcomb's problem as stated any more: you're ascribing properties to the being that either conflict with what is given in the problem, or you're making stuff up. (Maybe the alien is fallible and copper and zinc are toxic to it! That way it won't predict in time that I'm going to kill it by throwing my spare pennies and brass keys at it, and then I can get the full amount from both boxes! Sucker. Ridiculous? No more ridiculous than worrying about the given perfect predictor's not being perfect.)

Figure 2. Correction to Figure 1. This figure is the actual table for Newcomb's Problem. Figure 1 is somebody else's problem, not Newcomb's, one that features fallible aliens.

Complaints about the logic of the Box #2-only response (which is the majority's response, if the ones Nozick cites in one of his essays are representative) typically focus on two things: one, that we're assuming reverse causality, that we must think our choice of the boxes will somehow cause there to be a million dollars in Box #2; and two, that it suggests we don't have free will. I dismiss the second objection out of hand because the whole point of the problem is that the being is a reliable predictor of human behavior; for that one aspect of your behavior, in this problem, no, you don't have free will. Look: we already accepted a being with perfect predictive powers. Without that, the problem changes and we have to guess how likely the being is to get it right. But as long as we have Mr./Ms. Perfect Predictor, the nature or mechanism is unimportant. You can justify how it accomplishes this however you like (we don't have free will in this respect, or the alien can travel through time), but the point is, any cleverness or strategy or philosophizing you do has already been taken into account by the alien.

But things can be predicted in our world, including human behavior, and for some reason this doesn't seem to provoke outcries about undermining the concept of free will. Like it or not, other humans predict things about you all the time that you think you'd have some conscious control over - whether you'll quit smoking, your credit score, your mortality - and across the population, these predictions are quite robust. They don't always have the individual exactitude that our alien friend does, of course. But at the very least you must concede that if our alien friend is even as smart as humans, then after playing this game with us multiple times, its ability to predict which box you take would be greater than random chance, and you would get some information about which box you should pick from that fact alone. Being completely honest, I think a lot of the resistance to one-boxing comes from the repugnance with which some people regard the idea that their behavior is extremely predictable. (Hey! News flash: it is.) Nozick even offers additional information in his example by saying that you've seen friends and colleagues play the same game, and the being predicted their choices reliably every time. Come on, Plato, do you want a million dollars or not? Absolute no-brainer!
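To put a number on the "better than random chance" point, here is a hedged back-of-the-envelope sketch. It treats the being's accuracy p as a free parameter, which the problem as stated does not actually allow; the dollar amounts are the standard ones.

```python
def expected_values(p, box1=1_000, box2=1_000_000):
    """Expected dollar payoffs when the being predicts your choice correctly with probability p."""
    ev_one_box = p * box2                              # right: $1M in Box #2; wrong: Box #2 is empty
    ev_two_box = p * box1 + (1 - p) * (box1 + box2)    # right: just Box #1; wrong: both boxes full
    return ev_one_box, ev_two_box

for p in (0.5, 0.5005, 0.6, 0.9, 1.0):
    one, two = expected_values(p)
    print(f"accuracy {p:>6}: one-box EV ${one:>12,.0f}   two-box EV ${two:>12,.0f}")

# The two curves cross at p = (box1 + box2) / (2 * box2) = 0.5005: a predictor only barely
# better than a coin flip already makes one-boxing the better bet in expectation.
```

Which is the point above: any predictive ability at all is information that two-boxing throws away.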

The first objection (regarding self-referential decision-making) is slightly more fertile ground for argument, and it's the one to which Nozick devotes the most time. The idea is that you're engaging in circular logic: I'm deciding to one-box, therefore the being knew I would one-box, therefore I should decide to one-box. (Again: what's the whole point of the exercise? That whatever decision you're about to make, the being knew you would make it, including all the mental gyrations you're going through to get to your answer.) Nozick gives the example of a person who doesn't know whether his father is Person A or Person B. Person A was a university scientist and died in mid-life of a painful disease that would certainly be passed on to all offspring; children of Person A would be expected to display an aptitude for technical subjects. Person B was an athlete, and likewise his children would be expected to display an athletic character. So the troubled young man is deciding on a career, noting that he has excelled equally in both baseball and engineering. "I certainly wouldn't want to have a painful genetic disease. Therefore, I'll choose a career in baseball. Since I've chosen a career in baseball, that means my true prowess is in athletics and therefore B was my father, and I won't get a genetic disease. Phew!"

Yes, that would be a ridiculous decision process. The difference between the two is this: in Newcomb, the category the decider is in the whole time is defined as determining the decision, whereas in Nozick's parallel it does not (he could have gone either way). Whatever you decide in Newcomb, the alien knew you would go through your whole sequence of contortions, and you were in that category all the while. Whether such a deterministic category is meaningful is a different, and probably more interesting, question than Newcomb as-is.

Here's another example: you're in a national park, following a marked trail. You follow the trail until you come to a frighteningly steep rock face with only a set of cables hammered into it. You reason, "I am about to proceed up these cables. If I'm about to do it, it's only because my action was anticipated by the national park people who design the map and trails; they can predict my actions as a reasonably fit and sensible hiker, and furthermore they put these cables here, and they're not in the business of encouraging people to do foolishly dangerous things. Therefore, because I am going to do it, it is safe and I should do it." (Any reader who's ever braved the cables on Half Dome in Yosemite by him- or herself without knowing ahead of time what they were getting into has had this exact experience.) This replicates the decision process relating to the for-some-reason mysterious perfect predictor: "I am about to open Box #2 only. If I'm about to open it, the superintelligent being would have put a million dollars in it. Therefore I should open Box #2 only." In fact, we go through such circular reasoning processes all the time as they relate to other human beings who are predicting our actions, either in general or specifically for us: I am going to do A, and A wouldn't be available unless other agents who can predict my actions reasonably well knew I would come along and do A, therefore I should do A. This may still be an epistemological mess (something I'm not going to debate here), but the fact is that we use this kind of reasoning constantly, living in a world shaped, in the ways most salient to us, by other agents who can predict our actions.

Incidentally, I intentionally used the example of the national park because the fact that we use that kind of reasoning becomes obvious when you're trying to decide whether to climb something or undertake some otherwise risky proposition in a wilderness area rather than on developed trails with markers: you become acutely aware that this circular justification heuristic, based on other agents predicting your actions, is suddenly unavailable. And when it becomes available again (five miles further on, you run across an old trail), the arrangement seems quite obvious.

As a final note, as in other games (like the Prisoner's Dilemma), the payouts can be critically important to how we choose. As the problem is traditionally stated (always a thousand in Box #1, either zero or a million in Box #2), it actually makes the decision quite easy for us, even if we're worried about the fallibility of our brilliant alien benefactor (which again, if we are, then what's the point of this whole exercise!?). Making a decision that throws away a thousand for a crack at a million is, for most humans in Western democracies, not a bad deal. (If someone could show me a business plan that had a 50% chance of turning a thousand bucks into a million within the few minutes that the Newcomb problem could presumably take place in, I'd be stupid not to take it!) On the other hand, if I lived in the developing world, made $50 a month, and had six kids to feed, I might think harder about this. (This is akin to the St. Petersburg lottery problem, in which the expected utility of the same payout differs between agents based on their own context, and it can be applied to other problems as well.) Similarly, if it were five hundred thousand in Box #1 and a million in Box #2, things would be more interesting, for my own expected utility at least. Opening a box expecting a million and getting nothing doesn't hurt so much if you would only have gotten a thousand by playing it safe and opening both; it would be pretty bad if you'd expected a million, got nothing, and could still have had half a million if you'd played it safe. (For me. Bill Gates would probably shrug.)
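Here is the same back-of-the-envelope sketch rerun for those more interesting payoffs (again treating the predictor's accuracy as a free parameter, and counting plain dollars rather than any particular agent's utility):

```python
def breakeven_accuracy(box1, box2):
    """Predictor accuracy above which one-boxing has the higher expected dollar value."""
    # one-box EV: p * box2;  two-box EV: p * box1 + (1 - p) * (box1 + box2)
    # Setting them equal and solving for p:
    return (box1 + box2) / (2 * box2)

print(breakeven_accuracy(1_000,   1_000_000))  # 0.5005 -- the traditional payoffs
print(breakeven_accuracy(500_000, 1_000_000))  # 0.75   -- half a million sitting in Box #1
```

With half a million guaranteed in Box #1, the being has to be right three times out of four before one-boxing is even the better gamble in expectation; and for an agent whose utility in money is sharply concave (the $50-a-month case), the sure thing looks better still.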

Overall, the whole exercise of Newcomb's Box, as given, seems to me uninteresting and obvious. But enough smart people have gone on debating it for long enough that I must be some kind of philistine who's missing something about it. Nonetheless the arguments I've seen so far are not compelling; feel free to share more.

Wednesday, August 4, 2010

Hints That You're Living in a Simulation; Plus, What Is a Simulation?

See Bostrom's simulation argument for background. From a practical standpoint, you might be suspicious that you live in a simulation if you inhabit a world with the following characteristics:

Hint #1) Limited resolution. A simulation would be computation-intensive. It would be useful to have tricks that increase the economy of operations, but in ways that do not compromise the consistency of the simulation for the players. One such trick would be to set an absolute upper limit on resolution (or a lower limit on the size of the elements that make up the "picture") that is below the sensory threshold of the players. These elements could variously be called pixels or quarks. Similarly, it would behoove the simulators to set a maximum time resolution, i.e. a maximum frames-per-second, with the frame interval called the Planck time. Furthermore, the simulation's computing power is spared by a statistical method of calculating relationships between entities in the simulation (i.e. quantum mechanics), even though it may look, at the scale of the game players or simulated entities, as if the universe maintained quantitative relationships calculated to arbitrary precision. (Related question: is it possible in principle, given the physics of our universe, for something the size of a bacterium or virus to "be conscious of" this gap between the behavior of the Newtonian and quantum realms, at a very basic sensory level? If not, isn't it interesting that our universe is such that there can be no consciousness operating on scales that would expose the twitching gears behind the scenes?)
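A toy sketch of that discretization trick (every number and name here is illustrative; MIN_LENGTH and TICK are just stand-ins for an in-world Planck length and Planck time):

```python
MIN_LENGTH = 1e-3   # illustrative "pixel size" (the in-world analogue of a Planck length)
TICK = 1e-2         # illustrative "frame interval" (the analogue of a Planck time)

def snap(x: float) -> float:
    """Round a coordinate to the coarsest grid the players can't perceive."""
    return round(x / MIN_LENGTH) * MIN_LENGTH

def step(position: float, velocity: float) -> float:
    """Advance one frame; anything finer than MIN_LENGTH or TICK simply never gets computed."""
    return snap(position + velocity * TICK)

x = 0.0
for _ in range(5):
    x = step(x, velocity=0.1234567)
    print(x)   # the trailing digits of the "true" trajectory never exist inside the simulation
```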

Hint #2) There are limits on which spaces within the game can be occupied by players or sims. In the old Atari 2600 Pole Position game, you couldn't just go driving off the track and through the crowd even if you didn't care about losing points; the game just wouldn't let you. Similarly, the total space in our apparent universe that we occupy, or directly interact with, or for that matter even get any significant amount of information from, is an infinitesimally small part of the whole. Unless you're in a submarine or in orbit, you don't go more than 200 meters below sea level or 13,000 m above it. (That's a volume of roughly 2.1 x 10^18 m^3 in which, for all practical purposes, the entirety of human history has occurred; double that figure, and that's the volume in which all of evolutionary history has occurred.)
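A rough order-of-magnitude check (a sketch, assuming a mean Earth radius of about 6,371 km; the land-only figure assumes human history has been essentially terrestrial, which is my reading of the number quoted above):

```python
import math

R_EARTH = 6_371_000        # mean Earth radius in meters (assumed)
LAND_AREA = 1.49e14        # Earth's land area in m^2, about 29% of the surface (assumed)
BAND = 200 + 13_000        # thickness of the band: 200 m below sea level to 13,000 m above

surface_area = 4 * math.pi * R_EARTH ** 2
print(f"whole-surface shell: {surface_area * BAND:.1e} m^3")  # ~6.7e18 m^3
print(f"land-only shell:     {LAND_AREA * BAND:.1e} m^3")     # ~2.0e18 m^3, the order of the figure above
```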

Hint #3) Beyond the "active game volume" as described above, dab a few pixels here and there in an otherwise almost entirely dark and empty volume. Make them so far away that sims can't possibly interact with them. Reveal additional detail as necessary whenever someone happens to look more closely at them. (And there's another trick: objects in this simulation are only loosely defined until one of the players interacts with them, "collapsing the wave function". Yeah, that's what the programmers will call it, that's the ticket.)

Hint #4) Even within that limited location, make the active game volume wrap around. That way the simulators get rid of edge-distortion problems, just as one does with a toroidal grid in Conway's Life. A sphere is the best way to do this. Therefore, work out the physics rules of the simulation to favor spheres.

Hint #5) Make each state of the simulation dependent on previous states of the simulation, but simplify by dramatically limiting the number of inputs with any causal weight. The simulators can limit computation by having only mass, charge, space, and their change over time determine subsequent frames.
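Hints #4 and #5 together describe something like a toroidal cellular automaton: the grid wraps so there are no edges, and each frame is computed purely from the previous frame using a tiny set of causally relevant inputs. A minimal sketch, using Conway's Life itself as the stand-in physics:

```python
def life_step(grid):
    """One frame of Conway's Life on a wrap-around (toroidal) grid: no edges, and the
    next state depends only on the previous state and a handful of local inputs."""
    rows, cols = len(grid), len(grid[0])
    nxt = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            neighbors = sum(
                grid[(r + dr) % rows][(c + dc) % cols]   # modular indexing = wrap-around
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
            )
            nxt[r][c] = 1 if neighbors == 3 or (grid[r][c] and neighbors == 2) else 0
    return nxt

# A glider on a 6x6 torus: it never hits an edge, it just comes back around.
grid = [[0] * 6 for _ in range(6)]
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[r][c] = 1
for _ in range(12):
    grid = life_step(grid)
print(*(" ".join("#" if cell else "." for cell in row) for row in grid), sep="\n")
```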

Hint #6) If for some reason it is important for the entities in the simulation to remain ignorant of their existence as part of a simulation, the simulators could make sure the entities are accustomed not only to these kinds of stark informational discontinuities but to profound differences in the quality of awareness, both within themselves and between each other. That is, the sims will accept not just that the vast majority of the universe (as seen in the sky at night) is interactively off-limits to them, but also that their own awareness of it, and their ability to connect the dots, will vary dramatically over time. That way, if there is any need to interfere and make adjustments (to stop someone from figuring out the game), it won't strike the sims as strange. (Forgetfulness, deja vu, mental illness, drugs, varying intelligence or ability to concentrate on math, death of player-characters before they can learn too much?)

Hint #6 does raise a very important question: why would the simulators give a damn if we knew we were in a simulation? So what? What would we do about it, sue them? If Pac-Man woke up and deduced that he was a video game character, but still experienced suffering and mortality the same way, why would it matter? By this same reasoning, there's an easy answer to whether we should behave differently if we're actually in a simulation: no. Even if our universe is in reality just World of Warcraft from the sixth dimension, if we simulated beings can suffer (and I know I can), then the moral rules are exactly the same as before.

It's also worth exercising some humility and asking why we humans always assume that we would be the purpose of any such simulation. We could be merely incidental consciousnesses that are necessary for harboring the populations of simulated bacteria that the simulators are really studying. Or the simulators could be cryonicists who preserve pets, and the most popular pets in their dimension look like what we call raccoons, and our universe is actually the raccoon heaven in which their beloved masked companions await a cure for the disease that forced the owners to put them on ice. In fact, the raccoon-heaven simulation would contain a whole suite of ecosystems, all of them purely simulated (with the exception of raccoons) to keep up the appearance of a full biosphere. So the point of such a simulation would be to fool raccoons - or maybe even mice. (Again, why would they care about fooling everyone? If the simulators are reading this, just give me more juicy steaks and I won't make problems. It doesn't cost you anything!)

While the raccoon thought experiment is meant to be whimsical, a healthy respect for our own ignorance is always in order for these kinds of speculations. After all, assuming what we have guessed about the rest of the (for the sake of argument, simulated) universe is accurate, there might be "aliens" (other non-human intelligences within the simulation) who may very well be much brighter than us. So even if the simulation is somehow arranged around the most intelligent entities within it (as we assume), those entities need not be human. Even if we're simulated, and we have a real brain and body in the "real" universe that's similar to our form in this one, this simulated universe might be designed for Martians (who are brighter than us) and be much less pleasant than our home dimension.

Finally, the very idea of a simulation is poorly defined. Mostly we think of something like the almost completely controlled, full-world simulation in The Matrix, but let's explore boundary cases. If I wear rose-colored glasses, is that a simulation (or a red world)? What about LSD that causes me to see unidentified animals scurrying past in my peripheral vision? What about DMT that causes a complete dissociation of external stimuli from subjective experience? What if I have a chip implanted that displays blueprints of machinery in my visual field a la the Terminator; is that a simulation? What about a chip that makes me see a tiger following me around that isn't there? (Hypothetical, given current limitations.) What if I hear voices telling me to do things that are produced by tissue inside my own skull, by no conscious intent of anyone? (Not at all hypothetical.)

One of the interesting points in the popular movie Inception is the way that external stimuli appear in dreams. This gives us a hint as to what we mean by a simulation, and why we care. Most of us have had experiences where the outside world "intruded" into a dream, with the stimulus obvious after we awoke. I once dreamed that a dimensional portal slid open in front of me with an ominous metallic resonance, and I stepped through it, suddenly speeding over the red, rocky surface of Mars. Then I realized it was my father opening his metal closet door in the next room, and I was looking into that room at the red-orange carpeting. Before I was fully awake I had received the sound stimulus, but I had built a world out of it that most of us would not regard as real. (The experience of speeding over Mars was quite real, even if most humans would have formed a more accurate representation of that auditory stimulus.) So a better way of asking "how do I know 'this' is reality, rather than another dream, or a simulation?" is to ask "how do I know I am perceiving 'true' stimuli, without mapping them unnecessarily onto internal stimuli, so that I get as accurate and un-contorted a view of the world as possible?"

And indeed, in certain ways we certainly are dreaming, in the sense of injecting internal stimuli and filtering external stimuli through them. (Notably, it is possible to view schizophrenics as people who experience dreams even while awake and filter their perceptions accordingly.) First and most obviously, because our sense organs are limited in what they can detect, we're obtaining only a slice of the possible data. Second, the world we knit together is the result of binding sensory attributes into objects and events, as well as of pattern recognition. The limitations of our nervous systems, and the associations we are able to make, profoundly influence the representation we build of the world we're perceiving.

Third, and most significantly, a large part of our experience is non-representational: emotions, pleasure and pain do not exist outside of nervous systems, or rather the events to which those experiences correspond are almost entirely contained within nervous systems. Yes, to be precise the experience of light does not exist until the triggering of a cascade of electrochemical events by radiation incident on pigments in retinal cells; but light, which is what is represented in our experience, exists traveling across the universe. Pain and happiness do not. These are internal stimuli that add a non-representational layer to reality, even more certainly than my dream of the Mars overflight.

A good working definition of a simulation as it is commonly understood is this: the majority of one's external stimuli are supplied deliberately by another intelligence to produce experiences that do not correlate with physical reality external to the nervous system (or its computational equivalent). This avoids taking a position on AI; the sims may or may not be entities separate from the computation. I.e., you might be in a sensory deprivation tank like Neo, or you might be a computer program. The question of reality versus dreams or simulations is not one of discrete "levels" as we've come to think of it in popular culture. It is rather a question of how we know our experiences correspond in some consistent way with events separate from our nervous systems.

Monday, August 2, 2010

Looking for Neurological Differences Between Nouns and Verbs

Just ran across this poster presented at the Organization for Human Brain Mapping's annual meeting in 2004. Sahin, Halgren, Ubert, Dale, Schomer, Wu and Pinker looked at the fMRI and EEG changes associated with a number of language tasks, and one of the questions they asked was whether activation characteristics differed for nouns and verbs. This study did not find that they did.

In my sketch of a neurolinguistic theory, verbs are first-order modifiers and are distinct from adjectives in that they mediate properties and relationships between nouns. (In this sense, intransitive verbs are more similar to adjectives than to transitive verbs.) I also postulate that nouns and first-order modifiers should have identifiably different neural correlates. I have not yet completed a literature search (obviously, if I'm citing posters from 2004). However, even if such different neural correlates exist, I think the task design here was not necessarily adequate to capture them, because the participants were asked to morphologically modify the nouns and verbs in isolation, rather than in situ, in grammatical relation to each other.

Another interesting experiment would be to give the participants nonsense words and new affixing rules that don't reveal the part of speech of the nonsense word (e.g., "if the word has a t in it, add -pex to the end; otherwise, add -peg"), and look for any differences relative to the neural correlates of morphological tasks done on real words.
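A small sketch of how one might generate such stimuli; the rule is the one in the sentence above, while the function name and the nonsense stems are made up for the example:

```python
def affix_nonsense(word: str) -> str:
    """Apply the part-of-speech-blind rule: words containing 't' take -pex, all others take -peg."""
    return word + ("pex" if "t" in word else "peg")

stems = ["blick", "tarp", "wug", "strel", "dax"]   # hypothetical nonsense stems
for stem in stems:
    print(stem, "->", affix_nonsense(stem))
```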

Strong AI, Weak AI, and Talmudic AI

Yale computer scientist David Gelernter argues here that the Judaic dialectic tradition will help us reason our way through the moral morass of the first truly intelligent machine. I had at first written this off as an article in the genre of "interesting collision of worldviews." But the cognitive science debates we're having today will soon seem luxuriously academic and unhurried, because for several reasons involving computing and neuroscience they will shortly be more than intriguingly difficult questions. Even if we can all agree that suffering must be the basis of morality, we will need a way to know that, on that basis, it's not okay to disassemble someone in a coma, but it is okay to disassemble a machine that can argue for its own self-preservation.

Sunday, August 1, 2010

John Searle Must Be Spamming Me

Because with all the comment-spam, the waiting-to-be-moderated comments list looks like a Chinese chat-room. I have yet to see any Hindi. And anyway I'm sure the machine producing the spam doesn't understand the symbols.