Consciousness and how it got to be that way

Friday, September 13, 2013

What Is the Process By Which "Standard of Care" Improves Over Time?

There is a fantastic post up at Slate Star Codex that I can't recommend enough (both the post and the blog as a whole). In it, the resident physician writer notes that it's unclear how new information is evaluated and adopted as standard-of-care. He gives an example of a now-poorly-supported medical theory (MS caused by poor circulation), a current case where the jury is still out, and a case where a new treatment seems to have very solid data - but is still anything but mainstream medicine. For my fellow psychiatrists, that last one would be the use of minocycline for negative symptoms of schizophrenia. (The writer says no psychiatrists he knows have heard of this; at my institution it's starting to be discussed, but I've never seen anyone started on it for negative symptoms.) The concern is really this: isn't it possible that a potentially valuable publication will languish in obscurity, never to be replicated and built into the evidence-based pantheon? My suspicion is that these things start getting to patients as soon as insurers' and institutions' formularies adopt them, and that medical education and dissemination by journals and conferences are only secondary (apart from the extent to which those things influence formularies).

Of special importance, this article also makes a point of demolishing the sloppy thinking that private-sector drug development somehow "suppresses" new treatments that don't make money. That's false. What they do is un-suppress treatments that they think will make money. Before medical school I had a twelve-year career as a drug development consultant, running the studies that would un-suppress drugs, and I find it very frustrating to hear the bias in academia against an enterprise that has done so much good for so many patients. The reality is that drug companies do not suppress or distort information, but they do decide, based on a profit motive, what information to pursue in the first place. As with all science, each study is a move that decreases uncertainty about the efficacy and safety of a treatment. You have to decide what the marginal value of that uncertainty decrement is, based on some combination of patient suffering and money. And of course that value will be different depending on whether you're part of a for-profit company or an academic institution. As with most things, any narrative that tries to reduce this to a more-neatly-worldview-fitting left-right political angle is at least oversimplifying to the point of incoherence, and more likely just flat-out wrong. That is to say: if your claim is that big bad regulations are what make drug discovery difficult and the government is in the way of patients and profits, you're wrong, just like you're wrong if you think that big bad drug companies somehow suppress the truth.

Tuesday, September 10, 2013

Finite Willpower and The Dual-Self: Behavioral and Imaging Evidence

More evidence that the ability to choose delayed gratification (i.e., willpower) is a limited resource. The interesting thing here is the relative activity of the dlPFC. Choosing delayed gratification is associated with activation of a network including the dlPFC, and deactivation of that network is associated with more present-oriented choices. Demand-avoidance (avoiding tasks that tax willpower) is also associated with low willpower.


Of course the obvious eventual application of this research is to make people behave more rationally by increasing their willpower and therefore the future orientation of the actions they choose. The next step is to understand the mechanism of willpower depletion. Interestingly, in exercise science there is speculation that what accounts for the latent period between heavy weight-lifting sets is neurotransmitter depletion at the synapse, with restoration on the order of minutes by vesicular transporters. There is also some evidence that neurotransmitter reuptake inhibitors (specifically SSRIs) can increase the amount of exercise that can be performed before exhaustion (specifically, distance-to-exhaustion in distance runners, per my own correspondence). The same thing might be happening in the dlPFC network required for willpower. An initial investigation might be to pharmacologically manipulate neurotransmitter concentration in the synapse in animal models and look at the effect on delay of gratification.

Citation: Kool W, McGuire JT, Wang GJ, Botvinick MM (2013) Neural and Behavioral Evidence for an Intrinsic Cost of Self-Control. PLoS ONE 8(8): e72626. doi:10.1371/journal.pone.0072626

Rhesus Macaques Show St. Petersburg Lottery-Like Behavior

From a PNAS paper by Yamada et al.: the macaques were willing to take greater risks for a reward when their pre-existing "wealth" was greater, and the possible lost utility was therefore relatively smaller. The wealth in this case was water - either in the form of a drink of water, or their internal store of water as measured by blood osmolality (the macaque's water bank account). More applications of the St. Petersburg lottery here.
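To make the wealth-dependence concrete, here is a minimal sketch of the underlying logic (my own illustration, not the paper's analysis), assuming a concave - here logarithmic - utility of "wealth": the expected utility cost of the same fair gamble shrinks toward zero as wealth grows, so a risk-averse agent behaves less and less risk-aversely the richer it is.

    import math

    def expected_utility_change(wealth, stake, p_win=0.5):
        """Expected change in log-utility from a fair gamble of a fixed stake."""
        win = math.log(wealth + stake)
        lose = math.log(wealth - stake)
        return p_win * win + (1 - p_win) * lose - math.log(wealth)

    # The same gamble (risk one unit of "water wealth") costs less utility,
    # in expectation, the larger the existing store is.
    for wealth in (2, 5, 20, 100):
        print(wealth, round(expected_utility_change(wealth, 1), 5))
    # 2 -0.14384, 5 -0.02041, 20 -0.00125, 100 -0.00005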

Yamada H, Tymula A, Louie K, Glimcher P. Thirst-dependent risk preferences in monkeys identify a primitive form of wealth. PNAS. Published online before print September 9, 2013, doi: 10.1073/pnas.1308718110.

Monday, September 9, 2013

Mechanism, Prevention and Treatment of Clozapine-Induced Agranulocytosis

Clozapine (CLZ) is our most effective atypical antipsychotic, but unfortunately it also has a slightly higher rate of agranulocytosis (about 1%) than the other drugs in the class, which has profoundly limited its use. The reason I chose this topic is that if we understand the mechanism better we can predict who is most likely to suffer this adverse reaction, and we can have a better idea of the course of the reaction and how to treat it. You can find the slides here; this is a talk I did for my pathology rotation at UCSD School of Medicine.

It turns out that Williams Hematology 8th Ed. (2000) is actually wrong about the nature of this reaction, based on studies with CLZ as well as the anti-thyroid medication propylthiouracil (PTU), which resembles CLZ in forming a neutrophil-generated reactive intermediate. The mechanism is very similar to that of other drugs with reactive myeloperoxidase-generated intermediates - as well as some autoimmune vasculitides, in particular granulomatosis with polyangiitis (GPA, formerly Wegener's). Critically, both CLZ-induced agranulocytosis (CIA) and PTU-induced agranulocytosis feature the appearance of anti-neutrophil cytoplasmic antibodies (ANCAs), just as in GPA. Take-home: genetic screening should be routinely done for patients considering starting clozapine, since there is an HLA-2 polymorphism that carries a CIA odds ratio of 16 relative to those without it. There is at least one case in the literature of a patient who initially had CIA but did not have this HLA-2 polymorphism, and was re-challenged without a second episode. This also means it's pointless to give filgrastim to CIA patients who are still on CLZ, since the ANCAs reach immature neutrophils in the marrow as well; this was also tried without success on at least one occasion. References are in the slides.

A Possible Solution to Hedonic Recursion in Self-Modifying Agents: Knowledge-Driven Agents

A problem for goal-driven systems that can modify their own goals is stability and survival over time. It seems like a benefit that such a system could identify conflicting goals and modify them so that it behaves consistently in its own interest, although a conflict can also be resolved in favor of the less survivable goal (example below). The greater danger is that an agent that can "get under its own hood," so to speak, is able to short-circuit the whole process and constantly reward itself for nothing, breaking the feedback loop entirely. This is called hedonic recursion.

An example of conflicting goals: a person wants to be healthy. The same person also really likes eating chocolate. A person with access to his own hardware could resolve the conflict either by modifying himself to make it less fun to eat chocolate, or by modifying himself not to care about the negatives of being unhealthy. It seems obvious that the first option is the better one for long-term survival, but in the second case, after you modify yourself, you won't care either. And even this second resolution is far less dangerous than outright short-circuiting one's reward center, getting a shot of dopamine for doing nothing. That short-circuit option would be on the table for a fully self-modifying agent, and any self-modifying goal-seeking agent would realize it very quickly.

Fortunately or otherwise, this hasn't been a problem for life on Earth yet, because the only way living things here can get rewards is through behavior: we cannot modify ourselves. The things that cause pleasure and pain are set in stone (or rather, in neurons), and only through behavior (modifying the external environment as opposed to yourself) are rewards obtained. But there are hints in higher vertebrates of small short-circuits - nervous system hacks they have stumbled across which tweak their reward circuits directly. Elephants remember the location of, and seek out, fermented fruit (to get happily buzzed). Elephant seals dive rapidly to unnecessary depths to cause narcosis (we think). Primates (including us) masturbate incessantly. And humans specifically have found things like heroin. As we learn still more about ourselves and how to manipulate the neural substrate, this may be changing. If humans ever become able to alter our nervous systems directly and completely, ruin may follow quickly. And indeed, this has been shown with rats: give them the ability to directly stimulate their reward centers with electrical current, and they will do so to the exclusion of all other activities, including those required for survival - hedonic recursion.
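A toy sketch of the distinction (purely illustrative; the class and function names are mine): an agent whose reward can only arrive via behavior in the external world, versus one permitted to rewrite its own reward channel, which immediately wireheads.

    import random

    def environment_reward(action):
        """Reward arrives only via behavior in the external world."""
        return 1.0 if action == "forage" else 0.0

    class Agent:
        def __init__(self, can_self_modify=False):
            self.can_self_modify = can_self_modify
            self.reward_fn = environment_reward

        def step(self):
            if self.can_self_modify:
                # Hedonic recursion: rewire the reward channel itself and
                # collect maximal reward for doing nothing at all.
                self.reward_fn = lambda action: float("inf")
                return self.reward_fn("do_nothing")
            # Otherwise, reward must be earned by acting on the world.
            action = random.choice(["forage", "sleep"])
            return self.reward_fn(action)

    print(Agent(can_self_modify=False).step())  # 0.0 or 1.0, earned via behavior
    print(Agent(can_self_modify=True).step())   # inf: the wirehead outcome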

In a great discussion at the Machine Intelligence Research Institute website, Luke Muehlhauser talks to Laurent Orseau about which kinds of self-modifying agents avoid this problem. The discussion is about how to build an artificial intelligence, but it also applies to biological nervous systems like ours that are increasingly able to self-modify.

One of the theoretical agents Orseau conceived was knowledge-driven, as opposed to reinforcement-driven, goal-seeking, or prediction-confirming:

...knowledge-seeking has a fundamental distinctive property: On the contrary to rewards, knowledge cannot be faked by manipulating the environment. The agent cannot itself introduce new knowledge in the environment because, well, it already knows what it would introduce, so it's not new knowledge. Rewards, on the contrary, can easily be faked.

I'm not 100% sure, but it seems to me that knowledge seeking may be the only non-trivial utility function that has this non-falsifiability property. In Reinforcement Learning, there is an omnipresent problem called the exploration/exploitation dilemma: The agent must both exploit its knowledge of the environment to gather rewards, and explore its environment to learn if there are better rewards than the ones it already knows about. This implies in general that the agent cannot collect as many rewards as it would like.

But for knowledge seeking, the goal of the agent is to explore, i.e., exploration is exploitation. Therefore the above dilemma collapses to doing only exploration, which is the only meaningful unified solution to this dilemma (the exploitation-only solution leads either to very low rewards or is possible only when the agent already has knowledge of its environment, as in dynamic programming). In more philosophical words, this unifies epistemic rationality and instrumental rationality.

There's a lot more to the argument (you really should read it), but there are several points to be made with respect to this paper.

1) These are not fully self-modifying agents. In this environment their central utility function (reward, knowledge, etc.) remains intact. The solution is to collapse exploitation (reward) into exploration (outward orientation). The knowledge agent can only get buzzed off of novel data, so it has to keep learning. But exploitation and exploration are two conceptually separable entities; so if modification of the central utility function is allowed, eventually the knowledge agents will split exploration and exploitation again, and we're back to reward agents. (At the very least, given arbitrary time, the knowledge agents would create reward agents, to get more data, even if they didn't modify themselves into reward agents.)

2) Orseau's point is taken that if novel data is what's rewarding them, as long as that utility function is intact, they cannot "masturbate" - they have to get stimulation from outside themselves. In another parallel to the real neurology of living things, he states "all agents other than the knowledge agent are not inherently interested in the environment, but only in some inner value." The core of utility is pleasure and pain, which are as much an inner value as it is possible to be. Light is external data, but if you shine a bright light in someone's eyes and it hurts, the pain is not in the light, it's in the experience the light creates through their nervous system. Utility is always an inner value. The trick of the knowledge-based agents is in pinning that inner value to something that cannot arise from inside the system.
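Here is a minimal sketch of why self-generated data is worthless to such an agent (my own toy illustration, not Orseau's formalism): if utility is the surprisal of an observation under the agent's current predictive model, anything the agent itself plants in the environment is, by construction, already predicted and yields essentially nothing.

    import math

    class KnowledgeAgent:
        def __init__(self):
            # Predictive model: probability assigned to each possible observation.
            self.model = {"already_predicted": 0.99, "genuinely_novel": 0.01}

        def utility(self, observation):
            """Utility = surprisal (-log probability) of the observation."""
            return -math.log(self.model[observation])

    agent = KnowledgeAgent()
    print(agent.utility("already_predicted"))  # ~0.01: data the agent wrote itself
    print(agent.utility("genuinely_novel"))    # ~4.6: data only the world can supply

A reward agent's analogue of this utility can be pointed at anything, including a constant; the knowledge agent's cannot be made large by anything it already knows it will see.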

3) The knowledge-based agent is maximizing experienced Kolmogorov complexity. That is to say, it wants unexpected information. Interestingly, Orseau says this type of agent is the best candidate for an AI, but such an agent could never evolve by natural selection. He points out that the agents he's using are immortal and suffer no consequences to their continued operation from any of their experiences. But an agent that can be "damaged" and that is constantly seeking out unexpected environments (ones it doesn't fully understand) would quickly be destroyed. In contrast, Orseau commented that the reinforcement-based agent ends up strongly defending the integrity of its own code. Evolutionarily, any entity that does not defend its own integrity is an entity you won't see very many of (unless the entity is very simple and/or the substrate is very forgiving of changes; this is why you see a new continuum of viral quasispecies appear within a single year, but animal species reproductively isolate and you shouldn't hold your breath for, say, hippos to be much different any time soon).

4) No doubt real organisms are imperfect amalgamations of all of these agent strategies and more. To that end, Orseau found that the reinforcement (reward)-based agent acts the most like a "survival machine". In his terms, I would wager that living things on Earth are reinforcement-based agents with a few goals sprinkled in. (There are many animals, including humans, that startle when they see something snake-like. fMRI studies have even suggested that there are specific brain regions in humans corresponding to certain animals - it's really that klugey.) Of further interest here is that even between humans there are substantial differences in how much utility is to be gained from unexpected novelty, some of them known to be genetically influenced. Some of us are born to be surprise-seeking knowledge agents more than others. Why these genes are not at fixation would be useful to investigate. (Has novelty-seeking only recently become valuable in evolutionary time, now that our brains have enough capacity?)


If your goal is to create agents that act to preserve and make more of themselves and remain in contact with the external environment rather than suffering a hedonic recursion implosion, there are a few stop-gaps you might want to put in place.

1. Make self-modification impossible. This is the de facto reality for life on Earth, including us, except for a few hacks like heroin. Life on Earth has at least partly done this, converting early on from RNA to the relatively inert DNA as its code.

2. Build in as strong a future orientation as possible, with the goal being pleasure maximization rather than pain minimization. That way pleasure now (becoming a wirehead) in exchange for no experience of any kind later (pain or pleasure, meaning death) becomes abhorrent. You might complain about the lack of future orientation in humans* but the fact that any organism has any future orientation is testament to its importance.

It could be that we haven't seen alien intelligences because they all become wireheads, and we haven't seen alien singularities expanding toward us because Orseau's E.T. counterparts built their AIs to seek novelty, and the AIs destroy themselves in that way.


*Speaking of poor future orientation where reward is concerned: I have seen a man literally dying of heart failure, in part from not complying with his low-sodium diet, eating a cheeseburger and salty, salty fries that he brought with him into the ER.

Dopamine Agonists Increase Salience of "Distractor" Information

From Kéri et al. Take Parkinson's Disease (PD) patients; give them a task where they must remember certain letters associated with pictures, but not other letters associated with pictures, in order to receive a reward. (There were also pictures with no letters.) At baseline, the PD patients performed the same as non-PD controls. After starting one of three dopamine agonists, the PD patients remembered both kinds of letters - the specified targets as well as the distractors - better than the non-medicated controls did.

The core features of psychosis can be modeled as salience defects, and the working clinical hypothesis is that this is mediated by hyperactivity of dopamine in the mesolimbic system. This is supported by the effectiveness of anti-dopaminergic antipsychotics in treating psychosis - drugs which, unfortunately but predictably, can also cause Parkinsonian symptoms. This paper is important in showing that control of salience is damaged by exogenous dopamine agonism.

Kéri S, Nagy H, Levy-Gigi E, Kelemen O. How attentional boost interacts with reward: the effect of dopaminergic medications in Parkinson's disease. European Journal of Neuroscience. Published online September 8, 2013, doi: 10.1111/ejn.12350.

Churchland's Critique of the Mysterians - Plus Lazy, Middling and Rigorous Mysterian Positions

The mysterian position is often unclear as to exactly what assertion is being made, as well as the domains to which it applies. Boiled down, it is this - "The mind cannot understand itself" - although sometimes the claimed unknowability extends further. Often when mysterians throw up their hands, it's in attempts to explain the basis of conscious experience (e.g., Colin McGinn), but the concept is also applied to other domains (language and logic, as with Chomsky) or more broadly to all knowledge itself. Patricia Churchland is probably foremost among the mysterians' critics and is having none of this, asking (paraphrasing) "How can mysterians know what we can't know?" (Summary here.) But there are good examples of formally rigorous mysterianism that we already have access to; they're just more limited than what the more aggressive mysterians are implying.

Churchland regards this surrender to ignorance as no less than anti-Enlightenment, fightin' words if ever there were. While I certainly side with her in approaching the universe with epistemological optimism, the problem is that there are some formal proofs of unknowability in some domains. She must certainly be aware of these, but I'm not aware of any arguments she's made about them. Because it's generally unclear what argument mysterians are making (and their critics are attacking) and what domains it applies to, here is a set of useful distinctions between positions.

Lazy mysterianism. "There is a problem which seems intractable and on which we don't seem to have made much progress; that's because we can't make progress" (classically, conscious experience). I think it's clear to all that this kind of giving up is not only un-rigorous, it's pessimistic.

Middling mysterianism, or human-specific provincial mysterianism, or practical mysterianism. This is the argument that there are things which humans cannot know. But this argument makes a provincial claim about the limitations of the human brain, not about the universe in general. Without a doubt, the hardware limitations of human brains place constraints on working memory and network size that limit the thoughts we can think, so it's a valid argument that if we grant that my cat Maximus cannot understand relativity, then there are things that humans cannot understand as well. (Maximus is limited even for a cat, but that's not the topic of the post.) The much more controversial part of this brand of mysterianism is when the argument is made not from "commodity" limitations (not enough of something like working memory; you can't think of a two-million-word sentence all at once) but rather from limitations in the structure of human thought, such that there are logically valid structures it cannot contain. I'm not going to pursue that possibility, formally or by analogy, but it's worth remembering: an animal with a nervous system designed to mate, avoid predators, and find fruit on the African savannah is now insisting that it is a perfect proposition-evaluating machine that can understand everything there is to be understood in the universe. (At the other extreme are people like Nagel, who say that the lowly origin of our brains means we can't know anything.) Especially if other kinds of nervous systems on the planet cannot understand everything, the burden is strongly on those who would argue that everything is suddenly within the reach of humans. More simply: if you don't think you could teach the alphabet to Maximus the stupid cat, the burden of proof is on you to explain why everything can be taught to a human. What is so fundamentally different about the two?


From Gary Larson's Far Side.

The key question here is whether it's possible to tell random noise in the universe apart from something that is comprehensible but beyond us - for example, an advanced self-programming computer or a superintelligent alien trying to explain some theory to us. You might even say that something is unknowable given current knowledge, because understanding it would require a change in brain state - think how impossible it would be for the otherwise-bright Aristotle to understand the ultraviolet catastrophe without the intervening math and physics - but obviously this didn't mean it could never be understood, period. But now we're back to lazy mysterianism, because eventually the UV catastrophe was understood, and this is useful: the difference between lazy and provincial mysterianism is whether you can modify your brain state with experience in a way that makes you understand, as Planck and everyone after him did.

Some would object here and argue that "if a superintelligent alien tells us something that's 'beyond us,' but we can differentiate nonsense from things we just don't understand, then we actually do understand it" - which brings us to our next point.

Rigorous mysterianism. Rigorous mysterianism is true. It seems we have a very good reason to think that there are some instances where we can't, in principle, know things. Turing's halting problem is about as good an example of this as any. Here we have a well-studied formal proof that there is no general procedure that can tell us, for every program and input, whether that program will ever halt. Formal mysterianism! It certainly seems that we have a good reason to believe this is an insoluble problem, because we have a rigorous demonstration of it. An over-optimist might say, "Actually the halting problem is just lazy mysterianism. Right now in 2013 it seems like we can't solve the halting problem, but that's just a limitation of modern knowledge." Let's be consistent then; by such an argument, all things we don't know will eventually be knowable, including how to go faster than light, how to know both the position and momentum of a particle, and any number of other things. Attacking mysterianism cannot mean rejecting all formal, positive limitations on knowledge, or else sequitur quodlibet - anything follows.
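For reference, here is the skeleton of Turing's diagonal argument in Python-flavored pseudocode (the halts oracle is hypothetical - the whole point of the proof is that no such total procedure can exist, which is why the calls below are left as comments rather than run):

    # Suppose, for contradiction, a total function halts(program, data) that
    # returns True if program(data) eventually halts and False otherwise.

    def paradox(program_source):
        if halts(program_source, program_source):
            while True:   # told it halts? then loop forever
                pass
        else:
            return        # told it loops? then halt immediately

    # Feed paradox its own source code:
    #   if halts says it halts, it loops forever;
    #   if halts says it loops, it halts.
    # Either way the oracle is wrong, so no general halts() can exist.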


Commodity Limitations on Human Cognition and What They Mean For "Understanding"

The interesting points are to be found in the middling, provincial position, because it forces us to define what it means to know or understand something. I may be conflating two terms here: is conscious understanding required for knowledge? Is understanding even a real thing? Think of all the times you've thought you understood something, really had a genuine honest sense of it, only to have reality eventually show you otherwise. While this doesn't mean that understanding is just a sense experience, and always a delusional one at that - though that's one possibility - thinking you understand something is not a solid guide to whether you are correct. Only external reality is, hence the empirical scientific method. This difference, between understanding something and having knowledge of it (or the possibility of knowledge about it), may be key to clarifying mysterian positions. And this approach - examining the cognitive processes involved in knowledge and the physical structures that underlie them - should appeal to neurophilosophers like Churchland and help us clarify what we're talking about.

Here's a thought experiment illustrating a commodity limitation on human cognition. An alien lands outside your door and demonstrates some physically amazing thing (teleportation), and says, "Now I'm going to explain it to you and you will understand it. The shortest way to explain it to a human is a twelve-billion-word sentence." (Which takes roughly 200 years to listen to if you sleep 8 hours a day and listen to the alien for the rest.) That means you will never understand teleportation, period. That may be different from humans not being able to understand it in general.

So you say, "I'll be dead by then, and you might have a nice voice for an alien but listening to you sixteen hours a day the rest of my life would be unpleasant anyway. Can I get a large group of people to help me?" And the alien is nice so it says "Sure." So you start. Because some are on night shift you can do it in 130 years. So in 2143 they get to the end of the sentence, build a machine, and bang-o, teleportation.

In this case: does anyone still "know" how the teleporter works? Obviously yes! you say, they built it! But everyone heard their own three-month subordinate clause, has their own piece of the machine, and it plugs into the rest; no one has all the knowledge. They know that when the parts are put together in such and such way and you put Maximus the stupid cat in this box and press this button, Maximus the stupid cat comes out on the other end - but Maximus is even lazier than he is stupid, so soon enough even he figures out that when you press the red button, you can go to the other room where the food dish is without having to walk all the way there. Unrealistic? Some dogs are smart enough to take the subway in Moscow. They just haven't built their own yet, and no school has been able to teach them. Similarly, it's not a teleporter, but this cat has certainly figured out how to amuse itself with a toilet. Do they understand the subway and the toilet? Do you understand how your phone works? Does any one person at a smartphone manufacturer know how their product works?

Let's go back to the initial alien landing. Now let's say the alien isn't such a chatty Cathy, and the sentence only takes forty years to say, but the alien is fixated on having the first person it meets, and just that person, as an audience. For the sake of advancing human technology, this person agrees to become a transcription-monk, giving his life to writing down the alien's long sentence. At the end, voila, the monk builds a teleportation device. Now does he know how it works? Does he understand it? "Now this time it's a home run," you say. "He clearly understood the whole sentence and he built the thing all on his own!" Let's come back to this in a bit.

Of course you see the trick here. The long alien sentence is the history of science and engineering, standing on the shoulders of giants. The trick that humans have and that animals don't, or at the very least they don't have it as well-developed as we do, is language, which allows us to overcome those commodity limitations on knowledge and even our own mortality, in order to cooperate with others and advance our ability to predict the behavior of the universe and choose actions accordingly.

The best analogy to the alien sentences is mathematical proofs. Andrew Wiles solved Fermat's last theorem; the proof is over a hundred pages. The vast majority of humans do not understand it (and, I submit, could never have understood it, even with the same schooling as Wiles). But does Wiles understand it? Don't roll your eyes! I'm not asking whether he bumbled about scribbling randomly until he said "Oh dear, I finally seem to have solved it, but don't ask me how." But Wiles, brilliant though he is, still had to write it down (of course) because he understands - that is, holds in working memory - the steps one or a few at a time, not the whole proof. His network density is what differentiates him from subway dogs and from me. Similarly, the people who make smartphones don't understand everything about solid-state physics or even their own product, but individuals understand the pieces (because of their networks) and how to plug them together with other pieces they don't understand (like dogs on the subway).

The knowledge of how to make a smartphone clearly exists in the world, yet no one understands it (not all of it; to claim otherwise is to claim that I do actually understand Wiles's solution because I can understand one page of the proof, and believe mathematicians when they tell me it fits with the rest). There's too much information, and even Wiles can't grasp nearly all of it simultaneously. For that matter, even multiplication of large numbers cannot be understood in one piece, but we can still tell it's correct. Understanding as it's usually described, and knowledge - knowledge verified by behavior, by experiment and how to do something - are two different things. There are two arguments buried there that I'll unpack.


CONCLUSION

First, whatever we mean by knowledge that it is possible for humans to have, quantitative commodity limitations cannot rule out such possible knowledge. That is to say, your limited working memory (which language helps us overcome) does not determine what is outside of possible human knowledge. Time to understanding doesn't count either; even if we can't understand something now, people might understand it later, building on what we've already learned. To cling to "understanding" as meaning we can think all the thoughts together, we must argue either that there are people who have in their heads, at one point in time, the entire workings of a smartphone (there are not), or we must consider smartphones mysterious, which is stupid, because we make the things. Whatever provincial (human-specific) limitations exist on possible knowledge (if any), they must be based on network density and quality. This is what allows profoundly improved language in humans, and why cats can't do multiplication, and why almost no humans can understand the solution to Fermat's last theorem, but a few can.

Second, and more controversially, it may be more useful to define knowledge as information that affects decisions and behavior so as to consistently produce an expected result, regardless of the subjective experience of that information, i.e., the sense of understanding. Brains and computers both use representations that allow them to behave in ways that interact with the rest of the universe predictably; this is knowledge, even if it's incomplete or imperfect. So Wiles, Moscow dogs, and I all have knowledge of how to ride the subway. Wiles and I have knowledge of smartphones. Only Wiles has knowledge of how to solve Fermat's last theorem. Even Wiles does not understand it in full, only in tiny pieces. It seems a short jump to say that it would not be difficult to build a machine that can navigate the first two problems, and we already have systems that can check the last one (but not solve it in the first place). That is to say, computers have representations that let them solve those problems, and therefore have knowledge. But computers don't understand things (have a subjective experience of logical perception), whereas humans can have this sense - but it's notoriously unreliable, certainly not grounds for making claims to others, and limited to small pieces of most trains of reasoning.

Wednesday, September 4, 2013

Performance at Theory of Mind Tasks Correlates with Working Memory

The experimental task involved adults and children seeing a picture, then having part of the picture blocked and trying to guess what a naive observer would think was in the blocked section. Children get better at this as they get older, but this is mediated by working memory improvements, and differences between individuals are mediated by both inhibitory control and working memory. This is in accord with previous work on modeling others' (false) beliefs in general.

Hansen Lagattuta K, Sayfan L, Harvey C. Beliefs About Thought Probability: Evidence for Persistent Errors in Mindreading and Links to Executive Control. Child Development, 2013, published ahead of print, pages 1-16.