Consciousness and how it got to be that way

Monday, April 22, 2013

A Problem With Detecting Wild Complexity

Recently philosopher Eric Schwitzgebel posted a quantitative thought experiment entitled "Preliminary Evidence That the World Is Simple (An Exercise in Stupid Epistemology)". I responded in a comment at the post and I've expanded that comment below. Besides being a lot of fun, Schwitzgebel's post advances (and attempts to refute) the Wild Complexity Thesis: that previously known values of a variable don't help us much in predicting other values of that variable. The more difficult to predict a universe is - or the less compressible into prediction rules - the more complex and difficult to understand it is. This is a question about a property of the universe we find ourselves in that is simultaneously deep and non-mandatory.

He tested this by choosing 30 pairs of variables and calculating the ratio of their instantiated values for three instances of each pair; for instance, the height of a library book off the floor and its number of pages, for three books. (The variables are chosen to seem mundane but are not random or pseudo-random.) He then checked whether the third value was wild relative to the first two - that is, whether it was more than an order of magnitude greater than the larger of the two, or more than an order of magnitude less than the smaller.
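(To make the wildness criterion concrete, here's a minimal Python sketch of the test as I read Schwitzgebel's description; the function name and the example numbers are mine, not his.)

```python
def is_wild(first, second, third):
    """Return True if the third value is 'wild' relative to the first two:
    more than an order of magnitude above the larger of the two, or more
    than an order of magnitude below the smaller."""
    lo, hi = min(first, second), max(first, second)
    return third > 10 * hi or third < lo / 10

# Made-up ratios (e.g., pages per meter of book height), purely for illustration:
print(is_wild(250, 400, 320))   # False: comfortably in range of the first two
print(is_wild(250, 400, 9000))  # True: more than 10x the larger value
```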

My problem with the thought experiment is what I term the Provincial Perception Problem (granted, this could even be the "stupid epistemology" Schwitzgebel mentioned in his title, and I missed the joke). My challenge is not to the experiment's conceptual validity but rather to Schwitzgebel's method - that is, to how he chooses which variables to test. He states (emphases mine):
I can use my data to test the Wild Complexity Thesis, on the assumption that the variables I have chosen are at least roughly representative of the kinds of variables we encounter in the world, in day-to-day human lives as experienced in a technologically advanced Earthly society. (I don't generalize to the experiences of aliens or to aspects of the world that are not salient to experience, such as Planck-scale phenomena.)
He finds that 27 of the 30 variable pairs are non-wild. Does this undermine the Wild Complexity Thesis?

It does not; in fact, it doesn't tell us much of anything. Why not? Because the variable sets he chose stack the odds against detecting Wild Complexity. The variables fall into two groups: a few concern natural phenomena like stars, but most concern man-made objects (attributes of library books, McDonald's restaurants, etc.). But is it really so surprising that man-made variables are non-wild? We already know that our "day-to-day human lives" are not entirely bewildering experiences with no discernible patterns, so why should we expect day-to-day objects to be any different? After all, we humans are creatures on the order of a meter or two in height, that live on the order of 10^9-10^9.5 seconds but perceive and behave on the order of a second or so, and detect EM radiation at wavelengths of 400-700 nm. We are very provincial, predictable entities, so by choosing man-made variables we are unfairly enriching our list with non-wild variables.

But there is a deeper problem, one which is likely to affect nervous systems in general. If even non-man-made things are non-wild - for instance, the ratio between a star's brightness and its distance from Earth - doesn't that make us lean toward the universe's being non-wild? Maybe not. Humans perceive and understand only a very narrow slice of the universe, and that slice is more likely to be non-wild than wild. Why? Evolution is more likely to produce replicators (and nervous systems) that gather and act on information about non-wild variables, and that restricts what we as products of evolution perceive in the first place. For a variable whose next value is likely to be wildly distant, what's the advantage of developing sense organs to detect it, or a nervous system that can store and compare it? Why bother? By leaving out "aspects of the world that are not salient to experience", Schwitzgebel is still biasing his variable sets toward the non-wild. Consequently, even by picking natural objects, we're picking the natural objects that we're likely to notice, which are more likely to be non-wild. Even by focusing on stars, we can't escape enriching the set of chosen variables for non-wildness, because we're not built to experience or notice the patterns of wild variables in the first place.

Assuming the Provincial Perception Problem is relevant, we can subdivide the possible test results for the Wild Complexity Thesis into Strong and Weak Wild Complexity. Strong Wild Complexity holds in a universe where evolved intelligences find lots of wild variables even within their narrow slice of experience; Schwitzgebel's result already strongly suggests that this is false. Weak Wild Complexity holds if the space of all possible variable ratios is mostly wild, except for those relevant to the narrow slices that evolved intelligences inhabit. Non-Wild Complexity (or simplicity) holds in universes where most variables are non-wild. My argument is that Schwitzgebel has not differentiated between Weak Wild Complexity and simplicity.

So how to tell the difference between Weak Wild Complexity and simplicity? Given our provinciality, is the question hopelessly circular? I don't know. But if it's not, then picking more variables in a way that decreases our bias toward non-wildness makes us more likely to get a meaningful answer. And to do that, we should do exactly the opposite of what Schwitzgebel suggested and include (for example) only Planck-scale phenomena; perhaps we should include only those products of modern-day science that required effectively black-box computer processes to generate and that are not accessible to our non-wild-variable-noticing brains. If we're able to choose variables far from the domain of human experience or direct comprehension and they're still mostly non-wild, that makes a stronger case for true simplicity.

Sunday, April 14, 2013

Are Utilitarians Psychopaths?

"Low Levels of Empathic Concern Predict Utilitarian Moral Judgment", state Gleichgerrcht and Young in this PLOSOne paper. It's a central concern of your blogger that "moral reasoning" might be an oxymoron, that morality is necessarily not a deliberative process and not subject to algorithmization, so to speak. This is obviously a central concern because the assumption that this is not the case is at the core of the Enlightenment. It could be that there's a moral mismatch hypothesis operating here, and that empathic concern is actually not a good contributor to decision-making in modern life.

Saturday, April 13, 2013

No Racial Differences in Intelligence in Infants

Link here (2006 research from a U.S. government agency). Interesting, because differences do emerge later in childhood, in particular with East Asians outperforming others. Why is this? Either there are genetic differences that emerge at that point, or culture matters (i.e., it's not just window-dressing) and has an impact on intelligence.

Friday, April 12, 2013

An Objective Measurement of Pain

In medicine, pain is an often-overlooked vital sign, and it's very difficult to know the level of pain people are in (if any) based on their self-report - yet that's the only way we currently have of assessing it. That may be changing. An fMRI study of 114 people at the University of Colorado showed >90% sensitivity and specificity for objective pain detection. Each subject served as their own control, since each was exposed to both painful and non-painful heat. The team was even able to discriminate fairly reliably between social and physical pain.
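(For readers unfamiliar with the metrics, here's a small Python illustration of how sensitivity and specificity are computed from classification counts; the counts below are invented for illustration and are not the study's data.)

```python
def sensitivity(true_pos, false_neg):
    """Fraction of genuinely painful trials that the marker flags as painful."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Fraction of non-painful trials that the marker correctly rejects."""
    return true_neg / (true_neg + false_pos)

# Hypothetical counts: 95 of 100 painful trials detected,
# 93 of 100 non-painful trials correctly rejected.
print(f"sensitivity = {sensitivity(95, 5):.2f}")  # 0.95
print(f"specificity = {specificity(93, 7):.2f}")  # 0.93
```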

Thursday, April 4, 2013

Artificial Languages Talk at UCSD, April 19

Information here. The developers of Na'vi, Klingon and Dothraki will be speaking.