He tested this by choosing 30 pairs of variables and, for each pair, calculating the ratio between their instantiated values; for instance, the ratio of a library book's height off the floor to its number of pages, for a number of books. (The variables are chosen to seem mundane, but they are not random or pseudo-random.) He then checked whether the third ratio was wild relative to the first two, that is, whether it was more than an order of magnitude greater than the larger of the two, or more than an order of magnitude less than the smaller.
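The wildness check itself is just order-of-magnitude bookkeeping. Here's a minimal sketch in Python of how I understand the criterion (the function, the three-ratio setup, and the numbers are my own illustration, not Schwitzgebel's actual data or code):

```python
def is_wild(first, second, third):
    """Is `third` wild relative to `first` and `second`? That is, is it more
    than an order of magnitude above the larger of the two, or more than an
    order of magnitude below the smaller?"""
    lo, hi = min(first, second), max(first, second)
    return third > 10 * hi or third < lo / 10

# Ratio of a book's height off the floor (cm) to its page count,
# for three hypothetical library books.
ratios = [120 / 350, 95 / 410, 150 / 280]  # ~0.34, ~0.23, ~0.54
print(is_wild(*ratios))                    # False: all three are of similar magnitude
```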
My problem with the thought experiment is what I term the Provincial Perception Problem (granted, this could even be the "stupid epistemology" Schwitzgebel mentioned in his title, and I missed the joke). My challenge is not to the conceptual validity of the test but rather to Schwitzgebel's method, that is, how he chooses which variables to test. He states (emphases mine):
I can use my data to test the Wild Complexity Thesis, on the assumption that the variables I have chosen are at least roughly representative of the kinds of variables we encounter in the world, in day-to-day human lives as experienced in a technologically advanced Earthly society. (I don't generalize to the experiences of aliens or to aspects of the world that are not salient to experience, such as Planck-scale phenomena.)

He finds that 27 of the 30 variables are non-wild. Does this undermine the Wild Complexity Thesis?
It does not; in fact, it doesn't tell us much of anything. Why not? Because he stacked the odds against detecting Wild Complexity with the variable sets he chose. The variables fall into two groups: a few are information about natural phenomena like stars, but most are information about man-made objects (attributes of library books, McDonald's, etc.). But is it really so surprising that man-made variables are non-wild? We already know that our "day-to-day human lives" are not entirely bewildering experiences with no discernible patterns, so why should we expect day-to-day objects to be any different? After all, we humans are creatures on the order of a meter or two in height, who live on the order of 10^9 to 10^9.5 seconds but perceive and behave on the order of a second or so, and who detect EM radiation at wavelengths of 400-700 nm. We are very provincial, predictable entities, so by choosing man-made variables we unfairly enrich our list with non-wild variables.
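(For the lifespan figure, a quick back-of-the-envelope check; nothing from the original post, just arithmetic:)

```python
from math import log10

seconds_per_year = 365.25 * 24 * 60 * 60      # ~3.16e7 seconds
for years in (30, 100):
    total = years * seconds_per_year
    print(f"{years} years = {total:.1e} s = 10^{log10(total):.2f} s")
# 30 years  = 9.5e+08 s = 10^8.98 s
# 100 years = 3.2e+09 s = 10^9.50 s
```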
But there is a deeper problem, one which is likely to affect nervous systems in general. If even non-man-made things are non-wild, like, for instance, the ratio between a star's brightness and its distance from Earth, doesn't that make us lean toward the universe's being non-wild? Maybe not. Humans perceive and understand only a very narrow slice of the universe, and that slice is more likely to be non-wild than wild. Why? Evolution is more likely to produce replicators (and nervous systems) that gather and act on information about non-wild variables, and that restricts what we, as products of evolution, perceive in the first place. For a variable whose next value is likely to be wildly distant, what is the advantage of developing sense organs to detect it, or a nervous system that can store and compare it? Why bother? By leaving out "aspects of the world that are not salient to experience," Schwitzgebel is still biasing his variable sets toward the non-wild. Consequently, even in picking natural objects, we are picking the natural objects we are likely to notice, which are more likely to be non-wild. Even by focusing on stars, we can't escape enriching the set of chosen variables for non-wildness, because we are not built to experience or notice the patterns of wild variables in the first place.
If the Provincial Perception Problem is relevant, we can subdivide the possible outcomes of a test of the Wild Complexity Thesis. Strong Wild Complexity holds in a universe where evolved intelligences find lots of wild variables even within their narrow slice of experience; Schwitzgebel's result already strongly suggests that this is false. Weak Wild Complexity holds if the space of all possible variable ratios is mostly wild, except for those relevant to the narrow slices that evolved intelligences inhabit. Non-Wild Complexity (or simplicity) holds in universes where most variables are non-wild. My argument is that Schwitzgebel has not differentiated between Weak Wild Complexity and simplicity.
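To make the worry concrete, here is a toy model, entirely my own construction with made-up noticing probabilities (nothing here comes from Schwitzgebel's post): suppose an evolved observer is far more likely to notice non-wild variables than wild ones, and then surveys 30 of the variables it happens to notice.

```python
# Toy model of the Provincial Perception Problem (illustrative numbers only).
P_NOTICE_NONWILD = 0.5    # assumed: non-wild variables are easy to notice
P_NOTICE_WILD = 0.005     # assumed: wild variables almost never become salient

def expected_wild_in_survey(wild_fraction, n_surveyed=30):
    """Expected number of wild variables in a survey of n noticed variables,
    given the underlying fraction of wild variables in the universe."""
    noticed_wild = wild_fraction * P_NOTICE_WILD
    noticed_nonwild = (1 - wild_fraction) * P_NOTICE_NONWILD
    frac_wild_among_noticed = noticed_wild / (noticed_wild + noticed_nonwild)
    return n_surveyed * frac_wild_among_noticed

print(expected_wild_in_survey(wild_fraction=0.9))  # 'Weak Wild' universe: ~2.5 of 30
print(expected_wild_in_survey(wild_fraction=0.1))  # 'simple' universe:    ~0.03 of 30
```

On these made-up numbers, a 27-of-30 non-wild result is roughly what you would expect even if nine-tenths of the underlying variables were wild; with only 30 samples and no independent handle on how leaky the perceptual filter is, a survey of noticed variables can't cleanly separate the two regimes.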
So how do we tell the difference between Weak Wild Complexity and simplicity? Given our provinciality, is the question hopelessly circular? I don't know. But if it's not, picking more variables in a way that reduces our bias toward non-wildness makes us more likely to get a meaningful answer. To do that, we should do exactly the opposite of what Schwitzgebel suggested and include (for example) only Planck-scale phenomena; perhaps we should include only those products of modern-day science that required effectively black-box computational processes to generate and that are not accessible to our non-wild-variable-noticing brains. If we can choose variables far from the domain of human experience or direct comprehension and they still turn out to be mostly non-wild, that makes a much stronger case for true simplicity.
Very cool, Michael, and nicely put! I've been thinking some similar things myself. There's a Kantian or quasi-Kantian point in the vicinity here, I think: We always only have access to "things-in-themselves" via our own cognitive prejudices, and this determines the shape that things in themselves will take for us. Certain features of the world or apparent world (like simplicity) might really be inevitable features of our cognitive apparatus, rather than accurately reflecting reality as it is independent of us.
Hopefully there's still a way to get at things which are *less likely* to be products of our nervous systems; we can't be 100% sure but we can make good guesses at what those things are likely to be.