Usefully defined, cognitive closure is a phenomenon whereby concepts or thoughts that are logically valid, or that accurately reflect some pattern in the real world, are nonetheless fundamentally unthinkable. The assumption is that limits to cognition in humans are owed to some commitment in our neuronal architecture, and that other conscious beings could conceivably think thoughts which are for us inaccessible. Colin McGinn is well-known for discussing the concept in the context of arguing that consciousness is one such cognitively closed arena.
There are at least four senses in which cognitive closure is trivially true; first, in terms of signifier transparency, or trivial closure due to habit. That is, I am a native English speaker, not a Japanese speaker, so when I look at a woody-stemmed plant ten meters tall with leaves and roots I cannot have the experience of thinking "ki" without it being polluted by thinking "tree". In fact in a real sense, I question the idea of a "literal" translation. There is just no way to convey in English the exact tone difference between German Sie and du or Spanish Usted and tú. But this is nitpicking; no one has exactly the same reaction to every object in the world either, based on their personal experiences (like Dennett's argument that the red you experience can't possibly be the same as the red I do). Sapir-Whorf notwithstanding, this is not a kind of closure that interests us.
Second and equally trivial are closures due to linear hardware limitations (storage or bandwidth limits). You and I can't multiply 151,692 by 65,778 in our heads. I don't think this is what we're talking about either.
Third, and slightly more subtle, is trivial closure due to lack of pattern recognition ability. Imagine I break the Mona Lisa's face into one of those Wall Street Journal dot portraits of a million black-and-white pixels, a thousand by a thousand, and I give it to you as a row of a million black-and-white squares, locking you in a room until you can tell me what it is. The chance that you would figure it out before your death is low, but as soon as I tell you "It's the Mona Lisa's face in rows of pixels" it would be mere minutes before you had arranged it properly. If that's cognitive closure, then your dog is similarly closed to language: he's been listening to you talk for years now and all he's figured out is treat, walk, and bad. In fact, when Chomsky discusses this term, this is the sense he means.
It's worth pointing out that even these so-called trivial examples, while not as eerie as the almost Lovecraftian way we think of closure, do in fact bring practical consequences with them. There is no reason to think our intelligence is at the upper bound of what is possible (I certainly hope not); a superintelligent alien could conceivably hold digits in memory and manipulate language in a way that puts us in the role of the aforementioned golden retriever. It is often objected that we now have machines to do our cognition for us, but this is a mistake of definition: regardless of whether cognition is computation, it is also an experience. (Another trivial form of cognitive closure is that everyone's cognition is off-limits to everyone else's, because our nervous tissue is not contiguous: not the concept of the first vs. third person divide, but the experience of it.)
When you punch a bunch of big numbers into a calculator, you're really handling a cognitive black box; yes, you can check the output for consistency, but the cognitive experience of multiplication is closed to you. Dennett has argued against hardware-limitation closure by pointing to prosthetic apparatus (computers) that allow us to perform the calculations, but unless the calculator is wired to your brain and you experience the calculations, you're not experiencing them.
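The "check the output for consistency" point can be made concrete. Here is a minimal sketch (my own illustration, not anything from the essay) using the old casting-out-nines trick on the multiplication from earlier: you can audit the black box's answer without ever having the experience of performing the multiplication yourself.

```python
# Casting out nines: a cheap consistency check on a multiplication you
# could not perform "in your head". You trust the black box for the
# answer, but you can still audit it without re-living the computation.
# (Illustrative sketch; the factors are the essay's example numbers.)

a, b = 151_692, 65_778
claimed = a * b  # stand-in for the calculator's output

# Any correct product must satisfy:
#   product mod 9 == ((a mod 9) * (b mod 9)) mod 9
# This catches most transcription errors, though not all errors.
assert claimed % 9 == (a % 9) * (b % 9) % 9

print(claimed)  # 9977996376
```

The check is one-sided, which is exactly the point: passing it gives you confidence in the output while the computation itself remains closed to you.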
There are many trivial ways to understand closure, but they are frequently confused with the deeper idea that there exist inferences or connections that accurately describe parts of the world yet are somehow obscured from us by our architecture, and not by hardware limits, pattern recognition, mere linguistic habit, or isolation of tissue. This concept (which I call "strong cognitive closure") suggests far more fundamental limits to our minds, and given the limited and kludgey nature of our brains I'm very tempted to think such a thing may occur. But without a formal way to evaluate closed concepts, the first question, if cognitive closure of this kind does exist, is whether we could even have an experience of it. That is to say, would we come to a point in a train of thought, be aware that said thoughts are coherent, but be frustrated and unable to proceed? Or would we be utterly ignorant that there was any barrier we had just bounced off?
McGinn is arguing the first case in his discussions of consciousness, because we're all aware of our frustration with the topic of consciousness and its seemingly incommensurable first vs. third person modes. The problem is how we distinguish between something that is truly cognitively closed and something that is just a very thorny problem we haven't solved yet. In other words, is there a way we can ever know for sure that something is cognitively closed?
For example: if we solve the Grand Unified Theory, we'll know it's not cognitively closed to us. But until we do, maybe it is, maybe it isn't. For that matter, even after it's solved and a handful of physicists understand it, it will remain cognitively closed to me and most likely you as well - unless there's a way to show there's a difference between not understanding something right now, and not ever being able to understand it in principle. Another chance to clarify what "real" cognitive closure is: certainly my brain as it is now constructed could not understand the G.U.T., because I lack the math. If the G.U.T. is cognitively closed to humans, the structure of our central nervous system assures that no amount of training could sufficiently alter the brain to accommodate the ideas. Again, is there a way to differentiate between these two?
It's worth pointing out that we're increasingly appreciating that the human mind works more like a maze of funhouse mirrors than a crisply calculating abacus - it is full to bursting with blindspots, hangups, and heuristics that may not have been much challenged a hundred thousand years ago in Africa, but today frequently get us in trouble (ask the psychologists - anchoring, sunk cost fallacies, you name it).
The encouraging thing, both in terms of self-actualization and in investigating cognitive closure, is that we have "meta-heuristics" which allow us to occasionally be aware of our own shortcomings in such a way as to avoid those pitfalls. Our minds are clearly inelegant Rube Goldberg contraptions, but that doesn't mean we are helplessly clueless that this is so.
It seems to me that if there were understandable criteria for strong cognitive closure - if we had a list of consistent principles and could say "Anything that requires mental processes X, Y, and Z to understand cannot be understood" - well, then we could understand it. Therefore if such a thing as cognitive closure does exist, it would necessarily include itself as one of the incomprehensibles, and consequently the second case would obtain - that is to say, we cannot be aware of cognitive closure when it occurs. If so, then the discussion ends here: we can never know if we've encountered a closure, and it would be exactly as if cognitive closure did not exist.
A theologian once said that God was so perfect, He didn't have to bother existing. So it is with strong cognitive closure. Having trouble understanding your credit card statement, or written Chinese? Weak (trivial) cognitive closure. Unfortunately I can't point you to an example of strong cognitive closure, because whichever position you take on it, for practical purposes, there isn't any.