If someone came to me with a machine that could survive on its own for years in the canyon behind my house, not only finding its way around but scavenging fuel and meeting others of its kind to make more of itself, so that a population of these machines persisted sustainably and indefinitely back there in the brush, I would be damn impressed. I would even be prepared to call this invention "intelligent". Of course there really are such machines down in the canyon already; they're called coyotes, and they're pretty clever, but they aren't about to pass any Turing tests.
Discussions in cognitive philosophy are often confused by equating "language" with "intelligence", or at least by assuming that language is a good quick-and-dirty proxy for it. Turing himself stressed the dirtiness of the measure at the beginning of the article where he introduced his test, and he didn't argue that a command of language necessarily required the commander to be thinking and/or conscious. Teaching a box to repeat superficial pleasantries for a finite duration, without that language relating to any spontaneous behavior, seems like a parlor trick, and one which doesn't have much to say about intelligence. It's worth asking whether programmers working in a more highly inflected language like Russian are as impressed with this goal; if not, maybe it's because they realize that populating a syntactic structure with terms whose semantic content doesn't violate category expectations too badly doesn't prove much. Great, you wrote a program to tack case endings on the thousand most commonly used nouns, and some Markov chain rules to decide when to use them!
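To see how little such a trick demands, here is a minimal sketch in Python of roughly the sort of program being mocked: a lookup table that tacks case endings onto noun stems, plus a bigram Markov chain that strings words together. The transliterated endings and the tiny corpus are made-up toy data for illustration, not real Russian morphology or a real training set.

```python
import random
from collections import defaultdict

# Hypothetical toy data: a few schematic Russian-style case endings for
# one declension pattern. Real morphology is far messier; the point is
# that "tacking on endings" is a table lookup, not understanding.
CASE_ENDINGS = {
    "nominative": "a",   # kniga (book)
    "accusative": "u",   # knigu
    "genitive": "i",     # knigi
}

def inflect(stem, case):
    """Tack a case ending onto a noun stem -- pure lookup."""
    return stem + CASE_ENDINGS[case]

def build_chain(words):
    """Record bigram successors: which word has been seen after which."""
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def babble(chain, start, n):
    """Random-walk the chain to emit n words of surface-plausible text."""
    out = [start]
    while len(out) < n and chain.get(out[-1]):
        out.append(random.choice(chain[out[-1]]))
    return " ".join(out)

corpus = "ya chitayu knigu i ya pishu knigu".split()
chain = build_chain(corpus)
print(inflect("knig", "accusative"))  # -> knigu
print(babble(chain, "ya", 6))         # e.g. -> ya pishu knigu i ya chitayu
```

Everything plausible about the output comes from the statistics of the corpus; the program itself knows nothing, which is the point.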
It seems clear that in the natural world some form of consciousness and spontaneous problem-solving behavior preceded language. There are animals from several taxa with excellent problem-solving skills (other primates; bears, which arguably even have memes; crows; even octopuses) but without language. Leaving aside the hard problem of consciousness, we can at least measure behavior.
It's perhaps understandable that we're approaching this question bass-ackwards: in the natural world, language developed very late, on top of and in the context of self-replication capabilities, whereas our toolkit is a class of entities which began as symbolic, syntactic rule-followers. But for more fruitful attempts at AI, we would do better to focus on building generalized problem-solvers and to forget about chatbots. I suspect Turing would agree. That approach is the one most likely to make the question of whether a machine can think seem as moot as the question of whether a submarine can swim.