22 June 2004

Animal magic

We've all read the recent news about the border collie who had a 200-word vocabulary and could acquire new words at a rate equal to that of a three-year-old child.

And remember the parrot with a 950-word vocabulary or the intelligent crow who could fashion an ad hoc tool to retrieve food from a container?

Or, geez, the pop-culturally overloaded meme of Koko the gorilla communicating with everybody and anybody in a modified form of American Sign Language.

What's the impact of all of these stories? They bring up interesting points from several domains of knowledge.

As much as the god-like Noam Chomsky says that there's a language organ, there are many forms that language can take. Seeing language in the animal kingdom (or even in insects) does not mean we need a reevaluation of all things human.

Chomsky divides all formal languages into a hierarchy of four grammars, labeled type 0 (zero) through type 3, with type 0 being the most expressive and type 3 the least. Each higher-numbered grammar is contained in (can be expressed by) the one before it. The book Speech and Language Processing explains that "What is perhaps not intuitively obvious is that the decrease in the generative power of languages from the most powerful to the weakest can be accomplished merely by placing constraints on the way the grammar rules are allowed to be written." Basically, if you constrain the rules one way you end up with regular expressions (type 3), another way and you get computer languages (type 2), and eventually you get various natural languages (types 1 and 0). All this from just changing the way the grammar rules are written.
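To make that a little more concrete, here's a toy sketch in Python (my own illustration, not from any of the sources above). A regular expression (type 3) can recognize a repeating pattern like (ab)*, but it can't count, so a language like a^n b^n needs at least a context-free grammar (type 2), here recognized by a tiny recursive function for the grammar S -> a S b | ε:

```python
import re

# Type 3 (regular): the pattern (ab)* is recognizable by a finite-state
# machine, i.e. a classic regular expression.
regular = re.compile(r"^(ab)*$")

# Type 2 (context-free): a^n b^n requires matching unbounded counts of a's
# and b's, which no regular grammar can do. A recursive recognizer for
# the grammar S -> a S b | empty:
def is_anbn(s: str) -> bool:
    if s == "":
        return True
    return s.startswith("a") and s.endswith("b") and is_anbn(s[1:-1])

print(bool(regular.match("ababab")))  # True: in (ab)*
print(is_anbn("aaabbb"))              # True: in a^n b^n
print(is_anbn("aabbb"))               # False: counts don't match
```

Same alphabet, same kind of rules; only the constraints on how the rules may be written differ, and the second language is strictly out of reach of the first machine.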

These grammars are generally used to describe human and computer languages but are equally relevant for other abstract expressions. They also have relevance when discussing consciousness and intentionality. In his book Symbols, Computation, and Intentionality, Steven W. Horst says that "The characteristic feature of intentional states [intentionality] is that they are about something or directed towards something." Intentionality is a primary aspect of consciousness and of the computational theories that would describe consciousness. The main argument of Horst's book is that a computational model of mind is not sufficient to describe human consciousness. Although he doesn't address emergent systems described by bottom-up AI (e.g. neural networks), his argument suggests that formal grammars used in top-down AI, regardless of their expressive powers, cannot produce consciousness.

Self-consciousness in animals is an open question. Although the mirror test has been used to suggest that great apes and dolphins may be aware of their existence, some argue that they're not aware that they're aware. Anthony O'Hear points out that a conscious animal "might be a knower...but only a self-conscious being knows that he is a knower." More conservative critics say that "[the mirror test] does not really demonstrate self-concept but only mirror recognition, which may be a different quality." It almost seems that we're back to square one.

So, how important are basic language and tool-making skills in animals? Those types of symbol manipulation seem to exist in animals, but more on a continuum than as part of some binary language organ. Like formal grammars, mental representation systems may come in forms able to express simple abstractions, such as limited tool-making, or richer abstractions, such as mirror recognition and sign language. The fact that a smaller brain can perform a small subset of human tasks shouldn't be so surprising. The difficulty is finding the limits and parameters of those tasks.

[ 20 August 2008 ]

Minor point of additional interest: magpies are self-aware.

[ posted by sstrader on 22 June 2004 at 11:38:21 PM in Science & Technology ]