The fact that it is possible for us to express so much that other species cannot has to do with the way our language is constructed. We are not limited to a specific utterance to express a given meaning, though there are utterances that do this: "Ow," for example. Nor are we limited to a number of stock expressions, though we have those, too, such as "How do you do?" Our language is not made up simply of a set of expressions like these, wrenched from us or trotted out for social purposes. Rather, it is constructed of a fairly small set of sounds, known as phonemes (about 40 in English), that most often have no meaning in themselves: nnn, eee. These we string together one after the other to form meaningful bits and pieces: morphemes (ex-, -ism) and words (needle). The number of words made from phonemes is large—about 600,000 in English—but finite.
Except for a small set of words exhibiting onomatopoeia (meow, clang), the strings of sounds making up our words and parts of words bear no necessary or logical relation to their meaning. Think for a moment of the way English refers to one of our favorite pets, the dog. Dog. French calls it chien, and in German it is Hund. These words sound entirely different from one another, yet they all mean the same thing. There cannot then be anything necessarily "dogish" about any of these words. It's just that members of each language community, for historical reasons, agree that the word they use refers to that animal and nothing else. The fact that the sounds of words are not logically or necessarily tied to their meanings, a characteristic often referred to as the arbitrariness of the sign, is one of the reasons for the flexibility of human language. It can express virtually anything speakers want or need to express.
Before we look any further at the structure of language, let us consider for a moment one of the words I have just been using, a word we use all the time without giving it any thought because the concept it expresses is so obvious. Or is it? The word I mean (there I go using it again) is mean, or meaning. The question of the meaning of meaning is a profound one that has been with us for a long time. But even what seems to us most simple and obvious may in fact be quite complex, defying easy definition. Sound sequences may be associated arbitrarily with meanings, but the two are bound together in the linguistic structures we create in a way that is anything but simple, and the way in which meaning is expressed by language is anything but obvious.
In order to express meaning linguistically, once we have organized the sound sequences of a language into words, we must arrange them, again in sequence, into sentences. Sentences tend to express complete thoughts. (Here is another word whose meaning I have just assumed. Take a moment, if you will, to define sentence for yourself, without simply reciting what you may have been taught in school. Does a sentence express a complete thought? Must it do so explicitly in order to be a sentence? What about expressions like "Leave!" or "No!"?)
Although the number of words at any time in a given language is large, it is, as I said, finite. Yet the number of sentences that can be constructed out of these words is infinite. That's why "I learned a new word today" is a reasonable thing to say, but you would get some strange looks if you said "I learned a new sentence today." An unabridged dictionary will give you some idea of the number of words in English at the time the dictionary was compiled (words are being dropped from and added to the language all the time), but see what happens when you try to count the number of sentences you can make out of a simple one such as "Amy likes Stan":
I think that Amy likes Stan.
You know that I think that Amy likes Stan.
I think that you know that I think that Amy likes Stan.
You're sure that I think that you know that I think that Amy likes Stan.
You will soon see the impossibility of the task. (I know that you will soon see the impossibility of the task. You realize that I know that you will soon see the impossibility of the task . . .).
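The runaway pattern in these sentences can be sketched as a tiny recursive procedure. The function below is purely illustrative (its name and the particular "frames" are my own, not anything from the text): each pass wraps the previous sentence in a new frame, and nothing in the procedure ever forces it to stop.

```python
def embed(sentence, frames, depth):
    """Wrap `sentence` in `depth` alternating frames like 'I think that ...'.

    Each iteration prepends the next frame, so the innermost clause ends up
    deepest inside the sentence; there is no limit on `depth`.
    """
    for i in range(depth):
        sentence = frames[i % len(frames)] + " " + sentence
    # Capitalize the first word of whatever frame ended up outermost.
    return sentence[0].upper() + sentence[1:]

frames = ["I think that", "you know that"]
print(embed("Amy likes Stan.", frames, 0))  # Amy likes Stan.
print(embed("Amy likes Stan.", frames, 1))  # I think that Amy likes Stan.
print(embed("Amy likes Stan.", frames, 3))  # I think that you know that I think that Amy likes Stan.
```

Because the same rule applies to its own output, a finite stock of words and frames yields an unbounded set of sentences.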
The interesting property of human language embodied in these sentences is called recursion. It is this property of allowing one sentence to be contained, or embedded, within another that causes the number of sentences in any language to be infinite—for all languages are capable of such embedding. Embedding turns up in nursery games; perhaps you remember "the house that Jack built." It begins with a quite ordinary sentence:
This is the house that Jack built.
What you undoubtedly never stopped to notice if you, as a child, uttered this sentence was that it expresses two different ideas about the house, two separate thoughts, really:
This is the house.
Jack built this house.
The single sentence "This is the house that Jack built" is in fact a compression of these two separate thoughts into one sentence. So far, it looks easy. Then the sentence is expanded, including yet another idea, to
This is the cheese that lay in the house that Jack built
and then to another
This is the rat that ate the cheese that lay in the house that Jack built
and still another
This is the cat that chased the rat that ate the cheese that lay in the house that Jack built
By the time the game ends, memory is taxed: The players are hard put to remember all the steps. They are aided, of course, by the fact that a story is implicit in these steps, a story that follows a logical progression. It turns out not to be so difficult after all.
But when sentences with multiple embeddings are produced that are not based on a simple story pattern, what happens?
The dog chased the boy.
The boy the dog chased got lost.
The boy the dog the man trained chased got lost.
The boy the dog the man the book belonged to trained chased got lost.
How long did it take you to get lost? Even as adults, we soon do. Why? Because such a sentence is too involved to follow, let alone remember. Therefore, we don't usually create sentences like these. It's hard to follow them, to keep track of what's happening, and once you've gotten some distance into one of them, it's hard to keep in mind all of its earlier components. "Keeping track" is a part of the processing function of the brain, as "keeping in mind" is the function of its memory capability. But the fact is we can create such sentences, even though our processing and memory limitations lead us to avoid them. This property of language – that sentences can readily be placed one inside the other and that the process can in principle be infinitely continued – is characteristic of all human languages. Like the arbitrariness of the sign, it is another of the properties that make our sort of language unique. These properties lead us to puzzle over and marvel at the nature of a brain that is capable of producing such sentences but then, because of its limitations in other domains, has difficulty in managing them. As we are led to considerations like this, we see how it is that linguistics relates to other areas of cognitive science – in this particular case, cognitive psychology and neuroscience, areas we examined in Parts 1 and 2.
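The center-embedded sentences above follow a mechanical pattern: the nouns stack up from the outside in, and then their verbs come back out in reverse order. The little sketch below (again, my own illustration, not anything from the text) generates such sentences at any depth, which is exactly why they are grammatical in principle yet quickly overwhelm our processing and memory.

```python
def center_embed(nouns, verbs, tail):
    """Build a center-embedded sentence.

    `nouns` are listed outside-in ("the boy", then "the dog" that chased him,
    then "the man" who trained the dog, ...); `verbs[i]` belongs to
    `nouns[i + 1]`, so the verbs must be emitted in reverse (inside-out) order.
    """
    words = nouns + list(reversed(verbs)) + [tail]
    sentence = " ".join(words)
    return sentence[0].upper() + sentence[1:] + "."

print(center_embed(["the boy", "the dog"], ["chased"], "got lost"))
# The boy the dog chased got lost.
print(center_embed(["the boy", "the dog", "the man"],
                   ["chased", "trained"], "got lost"))
# The boy the dog the man trained chased got lost.
```

The rule itself is trivial for a few lines of code to apply at any depth; it is the human reader, not the grammar, that gives out after two or three embeddings.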
The Rules of Language
I have pointed out some of the building blocks of language: sounds, meaningless in themselves, that combine in a linear order to make meaningful units (prefixes, suffixes, words); combinations of words, also in a linear order, making sentences; sentences that fit inside one another, making the set of possible sentences infinite. But these building blocks by themselves are not enough to constitute language. How do we know the order in which to put them? How do we know that one sound is English (mm, as in me) and another is not (that French r, for example, that we have such difficulty pronouncing)? How do we know that one combination of sounds is English (tr, as in track, for instance), and another is not? (Try nl. Is nlack an English word? Could it be? Why not?) How do we know that one ordering of words is a "real" sentence ("I think that Amy likes Stan") and another, using the very same words, is not ("Think likes I Stan that Amy")?