3.8: Language

    Our vast intelligence also allows us to have Language, a system of communication that uses symbols in a regular way to create meaning. Language gives us the ability to communicate our intelligence to others by talking, reading, and writing. Although other species have at least some ability to communicate, none of them have language. There are many components of language that will now be reviewed.

    Components of Language

    Figure 3.17

    Phoneme: A phoneme is the smallest unit of sound that makes a meaningful difference in a language. The word “bit” has three phonemes. In spoken languages, phonemes are produced by the positions and movements of the vocal tract, including our lips, teeth, tongue, vocal cords, and throat, whereas in sign languages phonemes are defined by the shapes and movement of the hands.

    There are hundreds of unique phonemes that human speakers can produce, but most languages use only a small subset of the possibilities. English contains about 45 phonemes, whereas some languages have as few as 15 and others more than 60. The Hawaiian language contains relatively few phonemes because it includes only 5 vowels (a, e, i, o, and u) and 7 consonants (h, k, l, m, n, p, and w).
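
    To make these counts concrete, here is a minimal Python sketch. It is a toy illustration only: the mini pronunciation dictionary is hypothetical, and the entries simply restate the three phonemes of "bit" and the Hawaiian inventory listed above.

        # Toy illustration of phoneme counts; not a real phonological analysis.
        # The "bit" entry and the Hawaiian inventory come from the text above;
        # the mini pronunciation dictionary itself is hypothetical.
        phonemes = {"bit": ["b", "i", "t"]}

        hawaiian_vowels = ["a", "e", "i", "o", "u"]                # 5 vowels
        hawaiian_consonants = ["h", "k", "l", "m", "n", "p", "w"]  # 7 consonants

        print(len(phonemes["bit"]))                             # 3 phonemes in "bit"
        print(len(hawaiian_vowels) + len(hawaiian_consonants))  # 12 Hawaiian phonemes
        # English, by comparison, has roughly 45 phonemes.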

    Infants are born able to detect all phonemes, but they lose this ability as they get older; by 10 months of age a child’s ability to recognize phonemes becomes very similar to that of adult speakers of the child’s native language. Phonemes that were initially differentiated come to be treated as equivalent (Werker & Tees, 2002).

    Morpheme: Whereas phonemes are the smallest units of sound in language, a morpheme is a string of one or more phonemes that makes up the smallest units of meaning in a language. Some morphemes are prefixes and suffixes used to modify other words. For example, the syllable “re-” as in “rewrite” or “repay” means “to do again,” and the suffix “-est” as in “happiest” or “coolest” means “to the maximum.”

    Semantics: Semantics refers to the set of rules we use to obtain meaning from morphemes. For example, adding “ed” to the end of a verb makes it past tense.
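
    As a rough sketch of how morphemes combine under such rules, the toy Python functions below attach the affixes mentioned above to a stem. They are deliberately simplified and ignore the spelling and sound changes of real English (e.g., "happy" becomes "happiest," not "happyest").

        # Toy morphology rules; deliberately simplified. Real English involves
        # spelling and sound changes that these string operations ignore.

        def add_re(stem):
            """Prefix 're-': 'to do again' (write -> rewrite)."""
            return "re" + stem

        def add_est(stem):
            """Suffix '-est': 'to the maximum' (cool -> coolest)."""
            return stem + "est"

        def add_ed(stem):
            """Suffix '-ed': regular past tense (walk -> walked)."""
            return stem + "ed"

        print(add_re("write"), add_est("cool"), add_ed("walk"))  # rewrite coolest walked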

    Syntax: Syntax is the set of rules of a language by which we construct sentences. Each language has a different syntax. The syntax of the English language requires that each sentence have a noun and a verb, each of which may be modified by adjectives and adverbs. Some syntaxes make use of the order in which words appear. For example, in English “The man bites the dog” is different from “The dog bites the man.”
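
    A minimal sketch can show how English word order carries meaning. The toy function below is not a real parser; it simply assumes the fixed pattern "The X verbs the Y" and assigns roles by position, which is enough to show that reversing the order reverses who does what.

        # Toy illustration: English signals grammatical role by word order.
        # Assumes the fixed pattern "The <subject> <verb> the <object>"; not a parser.

        def roles(sentence):
            words = sentence.lower().rstrip(".").split()
            return {"subject": words[1], "verb": words[2], "object": words[4]}

        print(roles("The man bites the dog"))  # {'subject': 'man', 'verb': 'bites', 'object': 'dog'}
        print(roles("The dog bites the man"))  # {'subject': 'dog', 'verb': 'bites', 'object': 'man'}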

    Pragmatics: The social side of language is expressed through pragmatics, or how we communicate effectively and appropriately with others. Examples of pragmatics include turn-taking, staying on topic, volume and tone of voice, and appropriate eye contact.

    Lastly, words do not possess fixed meanings but change their interpretation as a function of the context in which they are spoken. We use contextual information, the information surrounding language, to help us interpret it. Examples of contextual information include our knowledge and nonverbal expressions such as facial expressions, postures, and gestures. Misunderstandings can easily arise if people are not attentive to contextual information or if some of it is missing, as it may be in newspaper headlines or in text messages.

    Language Developmental Progression

    An important aspect of cognitive development is language acquisition. The order in which children learn language structures is consistent across children and cultures (Hatch, 1983). Starting before birth, babies begin to develop language and communication skills. At birth, babies recognize their mother’s voice and can discriminate between the language(s) spoken by their mothers and foreign languages, and they show preferences for faces that are moving in synchrony with audible language (Blossom & Morgan, 2006; Pickens et al., 1994; Spelke & Cortelyou, 1981).

    Figure 3.18. Source.

    Do newborns communicate? Of course they do. They do not, however, communicate with the use of oral language. Instead, they communicate their thoughts and needs with body posture (being relaxed or still), gestures, cries, and facial expressions. A person who spends adequate time with an infant can learn which cries indicate pain and which ones indicate hunger, discomfort, or frustration.

    Intentional Vocalizations: In terms of producing spoken language, babies begin to coo almost immediately. Cooing is a one-syllable combination of a consonant and a vowel sound (e.g., coo or ba). Interestingly, babies replicate sounds from their own languages. A baby whose parents speak French will coo in a different tone than a baby whose parents speak Spanish or Urdu. These gurgling, musical vocalizations can serve as a source of entertainment to an infant who has been laid down for a nap or seated in a carrier on a car ride. Cooing serves as practice for vocalization; the infant also hears the sound of his or her own voice and tries to repeat sounds that are entertaining. Infants also begin to learn the pace and pause of conversation as they alternate their vocalization with that of someone else and then take their turn again when the other person’s vocalization has stopped.

    At about four to six months of age, infants begin making even more elaborate vocalizations that include the sounds required for any language. Guttural sounds, clicks, consonants, and vowel sounds stand ready to equip the child with the ability to repeat whatever sounds are characteristic of the language heard. Eventually, these sounds will no longer be used as the infant grows more accustomed to a particular language.

    At about 7 months, infants begin Babbling, engaging in intentional vocalizations that lack specific meaning and consist of a repeated consonant-vowel sequence, such as ma-ma-ma or da-da-da. Children babble as practice in creating specific sounds, and by the time they are 1 year old, their babbling uses primarily the sounds of the language that they are learning (de Boysson-Bardies, Sagart, & Durand, 1984). These vocalizations have a conversational tone that sounds meaningful even though it isn’t. Babbling also helps children understand the social, communicative function of language. Children who are exposed to sign language babble in sign by making hand movements that represent real language (Petitto & Marentette, 1991).
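
    The structure of babbling, a repeated consonant-vowel syllable, can be sketched in a couple of lines of Python (purely illustrative):

        # Toy illustration of babbling as a repeated consonant-vowel (CV) syllable.

        def babble(consonant, vowel, repeats=3):
            return "-".join([consonant + vowel] * repeats)

        print(babble("m", "a"))  # ma-ma-ma
        print(babble("d", "a"))  # da-da-da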

    Gesturing: Children communicate information through gesturing long before they speak, and there is some evidence that gesture usage predicts subsequent language development (Iverson & Goldin-Meadow, 2005). Deaf babies also use gestures to communicate wants, reactions, and feelings. Because gesturing seems to be easier than vocalization for some toddlers, sign language is sometimes taught to take advantage of this ease and enhance the child’s ability to communicate. The rhythm and pattern of language is present when deaf babies sign just as it is when hearing babies babble.

    Understanding: At around ten months of age, the infant can understand more than he or she can say, which is referred to as receptive language. You may have experienced this phenomenon as well if you have ever tried to learn a second language. You may have been able to follow a conversation more easily than contribute to it. One of the first words that children understand is their own name, usually by about 6 months, followed by commonly used words like “bottle,” “mama,” and “doggie” by 10 to 12 months (Mandel, Jusczyk, & Pisoni, 1995). Infants shake their head “no” around 6–9 months, and they respond to verbal requests to do things like “wave bye-bye” or “blow a kiss” around 9–12 months. Children also use contextual information, particularly the cues that parents provide, to help them learn language. Children learn that people are usually referring to things that they are looking at when they are speaking (Baldwin, 1993), and that the speaker’s emotional expressions are related to the content of their speech.

    Holophrasic Speech: Children begin using their first words at about 12 or 13 months of age and may use partial words to convey thoughts at even younger ages. These one-word expressions are referred to as Holophrasic Speech. For example, the child may say “ju” for the word “juice” and use this sound when referring to a bottle. The listener must interpret the meaning of the holophrase, and when this is someone who has spent time with the child, interpretation is not too difficult. But someone who has not been around the child will have trouble knowing what is meant. Imagine the parent who exclaims to a friend, “Ezra’s talking all the time now!” The friend hears only “ju da ga,” which the parent explains means, “I want some milk when I go with Daddy.”

    Language Errors: The early utterances of children contain many errors, for instance, confusing /b/ and /d/, or /c/ and /z/. The words children create are often simplified, in part because they are not yet able to make the more complex sounds of the real language (Dobrich & Scarborough, 1992). Children may say “keekee” for kitty, “nana” for banana, and “vesketti” for spaghetti because it is easier. Often these early words are accompanied by gestures that may also be easier to produce than the words themselves. Children’s pronunciations become increasingly accurate between 1 and 3 years, but some problems may persist until school age.

    A child who learns that a word stands for an object may initially think that the word can be used for only that particular object, which is referred to as Underextension. Only the family’s Irish Setter is a “doggie”, for example. More often, however, a child may think that a label applies to all objects that are similar to the original object, which is called Overextension. For example, all animals become “doggies”.

    First words and cultural influences: If the child is learning English, first words tend to be nouns. The child labels objects such as a cup, a ball, or other items that they regularly interact with. In a verb-friendly language such as Chinese, however, children may learn more verbs. This may also be due to the different emphasis given to objects based on culture. Chinese children may be taught to notice action and relationships between objects, while children from the United States may be taught to name an object and its qualities (color, texture, size, etc.). These differences can be seen when comparing interpretations of art by older students from China and the United States.

    Two word sentences and telegraphic (text message) speech: By the time they become toddlers, children have a vocabulary of about 50-200 words and begin putting those words together in telegraphic speech, such as “baby bye-bye” or “doggie pretty”. The words needed to convey the message are used, but the articles and other parts of speech necessary for grammatical correctness are not. These expressions sound like a telegram or, perhaps a better analogy today, read like a text message. Telegraphic Speech/Text Message Speech occurs when grammatically unnecessary words are omitted. “Give baby ball” is used rather than “Give the baby the ball.”
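
    As a rough sketch of this pattern, the Python snippet below drops a short, hypothetical list of function words and keeps the content words, which is essentially what telegraphic speech does:

        # Toy illustration of telegraphic speech: keep content words, drop the
        # grammatical "glue." The function-word list is a short, hypothetical sample.

        FUNCTION_WORDS = {"the", "a", "an", "to", "is", "are"}

        def telegraphic(sentence):
            words = sentence.lower().rstrip(".").split()
            return " ".join(w for w in words if w not in FUNCTION_WORDS)

        print(telegraphic("Give the baby the ball"))  # give baby ball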

    Infant-directed Speech: Why is a horse a “horsie”? Have you ever wondered why adults tend to use “baby talk,” that sing-song intonation and exaggeration used when talking to children? This represents a universal tendency and is known as Infant-directed Speech. It involves exaggerating the vowel and consonant sounds, using a high-pitched voice, and delivering the phrase with great facial expression (Clark, 2009). Infants are frequently more attuned to the tone of voice of the person speaking than to the content of the words themselves, and they are aware of the target of speech. Werker, Pegg, and McLeod (1994) found that infants listened longer to a woman who was speaking to a baby than to a woman who was speaking to another adult. Why do adults speak this way? It may be in order to clearly articulate the sounds of a word so that the child can hear the sounds involved. It may also be because this type of speech holds the infant’s attention and sets up a pattern of interaction in which the speaker and listener are in tune with one another.

    Theories of Language Development

    Psychological theories of language learning differ in terms of the importance they place on nature and nurture. Remember that we are a product of both nature and nurture. Researchers now believe that language acquisition is partially inborn and partially learned through our interactions with our linguistic environment (Gleitman & Newport, 1995; Stork & Widdowson, 1974).

    Learning Theory: Perhaps the most straightforward explanation of language development is that it occurs through the principles of learning, including association and reinforcement (Skinner, 1953). Additionally, Bandura (1977) described the importance of observation and imitation of others in learning language. There must be at least some truth to the idea that language is learned through environmental interactions or nurture. Children learn the language that they hear spoken around them rather than some other language. Also supporting this idea is the gradual improvement of language skills with time. It seems that children modify their language through imitation and reinforcement, such as parental praise and being understood. For example, when a two-year-old child asks for juice, he might say, “me juice,” to which his mother might respond by giving him a cup of apple juice.

    However, language cannot be entirely learned. For one, children learn words too fast for them to be learned through reinforcement. Between the ages of 18 months and 5 years, children learn up to 10 new words every day (Anglin, 1993). More importantly, language is more generative than it is imitative. Language is not a predefined set of ideas and sentences that we choose when we need them, but rather a system of rules and procedures that allows us to create an infinite number of statements, thoughts, and ideas, including those that have never previously occurred. When a child says that she “swimmed” in the pool, for instance, she is showing generativity. No adult speaker of English would ever say “swimmed,” yet it is easily generated from the normal system of producing language.
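
    The “swimmed” example can be made concrete with a toy sketch: apply the regular “-ed” rule to every verb, the way a young child over-applying the rule might, and compare the result with adult forms (the small irregular-verb list here is just an illustrative sample).

        # Toy illustration of generativity and overregularization: blindly applying
        # the regular "-ed" rule yields forms like "swimmed" that no adult models.

        IRREGULAR_PAST = {"swim": "swam", "go": "went", "run": "ran"}  # small sample

        def regular_past(verb):
            return verb + "ed"

        for verb in ["walk", "play", "swim", "go"]:
            child_form = regular_past(verb)                     # child's rule-generated form
            adult_form = IRREGULAR_PAST.get(verb, child_form)   # adult usage
            print(verb, "->", child_form, "(child),", adult_form, "(adult)")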

    Other evidence that refutes the idea that all language is learned through experience comes from the observation that children may learn languages better than they ever hear them. Deaf children whose parents do not speak ASL very well nevertheless are able to learn it perfectly on their own, and may even make up their own language if they need to (Goldin-Meadow & Mylander, 1998). A group of deaf children in a school in Nicaragua, whose teachers could not sign, invented a way to communicate through made-up signs (Senghas, Senghas, & Pyers, 2005). The development of this new Nicaraguan Sign Language has continued and changed as new generations of students have come to the school and started using the language. Although the original system was not a real language, it is becoming closer and closer every year, showing the development of a new language in modern times.

    Figure 3.19: Three theorists who provide explanations for language development. From left: B.F. Skinner, Source; Albert Bandura, Source; Noam Chomsky, Source.

    Chomsky and Nativism: The linguist Noam Chomsky is a believer in the nature approach to language, arguing that human brains contain a Language Acquisition Device that includes a universal grammar underlying all human language (Chomsky, 1965, 1972). According to this approach, each of the many languages spoken around the world (there are between 6,000 and 8,000) is an individual example of the same underlying set of procedures that are hardwired into human brains. Chomsky’s account proposes that children are born with a knowledge of general rules of syntax that determine how sentences are constructed. Language develops as long as the infant is exposed to it; no teaching, training, or reinforcement of the kind Skinner proposed is required.

    Chomsky differentiates between the deep structure of an idea (how the idea is represented in the fundamental universal grammar that is common to all languages) and the surface structure of the idea (how it is expressed in any one language). Once we hear or express a thought in surface structure, we generally forget exactly how it happened. At the end of a lecture, you will remember a lot of the deep structure (i.e., the ideas expressed by the instructor), but you cannot reproduce the surface structure (the exact words that the instructor used to communicate the ideas).

    Although there is general agreement among psychologists that babies are genetically programmed to learn language, there is still debate about Chomsky’s idea that there is a universal grammar that can account for all language learning. Evans and Levinson (2009) surveyed the world’s languages and found that none of the presumed underlying features of the language acquisition device were entirely universal. In their search they found languages that did not have noun or verb phrases, that did not have tenses (e.g., past, present, future), and even some that did not have nouns or verbs at all, even though a basic assumption of a universal grammar is that all languages should share these features.

    Critical Periods: Anyone who has tried to master a second language as an adult knows the difficulty of language learning. Yet children learn languages easily and naturally. Children who are not exposed to language early in their lives will likely never learn one. Case studies, including Victor the “Wild Child,” who was abandoned as a baby in France and not discovered until he was 12, and Genie, a child whose parents kept her locked in a closet from 18 months until 13 years of age, are (fortunately) two of the only known examples of these deprived children. Both of these children made some progress in socialization after they were rescued, but neither of them ever developed language (Rymer, 1993). This is also why it is important to determine quickly if a child is deaf, and to communicate in sign language immediately. Deaf children who are not exposed to sign language during their early years will likely never learn it (Mayberry, Lock, & Kazmi, 2002). The concept of critical periods highlights the importance of both nature and nurture for language development.

    Figure 3.20: Victor of Aveyron. Source.

    Social pragmatics: Another view emphasizes the very social nature of human language. Language from this view is not only a cognitive skill, but also a social one. Language is a tool humans use to communicate, connect to, influence, and inform others. Most of all, language comes out of a need to cooperate. The social nature of language has been demonstrated by a number of studies that have shown that children use several pre-linguistic skills (such as pointing and other gestures) to communicate not only their own needs, but what others may need. So a child watching her mother search for an object may point to the object to help her mother find it.

    Eighteen- to 30-month-olds have been shown to make linguistic repairs when it is clear that another person does not understand them (Grosse, Behne, Carpenter & Tomasello, 2010). Grosse et al. (2010) found that even when the child was given the desired object, if there had been any misunderstanding along the way (such as a delay in being handed the object, or the experimenter calling the object by the wrong name), children would make linguistic repairs. This would suggest that children are using language not only as a means of achieving some material goal, but to make themselves understood in the mind of another person.

    Figure 3.21: Drawing of Brain Showing Broca’s and Wernicke’s Areas. For most people, the left hemisphere is specialized for language. Broca’s area, near the motor cortex, is involved in language production, whereas Wernicke’s area, near the auditory cortex, is specialized for language comprehension. Source.

    Brain Areas for Language: For the 90% of people who are right-handed, language is stored and controlled by the left cerebral cortex, although for some left-handers this pattern is reversed. These differences can easily be seen in the results of neuroimaging studies that show that listening to and producing language creates greater activity in the left hemisphere than in the right. Broca’s area, an area in the front of the left hemisphere near the motor cortex, is responsible for language production (Figure 3.21). This area was first localized in the 1860s by the French physician Paul Broca, who studied patients with lesions to various parts of the brain. Wernicke’s area, an area of the brain next to the auditory cortex, is responsible for language comprehension.


    This page titled 3.8: Language is shared under a CC BY-NC-SA 3.0 license and was authored, remixed, and/or curated by Martha Lally and Suzanne Valentine-French via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.