9.2: Phonemic contrast



There’s another part of the mental grammar that hearing babies start to learn well before they can speak. Remember from a previous chapter that the phonology of each language is specific to that language: the patterns of which features and segments contrast with each other and which are simply allophones are different in each language of the world. So, for example, we know that in English, aspirated [pʰ] and unaspirated [p] are both allophones of a single phoneme. But in Thai, these two segments contrast with each other and are two different phonemes. The phonetic difference is the same, but how that difference is organized in the mental grammar is different in the two languages. Phoneme contrasts are a classic example of unconscious linguistic knowledge: in all likelihood, nobody ever had to teach you that [k] and [b] are different sounds and that the words cat and bat refer to two different animals! But if no one ever taught it to you, how did you learn it? How did the phonology of your first language come to be in your mental grammar? We saw one kind of habituation technique in the previous unit. Researchers use a different habituation method for slightly older babies.

    How to be a linguist: Observing conditioned head turns

    This method works on the same kind of logic as the high-amplitude sucking method, but instead of measuring sucking strength, the researchers observe where the child looks. When the sound first starts to play from a speaker, the baby usually looks towards the speaker. Once they habituate to that sound, they get bored and look away. If the next sound that’s played is the same as the first one, they stay habituated. But if the next sound is different and they notice the difference, they look back towards the speaker. So just like with the sucking method, if we observe that head-turn, we conclude that they noticed a difference between the two sounds.
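The inference behind a single trial can be sketched as a toy scoring rule. This is hypothetical illustration code, not any lab’s actual scoring software; the function names and the trial record format are invented for the sketch:

```python
# Toy sketch of the inference behind the conditioned head-turn method.
# (Hypothetical scoring rule for illustration; not actual lab software.)
def trial_shows_discrimination(habituated_sound, new_sound, looked_back):
    """A trial is evidence of discrimination only when the sound changed
    AND the baby's attention returned to the speaker."""
    if new_sound == habituated_sound:
        return False  # same sound: staying habituated tells us nothing new
    return looked_back  # changed sound + renewed looking = noticed the change

def discrimination_rate(trials):
    """Proportion of change trials on which the baby turned back to look."""
    change_trials = [t for t in trials if t["new"] != t["habituated"]]
    hits = [t for t in change_trials
            if trial_shows_discrimination(t["habituated"], t["new"], t["looked_back"])]
    return len(hits) / len(change_trials)

trials = [
    {"habituated": "ta", "new": "ʈa", "looked_back": True},   # noticed the change
    {"habituated": "ta", "new": "ʈa", "looked_back": False},  # missed it
    {"habituated": "ta", "new": "ta", "looked_back": False},  # same-sound control
]
print(discrimination_rate(trials))  # 0.5
```

The key design point is that a look-back only counts as evidence on trials where the sound actually changed; same-sound control trials let researchers rule out random looking.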

Using this technique, linguists and psychologists have learned that babies are very good at noticing phonetic differences, and they can tell the difference between all kinds of different sounds from many different languages. But this ability changes within the first year of life. Janet Werker, at the University of British Columbia, looked at children’s and adults’ ability to notice the phonetic difference between three different pairs of syllables: the English contrast between /ba/ and /da/, the Hindi contrast between a retroflex stop /ʈa/ and a dental stop /t̪a/, and a Nłeʔkepmxcín contrast between glottalized velar /kʼi/ and uvular /qʼi/ stops (Werker & Tees, 1984). Each of these pairs differs in place of articulation, and within each language, each pair is contrastive. The researchers played a series of syllables and asked English-speaking adults to press a button when the syllables switched from one segment to the other. As you might expect, the English-speaking adults were perfect at the English contrast but did extremely poorly on the Hindi and Nłeʔkepmxcín contrasts.

Then Werker tested babies’ ability to notice these three phonetic differences, using the head-turn method. These babies were growing up in monolingual English-speaking homes. At age six months, the English-learning babies were about 80–90% successful at noticing the differences in English, in Hindi, and in Nłeʔkepmxcín. But by age ten months, their success rate on the Hindi and Nłeʔkepmxcín contrasts had dropped to about 50–60%, and by the time they were one year old, they were only about 10–20% successful at hearing those phonetic differences. So these kids are only one year old; they’ve been hearing English spoken for only one year, and they’re not even really speaking it themselves yet, but already their performance on this task matches that of English-speaking adults. The difference between retroflex [ʈa] and dental [t̪a] is not contrastive in English, so the mental grammar of the English-learning baby has already categorized both of those sounds as just unusual-sounding allophones of English alveolar /ta/. Likewise, the difference between a velar and a uvular stop, which is contrastive in Nłeʔkepmxcín, is not meaningful in English, so the baby’s mind has already learned to treat a uvular stop as an allophone of the velar stop, not as a separate phoneme.

    The research on babies acquiring signed language came later, but it showed exactly the same pattern (Baker Palmer et al., 2012). Babies younger than six months noticed the difference between phonemically contrastive ASL handshapes even without any exposure to ASL, just like the babies without Hindi experience noticed the Hindi spoken contrast. By age 14 months, ASL-acquiring babies had retained their ability to recognize phonemically contrastive handshapes, but the English-learning babies without ASL experience had lost that sensitivity.

    So for either sign language or vocal language, a child’s mind has built up phoneme categories in their mental grammar by about age one, according to the contrasts that they have experience with in their language environment. One thing to note here is that this doesn’t mean that it’s impossible to learn new phonemic contrasts in a new language, but that later learning will be shaped by the learning that has happened in the first year. More on that in the next chapter!



Baker Palmer, S., Fais, L., Golinkoff, R. M., & Werker, J. F. (2012). Perceptual narrowing of linguistic sign occurs in the 1st year of life. Child Development, 543–553.

    Werker, J. F., & Tees, R. C. (1984). Cross-language speech perception: Evidence for perceptual reorganization during the first year of life. Infant Behavior and Development, 7(1), 49–63.

    9.2: Child Language Acquisition Theories

    9.2.1 Child Language Acquisition Theories, from Sarah Harmon

    Video Script

When it comes to theories of how children learn their first language or languages, there has been an evolution in this area. We’re going to start off with some very common, older theories of how languages are acquired by children, but we’re also going to point out what’s wrong with them and how our thinking has evolved. It’s important to understand that this is an area we are still trying to figure out; we don’t have all the answers yet. We are definitely working with hypotheses and theories, but we don’t know for sure. Part of the reason is obvious: we don’t fully understand how the brain works yet; we’re still learning that too. Therefore, what we learn now could change—in fact, probably will change—over the course of your lifetime. It certainly has changed over the course of my lifetime.

To start off, let’s talk about some theories that used to be very popular and were once considered ‘absolute truth’; as it turns out, that’s not the case. These are all old and erroneous, in part if not in whole.

The big discussion has always been imitation versus reinforcement versus analogy. Imitation refers to the idea that children imitate language that they hear and just spit it back out. Well, yes and no; clearly, data from the last 50 years show there is something to the concept: children do hear things and try to emulate them to the best of their ability. Think of the [fɪs, fɪʃ] scenario; think of [knopf, nop]; they were trying. But that is not the only way that they learn language, because otherwise they wouldn’t make mistakes. Reinforcement is the theory that when you positively reinforce good behavior, the child will continue doing it, and that negative behavior is chastised or punished somehow. The problem with this theory is that children make all sorts of combinations with respect to their languages, and doubly so if they’re in a multilingual environment. It doesn’t matter how many times you tell a child that the plural of foot is not foots but feet; the child is not going to produce the correct form until some point in the future. Suffice it to say that it’s not just reinforcement; you can’t just reward good language behavior and castigate bad language behavior; it doesn’t have an effect. The same is true of analogy. People used to say that children acquired language by building everything via analogy: they just made the same kinds of mistakes and then learned to correct them at some point. Again, some of this is true; certainly, some of the errors frequently made by children with so-called irregular forms in morphology and syntax could be examples of analogy. But there are plenty of examples where that doesn’t happen, and we come back to the issue: analogy doesn’t describe most of what happens in child language acquisition. Conditioned response is a little like reinforcement; it’s the B. F. Skinner version of it. If you’ve taken psychology, you probably know the concept: a reaction to a stimulus, reinforcing the positive while chastising the negative. Again, we know that doesn’t work, to say nothing of the ethical issues that come with conditioned-response experiments.

    We know that none of these explain what happens with respect to children as they learn their first language or languages. The question becomes: What do we know?

There are four theories and hypotheses that we currently work with. While they may not cover everything with respect to child language acquisition, they seem to have a lot more in common than not.

The Innateness Hypothesis is the first one to talk about, both in general and chronologically. It’s also the first one that tapped into Chomsky’s Universal Grammar concept: the idea that we have an innate ability to speak a language, and that, so far, we have yet to observe exactly the same thing in any other animal on the planet. I’m hedging that hugely for a reason; when we get to the next chapter and talk about animal communication, I’ll explain a little bit more, but think about those hallmarks of language, because that’s what the Innateness Hypothesis comes back to. It also connects to what we talked about with respect to typology: the fact that there are universals, as well as strong tendencies and non-absolute universals. We see so many trends across languages that are very common, if not ubiquitous, so clearly there’s something there.

    From the Innateness Hypothesis, we get these other three theories, and they are additions or specifications of this innateness concept, tapping into Universal Grammar and this facility that we have with respect to language.

Active Construction of Grammar Theory holds that children actively invent rules as they go along. In other words, as they observe and absorb language, they actively and subconsciously create patterns: they think they see and/or hear a pattern, and then they build upon it. It’s mostly based on the observation that children imitate the other children they’re around, not the adults. By the way, this is connected to creolization, because again we get this concept of peers helping each other out. There have been numerous studies of how children pick up new terms, and most of the time it’s not from their adult caregivers; most of the time it’s from other children, sometimes siblings or cousins, maybe in a school or daycare situation. It’s as if they’re taking in input subconsciously, analyzing it subconsciously, hypothesizing what the rule will be, and then applying that to the mental lexicon. We do not have clear evidence of this; we can’t look inside the brain and watch it happen. But we definitely do notice that this is part of how children acquire new terminology and new phrasing: they get it mostly from peers. It certainly explains overgeneralization patterns, though frequently people chalked those up to analogy. If a child learning a given language, say English, is not around other children, there is a tendency for that acquisition to slow; the more children they’re around, the more they acquire and the quicker they acquire it. That being said, it’s really hard to observe this actually happening in the brain; this is just based on observation of children in lab and social settings.

Connectionist Theory holds that children learn by creating neural connections, based on exposure to a variety of stimuli; in this case, the stimulus is the language around them, so they learn associations and build from there, making connections. We see this in how they acquire other skills, whether it’s walking and other motor skills, building with blocks, or building a sand castle; they do this with other skills, and the thought is, okay, maybe they do this with language. There seems to be some evidence of this, and it taps into the overgeneralization aspect: children making connections based on prior knowledge, input, or stimulus. That being said, we have the same problem as with Active Construction: we can’t observe what’s going on between the ears, as it were. It’s hard to do that, so we don’t really know that that’s the case. But we suspect there’s something to it.

The last one is Social Interaction Theory, which is in some ways a combination of the previous two. It’s the concept that children acquire language through social interaction: the more they’re around peers in particular, but in this case also adults, the more input they have and the more connections they’re making. That social interaction also feeds their desire to speak more and to communicate in ways that are clear. There’s something to be said for this as well. In a number of cultures, especially European and American cultures, but even in parts of Asia and Africa, we do see that adults will slow down or simplify their language so that it is more comprehensible; the child can interact more because they’re not intimidated by people going a mile a minute with complex phraseology and lexicon. There’s something to that, because when you include people in a dialogue, you want to communicate not just ideas but also the feeling that you can communicate with me, because I understand you (or want to understand you) and you want to understand me. We slow things down for children, and that’s part of why children can learn from other children a little better. Again, we have the same problem of how to measure this.

    There's also the other interesting piece, and I’ll use one of my nephews as an example, my older nephew who's now eight. He is fascinated by big words, and has been his entire linguistic life, even as a four-year-old. He did not want to hear so called ‘child language’ or ‘baby language’; he wanted big words. He loved it when Auntie Sarah would start talking; several times, unbeknownst to some of my students, would be sitting here in my room, as I did my online lectures for linguistics, because he wanted to hear the big words. He tried to use the big words—he still tries to use the big words—but it doesn't always work. Now, as an eight-year-old, he's getting stronger with his reading, and he wants the books with big words in them; he wants to try and read them. There seems to be something with respect to this Social Interaction Theory; if you have a child around adults and children, they're going to want to interact with both at as close to level as possible. All three of those theories are built upon the Innateness Hypothesis.

    We don't know for sure, and it will probably be some time before we really can understand, how the brain works, especially with children. But there are certain things that we can observe, and that'll be in the next section, when we talk about the different stages of language acquisition. There is plenty that we can observe just by watching, but hopefully someday we can crack this thing and really get to it.

    11.4: Early language production


Figure 11.2. Crying Babies.

    We saw in the previous unit that babies are already learning a lot about their language even before they can talk. If they’re learning so much so fast, why don’t they talk right away? When babies are just born, their bodies just aren’t ready yet. Newborns can certainly cry, which uses the lungs and the vocal tract. But a newborn’s larynx is higher in the vocal tract than an adult’s: it starts lowering around age six months. And newborns just aren’t very good at controlling their bodies yet! It takes them a few months to begin to learn how to move their articulators. They begin to gain control of their fingers, hands and forearms first, and their ability to control their jaw, tongue and lips comes a bit later. So in those first few months after birth, their hand movements and vocal productions are kind of random: mostly wiggling and gurgling.


    Once babies start to get their bodies organized, their productions tell us that their mental grammars are starting to get organized, too. Starting around age six months, the sounds babies make begin to have some syllable structure. They start to produce reduplicated consonant-vowel syllables. Usually the vowel in these syllables is the low [a] made with the jaw fully open, and the consonants are usually voiced stops. Why voiced stops? They’re still learning to control their vocal tract, so it’s easier to alternate between fully open for vowels and fully obstructed for stops than to produce the consonants with partial obstruction like fricatives or approximants. Same thing for the voicing: if they just keep the vocal folds vibrating for vowels and voiced consonants, it’s easier than alternating between voiced vowels and voiceless consonants. So the classic baby babbling sound, which linguists call canonical babbling, is [babababa] or [dadadada]. As they continue to grow and gain control of their muscles, they start to vary the vowels and consonants, and their babbles might sound more like [badibadi] or [ɡudaɡuda].

    Vocal babbling starts at about age six months for both hearing and deaf babies, but deaf babies gradually produce less vocal babbling as they continue to grow. When the language environment is a signed language, babies start to babble using their hands. Both hearing and deaf babies who grow up with sign as an ambient language produce reduplicated syllables using handshapes from the language environment (Petitto et al., 2004; Petitto & Marentette, 1991). Sign-acquiring babies’ first productions more often use the proximal articulators (the ones closer to the torso, like the elbows and wrists) than the distal ones (the articulators farther from the torso like fingers and knuckles) (Chen Pichler, 2012).

    So we have some evidence that babbling isn’t just random sounds: whether the language environment is vocal or sign, babies start to produce forms that are organized similarly to the language in the environment:

    • Their babbles are made up of repeated patterns that have the structure of syllables.
    • They alternate between handshape and path movement or between a closed and open vocal tract.
    • Their babbles use a subset of the segments/handshapes that appear in the language environment.

    So when you see or hear a baby babbling, it might look or sound like random nonsense, but what they’re really doing is exercising their mental grammar!

    First Words

Figure 11.3. Baby Bottle.
Figure 11.4. Balls.

When babies babble, they’re practicing making the forms, that is, the signs or sounds of the language they’re acquiring. Remember that a word links a form with a meaning. So how can we tell if a child who produces a form has a meaning linked to that form? In other words, how can we tell if they’re babbling or producing words? We have to look at the context. If an English-acquiring child says [baba] when they’re reaching for their bottle or for a ball, then they’re probably using that form to refer, so it counts as a word even though it doesn’t have the same form that the adult word [bɑɾəl] or [bɑl] does. But if they’re producing [baba] just for the fun of making the sounds, then their utterance is non-referential: it doesn’t have a meaning, so it counts as babbling.

    Babies acquiring sign languages often sign their first words at about age 0;8 or 0;9, while it’s usually later than that for babies acquiring a spoken language, closer to age 1;0. This difference might be because babies develop muscle control of their hands and arms earlier than of their tongues and lips. It could also be because sign words more often have iconic forms than spoken words.

    When children start to produce and understand words, the first words in their vocabulary are quite similar even when we compare across languages and modalities. Usually the first word meanings they acquire are for referring to things that are common and observable in the immediate environment, like names for their family members and pets, the word baby, and words for common objects like milk, ball, shoe. It’s also common for their first words to include greetings like hi and bye, and other expressives like uh-oh and no. If there are verbs in their set of first words, the verbs are likely to refer to actions like cry or eat.

Figure 11.5. Pumpkin.

    Of course, because one-year-olds don’t have a lot of experience with the world yet, they often haven’t got adult-like meanings in their mental grammar. For example, I know a toddler who saw a pumpkin for the first time and declared, “Apple!”. If the child hasn’t yet got a mental representation for pumpkins, gourds and squashes, they might well overextend the meaning of apple to include many other roundish fruits. It’s also common for children’s word meanings to be underextended, so, for example, the word elephant might refer to a particular stuffed animal but not to any other elephants, real ones or toy ones. In short, children develop meanings for words based on their experience of encountering the word in their environment.

    Landau and Gleitman (1985) provide some very interesting examples of how children’s word meanings are shaped by their experience of the environment, from their research comparing blind children to sighted children. You might guess that a blind child doesn’t really have a concept for the verb look, since they can’t see, but that’s not actually the case. In one experiment, the researchers asked children to “look up!”. Sighted kids tilted their heads to face the ceiling, even if they were wearing a blindfold. But when they asked a blind child, whom they called Kelli, to “look up”, she kept facing forward and put her hands up toward the ceiling! So does that mean that Kelli’s meaning for the verb look is the same as for the verb touch? In the next experiment, the researchers put an object in front of Kelli and said, “You can touch this but don’t look at it.” She tapped or stroked the object, and then once they told her, “Now you can look at it”, she ran her hands all over the object to explore it. So just like sighted children, Kelli’s mental grammar had two distinct meanings for the two verbs look and touch. It’s just that her meaning for the verb look was different from that of sighted children, since her experience of the world was different.



    Chen Pichler, D. (2012). Acquisition. In R. Pfau, M. Steinbach, & B. Woll (Eds.), Sign language: An International Handbook (pp. 647–686). De Gruyter Mouton.

    Landau, B., & Gleitman, L. R. (1985). Language and Experience: Evidence from the Blind Child. Harvard University Press.

    Petitto, L. A., & Marentette, P. F. (1991). Babbling in the Manual Mode: Evidence for the Ontogeny of Language. Science, 251(5000), 1493–1496.

    Petitto, L. A., Holowka, S., Sergio, L. E., Levy, B., & Ostry, D. J. (2004). Baby hands that move to the rhythm of language: Hearing babies acquiring sign languages babble silently on the hands. Cognition, 93(1), 43–73.

    11.5: The Language environment and the so-called word gap

    One or more interactive elements has been excluded from this version of the text. You can view them online here:

    We’ve seen all kinds of examples of how important the language environment is for young language learners. Hearing babies start learning the patterns of speech sounds from the environment even before they’re born, and deaf babies start learning the patterns of sign language as soon as they get access to it in their environment. On a really simple, obvious level, the particular language that children acquire depends on the language that is in the environment. So it makes sense to wonder whether, for kids who are acquiring the same language, differences in their environments make a difference to their language development.

    In the 1990s, Hart & Risley (1995) compared the English-language environments of American kids from different socioeconomic and racial groups, and reported what their book calls “staggering contrasts” between rich and poor families. The most-often quoted finding from their very influential study is the claim that, by the time they’re three years old, children in poor families have heard thirty million fewer words than children in middle-class and affluent families. That phrase, the “30 million word gap”, is used all over the place, and has led to all kinds of policy decisions and interventions to try to bridge the gap. There’s been more research on the factors that lead to disparities in school performance, and more investment in early childhood education programs for low-income families.

But that catchy phrase has also led to a lot of stigma for low-income parents, since it seems to blame them for not talking to their kids enough (Figueroa, to appear). Calling their reported difference a “gap” reinforces the stereotype of a neglectful parent who ignores their kids and lets them watch tv all day. And a closer look at Hart & Risley’s data reveals that what they called a gap might be just a side effect of the way they counted words. In 2018, Sperry and colleagues (Sperry, Sperry & Miller, 2018) conducted a followup to Hart & Risley’s study, but they used a different way of counting.

    The 1995 analysis only counted words that were spoken by the mother directly to the child. They didn’t count words spoken to the child by anyone else, and they didn’t count words from conversations that happened nearby but weren’t directed at the child. In other words, they used a measure for number-of-words that favours the way that white, middle-class, stay-at-home moms interact with their children. The other factor in their so-called 30-million word gap is that the affluent families they had studied were largely white, while the poor families were almost entirely Black. So in addition to income differences, there would also have been cultural and linguistic differences in the ways adults interact with children, which the analysis didn’t account for.

In contrast, the authors of the 2018 study argued that children can learn from language they overhear in the environment, not just from what adults speak directly to them, so they included more measures in their analysis. When they counted using the 1995 measure, that is, speech by the primary caregiver to the child, the 2018 researchers found no clear pattern that depended on socioeconomic status. And when they counted speech by all caregivers to the child – parents, grandparents, older siblings, aunties and uncles – they found that the households with the greatest number of words per hour were the poor Black families. The same was true when they considered all the language used in the environment, not just what was directed at the child. Here’s what the researchers say about their findings:

    “Not only did the Word Gap disappear, but also some poor and working-class communities showed an advantage in the number of words children heard, compared with middle-class communities. Our study also revealed a great deal of variation among communities within each socioeconomic stratum.” (Sperry et al., 2018, p. 11)

    That last point is important: there’s a ton of variation in how parents interact with their children. A recent meta-analysis (Anderson et al., 2021) found that the nature of the language environment does indeed play a big role in children’s language development, but concluded that socioeconomic status was not a good predictor of what that language environment is like. In other words, whether you’re rich or poor doesn’t affect how much you talk to your kids!

    This “word gap” controversy is a classic example of how scientists, including language scientists, can reach biased conclusions from supposedly objective quantitative measurements. Both studies used what seems like a pretty objective measure: the number of words spoken per hour. But the choice of whose words to count led the two research teams to quite different findings. So even the decision of what to measure is not purely objective. The other issue here is that, if you’re interested in researching the language environment, the number of words spoken per hour is maybe not the most important thing to observe. Language use isn’t just about quantity, where more is better. Language is used in conversations, in relationships within communities that have specific cultural practices and expectations. So if we want to get a picture of the factors that support children’s language development, it’s a good idea to consider those complex relationships and communities too.
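The point about counting choices can be made concrete with a toy sketch. The utterance records and totals below are invented for illustration; they are not the studies’ actual data or coding scheme, only a demonstration that the same recording yields different numbers under different filters:

```python
# Toy illustration (hypothetical data): the same recording yields different
# word counts depending on whose speech is counted.
utterances = [
    {"speaker": "mother",  "addressee": "child",   "words": 6},
    {"speaker": "grandma", "addressee": "child",   "words": 9},
    {"speaker": "sibling", "addressee": "child",   "words": 4},
    {"speaker": "mother",  "addressee": "grandma", "words": 12},  # overheard by the child
]

def count_words(utterances, speakers=None, addressee=None):
    """Total words, optionally filtered by a set of speakers and an addressee."""
    return sum(
        u["words"]
        for u in utterances
        if (speakers is None or u["speaker"] in speakers)
        and (addressee is None or u["addressee"] == addressee)
    )

# 1995-style measure: only the primary caregiver speaking directly to the child
measure_1995 = count_words(utterances, speakers={"mother"}, addressee="child")
# 2018-style measures: all caregivers speaking to the child, and all ambient speech
all_caregivers = count_words(utterances, addressee="child")
ambient = count_words(utterances)

print(measure_1995, all_caregivers, ambient)  # 6 19 31
```

With the same recording, the mother-to-child filter yields 6 words, the all-caregivers filter 19, and the ambient count 31: which number gets reported depends entirely on the filter, which is the methodological point at issue.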



Anderson, N. J., Graham, S. A., Prime, H., Jenkins, J. M., & Madigan, S. (2021). Linking Quality and Quantity of Parental Linguistic Input to Child Language Skills: A Meta-Analysis. Child Development, 92(2), 484–501.

    Hart, B., & Risley, T. R. (1995). Meaningful differences in the everyday experience of young American children. Paul H Brookes Publishing.

    Figueroa, M. (to appear). Language Development, Linguistic Input, and Linguistic Racism. WIREs Cognitive Science.

    Sperry, D. E., Sperry, L. L., & Miller, P. J. (2018). Reexamining the Verbal Environments of Children From Different Socioeconomic Backgrounds. Child Development, 90(4), 1303–1318.

    This page titled 9.2: Phonemic contrast is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Catherine Anderson, Bronwyn Bjorkman, Derek Denis, Julianne Doner, Margaret Grant, Nathan Sanders, and Ai Taniguchi (eCampusOntario) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.