
9.1: Tiny, powerful language learners

Catherine Anderson, Bronwyn Bjorkman, Derek Denis, Julianne Doner, Margaret Grant, Nathan Sanders, and Ai Taniguchi (eCampusOntario)


    One or more interactive elements has been excluded from this version of the text. You can view them online here: https://ecampusontario.pressbooks.pub/essentialsoflinguistics2/?p=515#oembed-1


    If you’ve ever taken care of a newborn, you’ll know that they’re so busy growing that they can’t do much else. They can sleep, they can eat (as long as their food is in liquid form), they can pee and poop, and they can cry. That’s pretty much it. When they’re just born, they can’t even really focus their eyes or control their limbs! But by the time they start school at age four or five, most of them can walk and run and jump, some can swim and ride bikes, and they can hold fairly sophisticated conversations with complex sentences and multi-syllable words. Here’s an example of an utterance by one of my twins when they were about four and a half years old:

    “Mummy, do you remember that time we made chocolate pudding? Maybe we could make some on Saturday. This Saturday, in the afternoon. I don’t need to take a nap. Let’s do it together, and maybe then we can eat some!”

As awesome as my kids are, there’s nothing remarkable about their language development. Every typically developing child who has access to language will become fluent in the language (or languages!) used around them, mostly without explicit teaching. So how do kids’ minds get from the newborn stage where the only sounds they make are crying and hiccuping to this stage of being able to bargain about naptime and kitchen projects? There must be an awful lot of learning that happens in those first few years. As we’ll see in this chapter, that learning can start even before birth! In this chapter we take a look at some of the elements of grammatical knowledge that children acquire, and we ask what that tells us about mental grammar.

Children’s rapid language development can look quite magical from the outside, but it results from immense brain activity in response to the language environment, also known as the ambient language: the language used by the family members, caregivers, adults, and older children that a child spends time with. The baby brain tracks the patterns in the language environment and uses them to build up the child’s own mental grammar. If the language environment is English, the child develops a mental grammar for English. And likewise, children in a Tamil-speaking environment develop a mental grammar for Tamil, children in a Mandarin-speaking environment develop a mental grammar for Mandarin, and so on. If the ambient language is American Sign Language (ASL), then the child will develop a mental grammar for ASL, regardless of whether they’re deaf or hearing. If a child is in an environment where two or more languages are used, they’ll develop mental grammars for each of those languages.

The language environment and oralist culture

    There is one group of children who don’t have access to the language environment. About 90-95% of babies who are born deaf or hard of hearing are born into families where the primary language modality is speech (Mitchell & Karchmer, 2004). In some cases, families who learn that their child is deaf choose to start learning a sign language like ASL. If they use sign in their interactions with their child, then the baby has access to an ambient language in the visual modality and will develop a mental grammar from that input. But some families of deaf children choose not to use signed language, which means that their child does not get access to any ambient language: neither auditory nor visual. Why might a family make that decision? To answer that question, we need to talk a bit about cochlear implants.

    A cochlear implant (CI) is an electronic prosthetic device that is implanted surgically. It takes sounds from the environment and converts them to electrical signals. Those signals then get transmitted to the brain via the auditory nerve (NIH NIDCD, 2021). In developed countries, it is very common for deaf children to receive a CI between ages one and three years. Even so, most kids with CIs still score well below hearing kids on standard tests of language proficiency, even when they’ve had the implant for several years – not because of any deficit in the children, but because the electrical signals from a cochlear implant are not a sufficient language environment for typical development (Mauldin, 2019). But in spite of this evidence, there’s still a persistent stigma around sign languages and deafness, and many parents, teachers, and medical professionals consider it so important for children to “pass” in a hearing world that they don’t offer access to signed language because they fear that it might interfere with the child’s eventual acquisition of spoken language. Because of this belief that vocal language is better than sign language, known as oralism, many deaf children have no exposure to language before receiving a CI, and after implantation have access only to the atypical electrical signals produced by the device. This language deprivation leads to long-term impairments in social and cognitive function and, ironically, to lower scores on tests of vocal language comprehension (M. L. Hall et al., 2019; W. C. Hall, 2017; Humphries et al., 2016; Lillo-Martin & Henner, 2021). In contrast, when deaf children of hearing parents have access to ASL by age six months, their vocabulary develops at a comparable rate to deaf children of deaf signing parents (Caselli et al., 2021).

    This evidence from deaf kids shows us just how vital the language environment is, and how important it is that children have access to an ambient language as early as possible. The neural connections that make up mental grammar can only form in response to language input from the environment. Without that input in the first year of life, it’s much harder for the brain to build a mental grammar.

A note on notation: when we’re talking about very young children, their age in months is often more relevant than their age in years. The convention in this literature is to use a semicolon between the years and months of a child’s age, so a child aged 1;6 is one year and six months old, or one and a half.



    References

Caselli, N., Pyers, J., & Lieberman, A. M. (2021). Deaf children of hearing parents have age-level vocabulary growth when exposed to American Sign Language by 6 months of age. The Journal of Pediatrics, 232, 229–236.

Hall, M. L., Hall, W. C., & Caselli, N. K. (2019). Deaf children need language, not (just) speech. First Language, 39(4), 367–395.

Hall, W. C. (2017). What you don’t know can hurt you: The risk of language deprivation by impairing sign language development in deaf children. Maternal and Child Health Journal, 21(5), 961–965.

Humphries, T., Kushalnagar, P., Mathur, G., Napoli, D. J., Padden, C., Rathmann, C., & Smith, S. (2016). Avoiding linguistic neglect of deaf children. Social Service Review, 90(4), 589–619.

Lillo-Martin, D., & Henner, J. (2021). Acquisition of sign languages. Annual Review of Linguistics, 7(1), 395–419.

Mauldin, L. (2019). Don’t look at it as a miracle cure: Contested notions of success and failure in family narratives of pediatric cochlear implantation. Social Science & Medicine, 228, 117–125.

Mitchell, R. E., & Karchmer, M. A. (2004). Chasing the mythical ten percent: Parental hearing status of deaf and hard of hearing students in the United States. Sign Language Studies, 4(2), 138–163.

NIH NIDCD. (2021). Cochlear implants. National Institute on Deafness and Other Communication Disorders.

    11.2: When does language learning start?

    One or more interactive elements has been excluded from this version of the text. You can view them online here: https://ecampusontario.pressbooks.pub/essentialsoflinguistics2/?p=517#oembed-1


    We saw earlier that early exposure to language input is super-important because the mind starts building a mental grammar from a very young age. Just how early does language learning start? For hearing babies, it can start even before birth! If we measure the heart rate of a fetus in the uterus, we find that the heart rate increases in response to external sounds at about the seven-month point in pregnancy, so we know that a fetus can hear outside noises even while it’s still inside the uterus. Not only can fetuses hear, but they’re also remembering some of what they hear: by eight months of pregnancy, a fetus’s heart rate increases more in response to their pregnant parent’s voice than to a stranger’s voice (Kisilevsky et al., 2003). This means they’ve stored some memory of what their parent’s voice sounds like.

    Ok, so by looking at fetal heart rates we can conclude that hearing babies have already stored some auditory memories by the time they’re born. But that doesn’t necessarily mean they have any mental grammar, does it? How can we tell what newborns know about their language? After birth, there are so many more interesting stimuli in their world that measuring heart rate isn’t as informative, but they certainly can’t tell us what they know. What can we observe that would tell us something about mental grammar?

    How to be a linguist: Observing High-Amplitude Sucking

Figure 11.1. A pacifier.

Babies can’t do much, but one thing they’re very good at is sucking. Using an instrument called a pressure transducer, which is connected to a pacifier, we can measure how powerfully they suck. When a baby is interested in something, like a sound that she’s hearing, she starts to suck harder. If you keep playing that same sound, eventually she’ll get bored and her sucking strength will decrease. When her sucking strength drops off, we say that the baby has habituated to the sound. But if you play a new sound, she gets interested again and starts sucking powerfully again. So we can tell whether a baby notices the difference between two sounds by observing whether her sucking strength increases when we switch from one sound to the other.

    Using this high-amplitude sucking habituation method, researchers have found that newborns whose parents speak French notice the difference between French and Russian sentences spoken by the same person (Mehler et al., 1988). The fact that these newborn infants are sensitive to this difference tells us that they must have some memory of the patterns of French, to be able to tell it apart from Russian. What could these babies with French parents have learned about French before they were born? A lot of the sound information they receive in the uterus is quite muffled, but what they do have access to is the prosody. The rhythmic pattern of French depends on syllables of similar length, while that of Russian depends on syllable stress. That prosodic rhythm is audible to a fetus in the uterus, so by the time they’re born, babies can tell the difference between the rhythm they’ve heard before and an unfamiliar rhythm.

In fact, if the pregnant parent speaks more than one language, a newborn can even tell the difference between those two languages! In a high-amplitude sucking study in British Columbia, babies born to Tagalog-English bilingual parents noticed when the spoken language switched from English to Tagalog (Byers-Heinlein et al., 2010). This suggests that their minds are already starting to set up two different mental grammars for the two languages they’ll be acquiring!

    So even when they’re just born, hearing babies have not only learned what their parent’s voice sounds like, they’ve also already learned some of the prosody of the language (or languages!) spoken in their environment.

I also want us to remember, though, that language acquisition is not a race. In some cultures, parents like to believe that their child is “advanced” in some way: smarter or stronger than other babies. Even though this chapter talks about general patterns of acquisition, remember that there is huge variation across individual children, and learning something earlier isn’t necessarily any advantage. The reason I mention this here is that there’s no evidence that a fetus who doesn’t have access to speech sounds in utero, whether because the fetus is deaf or because the parent uses a signed language, is disadvantaged in any way. As soon as they’re exposed to language in a modality they can access, they start building their mental grammar.



    References

Byers-Heinlein, K., Burns, T. C., & Werker, J. F. (2010). The roots of bilingualism in newborns. Psychological Science, 21(3), 343–348.

Kisilevsky, B. S., Hains, S. M. J., Lee, K., Xie, X., Huang, H., Ye, H. H., Zhang, K., & Wang, Z. (2003). Effects of experience on fetal voice recognition. Psychological Science, 14(3), 220–224.

Mehler, J., Jusczyk, P., Lambertz, G., Halsted, N., Bertoncini, J., & Amiel-Tison, C. (1988). A precursor of language acquisition in young infants. Cognition, 29, 143–178.


    This page titled 9.1: Tiny, powerful language learners is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Catherine Anderson, Bronwyn Bjorkman, Derek Denis, Julianne Doner, Margaret Grant, Nathan Sanders, and Ai Taniguchi (eCampusOntario) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.