
9.4: Word Category Acquisition


    After successfully segmenting speech units from the continuous sound stream, young children must also make sense of how those units combine. This requires them to learn the grammatical word categories of their language. Past studies have investigated various hypotheses about how children learn these categories. One proposal is statistical learning: children track statistical information such as the frequencies and co-occurrences of certain sounds and words. Children could rely on distributional cues, or sentence context, to determine the category of a given word (Mintz, Newport, and Bever, 2002). In addition, studies of artificial language learning indicate that infants as young as 12 months of age can use distributional cues to group words that have no semantic meaning into categories (Gerken, Wilson, and Lewis, 2005; Gomez and Lakusta, 2004; Lany and Gomez, 2008). Recent studies have also suggested that infants use prosody and intonation to identify syntactic and phrase boundaries in their first year of life (Pannekamp, Weber, and Friederici, 2006).

    Psycholinguists have constructed computational models to further unravel the mechanisms of statistical learning. Mintz (2003) focused on frequent frames in child-directed speech (e.g., you __ it, the __ one) and found that such frames form strong patterns that could support children's category acquisition. St. Clair, Monaghan, and Christiansen (2010) applied computational models that expanded the set of frames investigated in child-directed speech: they combined fixed bigram and trigram frames (e.g., aX, aXb) into flexible frames (e.g., aX + Xb), which increases the amount of usable training data. The categorization accuracy of the combined flexible frames was substantially higher than that of bigrams or trigrams alone, suggesting that a less rigid distributional form may provide more information for children learning language.

    Although these studies contributed substantially to our knowledge of children's word category acquisition, they focused on particular structures that cannot account for every category or for the environments of all common words. In addition, these distributional models could not handle categorically ambiguous words. In English, about 11% of word types are grammatically ambiguous (e.g., cook (n./v.)) (DeRose, 1998). A model that can assign more than one category to a word is therefore needed to represent the categorical ambiguity found in real language.
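    To make the frame-based approach concrete, the sketch below extracts fixed trigram frames (aXb) and the two flexible half-frames (aX and Xb) from a toy corpus and groups the words that fill each slot. This is a minimal illustration of the general technique, not the published models; the corpus, frame choices, and output comments are illustrative assumptions rather than data or results from Mintz (2003) or St. Clair et al. (2010).

```python
# Minimal sketch of frequent-frame extraction on a toy "child-directed" corpus.
# Corpus and outputs are invented for illustration only.

from collections import defaultdict

corpus = [
    "you want it",
    "you see it",
    "you cook it",
    "the little one",
    "the red one",
    "the cook is here",
]

trigram_frames = defaultdict(set)   # fixed frames: (a, b) -> words seen in the X slot of a_X_b
left_bigrams = defaultdict(set)     # flexible half-frames: a -> words seen in a_X
right_bigrams = defaultdict(set)    # flexible half-frames: b -> words seen in X_b

for utterance in corpus:
    words = utterance.split()
    for i, w in enumerate(words):
        if 0 < i < len(words) - 1:
            trigram_frames[(words[i - 1], words[i + 1])].add(w)
        if i > 0:
            left_bigrams[words[i - 1]].add(w)
        if i < len(words) - 1:
            right_bigrams[words[i + 1]].add(w)

# Words that share a fixed frame form one provisional category.
print("fixed frame ('you', 'it'):", trigram_frames[("you", "it")])
# -> {'want', 'see', 'cook'}   (a verb-like cluster)

print("fixed frame ('the', 'one'):", trigram_frames[("the", "one")])
# -> {'little', 'red'}         (an adjective-like cluster)

# Flexible frames use each half independently, so they cover more of the
# corpus; note that 'cook' also appears after 'the' (a noun-like context),
# illustrating the categorical ambiguity discussed above.
print("flexible frame 'the X':", left_bigrams["the"])
# -> {'little', 'red', 'cook'}
```

    Because the flexible half-frames fire whenever either side of a context is present, they accumulate evidence from many more utterances than the rigid aXb frames do, which is one way to picture why the combined flexible frames yield higher categorization accuracy in the cited work.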

