
5.5: Theories of Cognitive Development, Learning, and Memory


    Pavlov

    Ivan Pavlov (1849-1936) was a Russian physiologist interested in studying digestion. As he recorded the amount of salivation his laboratory dogs produced while they ate, he noticed that they began to salivate before the food arrived, as soon as the researcher walked down the hall toward the cage. The dogs knew that the food was coming because they had learned to associate the footsteps with the food. The keyword here is “learned”. A learned response is called a “conditioned” response.

    Pavlov began to experiment with this “psychic” reflex. He began to ring a bell, for instance, prior to introducing the food. Sure enough, after this pairing was repeated several times, the dogs could be made to salivate to the sound of the bell alone. Once the bell had become an event to which the dogs had learned to salivate, it was called a conditioned stimulus. The act of salivating to a bell was a response that had also been learned, now termed, in Pavlov’s jargon, a conditioned response.

    Notice that the response, salivation, is the same whether it is conditioned or unconditioned (unlearned or natural). What changed is the stimulus to which the dog salivates. One is natural (unconditioned) and one is learned (conditioned).

    Figure \(\PageIndex{1}\): Pavlov’s experiments with dogs and conditioning. (Image by Maxxl² is licensed under CC BY-SA 4.0)

    Let’s think about how classical conditioning is used on us. One of the most widespread applications of classical conditioning principles was brought to us by the psychologist John B. Watson.15

    Classical Conditioning

    Classical conditioning is a form of learning whereby a conditioned stimulus (CS) becomes associated with an unrelated unconditioned stimulus (US) to produce a behavioral response known as a conditioned response (CR). The conditioned response is the learned response to the previously neutral stimulus. The unconditioned stimulus is usually a biologically significant stimulus, such as food or pain, that elicits an unconditioned response (UR) from the start. The conditioned stimulus is usually neutral and produces no particular response at first, but after conditioning, it elicits the conditioned response.

    If we look at Pavlov's experiment, we can identify these four factors at work:

    • The unconditioned response was the salivation of dogs in response to seeing or smelling their food.
    • The unconditioned stimulus was the sight or smell of the food itself.
    • The conditioned stimulus was the ringing of the bell. During conditioning, every time the animal was given food, the bell was rung. This was repeated during several trials. After some time, the dog learned to associate the ringing of the bell with food and to respond by salivating. After the conditioning period was finished, the dog would respond by salivating when the bell was rung, even when the unconditioned stimulus (the food) was absent.
    • The conditioned response, therefore, was the salivation of the dogs in response to the conditioned stimulus (the ringing of the bell).16

    Neurological Response to Conditioning

    Consider how the conditioned response occurs in the brain. When a dog sees food, the visual and olfactory stimuli send information to the brain through their respective neural pathways, ultimately activating the salivary glands to secrete saliva. This reaction is a natural biological process, as saliva aids in the digestion of food. When a dog hears a buzzer and at the same time sees food, the auditory stimulus activates the associated neural pathways. Because these pathways are activated at the same time as the food-related pathways, weak synaptic connections form between the auditory stimulus and the behavioral response. Over time, these synapses are strengthened so that the sound of the buzzer alone is enough to activate the pathway leading to salivation.
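
    The gradual strengthening just described can be pictured with a toy simulation. The Python sketch below is purely illustrative and is not part of the original text: each pairing of bell and food nudges an association "strength" upward until the bell alone is enough to trigger salivation. The learning rate and threshold are arbitrary assumed values, not measurements.

    # Toy model of the synaptic strengthening described above.
    # Illustrative only: the learning rate and threshold are made-up values, not from the text.

    LEARNING_RATE = 0.2   # how much each pairing of bell and food strengthens the link
    THRESHOLD = 0.5       # strength needed for the bell alone to trigger salivation

    def salivates_to_bell(strength: float) -> bool:
        """The bell alone elicits salivation once the learned link is strong enough."""
        return strength >= THRESHOLD

    strength = 0.0        # before conditioning, the bell is a neutral stimulus
    for trial in range(1, 7):
        # Each trial pairs the bell (CS) with food (US); the weak link grows a bit stronger.
        strength += LEARNING_RATE * (1.0 - strength)
        print(f"trial {trial}: strength = {strength:.2f}, bell alone -> salivation: {salivates_to_bell(strength)}")

    In this toy model the response to the bell alone appears only after several pairings, mirroring the repeated trials in Pavlov's experiment.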

    Operant Conditioning

    Operant conditioning is a theory of behaviorism, a learning perspective that focuses on changes in an individual's observable behaviors. In operant conditioning theory, new or continued behaviors are shaped by new or continued consequences. Research on this principle of learning was first conducted by Edward L. Thorndike in the late 1800s and then brought to popularity by B.F. Skinner in the mid-1900s. Much of this research informs current practices in human behavior and interaction.

    Skinner's Research

    Thorndike's initial research was highly influential on another psychologist, B.F. Skinner. Almost half a century after Thorndike's first publication of the principles of operant conditioning, Skinner attempted to prove an extension of this theory: that all behaviors were in some way a result of operant conditioning. Skinner theorized that if a behavior is followed by reinforcement, that behavior is more likely to be repeated, but if it is followed by punishment, it is less likely to be repeated. He also believed that this learned association could end, or become extinct, if the reinforcement or punishment was removed.

    To test this, he placed rats in a box with a lever that, when pressed, would release a pellet of food. Over time, the amount of time it took the rat to find the lever and press it became shorter and shorter, until finally the rat spent most of its time near the lever, eating. This behavior became less consistent when the relationship between the lever and the food was compromised. This basic theory of operant conditioning is still used by psychologists, scientists, and educators today.

    Shaping, Reinforcement Principles, and Schedules of Reinforcement

    Operant conditioning can be viewed as a process of action and consequence. Skinner used this basic principle to study the possible scope and scale of the influence of operant conditioning on animal behavior. His experiments used shaping, reinforcement, and reinforcement schedules in order to prove the importance of the relationship that animals form between behaviors and results.

    All of these practices concern how an experiment is set up. Shaping is a conditioning procedure in which the requirements of the experiment are gradually changed over successive trials to elicit a desired target behavior. This is accomplished through reinforcement, or reward, of successive pieces of the target behavior, and it can be tested using a large variety of actions and rewards. The experiments were taken a step further to include different schedules of reinforcement that became more complicated as the trials continued. By testing different reinforcement schedules, Skinner learned valuable information about the best ways to encourage a specific behavior and the most effective ways to create a long-lasting behavior. Much of this research has been replicated with humans, and it now informs practices in various environments of human behavior.17

    Positive and Negative Reinforcement

    Sometimes, adding something to the situation is reinforcing, as when cookies, praise, or money are given for a desired behavior. Positive reinforcement involves adding something to the situation in order to encourage a behavior. Other times, taking something away from a situation can be reinforcing. For example, the loud, annoying buzzer on your alarm clock encourages you to get up so that you can turn it off and get rid of the noise. Children whine in order to get their parents to do something, and often parents give in just to stop the whining. In these instances, negative reinforcement has been used.

    Figure \(\PageIndex{2}\): Reinforcement in operant conditioning. (Image by Curtis Neveu is licensed under CC BY-SA 3.0 and Modified from source image)

    Operant conditioning tends to work best if you focus on encouraging a behavior or moving a person in the direction you want them to go, rather than telling them what not to do. Reinforcers are used to encourage a behavior; punishers are used to stop a behavior. A punisher is anything that follows an act and decreases the chance it will recur. But often a punished behavior doesn’t really go away; it is merely suppressed and may reappear whenever the threat of punishment is removed. For example, a child may not cuss around you because you’ve washed his mouth out with soap, but he may cuss around his friends. Or a motorist may slow down only when the trooper is on the side of the freeway. Another problem with punishment is that when a person focuses on punishment, they may find it hard to see what the other person does right or well. And punishment is stigmatizing: when punished, some people start to see themselves as bad and give up trying to change.

    Reinforcement can occur in a predictable way, such as after every desired action is performed, or intermittently, such as after the behavior is performed a certain number of times or the first time it is performed after a certain amount of time has passed. The schedule of reinforcement affects how long a behavior continues after reinforcement is discontinued. So a parent who has rewarded a child’s actions each time may find that the child gives up very quickly if a reward is not immediately forthcoming. Think about the kinds of behaviors that may be learned through classical and operant conditioning. Sometimes, though, very complex behaviors are learned quickly and without direct reinforcement; Bandura’s social learning theory, covered later in the chapter, explains how.19
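
    To make the idea of a schedule concrete, the following Python sketch (an illustration added here, not part of the original text) shows three common ways a schedule can decide whether a particular response earns a reward: every time (continuous reinforcement), after a set number of responses (a fixed-ratio schedule), or unpredictably (a variable-ratio schedule). The schedule names are standard behaviorist terms, and the specific numbers are arbitrary example values rather than anything drawn from this page.

    # Illustrative sketch: three ways a reinforcement schedule can decide
    # whether a given response is rewarded. The parameters are arbitrary examples.
    import random

    def continuous(response_number: int) -> bool:
        """Reinforce every response (predictable; the behavior fades quickly once rewards stop)."""
        return True

    def fixed_ratio(response_number: int, n: int = 5) -> bool:
        """Reinforce only every n-th response."""
        return response_number % n == 0

    def variable_ratio(response_number: int, p: float = 0.2) -> bool:
        """Reinforce unpredictably, on average once every 1/p responses."""
        return random.random() < p

    # Show which of the first ten responses would be reinforced under each schedule.
    for r in range(1, 11):
        print(r, continuous(r), fixed_ratio(r), variable_ratio(r))

    Running the loop makes the contrast visible: the continuous column rewards every response, while the ratio schedules reward only some of them, which is the sense in which reinforcement is "intermittent."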

    Watson and Behaviorism

    Another theorist who added to the spectrum of the behavioral movement was John B. Watson. Watson believed that most of our fears and other emotional responses are classically conditioned. He gained a good deal of popularity in the 1920s with the expert advice on parenting he offered to the public. He believed that parents could be taught to help shape their children’s behavior, and he tried to demonstrate the power of classical conditioning with his famous experiment with a baby boy of about nine months known as “Little Albert”. Watson sat Albert down and introduced a variety of seemingly scary objects to him: a burning piece of newspaper, a white rat, etc. But Albert remained curious and reached for all of these things. Watson knew that one of our few inborn fears is the fear of loud noises, so he proceeded to make a loud noise each time he introduced one of Albert’s favorites, a white rat. After hearing the loud noise several times paired with the rat, Albert soon came to fear the rat and began to cry when it was introduced.

    Watson filmed this experiment for posterity and used it to demonstrate that he could help parents achieve any outcomes they desired, if they would only follow his advice. Watson wrote columns in newspapers and magazines and gained a lot of popularity among parents eager to apply science to household order. Parenting advice was not the legacy Watson left us, however. Where he really made his impact was in advertising. After Watson left academia, he went into the world of business and showed companies how to tie something that brings about a natural positive feeling to their products in order to enhance sales. Thus the union of sex and advertising!20 Sometimes we do things because we have seen them pay off for someone else: they were operantly conditioned, and we engage in the behavior because we hope it will pay off for us as well. This is referred to as vicarious reinforcement (Bandura, Ross, & Ross, 1963).

    Figure \(\PageIndex{3}\): A photograph taken during the Little Albert research. (Image is in the public domain)

    Do parents socialize children or do children socialize parents?

    Bandura (1986) suggests that there is interplay between the environment and the individual. We are not just the product of our surroundings; rather, we influence our surroundings. There is interplay between our personality, the way we interpret events, and how those events influence us. This concept is called reciprocal determinism. An example of this is the interplay between parents and children. Parents not only influence their child’s environment, perhaps intentionally through the use of reinforcement and other techniques, but children influence parents as well. Parents may respond differently with their first child than with their fourth. Perhaps they try to be the perfect parents with their firstborn, but by the time their last child comes along they have very different expectations, both of themselves and of their child. Our environment creates us, and we create our environment.


    Figure \(\PageIndex{4}\): Father and child playing. (Image by Miachelle Andrade is licensed under CC BY 4.0)

    Social Learning Theory

    Albert Bandura is a leading contributor to social learning theory. He calls our attention to the ways in which many of our actions are not learned through conditioning; rather, they are learned by watching others (1977). Young children frequently learn behaviors through imitation. Sometimes, particularly when we do not know what else to do, we learn by modeling or copying the behavior of others. A new employee, on his or her first day at a new job, might eagerly watch how others act and try to act the same way in order to fit in more quickly. Adolescents struggling with their identity rely heavily on their peers to act as role models. Newly married couples often rely on roles they may have learned from their parents, begin to act in ways they did not while dating, and then wonder why their relationship has changed.


    Figure \(\PageIndex{5}\): Younger child imitating older child. (Image by Miachelle Andrade is licensed under CC BY 4.0)

     

    Contributors and Attributions

    15. Lifespan Development - Module 4: Infancy by Lumen Learning references Psyc 200 Lifespan Psychology by Laura Overstreet, licensed under CC BY 4.0

    16. Children’s Development by Ana R. Leon is licensed under CC BY 4.0

    17. Children’s Development by Ana R. Leon is licensed under CC BY 4.0

    19. Lifespan Development - Module 4: Infancy by Lumen Learning references Psyc 200 Lifespan Psychology by Laura Overstreet, licensed under CC BY 4.0

    20. Lifespan Development - Module 4: Infancy by Lumen Learning references Psyc 200 Lifespan Psychology by Laura Overstreet, licensed under CC BY 4.0


    This page titled 5.5: Theories of Cognitive Development, Learning, and Memory is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Paris, Ricardo, Raymond, & Johnson (College of the Canyons).