
9.2: Types of Learning and Biological Adaptation

    Learning Objectives
    • Describe the various types of learning and how each contributes to adaptation
    • Compare and contrast classical (Pavlovian) conditioning and instrumental (operant) conditioning
    • Discuss how classical and operant conditioning involve the learning of predictive relations between different types of events in the world
    • Explain the concept of adaptively specialized learning and give examples
    • Discuss the key aspects of observational learning according to Social Learning Theory
    • Describe the function of habituation
    • Describe general findings from research on cognitive learning in animals

    Overview

    Learning brings to mind school, memorization, tests, and study. But those associations reflect only a single type of learning, human verbal learning. Learning, as you may already know, is a much broader phenomenon. First, learning of various sorts occurs in a wide range of animal species, from humans to honey bees. Furthermore, across species, a wide range of behaviors can be acquired or modified by learning, giving behavior in many species a great deal of plasticity. In addition, we now know that learning is not a single unitary capacity; instead, there are many different, but related, processes, ranging from imprinting to cognitive learning. In this module, we examine several types of learning and how they interact with genetic information to serve adaptation. Both forms of conditioning, classical and operant, will be examined. In every case, learning fills in informational details that are too variable, short-term, and individually experienced to be captured by natural selection and thereby encoded into genes. By contrast, more abstract and general features, common to particular learning situations or problem types across generations, are captured by natural selection, genetically encoded, and provide innate, genetic information about the problem type (problem domain) that guides and facilitates the learning.

    Habituation, Sensitization, and Adaptation

    Habituation is a simple form of learning that produces a decrease in response to a repeated stimulus that has no adaptive significance. In other words, as an adaptively unimportant stimulus is repeatedly presented, an animal will gradually cease responding to it as it learns that the stimulus holds no information that might impact its biological fitness (survival and reproduction). For example, prairie dogs typically sound an alarm call when threatened by a predator, but they become habituated to the sound of human footsteps when no harm is associated with this sound; therefore, they no longer respond to the sound of human footsteps with an alarm call. Thus, habituation is a form of non-associative learning, given that the stimulus is not associated with any punishment or reward.

    Habituation occurs when your brain has already extracted all the adaptive information that a stimulus or an event holds. However, what happens when, after habituation, a change in the stimulus or situation occurs? Stimulus change might signal that something important has happened. If so, the animal must reengage the processing capacity of its brain, its attentional resources, to evaluate whether the stimulus change might signal something that is adaptively important that may require a behavioral response. This recovery of responding to a habituated stimulus is called dishabituation (the undoing of habituation as the organism now responds to the stimulus situation it had previously stopped responding to).

    Flock of birds feeding on bugs

    Figure \(\PageIndex{1}\): Habituation is adaptive. Birds feeding along a road eventually habituate to passing traffic, which prevents them from wasting time and energy flying away (escape behavior) when there is no real threat, thereby increasing the time and energy available for feeding (Image from Wikimedia Commons; File:Birds feeding on newly ploughed land with Ballymagreehan Hill in the background - geograph.org.uk - 2202208.jpg; https://commons.wikimedia.org/wiki/F..._-_2202208.jpg; by Eric Jones; licensed under the Creative Commons Attribution-Share Alike 2.0 Generic license.).

    Inside a submarine torpedo room

    Figure \(\PageIndex{2}\): After habituation to the loud sound of their submarine's engines, sailors sleep through the noise, but are awakened by the silence if the engines stop. Stimulus change reverses habituation. This is called dishabituation and is highly adaptive; see text (Image from Wikimedia Commons; File:Vladivostok Submarine S-56 Forward torpedo room P8050522 2475.jpg; https://commons.wikimedia.org/wiki/F...50522_2475.jpg; by Alexxx1979; licensed under the Creative Commons Attribution-Share Alike 4.0 International license).

    Sensitization is a similar form of non-associative learning; however, it produces an increase in response to a repeated stimulus over time. In this case, the first occurrence of the stimulus is more intense than those that follow, thus priming or sensitizing the individual to these future, less intense stimuli.

    Sensitization is a useful model for studying the underlying causes of pathologies such as asthma, substance dependence, allergies, pain-related illnesses, and psychological disorders. There are now different types of sensitization therapies, used not only for medical purposes (allergies, cancer and other tumors) but also in marriage counseling and family psychotherapy (Sincero, 2011; retrieved June 30, 2023, from Explorable.com: https://explorable.com/sensitization).

    One lesson is clear from both habituation and sensitization: stimulus change is very potent in causing the brain to become alert and responsive. This implies that the brain must hold an ongoing representation or neural model of the current situation and must respond to any mismatch between that ongoing memory and the current stimulus situation. This is a very adaptive property of the brain's functioning. For example, imagine you are studying psychology late at night. Your window is open and a breeze gently bangs the blinds against the window frame as you read. You will probably habituate to the sound, and thus pay no attention to it, to the point where you don't even hear it as you continue to study. But then, if the lights in your house suddenly go out, you probably will dishabituate and pay attention to every little sound, including the sounds coming from the blinds. In other words, you are now sensitized to any little sound. Your brain is programmed to respond to stimulus change, to novelty, because stimulus change may carry information important for survival and reproduction, demanding an adaptive response.
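    To make this logic concrete, the short Python sketch below simulates habituation and dishabituation with made-up numbers (the model and its parameters are illustrative assumptions, not findings from this text): response strength to a repeated stimulus decays with each presentation, while a novel stimulus both elicits a full response and partially restores responding to already-familiar stimuli.

```python
# Toy model of habituation and dishabituation (illustrative only; the model
# and its parameters are assumptions, not taken from the text).

def simulate_responses(stimuli, initial=1.0, decay=0.6, recovery=0.9):
    """Return one response strength per stimulus presentation."""
    strength = {}          # current response strength for each known stimulus
    responses = []
    for s in stimuli:
        if s not in strength:
            # A novel stimulus elicits a full response and partially restores
            # (dishabituates) responding to already-habituated stimuli.
            for known in strength:
                strength[known] += recovery * (initial - strength[known])
            strength[s] = initial
        responses.append(strength[s])
        strength[s] *= decay   # repeated exposure weakens the response
    return responses

# Ten footstep presentations, then a sudden novel event (the lights go out),
# then footsteps again: responding decays, then partially recovers.
sequence = ["footsteps"] * 10 + ["lights_out"] + ["footsteps"] * 3
for stim, r in zip(sequence, simulate_responses(sequence)):
    print(f"{stim:>10}: response strength = {r:.3f}")
```

    Running the sketch shows the response to footsteps shrinking toward zero, jumping back up after the novel "lights out" event, and then habituating again, mirroring the submarine and window-blind examples above.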

    Conditioning and Biological Adaptation

    Basic principles of learning are always operating and always influencing human and animal behavior. This section continues by discussing two fundamental forms of associative learning: classical (Pavlovian) and operant (instrumental) conditioning. Both classical and operant conditioning involve the learning of predictive relationships between events (if event A occurs, then event B is likely to follow) and how this contributes to adaptation. Through them, we respectively learn to associate 1) stimuli in the environment, or 2) our own behaviors, with adaptively significant events, such as rewards and punishments or other stimuli. The two types of learning have been intensively studied because they have powerful effects on behavior. They also allow scientists to rigorously analyze learning processes in detail and the physical basis of learning and memory in the brain.

    Classical Conditioning

    Many people are familiar with the classic study of "Pavlov's dog," but rarely do they understand the significance of Pavlov's discovery. In fact, Pavlov's work helps explain why some people get anxious just looking at a crowded bus, why the sound of a morning alarm is so hated, and even why we swear off certain foods we've only tried once. Classical (or Pavlovian) conditioning is one of the fundamental ways we learn about the world, specifically about its predictive relationships between events. This involves learning what leads to what in an organism's environment. This is extremely valuable adaptive information that animals and humans appear to incorporate into brain-mediated cognitive models of how the world works--information that allows prediction and therefore improved organization of behavior into adaptive patterns. But classical conditioning is far more than just a theory of learning; it is also arguably a theory of identity. Your favorite music, clothes, even political candidate, might all be a result of the same process that makes a dog drool at the sound of a bell.

    A dog looks up from the kitchen floor with expectant eyes and its tongue hanging out.

    Figure \(\PageIndex{3}\): Does your dog learn to beg for food because you reinforce her by feeding her from the table? Classical conditioning signals to your dog when food reward may be available and operant conditioning (see below) occurs when begging is reinforced by a food reward [Image: David Mease, https://nobaproject.com/modules/cond...g-and-learning, CC BY-NC 2.0]

    In his famous experiment, Pavlov rang a bell and then gave a dog some food. After repeating this pairing multiple times, the dog eventually treated the bell as a signal for food, and began salivating in anticipation of the treat. This kind of result has been reproduced in the lab using a wide range of signals (e.g., tones, light, tastes, settings) paired with many different events besides food (e.g., drugs, shocks, illness; see below).

    We now believe that this same learning process, classical conditioning, is engaged, for example, when humans associate a drug they've taken with the environment in which they've taken it; when they associate a stimulus (e.g., a symbol for vacation, like a big beach towel) with an emotional event (like a burst of happiness); and when a cat associates the sound of an electric can opener with feeding time. Classical conditioning is strongest if at least some of the stimuli are intense or particularly meaningful to the individual. It is also best if at least some of the stimuli involved are relatively new to the individual. And it is especially strong if the organism's biology (its genetic evolution) has prepared it to associate particular stimuli with each other. For example, rats, coyotes, and humans are naturally inclined by natural selection to associate an illness with a flavor, rather than with a light or tone. Although classical conditioning may seem "old" or "too simple" a theory, it is still widely studied today because it is a straightforward test of associative learning that can be used to study other, more complex behaviors. Also, biological psychologists can use it to study how at least some forms of learning occur in the brain.

    Conditioning Involves Learning Predictive Relations

    Pavlov was studying the salivation reflex, reflexive drooling in response to food placed in the mouth. A reflex is an innate, adaptive, genetically built-in stimulus-response (S-R) relationship; in this case, the stimulus is the food in the mouth and the response is the salivation. Specifically, the food stimulus is referred to as the unconditioned stimulus or US (i.e., it is not dependent upon prior learning), and the salivation is referred to as the unconditioned response or UR (i.e., it is not dependent upon prior learning).

    Recall that Pavlov found that if he rang a bell just before feeding his dogs, the dogs came to associate the sound of the bell with the coming presentation of food. Thus, after this classical conditioning had occurred, the bell alone caused the dog to salivate, before the presentation of food. The bell stimulus became known as the conditioned stimulus or CS (i.e., its action is dependent on prior learning). The CS is a signal that has little or no importance to the organism until it is paired with something that does have adaptive significance, in this case, food. Reliable pairing of CS and US close together in time (temporal contiguity) is important to the processes of classical conditioning. However, temporal contiguity alone is not sufficient for classical conditioning. The pairing of stimuli must be reliable so that a predictive relationship is maintained between CS and US. This usually requires the CS and US to be paired repeatedly. Predictiveness between CS and US determines whether or not an association is formed. If the CS does not predict occurrence of the US, no conditioning occurs (Gallistel et al., 1991).

    Pavlov also discovered extinction. Recall from Introductory Psychology that if the CS is not followed by the US at least some of the time, then eventually the CR will no longer occur in response to the presentation of the CS. When the animal stops producing the CR in response to the CS, we say that extinction of the CR has occurred.
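    A standard way to formalize this kind of predictive learning, though not a model described in this chapter, is the Rescorla-Wagner rule, in which the associative strength of the CS is adjusted on each trial in proportion to the prediction error between the US that actually occurs and the US that was expected. The sketch below uses illustrative parameter values to show acquisition while the bell reliably predicts food, and extinction once food stops following the bell.

```python
# Rescorla-Wagner sketch of acquisition and extinction. The model choice and
# the parameter values are illustrative assumptions, not part of the chapter.

def rescorla_wagner(trials, alpha=0.3, lam_us=1.0, lam_no_us=0.0):
    """trials: list of booleans, True when the US (food) follows the CS (bell)."""
    V = 0.0                       # associative (predictive) strength of the CS
    history = []
    for us_present in trials:
        lam = lam_us if us_present else lam_no_us
        V += alpha * (lam - V)    # learn in proportion to the prediction error
        history.append(V)
    return history

acquisition = [True] * 10         # bell reliably followed by food
extinction = [False] * 10         # bell no longer followed by food
for t, v in enumerate(rescorla_wagner(acquisition + extinction), start=1):
    print(f"trial {t:2d}: V = {v:.3f}")
```

    In the printout, associative strength climbs toward its maximum during the paired trials and then falls back toward zero during the unpaired trials, which is one simple way to capture both acquisition and extinction as consequences of predictiveness.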

    Research findings emphasize that conditioning is about learning to predict what leads to what in the environment. Classical conditioning is the learning of predictive relations between stimuli, leading to the learned emission of responses that prepare for the coming US. A classical CS (e.g., the bell) does not merely elicit a simple, unitary reflex. In Pavlov's experiments, his bell almost certainly elicited a whole system of responses that functioned to get the organism ready for the upcoming US (food). For example, in addition to salivation, the bell most likely caused changes in attention, excitement, and the secretion of gastric acid, pancreatic enzymes, and insulin (which gets blood glucose into cells). All of these responses anticipate the coming food and prepare the body and mind for its arrival.

    Modern studies of classical conditioning use a very wide range of CSs and USs and measure a wide range of conditioned responses including emotional responses. If an experimenter sounds a tone just before applying a mild shock to a rat’s feet, the tone will elicit fear or anxiety after one or two pairings. This is an example of fear conditioning, which plays a role in creating many anxiety disorders in humans, such as phobias and panic disorders (Mineka & Zinbarg, 2006). Here, rather than a response like salivation, the CS triggers an emotion. Have you experienced conditioned emotional responses to formerly emotionally neutral stimuli? How about the emotional response you might have to a particular song, or a particular place, that once was your and your ex-partner's favorite song or the place you would go to meet one another? Or after you break up with someone, you seem to see their car (or cars like theirs) everywhere and you have a brief moment of anticipation that you might see them. Classical conditioning plays a large role in our emotional lives, and in the emotional lives of other animals as well.

    a young couple sitting close on a stone bench

    Figure \(\PageIndex{5}\): Intense emotions can be classically conditioned to originally neutral stimuli such as places or songs associated with a special person. (Image from Wikimedia Commons; File:A couple looking at the sea.jpg; https://commons.wikimedia.org/wiki/F...at_the_sea.jpg; by Joydip dutt; https://commons.wikimedia.org/wiki/F...at_the_sea.jpg; licensed under the Creative Commons Attribution-Share Alike 4.0 International license).

    Where classical conditioning takes place in the nervous system varies with the nature of the stimuli involved. For example, an auditory CS such as a bell will involve auditory pathways, including the auditory system's medial geniculate nucleus of the thalamus (Fanselow & Poulos, 2005) and auditory cortex in the temporal lobe, while a visual CS will involve visual pathways. A US such as food will involve taste pathways, whereas presentation of a painful shock US will involve pain and fear pathways. Researchers have identified a number of brain areas that become active during fear conditioning.

    In response to a painful shock during conditioning, there is increased neural activity in a variety of areas, but most notably in the amygdala and parts of the anterior cingulate cortex (Linnman et al., 2011). Other biological psychologists have found that the cerebellum has a special role in simpler forms of conditioning, such as conditioning of the eye-blink reflex in rabbits (Thompson & Steinmetz, 2009), whereas more complex conditioning involves the hippocampus and hippocampal-cerebellar interactions (Schmajuk & DiCarlo, 1991). More on this anatomy, and on models of conditioning at the synaptic level, will be presented later in this chapter.

    Operant Conditioning

    Operant conditioning is another kind of conditioning in which the organism actively operates on the environment. Like classical conditioning, it also involves learning a predictive relationship between events, but the events are different than in classical conditioning. In operant or instrumental conditioning, the animal or human learns a predictive relationship between its own voluntary behavior and the outcome or effect of that behavior. In other words, individuals learn the association between their behavior (i.e. responses) and the consequences of that behavior.

    a pigeon in a operant chamber facing a colored display

    Figure \(\PageIndex{6}\): An operant conditioning chamber. Pecking on the correct color will deliver food reinforcement to the pigeon (Image from Wikimedia Commons; File:The pigeons’ training environment.png; https://commons.wikimedia.org/wiki/F...nvironment.png; Creative Commons CC0 1.0 Universal Public Domain Dedication).

    Operant conditioning research studies how the effects of a behavior influence the probability that it will occur again. For example, the effect of the rat's lever-pressing behavior (i.e., receiving a food pellet) influences the probability that it will keep pressing the lever. According to Thorndike's law of effect, when a behavior has a positive (satisfying) effect or consequence, it is likely to be repeated in the future. However, when a behavior has a negative (painful/unpleasant) consequence, it is less likely to be repeated in the future. In other words, the effect of a response determines its future probability. Effects that increase the frequency of behaviors are referred to as reinforcers, and effects that decrease their frequency are referred to as punishers.
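    As a rough computational analogy (my own illustration of the law of effect, not a model taken from the text), the sketch below treats each voluntary behavior as having a value that reinforcement nudges up and punishment nudges down, with the probability of emitting each behavior derived from those values.

```python
import math
import random

# Toy law-of-effect simulation. The behaviors, consequence values, and
# learning rule here are illustrative assumptions, not data from the text.

def choice_probabilities(values, temperature=1.0):
    """Softmax: behaviors with higher value are emitted more often."""
    exps = {a: math.exp(v / temperature) for a, v in values.items()}
    total = sum(exps.values())
    return {a: e / total for a, e in exps.items()}

values = {"press_lever": 0.0, "groom": 0.0}   # initial indifference
consequence = {"press_lever": +1.0,           # food pellet (reinforcer)
               "groom": -0.2}                 # never reinforced, mildly costly
learning_rate = 0.5

random.seed(0)
for _ in range(30):
    probs = choice_probabilities(values)
    action = random.choices(list(probs), weights=list(probs.values()))[0]
    # Law of effect: nudge the emitted behavior's value toward its consequence.
    values[action] += learning_rate * (consequence[action] - values[action])

print(choice_probabilities(values))   # lever pressing now dominates
```

    After a few dozen simulated trials, the reinforced behavior dominates the animal's "choices" while the unreinforced behavior fades, which is the essential shaping effect the law of effect describes.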

    In general, operant conditioning involves an animal or human tracking the reinforcement contingencies or dependencies in its environment and exploiting them to its advantage. Clearly, operant conditioning is highly adaptive. Operant conditioning shapes the voluntary behavior of the organism to maximize reinforcement and minimize punishment, just as we would expect from the law of effect. The law of effect in turn depends upon reward circuits in the mesolimbic system (see Chapter on Psychoactive Drugs) and on circuits for pain and the emotional response to pain which are distributed in many regions of the brain including the somatosensory cortex, insula, amygdala, anterior cingulate cortex, and the prefrontal cortex (see chapter on Sensory processes). Reinforcers activate mesolimbic pleasure circuitry in the brain or reduce activity in pain circuits. This gives the organism feedback about its actions. Reinforcers and activation of pleasure circuits tend to be associated with enhanced adaptation (food, a reinforcer to an animal deprived of food, enhances chances of survival). Punishers activate circuitry for physical or emotional pain and tend to be associated with reduced adaptation and biological fitness (i.e. reduced chances of survival and reproduction; physical pain is associated with potential tissue damage, while emotional pain is often associated with loss of things or persons upon which one depends or highly values, including romantic partners, financial or social status, etc.). Voluntary behaviors by the animal which lead to positive consequences (and increased pleasure; reinforcement) for the animal tend to be repeated (increase in probability in the future). Voluntary behaviors which the animal produces that lead to nothing or lead to negative outcomes for the animal (a reduction in pleasure or the occurrence of pain) tend not to be repeated, but avoided by the animal in the future.

    A long row of coin-operated gumball machines.
    Figure \(\PageIndex{7}\): Receiving a reward can condition you toward certain behaviors. For example, when you were a child, your mother may have offered you this deal: "Don't make a fuss when we're in the supermarket and you'll get a candy treat on the way out." [Image: Oliver Hammond, https://nobaproject.com/modules/cond...g-and-learning; CC BY-NC-SA 2.0]

    This is a wonderful mechanism for assuring the adaptive organization of behavior in species with behavioral capacities beyond rigidly genetically programmed behavior. In fish, reptiles, amphibians, and many invertebrate species, the larger portion of behavioral adaptation is organized by information in the genes, honed over millions of years of evolution by natural selection. These reflexes and "instincts" generally do not rely very much, if at all, upon information acquired during the lifetime of the individual animal (i.e., learning), but instead upon information acquired over the evolutionary history of the species and stored in DNA.

    Learned voluntary behavior is flexible but must also be directed into adaptive patterns by some principle, and that principle is the law of effect. The law of effect allows behavioral flexibility but also provides a mechanism for assuring that the animal (or human) learns adaptive behavior, behavior good for it and its genes, most of the time. "Voluntary" behaviors are not rigidly pre-formed by genetic information, but are organized in a more general way which allows for their modification by information gathered from the animal's current environment (experienced day to day and even moment to moment).

    Like classical conditioning, operant conditioning is part of the way an animal forms a mental model or neural representation of its environment and the predictive relationships among events in that environment. In classical conditioning, it is the predictive relationship between two stimulus events that is learned. In operant conditioning, it is the predictive relationship between a voluntary response and its outcome or consequence that is learned. Prediction allows preparation for the future, even if the future is only moments away, and preparation for what is coming next improves chances of survival and reproduction.

    When we examine operant conditioning, we see that it bears some similarities to evolution by natural selection. In operant conditioning, voluntary responses by an animal that are successful usually are reproduced (repeated) and those that are unsuccessful (not reinforced) get weeded out (they are not repeated). In evolution by natural selection, genetic alternatives that are successful (lead to better adaptation) get reproduced (and appear in future generations) and those that are unsuccessful (do not lead to improved adaptation or are maladaptive) get weeded out (they are not replicated in future generations). In each case, a selection mechanism (natural selection or the law of effect) preserves some alternatives (genetic or behavioral, respectively) into the future, while eliminating others. In both cases, the result is improved adaptation.

    Cognition and Operant Conditioning

    Modern research also indicates that reinforcers do more than merely strengthen or “stamp in” the behaviors they are a consequence of. Instead, animals learn about the specific consequences of each behavior, and will perform a behavior depending on how much they currently want—or “value”—its consequence. This idea is best illustrated by a phenomenon called the reinforcer devaluation effect (see Colwill & Rescorla, 1986). A rat is first trained to perform two instrumental actions (e.g., pressing a lever on the left, and on the right), each paired with a different reinforcer (e.g., a sweet sucrose solution, and a food pellet). At the end of this training, the rat tends to press both levers, alternating between the sucrose solution and the food pellet. In a second phase, one of the reinforcers (e.g., the sucrose) is then separately paired with illness. This conditions a taste aversion to the sucrose. In a final test, the rat is returned to the Skinner box and allowed to press either lever freely. No reinforcers are presented during this test (i.e., no sucrose or food comes from pressing the levers), so behavior during testing can only result from the rat’s memory of what it has learned earlier. Importantly here, the rat chooses not to perform the response that once produced the reinforcer that it now has an aversion to (e.g., it won’t press the sucrose lever). This means that the rat has learned and remembered the reinforcer associated with each response, and can combine that knowledge with the knowledge that the reinforcer is now “bad.” Reinforcers do not merely stamp in responses; response varies with how much the rat wants/doesn’t want a reinforcer. As described above, in operant conditioning, the animal tracks the changing reinforcement and punishment contingencies in its environment, as part of a dynamic mental model or neural representation of its world, and it adjusts its behavior accordingly.
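    One common interpretation of this result is goal-directed (model-based) choice: the animal stores which outcome each response produces and, at the moment of choice, combines that knowledge with the outcome's current value. The sketch below illustrates that idea with made-up numbers; it is not the actual procedure or data of Colwill and Rescorla (1986).

```python
# Minimal goal-directed (model-based) choice sketch for reinforcer devaluation.
# The numbers are invented for illustration.

# What the rat learned during training: which outcome each response produces.
response_outcome = {"left_lever": "sucrose", "right_lever": "pellet"}

# Current value of each outcome. Sucrose has been devalued by pairing it with
# illness, without any further training on the levers themselves.
outcome_value = {"sucrose": -1.0, "pellet": +1.0}

def response_value(response):
    """A response is worth whatever its remembered outcome is worth right now."""
    return outcome_value[response_outcome[response]]

print({r: response_value(r) for r in response_outcome})
print("chosen response:", max(response_outcome, key=response_value))
# The left (sucrose) lever is avoided even though pressing it was never itself
# followed by illness: the choice combines stored response-outcome knowledge
# with the outcome's current value.
```

    The point of the sketch is that no new lever-press learning is needed; simply lowering the stored value of one outcome changes which response is emitted, just as the devaluation experiments show.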

    Habituation, sensitization, classical conditioning, and operant conditioning are just a few types of learning. Each contributes to adaptation and increases biological fitness (chances of solving the problems associated with survival and reproduction). There are many other types of learning as well, often quite specialized to perform a particular biological function. These specialized forms of learning, also known as adaptive specializations of learning, have been studied mostly by ethologists and behavioral biologists, but biological psychologists are becoming increasingly interested in such forms of learning and their importance. For instance, psychologists have studied one of these specialized forms of learning, taste aversion learning, extensively. In addition to this form, we will now also examine adaptive specializations of learning involved in bird navigation by the stars during migration, bee navigation by the sun, and acquisition of bird song, which some researchers have compared to human language acquisition.

    Specialized Forms of Learning

    "Biological mechanisms are adapted to the exigencies of the functions they serve. The function of memory is to carry information forward in time. The function of learning is to extract from experience properties of the environment likely to be useful in the determination of future behavior." (Gallistel, 2003, p. 259).

    "One cannot use a hemoglobin molecule as the first stage in light transduction and one cannot use a rhodopsin molecule as an oxygen carrier, any more than one can see with an ear or hear with an eye. Adaptive specialization of mechanism is so ubiquitous and so obvious in biology, . . . it is odd but true that most past and contemporary theorizing about learning does not assume that learning mechanisms are adaptively specialized for the solution of particular kinds of problems. Most theorizing assumes that there is a general purpose learning process in the brain, a process adapted only to solving the problem of learning. . . . , this is equivalent to assuming that there is a general purpose sensory organ, which solves the problem of sensing." (Gallistel, 2000, p. 1179).

    As the quotes above imply, learning, the acquisition of information during the lifetime of the individual animal, comes in many different forms. Many specialized forms of learning are highly specific for the solution of specific adaptive problems (problem domains) often found in only one or a few species. These types of learning probably involve specialized neural circuits, organized for their particular specialized form of learning by natural selection.

    A familiar sight is ducklings walking or swimming after their mothers. Hatchling ducks recognize the first adult they see, usually their mother, and form a bond with her that induces them to follow her. This type of non-associative learning is known as imprinting. Imprinting is a form of learning occurring at a particular age or life stage that is very important in the maturation process of these animals, as it encourages them to stay near their mother in order to be protected, greatly increasing their chances of survival. Imprinting provides a powerful example of biologically prepared learning in response to particular genetically determined cues. In the case of imprinting, the duckling becomes imprinted on the first moving object larger than itself that it sees after hatching. Because of this, if newborn ducks see a human before they see their mother, they will imprint on the human and follow him or her in just the same manner as they would follow their real mother. Because this form of learning is biologically prepared, some cues that trigger the duckling to learn to follow its mother (or a person) are innately programmed into the duckling's genetically controlled brain circuitry, while the details of the imprinting object (usually its real mother) are learned from the duckling's initial experience with a moving object larger than itself. Though this learning is very rapid because it is genetically (biologically) facilitated, it is also very resilient, its effects on behavior lasting well into adulthood. This form of learning illustrates well the general principle that all learning relies upon an underpinning of genetic information about some features of the learning situation, while details are filled in by learning through experience. For example, genetic information programs the duckling's brain to follow the first moving thing it sees after hatching that is larger than itself, while the details about what that object looks like are added to the bird's memory by learning during imprinting.

    seven baby ducklings following their mother
    Figure \(\PageIndex{10}\): Imprinting is the learned attachment of ducklings to their mother or, under rare circumstances, to the first moving object larger than themselves that they see after hatching (Image from LibreTexts, Book: General Biology (Boundless), Learned Animal Behavior 45.7A; caption by Kenneth A. Koenigshofer, Ph.D.; licensed under

    Taste Aversion Learning

    Taste aversion learning is a specialized form of learning that helps omnivorous animals (those that eat a wide range of foods) to quickly learn to avoid eating substances that might be poisonous. In rats, coyotes, and humans, for example, eating a new food that is later followed by sickness causes avoidance of that food in the future. With food poisoning, although having fish for dinner may not normally be something to be concerned about (i.e., a “neutral stimulus”), if it causes you to get sick, you will now likely associate that neutral stimulus (the fish) with the adaptively significant event of getting sick.

    This specialized form of learning is genetically or "biologically prepared" so that taste-illness associations are easily formed by a single pairing of a taste with illness, even if taste and illness are separated by extended periods of time, from 10-15 minutes up to several hours. Colors of food, or sounds present when the food is consumed, cannot be associated with illness; only taste and illness can be associated. This is known as "belongingness," an example of "biological preparedness," in which learning has specialized properties as a result of genetic evolution--in this case, only taste and illness can be readily associated, not visual or auditory stimuli and illness (Garcia and Koelling, 1966; Seligman, 1971). Another difference from typical classical conditioning is that taste aversion learning requires only a single pairing of taste and illness, whereas classical conditioning usually requires many pairings of the CS and US. The usual requirements for multiple pairings and close temporal contiguity between stimuli don't apply in learned taste aversion. This makes adaptive sense because, in the wild, sickness from a new food which is toxic won't occur immediately, but only after the food has had time to pass into the digestive system and be absorbed. And when it comes to poisons, you may not get a second chance to learn to avoid that substance in the future.
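    One way to caricature this preparedness computationally (an illustration only, not a published model) is to give each cue modality its own associability with illness: taste gets a high, single-trial learning rate, while lights and tones get an associability near zero.

```python
# Caricature of biological preparedness ("belongingness") in taste-aversion
# learning. The associability values are assumptions chosen for illustration.

ASSOCIABILITY = {"taste": 0.9,   # taste-illness associations form readily
                 "light": 0.01,  # audiovisual cues barely associate with illness
                 "tone": 0.01}

def single_conditioning_trial(cues_present, illness=1.0, strengths=None):
    """One pairing of the cues with illness; returns updated aversion strengths."""
    strengths = dict(strengths or {})
    for cue in cues_present:
        old = strengths.get(cue, 0.0)
        strengths[cue] = old + ASSOCIABILITY[cue] * (illness - old)
    return strengths

# "Bright, noisy, tasty water" followed (perhaps hours later) by illness:
# after a single trial, only the taste carries a substantial aversion.
print(single_conditioning_trial(["taste", "light", "tone"]))
```

    A single simulated trial leaves a strong aversion attached to the taste and essentially none to the light or tone, mirroring the Garcia and Koelling compound-cue findings described above.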

    This genetically prepared form of learning evolved over generations of natural selection in omnivorous species, which consume a large variety of foods. Species that are very specialized feeders, such as koalas, which eat eucalyptus leaves exclusively, or baleen whales, which filter ocean water for microscopic organisms, have not evolved taste aversion learning. They simply don't need it because they never experience novel foods that could possibly pose a threat.

    Adaptively Specialized Learning for Navigation and Song Acquisition

    Two other examples of specialized forms of learning are of interest here. Bees learn to navigate by the position of the sun, which changes with the date, time, and place on the Earth's surface. When moved to a new location, they learn to update their navigation to compensate for the change in location. This learning is strongly guided by innate, genetically evolved information stored in the bee brain (Dyer & Dickinson, 1994; Towne, 2008).

    One other example is of special interest: song learning in song birds. Birds of a given species can learn only the song of their own species, illustrating genetic constraints on what songs they can learn (Gallistel et al., 1991, p. 21). White-crowned sparrows show variations in their song depending upon their geographical location, akin to dialects in human language. Experiments have shown that young white-crowned sparrows learn the specific dialect by exposure to it during a critical period for song learning in the species. A critical period during which learning must occur indicates another genetic constraint on learning and is similar to the critical periods evident in imprinting, in the learning of the night sky by young Indigo Buntings (which learn the stars around the center of celestial rotation and later use them to orient during migration), and in the sensitive period in humans prior to adolescence for language acquisition.

    The learning involved in the above examples is not classical or operant conditioning; rather, each is a very specialized form of learning, often found in only one or a few species, for the solution of a specific adaptive problem (a specific domain). Note that the learned information supplements and interacts with information in the genes, guiding the animal to attend to and readily learn highly specific information. In the case of the buntings, in a sense, the birds "imprint" on the correct stars for navigation later in life.

    Observation learning

    Observation learning is learning by watching the behavior of others. It is obviously an extremely important form of learning in us, but it is also "an ability common to primates, birds, rodents, and insects" (Dawson et al., 2013). It plays a crucial role in human social learning. Imagine a child walking up to a group of children playing a game on the playground. The game looks fun, but it is new and unfamiliar. Rather than joining the game immediately, the child opts to sit back and watch the other children play a round or two. Observing the others, the child takes note of the ways in which they behave while playing the game. By watching the behavior of the other kids, the child can figure out the rules of the game and even some strategies for doing well at the game.

    Photo showing a group of children standing by, intently watching an adult playing a game of chess.
    Figure \(\PageIndex{11}\): Children observing a social model (an experienced chess player) to learn the rules and strategies of the game of chess. [Image: David R. Tribble, https://nobaproject.com/modules/cond...g-and-learning, NOBA, Conditioning and Learning; CC BY-SA 3.0]

    Observational learning is a component of Albert Bandura’s Social Learning Theory (Bandura, 1977), which posits that individuals can learn novel responses via observation of key others’ behaviors. Observational learning does not necessarily require reinforcement, but instead hinges on the presence of others, referred to as social models. Social models in humans are typically of higher status or authority compared to the observer, examples of which include parents, teachers, and older siblings. In the example above, the children who already know how to play the game could be thought of as being authorities—and are therefore social models—even though they are the same age as the observer. By observing how the social models behave, an individual is able to learn how to act in a certain situation. Other examples of observational learning might include a child learning to place her napkin in her lap by watching her parents at the dinner table, or a customer learning where to find the ketchup and mustard after observing other customers at a hot dog stand.

    Bandura theorizes that the observational learning process consists of four parts. The first is attention—as, quite simply, one must pay attention to what s/he is observing in order to learn. The second part is retention: to learn one must be able to retain the behavior s/he is observing in memory. The third part of observational learning, initiation, acknowledges that the learner must be able to execute (or initiate) the learned behavior. Lastly, the observer must possess the motivation to engage in observational learning. In our vignette, the child must want to learn how to play the game in order to properly engage in observational learning. Bandura, Ross, & Ross (1963) demonstrated that children who observed aggression in adults showed less aggressive behavior if they witnessed the adult model receive punishment for their aggression. Bandura referred to this process as vicarious reinforcement, as the children did not experience the reinforcement or punishment directly, yet were still influenced by observing it.

    Observation Learning and Cultural Transmission Improve Biological Fitness in Non-human Animals

    Orangutans in protected preserves have been seen copying humans washing clothes in a river. After watching humans engage in this behavior, one of the animals took pieces of clothing from a pile of clothes to be washed and engaged in clothes washing behavior in the river, imitating behavior it had recently observed in humans. Orangutans also use observation learning to copy behaviors of other orangutans. Observation learning has also been reported in wild and captive chimpanzees and in other primates such as Japanese macaque monkeys. One of the most thoroughly studied examples of observation learning in animals is in Japanese macaques.

    There is a large troop of macaques (Old World monkeys; see Chapter 3) that lives near the beaches of Koshima Island in Japan. Researchers were interested to see how these animals would respond if novel foods such as wheat grain were scattered on the sand. At first, the animals meticulously picked the grain out of the sand, laboriously cleaning it one grain at a time. However, researchers reported that after a while one Japanese macaque in the troop invented an efficient method for cleaning the grain by scooping up handfuls of wheat grain and sand and throwing the mixture into the water. The wheat grains floated while the sand sank. This macaque then scooped up quantities of clean grain floating on the surface of the water and ate its fill, repeating its novel grain-cleaning behavior again and again. Although this showed impressive intelligence and inventiveness on the part of this monkey, just as significant was the fact that other members of the troop observed this behavior and copied it. By observation learning, most of the troop learned this innovative method of cleaning and separating grain from sand. As time passed, youngsters observed older members engaging in this learned behavior and copied it, so that the behavior was passed down over several generations (Schofield et al., 2018). This example of observation learning illustrates one of its most important biological functions--observation learning is a primary mechanism of cultural transmission of learned behavior across generations, not only in animals like the macaques, but even more so in humans. The effect of the cultural transmission of learned behavioral adaptations from generation to generation produces "cumulative culture . . . characterized as a 'ratchet,' yielding progressive innovation and improvement over generations (Tomasello et al. 1993). The process can be seen as repeated inventiveness that leads to incrementally better adaptation; that is, more efficient, secure, . . . survival and reproduction" (Schofield et al., 2018, p. 113). Efficient cultural transmission of successful learned behavior is enormously powerful in boosting biological fitness, and it accounts for those features of human life, such as science, technology, governments, and so on, that distinguish us most from all other species on the planet (Koenigshofer, 2011, 2016).

    Cognitive Learning

    Classical and operant conditioning are only two of the ways that humans and other intelligent animals learn. Some primates, including humans, are able to learn by imitating the behavior of others and by taking instructions. The development of complex language by humans has made cognitive learning, a change in knowledge as a result of experience or of mental manipulation of existing knowledge, the most prominent method of human learning. In fact, that is how you are learning right now: by reading this information, you are experiencing a change in your knowledge. Humans, and probably some non-human animals, can form mental images of objects or organisms, imagining changes to them or behaviors by them as they anticipate the consequences. Cognitive learning is so powerful that it can be used to understand conditioning (discussed in the previous modules); the reverse is not true, however, as conditioning cannot be used to understand cognition.

    Classic work on cognitive learning was done by Wolfgang Köhler with chimpanzees. He demonstrated that these animals were capable of abstract thought by showing that they could learn how to solve a puzzle. When a banana was hung in their cage too high for them to reach, along with several boxes placed randomly on the floor, some of the chimps were able to stack the boxes one on top of the other, climb on top of them, and get the banana. This implies that they could visualize the result of stacking the boxes even before they had performed the action. This type of learning is much more powerful and versatile than conditioning.

    Cognitive learning is not limited to primates, although they are the most efficient at using it. Maze-running experiments done in the 1920s were the first to show cognitive skills in a relatively simple mammal, the rat. The motivation for the animals to work their way through the maze was the presence of a piece of food at its end.

    In these studies, the animals in Group I were run one trial per day and had food available to them each day on completion of the run. Group II rats were not fed in the maze for the first six days, and subsequent runs were done with food for several days after. Group III rats found no food in the maze for the first three days, and food was available every day thereafter. The results were that the control rats, Group I, learned quickly, figuring out how to run the maze in seven days. Group III did not learn much during the three days without food, but rapidly caught up to the control group when given the food reward. Group II learned very slowly for the six days with no reward to motivate them. They did not begin to catch up to the control group until the day food was given; it then took two days longer to learn the maze. Results suggested that although there was no reward for the rats in Groups II and III during the first several days of the experiment, the rats were still learning. This is evidenced particularly by the performance of Group III: although not given any food reward in the maze for the first three days of the experiment, once food reward was added on day 4, the maze-learning performance of rats in this group rapidly caught up with that of the control group (Group I), which had received food reward in the maze every day from the start of the experiment.

    These results showed that even in the absence of food reward, the rats were learning about the maze. This was important because, at the time, many psychologists believed that learning could only occur in the presence of reinforcement. This experiment showed that learning information about a maze, gaining knowledge about it, can occur even in the absence of reinforcement. Some have referred to this as "latent learning": the learning that has taken place remains hidden, or latent, in behavior until a motivating factor such as food reward stimulates action that reveals the learning that had previously not been apparent in observable behavior.

    Cognitive learning involves acquisition of knowledge, in this case, about a maze; in this example, it took place in rats without food reward, as evidenced by their performance in the maze once food reward was presented as a motivator. This procedure revealed the learning that had taken place in the first few days of the experiment.
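    The latent-learning result can be sketched as a two-stage process (a simplified illustration with a hypothetical maze layout, not the original 1920s procedure): during unrewarded runs the rat stores the maze's layout as a graph, its "cognitive map," and once food appears it can use that stored map to produce an efficient route immediately.

```python
from collections import deque

# Simplified cognitive-map sketch of latent learning. The maze layout is
# hypothetical; this illustrates the idea rather than the original experiment.

# Stage 1: during unrewarded runs the rat stores which locations connect to
# which -- a map learned with no food present (the learning stays "latent").
cognitive_map = {
    "start": ["A", "B"],
    "A": ["start", "C"],
    "B": ["start", "D"],
    "C": ["A", "goal_box"],
    "D": ["B"],
    "goal_box": ["C"],
}

def shortest_path(graph, start, goal):
    """Breadth-first search over the stored map."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

# Stage 2: once food appears in the goal box, the stored map supports an
# efficient route immediately, revealing the learning that was already there.
print(shortest_path(cognitive_map, "start", "goal_box"))
```

    The map is acquired entirely without reward; the reward only changes what the animal does with it, which is the sense in which the learning was latent.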

    Drawing of a maze and a graph of the performance of rats in three different conditions
    Figure \(\PageIndex{12}\): Cognitive learning: Group I (the green solid line) found food at the end of each trial; Group II (the blue dashed line) did not find food for the first 6 days; and Group III (the red dotted line) did not find food during runs on the first three days. Notice that once food was introduced, both delayed-reward groups rapidly caught up to the control group, with the group given food earlier (Group III) catching up sooner. The orange dots on the Group II and III lines show the days when food rewards were added to the mazes. (Image from: Book: General Biology (Boundless), Learned Animal Behavior 45.7C. Unless otherwise noted, LibreTexts content is licensed by CC BY-NC-SA 3.0. Legal. https://bio.libretexts.org/Bookshelv...d_Sociobiology)

    Clearly this type of learning is different from conditioning. Although one might be tempted to believe that the rats simply learned how to find their way through a conditioned series of right and left turns, Edward C. Tolman proved a decade later that the rats were making a representation of the maze in their minds, which he called a “cognitive map.” This was an early demonstration of the power of cognitive learning and how these abilities were not limited just to humans.

    Research discussed more fully in the Cognition and Intelligence chapter will demonstrate the involvement of a variety of cortical areas as well as the hippocampus in the ability to form cognitive maps and engage in many types of cognitive learning.

    References

    Chiappe, D., and MacDonald, K. (2005). The evolution of domain-general mechanisms in intelligence and learning. Journal of General Psychology, 132 (1), 5-40.

    Dawson, E. H., Avarguès-Weber, A., Chittka, L., & Leadbeater, E. (2013). Learning by observation emerges from simple associations in an insect model. Current Biology, 23(8), 727-730.

    Dyer, F.C., and Dickinson, J.A. (1994). Development of sun compensation by honeybees: How partially experienced bees estimate the sun's course. Proceedings of the National Academy of Sciences, USA, 91, 4471-4474.

    Gallistel, C. R., Brown, A. L., Carey, S., Gelman, R., & Keil, F. (1991). Lessons from animal learning for the study of cognitive development. In S. Carey & R. Gelman (Eds.), The epigenesis of mind: Essays on biology and cognition (pp. 3-36). Psychology Press.

    Gallistel, C. R. (1992). Classical conditioning as an adaptive specialization: A computational model. In D. L. Medin (Ed.), The psychology of learning and motivation: Advances in research and theory (pp. 35-67). San Diego: Academic Press.

    Gallistel, C. R. (2000). The replacement of general-purpose learning models with adaptively specialized learning modules. The Cognitive Neurosciences, 2, 1179-1191.

    Gallistel, C. R. (2003). The principle of adaptive specialization as it applies to learning and memory. In Principles of learning and memory (pp. 259-280). Birkhäuser, Basel.

    Garcia, J., & Koelling, R. A. (1966). Relation of cue to consequence in avoidance learning. Psychonomic Science, 4, 123-124.

    Kandel, E. (1976). Cellular Basis of Behavior. San Francisco. W.H. Freeman and Company.

    Koenigshofer, K.A. (2011). Mind Design: The Adaptive Organization of Human Nature, Minds, and Behavior. Pearson Education. Boston.

    Koenigshofer, K.A. (2016). Mind Design: The Adaptive Organization of Human Nature, Minds, and Behavior. Revised Edition. Amazon e-book.

    Koenigshofer, K. A. (2017). General Intelligence: Adaptation to Evolutionarily Familiar Abstract Relational Invariants, Not to Environmental or Evolutionary Novelty. The Journal of Mind and Behavior, 119-153.

    Linnman, C., Rougemont-Bücking, A., Beucke, J. C., Zeffiro, T. A., & Milad, M. R. (2011). Unconditioned responses and functional fear networks in human classical conditioning. Behavioural brain research, 221(1), 237-245.

    Schmajuk, N. A., & DiCarlo, J. J. (1991). A neural network approach to hippocampal function in classical conditioning. Behavioral neuroscience, 105 (1), 82.

    Schofield, D. P., McGrew, W. C., Takahashi, A., & Hirata, S. (2018). Cumulative culture in nonhumans: overlooked findings from Japanese monkeys?. Primates, 59 (2), 113-122.

    Seligman, M. (1971). Phobias and preparedness. Behavior Therapy, 2, 307–321.

    Tolman, E. C., and Brunswik, E. (1935). The organism and the causal texture of the environment. Psychological review, 42 (1), 43.

    Towne, W. F. (2008). Honeybees can learn the relationship between the solar ephemeris and a newly-experienced landscape. Journal of Experimental Biology, 211(23), 3737-3743.

    Attributions

    "Overview," "Habituation and Adaptation," "Conditioning and Biological Adaptation," "Conditioning Involves Learning Predictive Relations," "Operant Conditioning," "Specialized Forms of Learning," "Taste Aversion Learning," "Adaptively Specialized Learning for Navigation and Song Acquisition," and "Observation Learning and Cultural Transmission Improve Biological Fitness in Non-human Animals" were written by Kenneth A. Koenigshofer, PhD. and are licensed under CC BY 4.0.

    Some text and images adapted by Kenneth A. Koenigshofer, PhD, from LibreTexts, Book: General Biology (Boundless), Learned Animal Behavior 45.7A, and from Mark E. Bouton (2021), University of Vermont, Conditioning and Learning. In R. Biswas-Diener & E. Diener (Eds), Noba textbook series: Psychology. Champaign, IL: DEF publishers. Retrieved from http://noba; Conditioning and Learning by Mark E. Bouton is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Permissions beyond the scope of this license may be available in our Licensing Agreement.

    Cognitive Learning adapted by Kenneth A. Koenigshofer, Ph.D., from LibreTexts, Book: General Biology (Boundless), Learned Animal Behavior 45.7C Unless otherwise noted, LibreTexts content is licensed by CC BY-NC-SA 3.0. Legal.

    Further editing completed by Alan Keys, Ph.D., Sacramento City College, Sacramento, CA.

    CC LICENSED CONTENT, SHARED PREVIOUSLY

    • Curation and Revision. Provided by: Boundless.com. License: CC BY-SA: Attribution-ShareAlike


    This page titled 9.2: Types of Learning and Biological Adaptation is shared under a mixed license and was authored, remixed, and/or curated by Kenneth A. Koenigshofer.