
6.2: The Visual System and Present Controversy


    It is well documented that human beings have exceptional visual capabilities. While an owl may see with acuity at night, and the lizard may lack a visual blind spot, the human visual system, which evolved from a shared primate brain, allows for flexibility and an emergent, powerful ability to predict. The combination of our physiology and cognitive abilities enables the integration of vast amounts of visual information to create perceptual experiences that do not deviate much from those of the people around us. Indeed, visual experience requires some uniformity to ground humans in an agreed-upon representation of reality. Perception is therefore rooted in this understanding and must be tethered to some similarity across people. As a counterexample, hallucinations demonstrate how perception without reality constraints lacks any observable order (see Clark, 2013). We must agree that a particular pattern of light waves hitting the retina yields the color green; this agreement is a first step in semantic development and in abstracting away important ideas. But the question remains: how do we (if we do at all) use previous information, memories, and states to inform and facilitate visual perception?

    However, these questions are not novel. Beginning as far back as Descartes (1637), there has been a marked intrigue in how, mechanistically, humans are able to assimilate the extensive visual information present at any given moment to adequately traverse our social environments. This curiosity has not waned. Visual system-centered work has extended beyond the musings of Da Vinci and Descartes to more modern-day science like that of Hubel (Polyak, 1957; Schmolesky, 1995). Today, we capitalize on access to neurophysiological components of the visual system and a general template for information processing to inform questions and research concerning vision. Research done with a similar primate visual system, that of the macaque monkey, has illuminated some key neural structures that process visual information (Fitzpatrick, Itoh, & Diamond, 1983; Shipp & Zeki, 1989). By mapping specific connections across brain regions, we have begun to piece together where the visual system is distributed in the brain, which regions are most crucial for processing visual information, and the cascade of processes and networks responsible for bringing visual information to conscious awareness.

    However, correlating activity and locating areas within the brain can only answer so many questions. It is necessary to extend these models to other domains to understand how these areas work. Are they running in parallel or serially? Modular accounts of visual recognition suggest that distinct visual cortical areas (V1-V5) process different types of visual information and together make up the primary cascade for recognizing objects (Ungerleider & Haxby, 1994). Visual information first hits the retina, where it is transduced and pooled by ganglion cells. It is here that other important facets of the visual system are most salient: attending to specific areas determines what information falls within the field of vision, with information situated close to the fovea most effectively represented. After this, information is relayed through the lateral geniculate nucleus (LGN), which sparks the cascade of cortical processing that creates the final representation (Van Essen, Anderson, & Felleman, 1992; Felleman & Van Essen, 1991).

    Beginning at the primary visual cortex, or V1, rudimentary object features are calculated from raw visual information (Desimone & Ungerleider, 1989). From there, increasingly specialized areas selectively fill in missing information, building up to a representation of the object at the conscious level (Kastner & Ungerleider, 2000). Moreover, computational evidence suggests that the cascade of processing within the visual system that leads to the conscious categorization and subsequent identification of objects proceeds in a hierarchical fashion. Again using the macaque monkey's visual system as a proxy, findings suggest that specific cortical areas (e.g., V1, V4) process distinct components of the overall sensory input. This processing occurs in stages, with low-level features of the two-dimensional input eventually yielding a three-dimensional (3-D) object representation (Perrett & Oram, 1993). Other theorists have extended this work to evaluate its applicability given human biological constraints, finding that physiological evidence implicating the inferior temporal cortex (IT) can be modelled for basic performance (Riesenhuber & Poggio, 1999). Essential to this line of reasoning is the feed-forward building of complexity.
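
    To make the feed-forward idea concrete, consider the following minimal sketch in Python with NumPy. It is a toy in the spirit of the hierarchy Riesenhuber and Poggio describe, not their HMAX implementation: the input image and filters are random stand-ins, and the layer sizes are arbitrary. It alternates template-matching ("simple") stages with max-pooling ("complex") stages, so responses grow more selective and more position-tolerant as they ascend.

```python
import numpy as np

def simple_layer(image, filters):
    """Template matching: each unit responds to how well a local patch
    matches its preferred feature (V1-like selectivity)."""
    fh, fw = filters.shape[1:]
    H, W = image.shape
    out = np.zeros((len(filters), H - fh + 1, W - fw + 1))
    for k, f in enumerate(filters):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[k, i, j] = np.tanh((image[i:i+fh, j:j+fw] * f).sum())
    return out

def complex_layer(maps, pool=2):
    """Max pooling: keep the strongest response in each neighborhood,
    trading precise position for tolerance to small shifts."""
    K, H, W = maps.shape
    maps = maps[:, :H - H % pool, :W - W % pool]
    return maps.reshape(K, -1, pool, maps.shape[2] // pool, pool).max(axis=(2, 4))

rng = np.random.default_rng(0)
image = rng.random((16, 16))              # stand-in for retinal input
filters = rng.standard_normal((4, 3, 3))  # toy oriented-edge detectors

s1 = simple_layer(image, filters)     # local feature detection
c1 = complex_layer(s1)                # position tolerance
object_signal = c1.max(axis=(1, 2))   # crude "IT-like" summary per feature
print(object_signal)
```

    The design point is that every stage only consumes the output of the stage below it; in this strictly feed-forward account, nothing descends back down the hierarchy.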

    Neurophysiological mapping has established specific subdivisions, each of which contributes to overall functioning. These subdivisions are largely made up of distinct cellular channels, including the magnocellular pathway (M-pathway) and the parvocellular pathway (P-pathway), research that has been spearheaded by studies of the macaque monkey visual system (Maunsell, Nealey, & DePriest, 1990). The M-pathway and P-pathway have also been linked to specific spatial frequency information, whereby the M-pathway, situated primarily in the dorsal stream, is sensitive to low-spatial frequency information, and the P-pathway, primarily located in the ventral stream, is sensitive to high-spatial frequency information (Burr, Morrone, & Ross, 1994; Goffaux et al., 2005). Whereas the P-pathway primarily facilitates perception of contrast and color, notably higher-order features of visual perception, the M-pathway facilitates perception of motion and coarse greyscale information (Merigan & Maunsell, 1993; Vuilleumier et al., 2003). Interestingly, these pathways are also thought to map onto unconscious (M-pathway) and conscious (P-pathway) visual processing (Tapia & Breitmeyer, 2011). These divergent features of cellular channels within the visual system highlight an important characteristic of our primate visual system: namely, the parallel, coordinated nature of visual processing.
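
    The spatial frequency division can be illustrated with the stimulus-filtering technique used in studies like Goffaux et al. (2005). The sketch below (Python with SciPy; the image is a random stand-in) splits an input into the low-spatial-frequency band the M-pathway is most sensitive to and the high-spatial-frequency band favored by the P-pathway. Note that this manipulates the stimulus; it does not model the physiology itself.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
image = rng.random((64, 64))  # stand-in for a grayscale stimulus

# Low-spatial-frequency content: coarse layout, the kind of signal the
# M-pathway is most sensitive to. A large Gaussian blur keeps only
# gradual luminance changes.
lsf = gaussian_filter(image, sigma=4.0)

# High-spatial-frequency content: fine detail, the kind of signal the
# P-pathway is most sensitive to. Subtracting the blur leaves edges
# and texture.
hsf = image - lsf

# By construction, the two bands sum back to the original image.
assert np.allclose(lsf + hsf, image)
```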

    Whereas these approaches are founded on the bottom-up nature of visual perception and object recognition, recent research has begun to challenge this view. For instance, the subjective value of an object may affect the proximity at which said object is perceived: subjectively more desired objects (e.g., money) are estimated as closer than less desirable objects (Balcetis & Dunning, 2010). Further, research on action-based accounts of perception suggests that perception of hill steepness is influenced by metabolic costs (Proffitt, 2006). This line of research has revived the New Look debate, which claims that our beliefs, motivations, affordances, and more directly affect how we interpret incoming visual data (Balcetis, 2016). The updated New Look advocates a penetrability of perception by cognition, while other researchers continue to advocate a cognitively "impenetrable" V1, suggesting that methodological limitations have stymied legitimate challenges to conventional conceptions of bottom-up processing (see Firestone & Scholl, 2016). Are the differences in observed behavior a function of response biases or of actual perceptual modulation? Does the money actually appear closer, or is the effect just a function of relative desirability? If a judgement is driving the observed differences in responses, early regions of the visual processing cascade may be independent of cognitive influence. While evidence is mounting, these questions remain to be answered.
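
    Signal detection theory gives this debate its sharpest formulation: sensitivity (d′) indexes whether the stimulus actually looks different, while the criterion (c) indexes a mere willingness to respond differently. A minimal sketch follows (Python with SciPy); the trial counts are hypothetical, purely for illustration, and not data from any study cited here.

```python
from scipy.stats import norm

def sdt(hits, misses, false_alarms, correct_rejections):
    """Classic equal-variance signal detection indices.
    d-prime: perceptual sensitivity (does the stimulus actually look
    different?); c: response criterion (is the observer simply more
    willing to say 'yes'/'closer'?)."""
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa
    criterion = -0.5 * (z_hit + z_fa)
    return d_prime, criterion

# Hypothetical counts from a 'did the object appear close?' task.
print(sdt(hits=40, misses=10, false_alarms=20, correct_rejections=30))
```

    If desirability shifted only c, a response-bias account would suffice; a shift in d′ would point to genuine perceptual modulation.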

    In the following sections, I will attempt to answer these questions through the predictive lens. First, I will touch on prominent approaches to studying object recognition in humans. While the lens through which object recognition is studied varies greatly, an overarching goal is to better understand how human brains utilize prior information to inform ongoing processing of visual information. How might stimulus-driven conceptions of vision be limiting? What units or features of visual stimuli make up the foundation for understanding the complex objects we encounter? The scope of our knowledge about our environments is vast, informed by physiological states, memory, and sensory signals across modalities. How all of this information is integrated represents a critical question within vision research. Beginning with neuropsychological approaches to vision and then moving to cognitive science methods, the following sections consider the relative advances and limitations of studying vision through the predictive lens.

    To answer these questions of general perception, and the more specific questions of object recognition, methodological approaches began with neural pathway mapping. This was useful in identifying particular neurons that process specific information, but it does not get us closer to understanding how this processing happens. Computational approaches are currently gaining momentum. These approaches differ from the traditional neurobiological study of perception, instead assessing how well information-processing models fit observed behavior and neural data.

    Neurophysiological Evidence

    Neural approaches have guided much of the research on the visual system and predictive vision. These approaches capitalize on the physical accessibility of neuroscientific methodology, enabled by the similarity of our visual cortices to those of other primates (Milner & Goodale, 1995). Early work isolated two major pathways by which our visual system preferentially processes distinct components of what we see (Milner & Goodale, 2008). Specifically, the ventral stream tells us what an object is, while the dorsal stream tells us where it is (Goodale & Milner, 1992; Goodale & Westwood, 2004). The ventral stream is made up of descending and ascending routes that are necessary for detailed visual information (Bar, 2000). Much of the research on ventral stream processing has focused on the traditional bottom-up approach, or on the ascending pathways, whereby visual information hits the V1, V2, and V4 cortical areas and then projects onto high-level regions implicated in object perception, like the IT (Bar, 2000). Generally, it is through these visual cortices (V1-V4) that the ventral visual stream hierarchically creates the visual representation (Hong et al., 2016). By contrast, the dorsal stream is implicated in movement-based vision, aiding in the calibration of motor functions and the detection of movement in the periphery. This pathway is less implicated in the process of object recognition; rather, it is fine-tuned for perception for action (Goodale & Milner, 1992). However, some research has suggested that our brain may rely on predictions from gross low-level features to supply an initial guess for other parts of the visual system to refine (Bar, 2003; Bar et al., 2006; Kveraga, Boshyan, & Bar, 2007). These findings highlight the potentially important contribution of the dorsal stream to complex visual object recognition and have opened the door to new explorations within the purview of the predictive lens.

    The dual-system process model posits that ascending neural pathways in the ventral stream hierarchically build the representation of a given object. Though this is undoubtedly a piece of the puzzle, there is much to be gleaned from incorporating feedback loops into the model of visual object recognition. Specifically, research has demonstrated the importance of context in object perception (Bar, 2004; Fenske, Aminoff, Gronau, & Bar, 2006), the engagement of higher-order structures in processing degraded or ambiguous stimuli (Wyatte, Curran, & O'Reilly, 2012), and the perceptibility of low spatial frequency objects (Kveraga et al., 2007). Recent evidence has corroborated these findings, demonstrating context-dependent oscillation patterns within the prefrontal and parietal cortices (Helfrich, Huang, Wilson, & Knight, 2017). These findings emphasize the role of expectation in guiding and enhancing visual perception. Contextual factors provide cues that associate the input with similar objects, facilitating and even biasing visual processing.

    The two-system approach garnered a great deal of support, engendering a flood of scientific research. However, there are fundamental limitations to such approaches. Namely, they rely on isolating pathways in the brain, ostensibly overlooking the complexities and interdependencies that exist between these pathways. It is well documented that the brain is an iterative, dynamic organ (Cunningham & Zelazo, 2007). Research in this domain has implicated the PFC and reflective processing in extricating human neural processing from the more automatic associative processing seen in animals, favoring a hierarchical brain architecture that allows for afferent and efferent connections between brain regions (see Zelazo & Cunningham, 2007). This approach has also been situated within the object recognition literature: work done by Bar (2000) underscores how studying the pathways separately may have neglected important contributions from prefrontal brain areas to ongoing visual processing.

    To fully understand how neurobiology might suggest a top-down, cognitive penetrability, it is important to reconcile the role of the orbitofrontal cortex (OFC) in the processing of affective information. Specifically, the amygdala has established connectivity to the OFC, which implicates the OFC in encoding emotionally salient information (Pessoa & Adolphs, 2010). Studies have implicated the OFC in representing threat and reward (Kringelbach & Rolls, 2004), as well as in processing and representing auditory and visual information (Kringelbach, 2005). This research has further linked the OFC to visual processing by demonstrating that the OFC is activated around 80-130 ms after stimulus onset (Lamme & Roelfsema, 2000). While this is not the earliest component of visual processing (<100 ms), fMRI and MEG imaging have established OFC activity within this short-latency window, early enough to modulate ongoing processing (Barrett & Bar, 2009; O'Callaghan, Kveraga, Shine, Adams, & Bar, 2016).

    Finally, while not the primary focus of the present chapter, attention is an important part of visual perception that often obfuscates the interpretation of visual system penetrability. Attention is a critical facet of the visual system and is important to understanding the ways in which higher-order cognitive processes bias visual processing. We can only process what we attend to, and as such, visual attention operates as a sort of gatekeeper in the cascade of conscious object representation. Specific stimulus properties, like emotion, are prioritized, which increases the likelihood that they will be attended to (Öhman, Flykt, & Esteves, 2001). Indeed, like a perceptual bias or expectation, attentional biases increase the probability of recognition, which makes attentional and perceptual biases difficult to disentangle. However, even at the attentional level, some findings suggest an influence of visual expectations (Gantman & Van Bavel, 2014). Attentional control refers to processes that facilitate both the suppression of irrelevant stimuli (temporal attention) and the broadening of visual field input (spatial attention), attentional systems that research has shown are differentially biased by emotional states (Clore & Huntsinger, 2007; Gable & Harmon-Jones, 2008, 2010). Spatial, or broadened, attention increases target detection in peripheral locations, yet increases inaccurate responses due to the costs associated with unfocused processing of visual stimuli. Conversely, temporal, or flexible, attention requires a focused lens, making irrelevant and peripheral targets difficult to process, but increases the accuracy of target identification. Specifically, previous research has shown that high-arousal emotions increase local target detection compared to happiness and sadness (Easterbrook, 1959; Eysenck et al., 2007; Gable & Harmon-Jones, 2008, 2010; Clore & Huntsinger, 2007; Wells & Matthews, 1996). These results highlight the ways in which motivations and emotions may influence attentional processing, subsequently influencing perception. Therefore, even at the attentional level, top-down expectancies can influence visual processing.

    Emotions, Motivations, and Perception

    A critical piece of the present argument is that emotions, often considered part of System 1 (Kahneman, 2011), create powerful preferences through which we see the world. Emotions are a crucial component of our capacity to navigate social systems. As mentioned above, emotion saliency provides an interesting intersection between attention and prediction. Emotions guide the way we see our worlds, shape how we interact with others, and motivate goal-directed behavior. Research highlights the impact of emotions on our lives, demonstrating that emotions influence our attitudes (DeSteno, Dasgupta, Bartlett, & Cajdric, 2004; Esses & Dovidio, 2002), our decisions (Lerner & Keltner, 2001; Lerner, Small, & Loewenstein, 2004), and our judgements (Forgas, 2013; Clore & Huntsinger, 2007).

    As mentioned above, recent studies have illustrated that there is a top-down contribution to object recognition stemming from the dorsal stream (Kveraga et al., 2007). One prominent theory, the Frame and Fill theory (FnF), posits that object processing within the ventral stream relies on contributions from the dorsal stream (magnocellular connections; M-pathway), which contributes global outlines of visual input and, via the OFC, estimates of what the object is. The ventral visual stream (parvocellular channels; P-pathway) relies on the global template and 'fills' in the details necessary for accurate object recognition (Bullier, 2001; Chen et al., 2007). Further, by introducing the dorsal stream as a mechanism through which emotions may bias object recognition, there may also be important implications for the biasing of attention. Research suggests that the dorsal stream governs the shifting of attention (Siegel, Donner, Oostenveld, Fries, & Engel, 2008). Thus, the FnF theory provides a cohesive model of attention and object recognition for studying biases that influence early processing, specifically biases caused by emotional content. Through this lens, the relationship between emotion and object recognition can be better tested: by biasing processing toward the M-pathway and the dorsal stream, object recognition (and flexible attention) may be facilitated.
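
    The following toy sketch (Python with NumPy/SciPy) caricatures the division of labor FnF proposes; it is not Bar's actual model, and the object "templates" are random arrays introduced purely for illustration. A blurred, low-spatial-frequency "frame" cheaply shortlists candidate identities, and fine detail is then used to "fill" in a decision among only those candidates.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(2)
templates = {name: rng.random((32, 32)) for name in
             ["mug", "phone", "stapler", "apple", "book"]}
stimulus = templates["phone"] + 0.1 * rng.standard_normal((32, 32))

def similarity(a, b):
    """Cosine similarity between two images."""
    return float(np.dot(a.ravel(), b.ravel()) /
                 (np.linalg.norm(a) * np.linalg.norm(b)))

# 'Frame': the fast M-pathway/dorsal guess works on blurred input and
# shortlists a few plausible identities for the OFC to project back.
blur = lambda img: gaussian_filter(img, sigma=3.0)
coarse = blur(stimulus)
shortlist = sorted(templates,
                   key=lambda n: similarity(blur(templates[n]), coarse),
                   reverse=True)[:2]

# 'Fill': the slower P-pathway/ventral pass compares fine detail, but
# only against the shortlist rather than every stored object.
best = max(shortlist, key=lambda n: similarity(templates[n], stimulus))
print(shortlist, "->", best)
```

    The computational advantage the theory attributes to the early dorsal/OFC guess shows up here as the expensive fine-grained comparison running over two candidates rather than the whole object vocabulary.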

    Findings from affective neuroscience also suggest that the primacy of emotion may guide gating mechanisms of early visual inputs as well as recruit the engagement of the OFC, altering ongoing perceptual and visual processing (Feldman-Barrett & Bar, 2009; Schmitz et al., 2009). Emotions and affective states are informationally rich, intimately interacting with cognitive processes (see Clore, Gasper, & Garvin, 2001). Together, emotion and perception optimize the visual identification process. Specific evidence has implicated positive and negative affect in the encoding of visual information, suggesting that differences between positive and negative states interact with the encoding of peripheral information by altering the field of vision (Schmitz, De Rosa, & Anderson, 2009; Rowe et al., 2007). Moreover, binocular rivalry studies have unlocked interesting insights into what achieves perceptual dominance: emotional faces (Alpers & Gerdes, 2007), affectively conditioned stimuli (Alpers et al., 2005), and motivationally valued stimuli (Balcetis, Dunning, & Granot, 2012) all overtake the perceptual experience compared to neutral and control stimuli. Collectively, these findings hint at important relationships between emotional or motivational value and visual prioritization. If emotions interact with value representations of stimuli, expectations and predictions may be reflected in the processing of visual information. However, within the dominant framing of visual processing, connectivity between emotion "centers" in the brain (e.g., the amygdala) and top-down contributors (e.g., the OFC) is not well established.

    Zooming in on Fear

    Fear has emerged as an important emotion for assessing differential processing during visual perception. A wealth of research has investigated the link between amygdala functioning and emotions, much of which has focused on amygdala activation in the recognition of and response to potential threats through feedback from the visual cortex (Amaral et al., 1992; LeDoux, 1998; LeDoux, 2002; Pessoa et al., 2002; Adolphs & Spezio, 2006). Further, studies have shown automatic detection of threatening stimuli presented outside of conscious awareness (Öhman & Mineka, 2001; Öhman, 2005), enhanced attention in visual searches when in fearful states (Öhman, Flykt, & Esteves, 2001), and biased perceptions when afraid (Stefanucci, Proffitt, Clore, & Parekh, 2008). Such results suggest a unique role that emotion, and specifically fear, may play in initial attention and perception, potentially influencing higher-level processes like object identification.

    While the role of the amygdala in the processing of emotion is well established, the purported mechanisms that affect cognitive processes in fear states are incompatible, relying on conflicting top-down and bottom-up processing models to explain a variety of phenomena (Pessoa & Adolphs, 2010). Specifically, two routes have been posited for amygdala-directed processing: the low route, which has the advantage of speed and runs directly from the thalamus via a subcortical path, and the high route, which runs from the thalamus through the visual cortex to the amygdala (Rolls, 1999). Indeed, the amygdala provides a critical source of input for affective processing. Yet evidence of a low route existing in higher-order species is lacking, and high-route processing has yet to reconcile issues of speed (Shi & Davis, 2001). This inconsistency diminishes our understanding of how the amygdala gets information to the level of consciousness quickly enough to incite action.

    It is well established that fear, compared to other emotional states, facilitates attention toward and perception of dangerous entities. Again, the common framework for explaining these findings is a set of primary cortical visual pathways that send low-grade visual information to the amygdala, which then identifies threatening entities. Mixed findings and the lack of a unifying theory have limited understanding of how cognition might influence ongoing processing. Moreover, these approaches rely on the primacy of affect, suggesting that cognition has little to do with the initial processing of affective information, a notion that continues to be contested (Lazarus, 1982; Storbeck & Clore, 2007).

    The emphasis on subcortical processing of information restricts top-down contributions to processing emotional stimuli, such as motivations, perceptions, and attitudes. For instance, the visual system is sensitive to and biased by endogenous factors such as internal states and goals (Balcetis & Dunning, 2006; Tiedens, Unzueta, & Young, 2007; Skelly & Decety, 2012) and exogenous factors such as environmental cues (Proffitt, 2006; Cole, Balcetis, & Dunning, 2013), which can change the nature of processing of visual features and perceptions of such objects. Similarly, endogenous and exogenous factors may even bias attention; fear, for example, can increase attentional flexibility and enhance the ability to detect peripheral objects, contrary to the standard assumption that fear only narrows attention (Awh & Pashler, 2010). Current paradigms examining such rapid detection of objects and subsequent object recognition assume the independence of the dorsal and ventral streams, with scarce focus on how they interact with one another and how emotion may modulate such interactions.

    Other Emotions

    The majority of visual perception research examines how fear interacts with processing, though some research has suggested that other emotions (particularly negatively valenced ones) also evidence biases. One such study investigated how faces are perceived as fundamentally different depending on the context in which the face is presented (Aviezer et al., 2008). Although this is notably not a study on object recognition, it nonetheless highlights how the visual system is context dependent, and that this occurs with something as vital as the accurate recognition and identification of other human faces. Emotion has marked effects on a number of cognitive processes and may have dissociable effects on object recognition depending on whether emotion states are congruent with predicted sensory input.

    Motivated Perception

    In addition, research within the motivational domain has suggested an influence of motivational drive on perception. The possibility that goals bias visual system processing is particularly consequential (Weber, 1996; Inbar & Pizarro, 2009). For instance, within the social framework, moral goals have been shown to influence our decisions (Haidt, 2007), our attitudes (Helzer & Pizarro, 2011), our emotions (Haidt, 2003), and even the 'popping out' of salient words (Gantman & Van Bavel, 2014). Through a motivated perceptual lens, moral goals may facilitate object recognition of salient images. Research has also shown that race-based processing may rely on low-spatial frequency cues (Correll, Hudson, Guillermo, & Earls, 2017), suggesting some integration of social conceptualizations and expectancies into the ongoing visual perceptual process.

    The effect of motivation on perception is not limited to the moral sphere. Research has also demonstrated goal- and action-driven effects on perception. For example, externally incentivizing a specific construal of an ambiguous figure drives differences in reported encounters with the target construal (Balcetis & Dunning, 2006). A number of psychological studies have evidenced top-down effects informed by subjective value, race and stereotypes, and political climate. These types of social knowledge mark a high-level form of context, which has previously been implicated in object perception under isolated laboratory conditions. One study has suggested that, beyond frequency, learning, and response biases, perceptual dominance is explained by subjective value (Balcetis, Dunning, & Granot, 2012). Further, research done by Levin and Banaji (2006) revealed differences in the perceived lightness of faces matched on luminance: African American faces were seen as darker skinned compared to European American faces, a finding that has been attributed to top-down knowledge of featural differences between these two races (Levin & Banaji, 2006). Unfortunately, other research has corroborated the suggestion that racial stereotypes modulate perception. For instance, one study found that the race of the target predicted erroneously firing a gun in a computer game, even when the incentive structure rewarded accuracy and shooting only targets who were holding a gun, not a tool (Correll, Park, Judd, & Wittenbrink, 2002). Another set of findings has demonstrated how self-identified political group membership and government stability can alter perception of skin color, favoring lighter-skinned representations when the target is identified as a member of one's political in-group (Caruso, Mead, & Balcetis, 2009) and under instances of in-group instability (Stern et al., 2016). Acquiring accurate social knowledge is important to individual functioning. However, this means that our socially determined biases may permeate cognition and perception. These findings highlight how such knowledge constrains visual processing and biases it in the direction of one's goals or beliefs.

    Conclusion

    To synthesize the dominant themes so far: the amygdala is crucial for the processing of emotional stimuli and is specifically sensitive to fear. Motivational work also provides an informative perspective on how internal states or internally valued goals can fundamentally alter perception. Research on the mechanisms that govern how fear impacts attention and object recognition relies on conflicting cortical processing routes, routes which preclude top-down contributions and omit the early activation of prefrontal cortical areas of the brain. Consequently, more research is needed to establish and extend a mechanistic understanding of the influence of fear. Fear imparts biases onto what we see, biases which can produce and reinforce maladaptive behavioral responses (e.g., always seeing a snake instead of a sock heightens the stress response, a process that is ultimately corrosive for the body). The biases and predictions we bring into our phenomenological experiences constrain what we see, especially in instances in which relevant objects or scenes are obfuscated in some way.

    On a broader level, this literature pulls at intuition because we know the human brain to be a remarkable predictor (though we are objectively bad lay statisticians). We have developed the ability to quickly detect threatening objects, and to do so in a manner that favors false positives over false negatives. Given what is known about the adaptability of the brain, its proclivity to make predictions (in the service of lessening energy costs), and the false-positive bias, learned associations may be driving many of these predictions. Such predictions require descending neural processing, and like our conscious navigation of complex social environments, they are susceptible to errors and heuristically biased assumptions.
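
    The bias toward false positives has a simple decision-theoretic reading: if missing a real threat costs far more than a false alarm, the criterion that minimizes expected cost moves downward, producing exactly the trigger-happy detector described above. Below is a worked sketch for an equal-variance Gaussian signal detection observer; the costs and priors are assumed numbers, purely illustrative.

```python
import math

def optimal_criterion(d_prime, p_threat, cost_miss, cost_fa):
    """Expected-cost-minimizing criterion for an equal-variance Gaussian
    signal detection observer. Costly misses push the criterion down,
    producing the liberal, false-alarm-prone 'better safe than sorry'
    bias described in the text."""
    beta = (cost_fa * (1 - p_threat)) / (cost_miss * p_threat)
    return d_prime / 2 + math.log(beta) / d_prime

# Assumed numbers: threats are rare, but a miss costs 20x a false alarm.
neutral = optimal_criterion(d_prime=1.5, p_threat=0.1, cost_miss=1, cost_fa=1)
fearful = optimal_criterion(d_prime=1.5, p_threat=0.1, cost_miss=20, cost_fa=1)
print(neutral, fearful)  # the criterion drops: more 'snake' responses
```

    Note that sensitivity (d′) is held fixed here; the detector becomes more liberal without becoming any better at discriminating snakes from socks, which is precisely the kind of bias a learned prediction can impose.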

    Limits to These Approaches

    While some evidence has converged on the predictive advantage of both emotional and motivational states, there is still an ongoing debate as to where exactly these differences arise. Attention, response biases, and demand characteristics may each contribute to the findings (for a detailed review, see Firestone & Scholl, 2016). Indeed, parsing whether prior information biases early visual processing (e.g., V1) is a controversial topic. Though scientists would agree on many substantive evaluations of visual processing (e.g., parallel processing), prefrontal cortical access to V1 is one discrepancy that is challenging to resolve using reverse inference (imaging studies) and ineffectually controlled behavioral paradigms. Critics of the descending-neural-pathway view of object recognition suggest that the evidence cited above may be primarily attributed to judgements, and that the scope of our current technologies limits the claims that can be made. For instance, imaging studies using functional magnetic resonance imaging (fMRI) often rely on patterns of activation and lack temporal resolution.

    Moreover, it is completely uncontroversial to note the interconnected nature of the brain. A large number of studies utilizing neurophysiological or emotional/motivational methods have emphasized specific "centers" in the brain. However, it remains unclear how selective these neural regions are. For instance, the amygdala was once thought to attend selectively to fear and threat stimuli (Davis, 1997). Recent research has suggested this may not be the case: instead, the amygdala seems to come online for a number of stimulus features, including emotional saliency (not just fear) and novelty (Sander, Grafman, & Zalla, 2003). What about how we process faces? What makes this different from how we process objects? Additionally, assessing differences between perception and judgement findings remains an important and difficult task.


