Located in the back of the brain, as shown in Figure 2.1, recognition networks enable us to identify and interpret patterns of sound, light, taste, smell, and touch. These networks enable us to recognize voices, faces, letters, and words, as well as more complex patterns, such as an author's style and nuance, and abstract concepts like justice.
Take a look at the picture in Figure 2.2. Instantly, you probably recognize many of the objects depicted: people, furniture, doorways. If asked, you could identify parts of these objects, such as eyes, table legs, or doorknobs. Some of these objects are partially hidden; others are at odd angles or clustered in poor light, yet your recognition networks are so powerful that you have no difficulty determining what these objects are.
We can do more than just recognize many objects at essentially the same time; we can also recognize the same object in a number of different ways. For example, even out of context you can recognize the shape in Figure 2.3 as a chair. This is remarkable, given that this particular representation does not show the features usually associated with chairs, such as four legs and a seat. And chances are you can recognize it not only as a chair, but also as the chair from "The Unexpected Visitor" picture in Figure 2.2. Your recognition networks enable you to distinguish this specific chair from all the other chairs you have ever identified. Without articulating it, you also recognize the chair as a member of the category "furniture."
Recognition, which seems simple, is actually an incredibly complex feat. As scientists identify the salient characteristics of recognition networks, we understand more clearly how recognition actually works.
How does the brain accomplish the complex work of recognition in just a fraction of a second? Positron Emission Tomography (PET) scan images give us some important clues. In Figure 2.4, we see a PET scan of the brain in the act of recognizing one set of words under two different sensory conditions. The same words have been presented orally to the brain pictured on the left and visually to the brain pictured on the right.
These contrasting images illustrate the fact that visual stimuli are recognized in one part of the cortex and auditory stimuli in another (Kandel, Schwartz, & Jessell, 1991). In other words, the general task of recognition is distributed across different areas, each specialized to handle a different component of recognition. (From this point on, we will refer to these specialized areas of the brain as "modules.") Distributed processing is not limited to differences between distinct sensory modalities, such as vision and hearing. The subprocesses within each sense modality are also distributed. For example, visual recognition is distributed across at least 30 different modules, so that elements like vertical lines, diagonal lines, color, and motion are all processed in physically discrete areas of the brain (Gazzaniga, 1995; Mountcastle, 1998; Roland & Zilles, 1998; Zeki, 1999).
An analogy may clarify how distributed modular specialization works. Think of the brain as a kitchen full of food processors. Imagine that all the processors are the same basic make and model, but each comes with a specialized attachment for blending dough, shredding cabbage, or performing another specialized task. Although each processor performs the same general function, their output is as different as piecrust is from coleslaw. By keeping a kitchen full of processors, a chef needn't switch the blade for each new task or worry about getting cabbage in the piecrust! In the brain, distributed processing provides a similar advantage. All the modules have the same basic structure, but the tissue in each region is fine-tuned to process one type of input extremely efficiently. This works more effectively than would "all-purpose" brain tissue that would have to adapt to each new task.
Recognition is quick and efficient because all the modules are working in parallel. Through parallel processing (the simultaneous performance of multiple tasks by interconnected modules), our brains process and pool information that is distributed throughout our recognition networks, all in less than half a second. The brain's modules are interconnected through multiple pathways, enabling visual, auditory, olfactory, and tactile recognition to influence one another. This accounts for the interesting observation that an auditory or tactile stimulus can bias our interpretation of a visual pattern (see Martino & Marks, 2000).
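The division of labor described above can be sketched as a loose computational analogy: several specialized "modules" analyze the same stimulus at once, and their outputs are pooled into a single percept. Everything in this sketch, from the module names to the stimulus format, is invented for illustration; it is not a model of real neural tissue.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy "modules," each specialized for one feature of a stimulus.
def detect_color(stimulus):
    return ("color", stimulus.get("color", "unknown"))

def detect_orientation(stimulus):
    return ("orientation", stimulus.get("orientation", "unknown"))

def detect_motion(stimulus):
    return ("motion", stimulus.get("moving", False))

MODULES = [detect_color, detect_orientation, detect_motion]

def recognize(stimulus):
    # Run every specialized module on the same input in parallel,
    # then pool the outputs into one combined percept.
    with ThreadPoolExecutor(max_workers=len(MODULES)) as pool:
        results = pool.map(lambda module: module(stimulus), MODULES)
    return dict(results)

percept = recognize({"color": "red", "orientation": "vertical", "moving": True})
print(percept)
```

No single module "sees" the whole stimulus; each analyzes only its own feature, and recognition emerges from pooling their parallel outputs, which is the advantage the kitchen-of-processors analogy is pointing at.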
The distributed nature of recognition has profound implications for individual differences. If recognition were the product of one homogeneous area of brain tissue, recognition abilities would vary from person to person in only a limited number of ways. Differences in recognition would have global effects. For example, if an entire modality, like vision, were the product of one subnetwork, any difference would affect vision as a whole. But because recognition is actually a coordinated act of many different modules, each very small component of recognition has the potential to exhibit person-to-person differences. The differences between us may affect one module, and therefore one aspect of recognition, or many modules, and therefore many aspects of recognition.
We have learned that individual aspects of patterns, such as color, shape, orientation, and motion, are processed in parallel by separate pathways within the recognition networks. Each of these pathways is organized into a hierarchical continuum, containing some brain regions that are highly complex, some that are comparatively simple, and others that are somewhere in between.
Let's continue to use vision as an example. As the visual sensory information we take in through our eyes departs from our retinas, it travels up through an increasingly complex hierarchical network, eventually reaching the visual cortex. This is called "bottom-up" processing, and it is part of the way we extract visual details from an image such as "The Unexpected Visitor." This type of processing is responsible for identifications based on particular sensory features, meaning the quality of sensory input is very important. Poor lighting, low-quality photocopies, or mumbled speech can all impede bottom-up processing and make everyday recognition tasks difficult.
Just as important as the information flowing up the hierarchy of recognition structures is the information that travels down the hierarchy. To facilitate the recognition of details, our brains make use of higher-order information, such as background knowledge, context, and the overall pattern. When examining "The Unexpected Visitor," you applied knowledge about the kind of room the scene is set in (gleaned from bottom-up recognition of the room's more visible objects) to help you identify other objects that are difficult to recognize based on visual detail alone.
Bottom-up and top-down recognition processing both play critical roles in learning. Consider learning to read: The common assumption used to be that reading was mainly a bottom-up activity, wherein letters are recognized by their features, synthesized into words and sounds, and then analyzed for meaning. But research has shown that it is easier and faster to recognize letters in the context of words than it is to recognize them in isolation. This phenomenon, termed the word superiority effect (Adams, 1994), occurs because familiarity with the larger pattern (the word) constrains the bottom-up process of individual letter recognition and leads a reader to rely more on his or her expectation of what letters will come next and less on the actual visual features of those letters. That's why proofreading is so difficult. We miss errors because our word expectations are so powerful that they influence how we see the individual letters. Use of context and meaning to predict what is coming next is another familiar example of the top-down processing used in reading.
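A toy sketch can make the word superiority effect concrete: combine bottom-up letter evidence with a top-down check against a small lexicon, and an ambiguous letter is resolved differently with and without word context. The lexicon, the evidence scores, and the weighting below are all invented for illustration, not drawn from the research cited above.

```python
# Top-down knowledge: a tiny lexicon of known words.
LEXICON = {"WORK", "WORD", "WORM"}

def recognize_letter(context, letter_scores):
    """Pick a letter by combining bottom-up visual evidence
    (letter_scores) with a top-down check: does the candidate
    complete a known word in this context?"""
    def combined(letter):
        # An arbitrary illustrative weighting: candidates that do not
        # complete a known word are heavily discounted.
        prior = 1.0 if (context + letter) in LEXICON else 0.2
        return letter_scores[letter] * prior
    return max(letter_scores, key=combined)

# Degraded final letter: raw visual evidence is ambiguous.
scores = {"R": 0.4, "K": 0.35, "D": 0.25}

print(recognize_letter("", scores))     # no usable context: raw evidence picks "R"
print(recognize_letter("WOR", scores))  # word context flips the choice to "K"
```

With the context "WOR," the candidate "R" is discounted because "WORR" is not a word, so the weaker visual evidence for "K" wins; this is the same constraint that makes letters easier to recognize inside words, and that makes proofreading hard when expectations override the actual features on the page.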
|Activity: Try this online scanning task on the Learning Disabilities Resource Community Web site to see first-hand the "word superiority effect": how our brains use context to help recognize visual patterns. Click on "Try the Scanning Task," register, and try your hand at experiment 1.|
Because smoothly functioning recognition networks take advantage of both top-down and bottom-up processes, teaching to both processes rather than focusing exclusively on one or the other is the wisest choice. A positive example is the recent truce in the "phonics wars." Most programs have now adopted a form of reading instruction that incorporates both the top-down whole language method and bottom-up phonics. This balanced approach is consistent with the way the learning brain works.
Although human brains all share the same basic recognition architecture and recognize things in roughly the same way, our recognition networks come in many shapes, sizes, and patterns. In anatomy, connectivity, physiology, and chemistry, each of us has a brain that is slightly different from everyone else's.
PET scan images, such as those shown in Figure 2.4, usually represent averages across individuals. These averages highlight commonalities between individuals but obscure the fact that each individual brain actually reveals a unique pattern of activity. For example, most people, when they recognize an object visually, show increased activity in the back part of their brains; however, the exact magnitude, location, and distribution of that increased activity varies quite a bit. The active area of the cortex may be larger or smaller, more localized to the right or left hemisphere, or more widely or closely distributed. These variations undoubtedly manifest in the way people recognize things in the world: their recognition strengths, weaknesses, and preferences.
|Activity: Individual differences that affect learning are apparent in brain images such as these.|
The distributed nature of processing in the brain leads to myriad subtle differences in recognition between individual learners. Unlike the global notion of ability suggested, for example, in a Stanford-Binet IQ score (see Thorndike, Hagen, & Sattler, 1986), learners' abilities are multifaceted. When two students perform the same academic task, the patterns of activity in their brains are as unique as their fingerprints. The uniqueness may not be visible in the overall level of brain activity, but rather lies in the pattern of activation: how the activity is distributed across different brain regions. For this reason, no one measure of brain activity (and no one learning score or variable) differentiates or describes individual learners in any meaningful way.
Traditional views of disability, also based on an implied assumption of unitary brain functioning, suggest that a person either does or does not belong to the category "disabled." New understanding about the distributed nature of neural processing shows that abilities in many domains fall along a very large number of continua. Further, the importance of a particular strength or weakness depends upon what is being asked of the learner. This is why, for example, a youngster with perfect pitch who has difficulty recognizing letters is considered disabled, whereas a child who is tone deaf but can read words easily is not.
Specific differences in the recognition networks of individual learners range from the subtle to the profound. The recognition cortex in Albert Einstein's brain was disproportionately allocated to spatial cognition (Harvey, Kigar, & Witelson, 1999). He had difficulty recognizing the letter patterns and sound-to-symbol connections required for reading, but he was a genius at visualizing the deepest fundamentals of physics. Awareness of these differences across his recognition networks could have helped Einstein's teachers shape instruction that would both capitalize on his spatial genius and support his areas of weakness.