ORIENTATION FEATURE BINDING IN PIGEONS

Except where reference is made to the work of others, the work described in this thesis is my own or was done in collaboration with my advisory committee. This thesis does not include proprietary or classified information.

John F. Magnotti IV

Certificate of Approval:

Ana Franco-Watkins, Assistant Professor, Psychology
Jeffrey Katz (Chair), Associate Professor, Psychology
Jacqueline Mitchelson, Assistant Professor, Psychology
George Flowers, Dean, Graduate School

ORIENTATION FEATURE BINDING IN PIGEONS

John F. Magnotti IV

A Thesis Submitted to the Graduate Faculty of Auburn University in Partial Fulfillment of the Requirements for the Degree of Master of Science

Auburn, Alabama
December 18, 2009

ORIENTATION FEATURE BINDING IN PIGEONS

John F. Magnotti IV

Permission is granted to Auburn University to make copies of this thesis at its discretion, upon the request of individuals or institutions and at their expense. The author reserves all publication rights.

Signature of Author

Date of Graduation

THESIS ABSTRACT

ORIENTATION FEATURE BINDING IN PIGEONS

John F. Magnotti IV

Master of Science, December 18, 2009
(B.S., James Madison University, 2007)

44 Typed Pages

Directed by Jeffrey Katz

Manipulating conditions under which humans search for a target is a standard approach for studying perceptual errors. An important early finding was that visual search for a combination of features (e.g., color, orientation, shape, and size) was much slower than a search for any given single feature (e.g., just color). This slow-down during "conjunction" searches is a major component of the Feature Integration Theory of Attention. The theory posits that visual scene processing is composed of two separable stages: a pre-attentive segregation phase and an attention-mediated integration phase. The second stage is slower because it requires focused attention, thus ensuring that searches requiring integration of features will take longer than single-feature searches that can be conducted in the absence of focused attention. If attention can be interrupted before features are properly integrated, or if insufficient time for binding is allowed, errors in binding may occur. First termed "illusory conjunctions," the existence of binding errors in human visual perception has been extensively studied, but the conditions that give rise to them are still debated. To date, no evidence for binding errors has been observed for non-human animals.

Past research has demonstrated the slow-down for conjunction targets (as compared to single-feature targets) in pigeons. The current study is a first attempt to determine systematically whether pigeon target search can be affected by binding errors. Through several phases, pigeons learned to search for a plus sign (+) in a two-item target search using vertical or horizontal bars as distractors. After acquiring the task, the stimulus viewing time was titrated until accuracy was within the 70-80% range. To demonstrate the presence of binding errors, false alarm rates were compared between trials in which a binding error would not be possible (feature trials; trials with only horizontal or only vertical bars) and trials in which a binding error could have occurred (conjunction trials; one horizontal and one vertical bar).

The results of the current study failed to show unequivocal evidence of binding errors in pigeon visual perception. Instead, under the reduced viewing time condition, most birds decreased responding to the "target present" response, lowering both target-present accuracy and false alarm rates. This shift in response profile renders any conclusions tentative, but despite the decreased frequency of false alarms, some birds committed more errors on conjunction trials than on feature trials. Future research should consider the role of reinforcement history in solving the target search as well as the possibility of a species difference in the processing of visual scenes between pigeons and humans.
TABLE OF CONTENTS

LIST OF FIGURES
LIST OF TABLES
INTRODUCTION
    Feature Integration in Humans
    Pigeon Feature Integration
    Experiment 1
METHOD
    Subjects
    Apparatus & Materials
    Procedure
RESULTS & DISCUSSION
GENERAL DISCUSSION
BIBLIOGRAPHY

LIST OF FIGURES

1. Conceptual diagram of Feature Integration Theory. Light from the environment creates an image on the retina. The information in this image is analyzed by functionally distinct neural regions and stored in hypothetical feature maps. The information in these maps is then reintegrated into an object. This reintegration process is assumed to require focused attention.

2. Examples of trials from Prinzmetal (1981). Vertical and horizontal lines were blue in the original study. Displays A and B test the effect of perceptual grouping on binding errors while holding distance constant. Display C shows an example target present trial. Display D shows an example target absent trial in which errors cannot be due to faulty binding of features.

3. Layout of the experimental chamber. The numbers 1-4 identify stimulus positions. R indicates the position of the Ready Signal. C1 and C2 are the choice stimuli locations. Relationships are not to scale.

4. Trial types for Experiment 1. Stimuli in the actual experiment were gray and were displayed vertically on some trials.

5. Trial progression. The top and bottom sequences are identical except for the visual mask separating the search display and choice display in the bottom sequence. Event timings varied throughout training and testing.

6. Ted's & Steve's acquisition plots for Phases 1 & 2. Stimulus displays consisted of all 8 trial types displayed only horizontally. The start of Phase 2 is indicated by the solid vertical line. The break in Steve's graph shows a gap in training because of health issues. The second vertical line in Steve's plot marks the start of a 5-s stimulus display viewing time.

7. Ted & Steve Phase 3 acquisition. The solid lines in Ted's plot mark the start of vertical-only remedial trials, the onset of the 5-s stimulus display viewing time, and the reinstatement of horizontal trials. Steve had a 5-s stimulus display viewing time throughout Phase 3.

8. Mark & Curly Phase 3 acquisition. The solid lines denote the onset of the 5-s stimulus display viewing time.

9. Accuracy for each bird is displayed for each target status and for baseline v. transfer. Viewing time during baseline was 5s for all birds. During transfer, viewing time differed by bird (Ted: 4.5s, Curly: 4s, Mark: 2s). Dark gray bars are target absent, light gray target present. Striped bars are for transfer trials, plain bars are baseline trials. Error bars are 1 SEM.
10. Ted's accuracy for target present trials split by stimulus display type and session (baseline v. transfer). Target absent accuracy is shown for comparison of both mean and variance. Error bars are 1 SEM.

11. Curly's accuracy for target present trials split by stimulus display type and session (baseline v. transfer). Target absent accuracy is shown for comparison of both mean and variance. Error bars are 1 SEM.

12. Mark's accuracy for target present trials split by stimulus display type and session (baseline v. transfer). Target absent accuracy is shown for comparison of both mean and variance. Error bars are 1 SEM.

13. Accuracy for each bird is displayed for each stimulus display orientation and for baseline v. transfer. Viewing time during baseline was 5s for all birds. During transfer, viewing time differed by bird (Ted: 4.5s, Curly: 4s, Mark: 2s). Dark gray bars are for baseline trials, light gray for transfer. Striped bars are for vertical orientation, plain bars are horizontal stimulus display orientations. Error bars are 1 SEM.

LIST OF TABLES

1. Mean False Alarms by Type during Baseline and Transfer. Standard Errors in Parentheses.

Introduction

Pigeons have repeatedly demonstrated the ability to make conditional discriminations based on conjunctions of visual object dimensions (e.g., Cook, 1992a, 1992b; Cook, Cavoto, & Cavoto, 1996). Their success (and failure) in myriad visual tasks mimics the results of humans on comparable measures. This similarity is all the more striking because of the vast differences in both quantitative (e.g., brain and retina size, number of photoreceptors) and qualitative (e.g., neural organization, type of photoreceptors) biological measures (for a discussion of the development of such systems in pigeons, see Husband & Shimizu, 2001). The juxtaposition of similarities in visual task performance and differences in biology and phylogeny makes pigeons an excellent choice for comparative research (Cook, 2000). Despite the apparent fertility of this research area, repeated attempts (Allan & Blough, 1989; Cook, 2000) to spur interest have not led to much programmatic research. The research that has been conducted points to an analogous underlying mechanism for visual perception (e.g., Allan & Blough, 1989; Cook, 1992a, 1992b; Katz & Cook, 2000). The root of these similarities may arise from similar steps taken by the visual system, as outlined in Feature Integration Theory (FIT) (e.g., Treisman & Gelade, 1980; Treisman & Sato, 1990). The proposed experiment is a first step toward establishing the extent of the similarity between the visual binding process in humans and pigeons.

The first goal of this paper is to detail the seminal findings of FIT for human visual perception, in particular its implications for conjunctions of visual features. Next, studies that support the application of FIT to pigeon visual perception are discussed. Third, an experiment is described that established further the applicability of FIT to pigeon visual perception. Finally, extensions of the current result are discussed in terms of the comparative study of visual experience.
Feature Integration in Humans

Although physiological data support an elemental, feature-based approach to visual perception (e.g., Hubel & Wiesel, 1959, 1968), our subjective experience is one of whole objects. Looking about the room, we quickly identify, label, and interact with whole objects. To pick up a mug, we do not necessarily notice the individual elements that are undoubtedly analyzed by our visual cortex. The visual illusions and perceptual demonstrations of the Gestalt psychologists relied heavily on the human ability to form perceptions that go beyond a simple linear combination of visual elements. Adding to the perplexity of the issue, recognition of certain textures (i.e., regions of homogeneous visual elements) has repeatedly been demonstrated to be parallel in certain instances (e.g., Beck, 1966; Treisman & Gelade, 1980, Experiments 5 through 7), something that is not predicted by a simple linear addition of visual elements. Our brains are able to collate vast amounts of information into a unified percept that is initially encoded in a bit-by-bit fashion.

Treisman and Gelade (1980) conducted several target search experiments to determine whether visual perception was fundamentally holistic or analytic (i.e., analyzed element by element). To reconcile the apparently contradictory findings, Treisman and Gelade (1980) posited a two-stage theory of visual perception (see Figure 1). The first stage analyzes a visual scene into feature maps in parallel, based on shape, color, orientation, or another separable (but not necessarily independent) dimension.[1] Although the exact structure of these maps is not well defined, they do not necessarily code the same features detected by neurons in the primary visual cortex. Instead, these feature maps are perceptual in nature, and may store the output of several cells (Treisman, 2006). The second stage combines the features into whole percepts based on their original locations, as stored in a location map. Importantly, the second stage requires focused attention in order to bind features to a location. In this paradigm, the term "dimension" refers to any separable component of an object (e.g., color, shape, orientation), and "feature" or "feature value" refers to a specific value along the given dimension. Therefore, red and horizontal are features along the color and orientation dimensions, respectively. The two stages were initially assumed to be independent, but recent research has relaxed this claim, demonstrating that the output of the first stage may guide the deployment of attention to locations (e.g., Wolfe, Cave, & Franzel, 1989; Treisman & Sato, 1990; Wolfe, 1994), or that an entirely new mechanism should be proposed (Huang & Pashler, 2007).

[1] Some work has been done to establish exactly how many separable feature maps exist (Treisman & Souther, 1985; Treisman & Gormican, 1988), typically relying on inferences based on rapid search times. This approach has been shown to be at times intractable (Wolfe, 2001), and we are still left with a circular argument for separability: What defines a separable dimension? Target search was rapid. Why is search rapid? Because the target was uniquely defined by a separable dimension. Still, rapid search is a necessary condition for separability and remains a useful diagnostic.
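To make the two-stage prediction concrete before turning to the experiments themselves, the short sketch below simulates the qualitative reaction-time pattern FIT predicts: a roughly flat function of display size for single-feature search and a linearly increasing function for conjunction search. The intercept and slope values are arbitrary illustrations, not estimates from any study discussed here.

```python
# Toy illustration (not from the thesis) of the search-time pattern predicted by FIT.
# The intercept and slope are arbitrary; only the shape of the two functions matters.

def predicted_rt_ms(display_size: int, conjunction: bool) -> float:
    """Hypothetical mean reaction time for one display size."""
    base_rt = 450.0      # ms; preattentive analysis plus response time
    serial_cost = 25.0   # ms per item inspected with focused attention
    if conjunction:
        # A serial, self-terminating search checks half the items on average.
        return base_rt + serial_cost * display_size / 2
    return base_rt       # a single-feature target "pops out" at any display size

for n in (5, 15, 30):
    print(n, predicted_rt_ms(n, False), predicted_rt_ms(n, True))
```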
Treisman and Gelade (1980) conducted several experiments to show that target searches or texture identifications that relied on the identification of a single feature were analyzed in parallel. A red X is found in a constant amount of time, regardless of the number of distracting blue Os. When the search required the identification of a conjunction of features (e.g., a red X amidst red Os and blue Xs), search times increased linearly with display size. As part of the original theory, this linear increase in search times was used as evidence for a serial, self-terminating search, but this claim has been vigorously challenged (Townsend, 1990; Pashler, 1987) and is not crucial to the current experiment. The important aspect is that searches or texture identifications based on a single dimension may be processed preattentively, but identifications based on a combination of dimensions require attentional processing. This realization allows the holistic and analytic accounts of perception to coexist, albeit at different levels of analysis. If attention is required to accurately combine features, then binding should be error prone if attention is diverted (e.g., under dual-task conditions) or if the viewer does not have enough time to direct attention to the appropriate location (e.g., rapid presentation).

Figure 1. Conceptual diagram of Feature Integration Theory. Light from the environment creates an image on the retina. The information in this image is analyzed by functionally distinct neural regions and stored in hypothetical feature maps. The information in these maps is then reintegrated into an object. This reintegration process is assumed to require focused attention.

Prinzmetal (1981) explicitly measured the effect of location and perceptual grouping on intradimensional (i.e., within a single dimension, in this case orientation) binding errors in a target search task. He presented subjects with two groups of 4 circles arranged in either 2 rows or 2 columns (see Figure 2). Two of the circles contained one of the following: a horizontal line, a vertical line, or a plus sign. The only restriction was that both circles could not contain a plus sign. The two arrays were separated so that all circles within an array would be processed as a perceptual group. The stimuli, presented briefly (around 100 ms on average), were preceded and followed by a visual mask. The precise timing of the display was titrated so that each subject maintained an overall accuracy of around 85%. Subjects responded "yes" if they perceived a plus sign, and "no" otherwise.

Figure 2. Examples of trials from Prinzmetal (1981). Vertical and horizontal lines were blue in the original study. Displays A and B test the effect of perceptual grouping on binding errors while holding distance constant. Display C shows an example target present trial. Display D shows an example target absent trial in which errors cannot be due to faulty binding of features.

FIT predicts that attention should be needed to correctly bind the horizontal and vertical lines composing a plus sign. On trials in which the component pieces of the plus sign (i.e., one vertical line and one horizontal line) are present (conjunction trials) but attentional processing is denied, a binding error may occur and cause the participant to incorrectly report the presence of a plus sign. Of course, under masked, rapid presentations, false alarms may arise for other reasons as well.
To provide a baseline level of detection errors, Prinzmetal used the false alarm rate on trials containing two vertical lines or two horizontal lines (feature trials). In support of FIT, Prinzmetal (1981) found a higher proportion of false alarms on conjunction trials than on feature trials. Additionally, binding errors were more likely to occur within the same perceptual group than between perceptual groups, even when controlling for inter-stimulus distance.

Treisman and Schmidt (1982) studied factors influencing binding errors across dimensions. The researchers presented three colored English letters, flanked on either side by two black digits. The initial task was to report the identity of the two digits. Stimulus viewing time was titrated to ensure a 10% error rate for each subject, and the display was preceded and followed by a visual mask. As in the Prinzmetal (1981) study, attempts to understand the source of errors can only be conducted when subjects are making errors. The initial task of reporting the flanking digits ensures that subjects at least perceived something. After a correct report of the digits, subjects freely reported the identity (i.e., color, letter name) of the colored letters.

Subjects incorrectly combined features in the display at a rate significantly above the rate of errors from non-present features (e.g., reporting a red N when N was not a letter in the display). The frequency of conjunction errors was highest in the first item report, suggesting that memory errors are not completely the source of binding errors. More recent research into capacity limits suggests that using 2 digits and 3 letters may have been beyond the capacity of some subjects (Luck & Vogel, 1997). Assuming subjects could process about 4 of the display items (two of which were the digits), it is unsurprising that the rate of correctly identified items was around .5. The authors reported that some subjects spontaneously described the digits as being colored (despite the digits being black on all trials), indicating that subjects felt confident about the conjunction errors. A follow-up experiment confirmed that confidence ratings were unrelated to error type (Treisman & Schmidt, 1982).

Although memory errors were not solely responsible for binding errors, subjects did have to form verbal codes of each stimulus reported, which may have exacerbated the effect of perceptual errors. To control for verbal labeling, the free report task was turned into a same/different discrimination (Treisman & Schmidt, 1982, Experiment 3). Subjects reported whether any two items in an array of five matched. Treisman and Schmidt controlled the number of features in the display that would need to recombine to form binding errors, including some trials containing matching stimuli and others not containing any single matching feature. Results confirmed that features could migrate between objects to cause binding errors without explicitly directing subjects to create verbal codes (although some verbal coding of visual scenes undoubtedly occurred).

Treisman (1996) proposes seven types of "binding" that the brain accomplishes during visual object recognition. For accurate detection of conjunctions of features, Treisman noted that property binding occurs to recombine aspects of the visual scene analyzed by specific neural regions. If we look at an array of red Xs amidst blue Os, the property binding mechanism ensures that red is bound with X, and blue is bound with O.
If we report seeing a feature not in the display (e.g., a red K), a feature error has occurred. If we report seeing an incorrect combination of features (e.g., a red O), a binding error has occurred (some experiments refer to these as conjunction errors, which avoids presupposing a binding process that may or may not be susceptible to making errors).

Pigeon Feature Integration

If binding errors can occur without the use of verbal labels and arise primarily because of errors during the feature integration process, other species with feature-specific neurons may have to deal with the same problem. As previously mentioned, various processes related to visual perception appear analogous in pigeons and humans, despite differences in anatomy (Cook, 2000). Closer examination of the binding process in pigeons under a feature integration framework will determine the extent to which binding errors arise as a necessary side effect of feature analysis and eventual integration.

Although there is little direct comparison between humans and pigeons using the feature integration approach, some studies (D. S. Blough, 1979; P. M. Blough, 1984) have demonstrated linear increases in reaction time with increases in display size in a single-target, multiple-distractor visual search task. Further, tests of priming in visual search show that attentional cues significantly affect reaction times (P. M. Blough, 1989). These findings are in accordance with FIT, as a location cue would allow the focused search to start at the target location and terminate before scanning the entire display, mimicking the flat reaction time function of a parallel search.

The most comprehensive work using the FIT model comes from the texture segregation paradigm. In a series of experiments, Cook (1992a, 1992b) demonstrated in pigeons the conjunction search difficulty described by FIT. Pigeons completed a simultaneous discrimination by pecking a target region of the screen defined by a homogeneous subset of objects amidst a larger homogeneous set of objects. When the target texture varied from the background based on color or shape, the pigeons maintained much higher accuracy than when the texture was defined by a specific shape-color combination (e.g., a target of red squares and blue circles amidst a background of blue squares and red circles). Both the feature and the double conjunctive displays were discriminated at above-chance levels. Cook (1992b) also tested textures in which the target dimension remained constant, but the background stimuli varied (baseline trials in Experiment 1). The results indicated that irrelevant variation did not affect the discriminability of the target textures. These results strongly support the independent (likely parallel) processing of the separate features of the display. Demonstrating this separability is a prerequisite for appropriately applying FIT: without feature analysis, there cannot be feature integration. Cook also conducted the necessary translational work with humans to ensure the procedure and stimuli generated comparable results. As in earlier studies (Treisman & Gelade, 1980; Prinzmetal, 1981), humans manifested the conjunction search difficulty through higher reaction times.

Taken together, the previous findings suggest an analogous mechanism for feature integration in humans and pigeons. Applying the feature integration model further, errors should arise under speeded presentation of stimulus displays.
In the Cook (1992a, 1992b) studies, all stimuli remained on screen until a choice was made, allowing the pigeons and humans to analyze the scene for as long as necessary. A strong test of this proposed general process of visual perception is to measure the impact of binding errors during conditional discriminations of a conjunction target. Because pigeons display lower accuracy, instead of slower reaction times, with conjunction displays, it is difficult to predict how they will respond in a task that leads to higher error rates in humans. For instance, they may selectively attend to a single dimension of a given display or may develop a strategy based on memorization if the total number of stimuli (i.e., set size) is small enough. The present study employs a unidimensional conjunction search to eliminate the risk of inappropriately attending to a single dimension. The study starts with a set size that could be memorized, but the set size can easily be expanded to counteract this strategy. Further, by allowing some memorization in this task, the role of memory in binding errors can be quantified through comparison with larger set sizes.

Experiment 1

The goal of this experiment was to look for intradimensional binding errors in a pigeon target-detection task. The experiment was derived from research previously conducted only with humans (Prinzmetal, 1981). The critical part of this experiment is the analysis of errors made on a given trial (Prinzmetal, 1981; Treisman & Schmidt, 1982). If the number of false alarms on trials with possible binding errors (i.e., trials containing vertical and horizontal bars, but no plus sign) is significantly higher than the number of false alarms on trials without this possibility, this disparity will be taken as evidence for intradimensional binding errors.

Method

Subjects

Four adult male pigeons (Columba livia) with previous experience in matching-to-sample tasks (one bird), same-different tasks (one bird), and an open-field spatial navigation task (two birds) served as subjects. The pigeons were maintained at 80% to 85% of their free-feeding body weight throughout the experiment. They were housed individually and provided with constant access to grit and water in their home cages. The colony room was maintained on a 14:10-hr light/dark cycle. Experimental sessions were conducted five days a week.

Apparatus & Materials

Chamber. The chamber was a wooden box (38 cm wide x 36.5 cm deep x 39.5 cm high). In the back panel, an axial fan (Dayton Electric, Model 4WT40) provided both ventilation and masking noise. A houselight (lamp #1829) centered in the ceiling provided internal lighting during inter-trial intervals (ITIs) and timeout periods. The food hopper (custom design), operated by a computer-controlled relay interface (Keithley, ER-01), was centered below the monitor (Eizo Flexscan T566, 17-in. CRT; 31.75-cm x 23.8125-cm viewable region, 800 x 600-pixel resolution). A thin piece of glass mounted in a 25-cm x 17.5-cm viewing window separated and protected the monitor from the pigeons' pecks. An infrared touch screen (Carroll Touch, UniTouch 17-in.; 25-cm x 17.5-cm detection region) framed the viewing window and detected pecks at the monitor.

Figure 3. Layout of the experimental chamber. The numbers 1-4 identify stimulus positions. R indicates the position of the Ready Signal. C1 and C2 are the choice stimuli locations. Relationships are not to scale.
Stimulus displays. A visual and an auditory stimulus together served as the ready signal for each trial. The auditory component of the ready signal was a repeated, low-pitched beat played through two computer speakers located 6 in. behind the touch screen and under the computer monitor. The visual component was a red circle subtending a visual angle of 37.6° horizontally and 37.6° vertically. The circle was centered on the screen, 15 cm above the chamber floor. The position of the circle also corresponded to the center of the search display, as defined next.

The search display stimuli were bars of vertical (90°) or horizontal (0°) orientation and a plus (+) shape (Figure 4). The vertical bar subtended 37.6° vertically and 19.3° horizontally, and the horizontal bar subtended 19.3° vertically and 37.6° horizontally. The plus shape was created by overlaying the horizontal and vertical bars (thus subtending 37.6° vertically and 37.6° horizontally). All shapes were gray on black backgrounds. Search displays were created according to a polar grid (r, θ), with display locations every 90° and a radius of 5.624 cm. The center of the grid was 15 cm above the chamber floor, centrally aligned with the food hopper (see Figure 3).

Figure 4. Trial types for Experiment 1. Stimuli in the actual experiment were gray and were displayed vertically on some trials.

The images used to indicate target presence/absence were a yellow clover and a cyan pentagon, both subtending visual angles of 37.6° horizontally and 37.6° vertically. The images were displayed laterally to the ready signal, at the two horizontal extremes of the search display grid (8.25 cm, at 0° and 180°), regardless of the positions of the items in the search display. The entire choice display subtended 134° horizontally and 37.6° vertically. A random-dot stereogram was used as a visual mask; it completely covered all possible search display positions, but not the response image locations.

Experimental control. Experimental events were controlled and recorded using custom software written in Visual Basic 6.0 (Service Pack 5) on a microcomputer (Dell Dimension 2100). A PCI video card (ATI Xpert 98) controlled the graphics. A PCI card (Keithley KPCI-PIO, Cleveland, OH) controlled the relay interface that operated the hopper, hopper light, and houselight.

Procedure

Pretraining. Responding to the touch screen was autoshaped, first to the ready signal. The subjects were trained on a concurrent FT 15-s FR 1 schedule, and pretraining continued until responding rapidly followed the display (and thus the fixed time did not elapse). The second stage of autoshaping was identical to the first, except that the ready signal display was replaced by the response images. The position of the images on the display mimicked their placement in later phases, including appropriate position counterbalancing. Meeting the FR requirement or allowing the timeout to expire resulted in 5000-ms access to a lighted food hopper. An ITI of 50s followed all trials. The houselight was illuminated only during ITIs.

Two-item target detection. All birds were trained in a two-item target detection task. The target was a plus shape. The other stimuli were a vertical bar and a horizontal bar (leading to 9 possible stimulus pairs). The plus-plus combination was not used in this experiment so that the number of target present trials was equivalent to the number of target absent trials. The remaining pairs comprise the 8 trial types that were randomized within 12, 8-trial blocks (see Figure 4 for all stimulus displays used).
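The stimulus-pair logic just described is small enough to enumerate directly. The sketch below is an illustrative reconstruction in Python (the experiment itself was run by the Visual Basic control software described under Experimental control, not this code). The two-letter labels follow the abbreviations used later in the Results (P = plus, H = horizontal bar, V = vertical bar; one letter per display position is an assumption about the labeling), and the classification mirrors the feature/conjunction distinction used in the false-alarm analysis.

```python
from itertools import product
import random

# Illustrative reconstruction of the 8 two-item trial types; not the original control code.
STIMULI = ("P", "H", "V")
TRIAL_TYPES = [a + b for a, b in product(STIMULI, repeat=2) if (a, b) != ("P", "P")]
assert len(TRIAL_TYPES) == 8  # PP is excluded to equate present and absent trials

def classify(trial):
    """Label a trial type for the later false-alarm analysis."""
    if "P" in trial:
        return "target present"
    if trial in ("HV", "VH"):
        return "conjunction (binding error possible)"
    return "feature (HH or VV; binding error not possible)"

def make_block(rng):
    """One randomized 8-trial block; a 96-trial session strings together 12 such blocks."""
    block = list(TRIAL_TYPES)
    rng.shuffle(block)
    return block

rng = random.Random(0)
session = [trial for _ in range(12) for trial in make_block(rng)]
print(len(session))                                   # 96 trials
print({t: classify(t) for t in TRIAL_TYPES})
```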
The subjects progressed through a pretraining phase, the 4 training phases, and the final testing phase. In each phase, trials started with a visual and auditory ready signal (FR 1 for termination) followed by a 400-ms delay and the presentation of the search display. For two birds (Ted & Steve), the search stimuli were arranged horizontally (i.e., one item at 0° and the other at 180°). For the other two birds (Mark & Curly), the search stimuli were arranged either horizontally or vertically (a vertical arrangement placed one item at 90° and the other at 270°). Orientation of the search display was pseudo-randomized within blocks, constrained so that no orientation was repeated 3 times consecutively. Sessions contained 6, 16-trial blocks. Specific orientation-trial type pairs were not repeated within a block. Instead, within the 16-trial block, each trial type was presented at both orientations.

Phase 1. Target training. An ITI of 15s followed all trials. Each 96-trial experimental session contained 12, 8-trial blocks. After the search display was presented for 1000ms, the display disappeared and was followed by a 1000-ms retention interval and then the presentation of the choice display. Responses to the yellow clover image (position counterbalanced across birds) resulted in 3000-ms access to a lighted food hopper on target present trials. Responses to the cyan pentagon (positioned on the opposite side from the yellow clover) were rewarded equivalently, but only on target absent trials. Incorrect responses (pentagon responses on target present trials or clover responses on target absent trials) resulted in a 15-s timeout and a repetition of the trial (see Figure 5, Box 1). Analyses do not include these correction trials unless explicitly stated. The criterion for moving past Phase 1 was 3 sessions of at least 85% accuracy or 5 sessions of at least 80% accuracy.

Phase 2. Visual mask training. Phase 2 added a 200-ms visual mask immediately following the retention interval (see Figure 5, Box 2). A 0-s delay separated the termination of the mask and the choice display. The criterion for moving past Phase 2 was 1 session of at least 80% accuracy and the completion of at least 5 sessions. All other procedural details were identical to Phase 1.

Figure 5. Trial progression. The top and bottom sequences are identical except for the visual mask separating the search display and choice display in the bottom sequence. Event timings varied throughout training and testing.

Phase 3. Orientation manipulation. In Phase 3, the search display was expanded to four locations (0°, 90°, 180°, and 270°) for all birds, with two of the four positions containing stimuli on a given trial. As before, search displays were arranged horizontally or vertically only (never diagonally), and orientation was counterbalanced between blocks. Also at this stage, the retention interval was removed and a 0-s delay followed the search display (immediately preceding the presentation of the mask). The criterion for moving past Phase 3 was 3 sessions of at least 85% accuracy and the completion of at least 10 sessions. Any subject failing to meet this criterion after 15 sessions completed correction sessions with only the vertical displays until 80% had been reached on 3 sessions. After completion of these correction sessions, the subject returned to sessions with the horizontal and vertical search displays until meeting the original accuracy criterion. All other procedural details were identical to Phase 2.

Phase 4. Viewing time titration. During the final phase of training, the viewing time for the search display was titrated for each subject based on accuracy across a session. Starting at 5000ms, the viewing time was decreased by 500ms if accuracy was above 80% (i.e., more than 76 correct) and increased by 250ms if accuracy was below 70% (i.e., fewer than 67 correct). The criterion for moving past Phase 4 was 3 sessions of at least 70% but not more than 80% accuracy at a given stimulus viewing time. All other procedural details were identical to Phase 3.
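The Phase 4 rule is a simple staircase on session accuracy. Below is a minimal sketch of that rule exactly as described above (start at 5000 ms, step down 500 ms after a session above 80%, step up 250 ms after a session below 70%); it is illustrative only and is not the control software actually used to run the birds.

```python
# Sketch of the Phase 4 viewing-time staircase described above; illustrative only.

START_MS = 5000
STEP_DOWN_MS = 500   # applied after a session above 80% correct
STEP_UP_MS = 250     # applied after a session below 70% correct

def next_viewing_time(current_ms, n_correct, n_trials=96):
    """Return the search-display viewing time for the next session."""
    accuracy = n_correct / n_trials
    if accuracy > 0.80:
        return max(current_ms - STEP_DOWN_MS, 0)
    if accuracy < 0.70:
        return current_ms + STEP_UP_MS
    return current_ms  # within the 70-80% band: hold the viewing time

# Example: a bird scoring 82, 80, and 65 correct across three consecutive sessions.
vt = START_MS
for correct in (82, 80, 65):
    vt = next_viewing_time(vt, correct)
    print(correct, vt)   # 4500, 4000, 4250 ms
```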
Phase 5. Binding error test. Each bird completed 6, 96-trial testing sessions, in keeping with the use of 6 blocks of 96 trials by Prinzmetal (1981). If any bird scored higher than 85% or lower than 75% accuracy on any two sessions, that bird went back to Phase 4 to settle at a new viewing time. All other procedural details were identical to Phase 3.

Results & Discussion

Phases 1 & 2. Two birds, Ted & Steve, started in Phase 1 and quickly acquired the two-item search task (in 15 and 8 sessions, respectively). Although the test for binding errors comes much later, ensuring that no strong biases toward target-absent or target-present displays develop is of great importance early in acquisition. All subsequent acquisition plots split apart these trial types. All acquisition criteria relied on the mean accuracy. Figure 6 tracks the acquisition of each bird. After completion of Phase 1, birds immediately started Phase 2, which introduced a 200-ms random-dot stereogram visual mask (see Figure 5, lower panel) between the retention interval and the choice display. As seen in Figure 6, the visual mask did not disrupt performance for very long. Ted required 6 sessions of training to reach the one-session-above-80% criterion, and Steve took 27. The break in the graph shows a gap in Steve's training (a few weeks) because of his health. After returning to the task, his performance was quite variable. After session 24, the viewing time for Steve was increased to 5s, and the acquisition criterion was quickly met.

Phase 3. For Phase 3, two more birds, Mark & Curly, were added. Because of the different training histories, Ted & Steve will be considered separately from Mark & Curly. Figure 7 shows the acquisition for each of Ted & Steve. Steve acquired quickly, in only 13 sessions. Ted required 176 total sessions, broken down as follows. After an initial 15 sessions with both orientations, Ted received remedial training sessions with just vertical orientations. Forty sessions into the remedial training, the stimulus display viewing time was increased to 5s. After an additional 28 sessions at the 5-s viewing time, the visual mask was removed (viewing time stayed at 5s). Once the mask was removed, Ted completed another 48 sessions of vertical-only stimulus displays. Although performance was not at criterion levels, the horizontal orientations were reintroduced, and Ted required 45 more sessions to meet criterion.

Mark & Curly started the experiment in Phase 3. Figure 8 tracks acquisition for each bird. After 40 sessions, the stimulus viewing time was increased to 5s. Mark & Curly met criterion in 8 and 73 sessions, respectively, following the viewing time increase. Compared to Ted & Steve, Mark & Curly's acquisition of the two-item target search with vertical and horizontal orientations proceeded much more smoothly. The lack of a visual mask and exposure to both orientations from the outset likely contributed to this difference.

Figure 6. Ted's & Steve's acquisition plots for Phases 1 & 2. Stimulus displays consisted of all 8 trial types displayed only horizontally. The start of Phase 2 is indicated by the solid vertical line. The break in Steve's graph shows a gap in training because of health issues. The second vertical line in Steve's plot marks the start of a 5-s stimulus display viewing time.
Figure 7. Ted & Steve Phase 3 acquisition. The solid lines in Ted's plot mark the start of vertical-only remedial trials, the onset of the 5-s stimulus display viewing time, and the reinstatement of horizontal trials. Steve had a 5-s stimulus display viewing time throughout Phase 3.

Figure 8. Mark & Curly Phase 3 acquisition. The solid lines denote the onset of the 5-s stimulus display viewing time.

Phase 4. Viewing time was titrated until 6 days of 70-80% performance were achieved. Again, performance varied by bird, with Ted settling at 4.5s, Curly at 4s, and Mark at 2s. Steve failed to settle at a time below 5s and will not be considered in further analyses. Although these times are much higher than those typically used in human feature-binding experiments, at least one study used exposure durations of 1.5s for humans (Prinzmetal, Henderson, & Ivry, 1995). To discourage response biases, the correction procedure remained active throughout testing.

The high variability across subjects invalidates much of group-based statistics, and all analyses will be single-subject unless noted. For comparing baseline to transfer, the last six days at the 5-s viewing time will be used as baseline. The transfer sessions are the first six consecutive sessions with accuracy between 70-80%. Separate repeated-measures analyses of variance (RM ANOVAs) were conducted on the baseline sessions (1-6) and transfer sessions (1-6) across birds to ensure stability. For baseline, F(5, 10) = 2.342, p > .1; for transfer, F(5, 10) = 0.332, p > .8. Further analyses will deal with the accuracy data collapsed across session.

Figure 9 shows accuracy by target status across birds. The same 2x2 RM ANOVA of Session Type (Baseline v. Transfer) x Target Status (Absent v. Present) across the six sessions was conducted for each bird. Ted showed no main effects (Fs < 2.9, ps > .1) and no interaction effect, F(1, 5) = 0.174, p > .6. For Curly, the RM ANOVA yielded only a main effect of Target Status, F(1, 5) = 118.434, p < .001. For Mark, the analysis showed a main effect of Target Status, F(1, 5) = 24.026, p = .004, and a significant Session Type x Target Status interaction, F(1, 5) = 16.494, p = .01. Although the interaction was significant for only one bird, the absolute difference between accuracy on target absent and target present trials increased from baseline to transfer for each bird (Ted: 3.47%, Curly: 8.68%, and Mark: 19.10%). Not surprisingly, this magnitude increase coincides with the absolute difference between the viewing times for baseline and transfer trials.

Figure 9. Accuracy for each bird is displayed for each target status and for baseline v. transfer. Viewing time during baseline was 5s for all birds. During transfer, viewing time differed by bird (Ted: 4.5s, Curly: 4s, Mark: 2s). Dark gray bars are target absent, light gray target present. Striped bars are for transfer trials, plain bars are baseline trials. Error bars are 1 SEM.
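For reference, a per-bird Session Type x Target Status RM ANOVA of the kind reported above can be run with standard tools. The sketch below assumes a long-format table with one accuracy value per session pair and design cell, uses the six session pairs as the repeated-measures unit, and fills in fabricated placeholder accuracies; the column names are invented for illustration and this is not the original analysis script.

```python
# Minimal sketch of the per-bird 2x2 (Session Type x Target Status) RM ANOVA.
# Data and column names are fabricated placeholders; only the structure matters.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
rows = []
for session in range(1, 7):                       # six baseline/transfer session pairs
    for session_type in ("baseline", "transfer"):
        for target in ("absent", "present"):
            rows.append({"session": session,
                         "session_type": session_type,
                         "target_status": target,
                         "accuracy": 0.80 + rng.normal(0, 0.05)})
data = pd.DataFrame(rows)

# In a single-bird analysis the session pair plays the role of the repeated-measures
# "subject", which yields F tests with (1, 5) degrees of freedom as in the text.
result = AnovaRM(data, depvar="accuracy", subject="session",
                 within=["session_type", "target_status"]).fit()
print(result.anova_table)
```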
Experience. A possible explanation for the interaction between baseline and transfer is increased task experience. Prima facie, this option seems unlikely because no bird's data had a significant main effect of Session Type, and only one showed an interaction. The amount of increased experience does not line up with days of total training either: Ted and Curly had only 5 and 2 sessions, respectively, between the end of baseline and the beginning of transfer testing. For Mark, 10 days elapsed between the end of baseline and the start of testing. Comparing total experience, Ted (who showed no significant effects) had nearly twice the sessions of Curly, and over five times the number of sessions of Mark. The opposite argument could now be made, that the differences are due to a lack of steady-state training. To the extent possible, this argument is answered by the acquisition criterion and the comparable baseline performance before entering the titration/transfer testing phases.

Encoding Strategy. Another, more likely, explanation relates the effect of the viewing time to each bird's decision strategy. As the viewing time decreased, the discrimination became harder (as indicated by decreased accuracy), and the birds may have developed a strategy based on partial encoding of the target display. To see how this might work, consider the trial types displayed in Figure 4. On 4 of the 6 trial types with a vertical or horizontal bar on the left side of the display, the trial is a target absent trial. The odds (formally, odds = p(x)/[1 - p(x)]) are therefore 2:1 in favor of target absent if only half the display is considered. With a reduction in viewing time, subjects may have adopted this strategy. The bias seemed to favor target absent trials for each bird. Accordingly, Figures 10 through 12 compare target-absent performance to individual target-present sample-display types for each bird. Although the lack of independence of observations invalidates any population inferences we can make from the dataset, local hypotheses regarding the effects of Session Type, Trial Type (within target present sample displays), and their interaction can be made.

Figure 10. Ted's accuracy for target present trials split by stimulus display type and session (baseline v. transfer). Target absent accuracy is shown for comparison of both mean and variance. Error bars are 1 SEM.

Figure 11. Curly's accuracy for target present trials split by stimulus display type and session (baseline v. transfer). Target absent accuracy is shown for comparison of both mean and variance. Error bars are 1 SEM.

Comparing R² (a measure of explained variance) between least-squares regression models with and without an interaction term allows us to infer the role of reduced viewing time in the performance of each bird. First, Trial Type was collapsed into PH-PV and HP-VP and dummy-coded as X1 (1 and 0, respectively). Session Type, X2, was coded as 0 (baseline) and 1 (transfer). This reduced model was compared to the full model containing the X1X2 interaction term. Ted's 3-parameter model explained an additional 6.2% of the variation in accuracy over the main-effects model alone (48.3% v. 54.5%). The increase is modest, demonstrating that the majority of the variance is explained by the different trial types. The substantial amount of variance explained by just the Trial Type and Session Type variables indicates a change in Ted's performance not visible in the earlier analysis.
The 3-term standardized model for Ted was:

Ŷ = X1(.435 + .433X2) - .369X2

The signs of the coefficients tell much of the story. Trial types PH and PV (X1 = 1) had a positive effect on accuracy, especially during transfer (X2 = 1). For HP and VP trial types (X1 = 0), accuracies went below the average only during transfer sessions (X2 = 1). The interaction effect can be seen by calculating Ŷ when X1 = 1 and X2 = 1: in the present case, the interaction adds .43 standard deviations to accuracy for PH and PV. This effect would not be present for trial types HP and VP because X1 = 0.

For Curly, both the 2- and 3-parameter models fit poorly (accounting for 12.5% and 17.4% of the variance in accuracy, respectively). The magnitude of the increase in R² is comparable to Ted's data, although the percentage increase is 39%. As seen in Figure 11, the means for each trial type are similar to one another across baseline and transfer. The 3-term standardized model is:

Ŷ = X1(.472 - .029X2) - .383X2

The effect of the Trial Type by transfer testing interaction is much less pronounced, negligible compared to the effects of Trial Type and Session Type. Considering the direction of the effects, PH-PV had higher accuracies than HP-VP, but during transfer accuracy for both trial-type pairs decreased. Curiously, the 2-term model had regression coefficients of equal magnitude but opposite direction: the benefit of PH-PV during baseline was exactly equaled by the disruption of the reduced viewing time during testing.

Figure 12. Mark's accuracy for target present trials split by stimulus display type and session (baseline v. transfer). Target absent accuracy is shown for comparison of both mean and variance. Error bars are 1 SEM.

The final model, for Mark (Figure 12), shows a similar pattern of effects to Curly, but much stronger. The interaction term does little to improve the fit of the 2-term model, increasing R² from .294 to .314, so the 2-term standardized model is preferred:

Ŷ = -.024X1 - .542X2

The effect of Trial Type, however, is negative, although marginal. The sign implies that Mark was doing slightly better on PH-PV during baseline, but the reduced viewing time severely disrupted performance on all target-present trials.

Orientation. Comparing the group members, Ted's large interaction coefficient is the most unique. The different training histories for Ted v. Mark & Curly suggest a critical role for the orientation of the stimuli. Ted took about a dozen sessions to learn with just the horizontal orientation, but then took over 150 to learn the task with both horizontal and vertical orientations, over 100 of which were sessions comprised solely of vertical orientations. Figure 13 shows each bird's performance during baseline and transfer separated by orientation. A 2x2 RM ANOVA on Session Type (Baseline v. Transfer) and Orientation (Horizontal v. Vertical) across sessions was conducted for each bird. Ted's data revealed a main effect of Orientation, F(1, 5) = 17.797, p = .008, but no Session Type effect or interaction (both Fs < 1.0, ps > .4). No significant effects were found in Curly's data (Fs < .9, ps > .3). For Mark, there was a marginal effect of Session Type, F(1, 5) = 5.681, p = .063, indicating an overall decrease in performance from baseline to transfer. The test did not yield an effect of Orientation (F < 1, p > .3) or an interaction (F < 1.0, p > .7). As before, the potential for extrapolation based on these tests is limited, but their diagnostic power for the current situation allows for a better behavioral description.

Figure 13. Accuracy for each bird is displayed for each stimulus display orientation and for baseline v. transfer. Viewing time during baseline was 5s for all birds. During transfer, viewing time differed by bird (Ted: 4.5s, Curly: 4s, Mark: 2s). Dark gray bars are for baseline trials, light gray for transfer. Striped bars are for vertical orientation, plain bars are horizontal stimulus display orientations. Error bars are 1 SEM.
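The R² comparison used in the Encoding Strategy analysis (a main-effects model versus one adding the Trial Type x Session Type interaction) can be sketched as an ordinary least-squares fit on dummy-coded predictors. The data below are fabricated placeholders and only the outcome is standardized, so the coefficients will not match the standardized models reported above; the point is the structure of the comparison.

```python
# Sketch of the dummy-coded model comparison: standardized accuracy regressed on
# Trial Type (x1: PH/PV = 1, HP/VP = 0) and Session Type (x2: transfer = 1),
# with and without their interaction. Fabricated data; illustrative only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 48
x1 = rng.integers(0, 2, size=n)
x2 = rng.integers(0, 2, size=n)
accuracy = 0.75 + 0.05 * x1 - 0.06 * x2 - 0.03 * x1 * x2 + rng.normal(0, 0.04, size=n)
z = (accuracy - accuracy.mean()) / accuracy.std()   # standardize the outcome

main_effects = sm.OLS(z, sm.add_constant(np.column_stack([x1, x2]))).fit()
with_interaction = sm.OLS(z, sm.add_constant(np.column_stack([x1, x2, x1 * x2]))).fit()

# The quantity discussed in the text: how much variance the interaction term adds.
print(round(main_effects.rsquared, 3),
      round(with_interaction.rsquared, 3),
      round(with_interaction.rsquared - main_effects.rsquared, 3))
```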
Table 1
Mean False Alarms by Type during Baseline and Transfer (Standard Errors in Parentheses)

           Baseline                 Transfer
         Feature     Binding      Feature     Binding
Ted      4.2 (.69)   5.8 (.69)    5.1 (.73)   5.3 (.61)
Curly    5.0 (.69)   4.0 (.53)    3.3 (.61)   4.3 (.65)
Mark     4.0 (.61)   4.7 (.82)    3.3 (.49)   3.8 (.78)

Binding Errors. Having characterized how accuracy changed from baseline to transfer, the errors can now be analyzed. Specifically, the false alarms can be partitioned to find evidence of binding errors. A simple analysis of error counts (cf. Prinzmetal, 1981) showed no difference in false alarm type for any of the birds (all ts < 1.7, ps > .12). Mean false alarms across the last six sessions of baseline and transfer, classified by error type, are shown in Table 1. The small differences remove any need for a more sophisticated analysis of errors at this point. Interestingly, the mean differences in false alarms during transfer are all in the "right" direction for evidence of binding errors, and for Curly this difference is a reversal from baseline training. All together, the data are consistent with a criterion shift (as in signal detection theory) favoring target-absent responding and decreasing both false alarms and accuracy for target present trials. The magnitude of the shift was proportional to the decrease in viewing time. Even though subjects were more likely to respond "absent" to all stimulus displays during transfer (explaining the decrease in total false alarms between baseline and transfer), numerically more false alarms occurred on conjunction trials (i.e., HV and VH trials) than on feature trials.
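The binding-error test itself comes down to partitioning false alarms by display type and comparing the two counts across sessions, as in Table 1. The sketch below uses invented data structures and fabricated counts; it is not the original analysis, but the paired comparison at the end parallels the error-count t tests reported above.

```python
# Sketch of the false-alarm partition behind Table 1: count "present" responses on
# target-absent trials, split into feature (HH, VV) and conjunction (HV, VH)
# displays. Trial records and counts below are fabricated for illustration.
from scipy import stats

FEATURE = {"HH", "VV"}
CONJUNCTION = {"HV", "VH"}

def false_alarm_counts(trials):
    """trials: iterable of (trial_type, response) pairs from one session."""
    feature = sum(1 for t, r in trials if t in FEATURE and r == "present")
    conjunction = sum(1 for t, r in trials if t in CONJUNCTION and r == "present")
    return feature, conjunction

# Per-session false-alarm counts for one bird across six transfer sessions.
feature_fa = [4, 5, 3, 4, 6, 4]
conjunction_fa = [5, 6, 4, 5, 6, 5]
print(stats.ttest_rel(conjunction_fa, feature_fa))
```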
General Discussion

Three birds acquired the 2-item discrimination, completed the viewing time titration procedure, and finally received 6 days of testing at a reduced viewing time. During testing, performance was disrupted on target-present trials while remaining mostly unchanged on target-absent trials. Looking in more detail at the target-present trials, the birds had problems with specific stimulus pairings, rather than just an overall reduction in target-present accuracy (i.e., an effect of trial type beyond the effect of session type). The orientation of the stimulus display had an effect only on the performance of the bird for which the orientations were trained sequentially. Finally, the assessment of false alarms showed no evidence for binding errors, although several issues still cloud the search for potential binding errors during perception of conjunction stimuli in pigeons.

The current study deviated from past research in important ways. First, the viewing times used in the current task were much higher than those used with human participants. Pigeons may be similar to humans in that, given enough time to encode the stimulus, binding errors are nearly nonexistent. The number of items to encode in the current experiment was also lower than often used in human studies. The rationale for limiting the display size was to ensure the pigeons were able to process all the items, although there is some evidence that binding errors result from an inability to accurately perceive/localize all the items in a display (Treisman & Schmidt, 1982). Another methodological difference is the stimulus size. The stimuli in the current task are an order of magnitude larger in terms of visual angle. A direct comparison between foveal and parafoveal processing in humans and pigeons is not biologically tractable (pigeons have two independent foveae in each eye), but visual angle has been shown to affect conditional discrimination performance (Katz, Bodily, & Wright, 2008).

Previous studies also had no acquisition phase, only warm-up trials. The birds in this experiment were trained with each trial type. In order to move on to the testing phase, the birds had to reach an accuracy criterion of 80% correct. The problem with this criterion is that it effectively punishes (or at the very least extinguishes) conjunction error responses early in acquisition. If a subject incorrectly perceives the horizontal and vertical bars as occupying the same space, a response of "present" (a possible binding error) is followed by a timeout and then a repetition of the trial. The subject is forced to respond "absent" before the trial can proceed. Perceptual errors during acquisition may have contributed to the length of training for each bird, as responding "present" when a plus was perceived was not always rewarded (it paid off only when a plus was actually presented). Once the transfer tests started, the bias toward responding "absent" may have swamped any effect of perceptual errors. In the related human studies, no correction procedure was implemented, and access to food was not contingent upon successfully encoding the sample display.

Another issue with acquisition was the initial 1-s stimulus viewing time. All the birds began the task with this 1-s viewing time, before moving to a 5-s viewing time after acquisition was not forthcoming. For two birds, Ted and Steve (the latter of whom never met the requirement to enter testing), the 2-item task was learned extremely quickly (in fewer than 15 sessions) with only the horizontal orientation. Perhaps the added orientation forced the birds to distribute their attention more broadly, or it may have disrupted a previous memory-based strategy. With 12 presentations of each trial type a day, a memory-based strategy seems plausible. The change in performance for target-present trials, but not target-absent trials, however, argues against this configural memory-based strategy during transfer testing. If the birds were treating the stimulus displays as configurations, there should be a consistent decrease in accuracy across all stimulus types from baseline to transfer, concomitant with decreases in viewing time. Instead, no bird showed a main effect of session type, and only one dataset (Mark's) showed evidence for an interaction. Previous research (Hollard & Delius, 1982) also suggests that pigeons are capable of learning relationships between stimuli and their rotated counterparts. If a memory strategy were employed, transferring a response to the rotated counterpart should have happened much more easily than currently seen.

Instead, the birds appeared to treat each object in the display separately, and the decreased viewing time led to partial encoding of the display. For target-absent trials, perceiving half the display leads to the same conclusion as perceiving the whole display: no plus sign. On target-present trials, however, there is only a 50% chance that the target will be in a given position, and thus the chances of getting the correct answer are much lower. As viewing time decreased, the amount of the display that could be encoded decreased, and the bird had to resort to a default strategy: if a plus is seen, choose "present"; if no plus is seen, choose "absent" about 75% of the time.
This strategy leads to fairly high hit rates and requires attending to only one stimulus.

With increased training, many of the biases mentioned above could be corrected. As accuracies increase, viewing time could be reduced further and eventually become comparable to that of previous human studies. If the birds have learned a simpler encoding strategy based on only half the available information, more complex displays must be developed to invalidate this strategy. Varying the complexity of the display will provide an estimate of the contribution of display complexity to binding errors, an important issue probably linked to visual working memory capacity. False alarms on trials with potential conjunction errors should also not be punished. Switching to a variable schedule of reinforcement and removing the correction procedure may allow for a more sensitive measure of the contribution of binding errors in two-item target search. Finally, withholding conjunction error trials during training and introducing them only after the viewing time titration may provide a better comparison to the human studies, in which participants already understood the task before being tested. The current experiment failed to find any evidence of binding errors during a pigeon 2-item target search task, but it sets the stage for a more thorough investigation of the pigeon visual perceptual experience.

BIBLIOGRAPHY

Allan, S. E., & Blough, D. S. (1989). Feature-based search asymmetries in pigeons and humans. Perception & Psychophysics, 46, 456-463.

Beck, J. (1966). Perceptual grouping produced by changes in orientation and shape. Science, 538-540.

Blough, D. S. (1979). Effects of the number and form of stimuli on visual search in the pigeon. Journal of Experimental Psychology: Animal Behavior Processes, 5, 211-223.

Blough, P. M. (1984). Visual search in pigeons: Effects of memory set size and display variables. Perception & Psychophysics, 35, 344-352.

Blough, P. M. (1989). Attentional priming and visual search in pigeons. Journal of Experimental Psychology: Animal Behavior Processes, 15, 358-365.

Cook, R. G. (1992a). Acquisition and transfer of visual texture discriminations by pigeons. Journal of Experimental Psychology: Animal Behavior Processes, 18, 341-353.

Cook, R. G. (1992b). Dimensional organization and texture discrimination in pigeons. Journal of Experimental Psychology: Animal Behavior Processes, 18, 354-363.

Cook, R. G. (2000). The comparative psychology of avian visual cognition. Current Directions in Psychological Science, 9, 83-89.

Cook, R. G., Cavoto, B. R., & Cavoto, K. K. (1996). Mechanisms of multidimensional grouping, fusion, and search in avian texture discrimination. Animal Learning & Behavior, 24, 150-167.

Hollard, V. D., & Delius, J. D. (1982). Rotational invariance in visual pattern recognition by pigeons and humans. Science, 218, 804-806.

Huang, L., & Pashler, H. (2007). A Boolean map theory of visual attention. Psychological Review, 114, 599-631.

Hubel, D. H., & Wiesel, T. N. (1959). Receptive fields of single neurones in the cat's striate cortex. Journal of Physiology, 148, 574-591.

Hubel, D. H., & Wiesel, T. N. (1968). Receptive fields and the functional architecture of monkey striate cortex. Journal of Physiology, 195, 215-243.

Husband, S., & Shimizu, T. (2001). Evolution of the avian visual system. In R. G. Cook (Ed.), Avian visual cognition [On-line]. Available from www.pigeon.psy.tufts.edu/avc/husband/
Katz, J. S., Bodily, K., & Wright, A. (2008). Learning strategies in matching to sample: If-then and configural learning by pigeons. Behavioural Processes, 77, 223-230.

Katz, J. S., & Cook, R. G. (2000). Stimulus repetition effects on texture-based visual search by pigeons. Journal of Experimental Psychology: Animal Behavior Processes, 220-236.

Luck, S. J., & Vogel, E. K. (1997). The capacity of visual working memory for features and conjunctions. Nature, 279-281.

Pashler, H. (1987). Detecting conjunctions of color and form: Reassessing the serial search hypothesis. Perception & Psychophysics, 41, 191-201.

Prinzmetal, W. (1981). Principles of feature integration in visual perception. Perception & Psychophysics, 30, 330-340.

Townsend, J. T. (1990). Serial vs. parallel processing: Sometimes they look like Tweedledum and Tweedledee but they can (and should) be distinguished. Psychological Science, 1, 46-54.

Treisman, A. (1996). The binding problem. Current Opinion in Neurobiology, 6, 171-178.

Treisman, A. (2006). How the deployment of attention determines what we see. Visual Cognition, 14, 411-443.

Treisman, A., & Gelade, G. (1980). A feature-integration theory of attention. Cognitive Psychology, 12, 97-136.

Treisman, A., & Gormican, S. (1988). Feature analysis in early vision: Evidence from search asymmetries. Psychological Review, 95, 15-48.

Treisman, A., & Sato, S. (1990). Conjunction search revisited. Journal of Experimental Psychology: Human Perception and Performance, 16, 459-478.

Treisman, A., & Schmidt, H. (1982). Illusory conjunctions in the perception of objects. Cognitive Psychology, 14, 107-141.

Treisman, A., & Souther, J. (1985). Search asymmetry: A diagnostic for preattentive processing of separable features. Journal of Experimental Psychology: General, 114, 285-310.

Wolfe, J. M. (1994). Guided search 2.0: A revised model of visual search. Psychonomic Bulletin & Review, 1, 202-238.

Wolfe, J. M. (2001). Asymmetries in visual search: An introduction. Perception & Psychophysics, 63, 381-389.

Wolfe, J. M., Cave, K. R., & Franzel, S. L. (1989). Guided search: An alternative to the feature integration model for visual search. Journal of Experimental Psychology: Human Perception and Performance, 15, 419-433.