DEAD RECKONING IN A DESKTOP VIRTUAL ENVIRONMENT

Except where reference is made to the work of others, the work described in this dissertation is my own or was done in collaboration with my advisory committee. This dissertation does not include proprietary or classified information.

_______________________________________
Kent Delos Bodily

Certificate of Approval:

_____________________________
Lewis M. Barker
Professor
Psychology

_____________________________
Jeffrey S. Katz, Chair
Alumni Associate Professor
Psychology

_____________________________
Ana Franco-Watkins
Assistant Professor
Psychology

_____________________________
Bryan D. Edwards
Assistant Professor
Psychology

____________________________
Joe F. Pittman
Interim Dean
Graduate School

DEAD RECKONING IN A DESKTOP VIRTUAL ENVIRONMENT

Kent Delos Bodily

A Dissertation Submitted to the Graduate Faculty of Auburn University in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

Auburn, AL
May 10, 2008

DEAD RECKONING IN A DESKTOP VIRTUAL ENVIRONMENT

Kent Delos Bodily

Permission is granted to Auburn University to make copies of this dissertation at its discretion, upon request of individuals or institutions and at their expense. The author reserves all publication rights.

______________________________
Signature of Author

______________________________
Date of Graduation

VITA

Kent Delos Bodily, son of Vicki and Steven Bodily, was born on October 5, 1976, in Berlin, Germany. He graduated from Sky View High School (Smithfield, Utah) in 1994. He graduated from Ricks College (Rexburg, Idaho) in 1998. He graduated from Utah State University (Logan, Utah) with a Bachelor of Science degree in Psychology in May 2001. He entered the Graduate School at Auburn University in August 2001. During his graduate studies at Auburn University, he worked under the steady mentorship of Dr. Jeffrey S. Katz, and collaborated with professors Jeffrey Katz and Martha Escobar, and with fellow graduate students Bradley R. Sturz, Michelle Hernández, Kelly Schmidtke, and John Magnotti.

DISSERTATION ABSTRACT

DEAD RECKONING IN A DESKTOP VIRTUAL ENVIRONMENT

Kent Delos Bodily

Doctor of Philosophy, May 10, 2008
(M.S., Auburn University, )
(B.S., Utah State University, )

81 Typed Pages

Directed by Jeffrey Katz

Dead reckoning, knowing where one is in relation to a particular location without reference to external landmarks, is a widely cited phenomenon that has been observed in a wide variety of animals. A common test for dead reckoning is the triangle-completion task, in which subjects navigate forward a specified distance, then turn and navigate another specified distance, and then are allowed to return directly to the starting location (making a triangle). The common finding for ants, hamsters, and blindfolded humans is to commit a positive rotational error (i.e., to over-rotate).

The present study used a desktop virtual environment to test human distance estimation and dead reckoning. In the first two experiments, a distance estimation task in which the only cue was optical flow was developed, and the accuracy of human distance estimation was assessed. Experiment 1 was a pilot study that informed methods of improved experimental control for Experiment 2, in which participants produced highly accurate distance estimations in conditions of optical flow (but not in conditions without optical flow).
The third experiment was a partial replication of a published virtual-environment study (Kearns, Warren, Duchon & Tarr, 2002) which tested humans in an immersive virtual environment on the triangle-completion task. The present study successfully reproduced the published results, supporting the use of the desktop virtual environment. The final experiment expanded the manipulation of key parameters of the triangle-completion task (i.e., turning angle and leg length) to assess how larger ranges of each affected human dead reckoning. Participants' estimates on long itineraries (Experiment 4) were more accurate than on short itineraries (Experiment 3). The improved accuracy on long itineraries suggests that previous findings of highly stereotypic responses were due to methodological limitations, not human limitations. These experiments show that humans are able to make accurate distance and dead reckoning estimations in a desktop virtual environment, and demonstrate the viability of the desktop virtual environment as a research tool.

ACKNOWLEDGEMENTS

The author would like to thank Dr. Jeffrey Katz for his support and mentorship throughout the various stages of this research project. He provided the environment for unhindered curiosity, the means for experimental research, and the guidance to funnel the project to fruition. The author would also like to thank Michelle Hernández and Bradley Sturz for the discussions, helpful feedback, and support throughout this project. Without their love, steady confidence, and patient ears, this project would not be what it is. The author must thank Valve Software for the developer-friendly game engine and endless pages of support documentation. The knowledge base from which the vast majority of the information needed to utilize the software was drawn, however, was developed by the active online community of video-game fans who dedicated endless hours to understanding the quirks and exploitable bugs in the software, and compiled this information into useful, searchable websites. This information was indispensable in the programming of the experiments in this study. I owe a debt of gratitude to the unnamed and aliased Half-Life fans who, for no other reason than their own enjoyment, provided me with the tools to complete these experiments.

Style manual used: Publication Manual of the American Psychological Association, 5th Edition.
Computer software used: Microsoft Office 2003, SigmaPlot 9, SPSS, Adobe Photoshop, Half-Life

TABLE OF CONTENTS

LIST OF TABLES......................................................................................................... x
LIST OF FIGURES ....................................................................................................... xi
I. INTRODUCTION ................................................................................................... 1
    Dead Reckoning
    Virtual Environment as Experimental Apparatus
    Separating Vestibular, Kinesthetic and Visual Stimuli
    Overview of the Present Research
II. EXPERIMENT 1 ..................................................................................................... 17
    Method
    Results and Discussion
III. EXPERIMENT 2 ..................................................................................................... 30
    Method
    Results and Discussion
IV. EXPERIMENT 3 ..................................................................................................... 38
    Method
    Results and Discussion
V. EXPERIMENT 4 ..................................................................................................... 50
    Method
    Results and Discussion
VI. GENERAL DISCUSSION ...................................................................................... 58
REFERENCES .............................................................................................................. 65

LIST OF TABLES

Table 1 ......................................................................................................................... 65

LIST OF FIGURES

Figure 1 ......................................................................................................................... 4
Figure 2 ......................................................................................................................... 8
Figure 3 ......................................................................................................................... 10
Figure 4 ......................................................................................................................... 12
Figure 5 ......................................................................................................................... 14
Figure 6 ......................................................................................................................... 21
Figure 7 ......................................................................................................................... 23
Figure 8 ......................................................................................................................... 29
Figure 9 ......................................................................................................................... 31
Figure 10 ........................................................................................................................ 37
Figure 11 ........................................................................................................................ 40
Figure 12 ........................................................................................................................ 43
Figure 13 ........................................................................................................................ 46
Figure 14 ........................................................................................................................ 49
Figure 15 ........................................................................................................................ 52
Figure 16 ........................................................................................................................ 53
Figure 17 ........................................................................................................................ 57

I. INTRODUCTION

Humans and nonhumans use multiple cues to navigate their environment. Distal landmarks may guide orientation, proximal landmarks may be approached, and the relation between proximal landmarks may predict a goal location. Which cues are used may depend upon the extent to which all cues are in agreement (Cheng, Shettleworth, Huttenlocher, & Rieser, 2007). In addition to external cues, proprioceptive cues are produced through active movement. Kinesthetic sensations, vestibular activation, and visual self-motion cues (e.g., optical flow) all provide feedback to the navigator.
Mittelstaedt and Mittelstaedt (1980) argued that the summation, or integration, of proprioceptive cues could allow an organism to encode the distance and direction it has traveled. They called this process of automatically pooling proprioceptive information and updating a vector that points back to the locus of origin "path integration." Path integration has been considered to be the mechanism that underlies the phenomenon known as dead reckoning (Shettleworth, 1998).
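To make the vector-updating idea concrete, the following minimal sketch treats path integration as incremental vector summation. It is an illustration only, not code from this dissertation; the starting heading and the clockwise-positive turn convention are assumptions made for the example.

    import math

    def home_vector(steps):
        """Integrate a sequence of (turn_deg, distance) movements and return
        (distance_to_start, clockwise_turn_needed_to_face_start_deg).

        Assumed conventions: the navigator starts at the origin facing along
        the x-axis, and positive turns are clockwise.
        """
        x = y = 0.0
        heading = 0.0                                 # current heading, in degrees
        for turn_deg, distance in steps:
            heading -= turn_deg                       # apply the turn
            x += distance * math.cos(math.radians(heading))
            y += distance * math.sin(math.radians(heading))
        distance_home = math.hypot(x, y)              # length of the home vector
        bearing_home = math.degrees(math.atan2(-y, -x))
        turn_home = (heading - bearing_home) % 360.0  # rotation needed to face the start
        return distance_home, turn_home

Each movement updates a single running vector, so the navigator never needs to store the full outward route, only its current estimate of where "home" lies.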
Dead Reckoning

Dead reckoning, a.k.a. homing, is knowing where one is in relation to a reference point without the help of external landmarks. Often, the reference point is a home nest or the origin of the current path. Dead reckoning is demonstrated by the ability to return to a point of origin after a circuitous outward path. Dead reckoning has been argued to be a fundamental, unlearned component of the navigational system of many mobile animals (Etienne, Boulens, Maurer, Rowe & Siegrist, 2000; Kearns, Warren, Duchon, & Tarr, 2002; Shettleworth, 1998).

Food and resources are generally not located in the same place as protection and safety. Foraging for food exposes animals to many dangers, including predation and the elements. Additionally, foraging often requires animals to leave their offspring in a place of safety, making the ability to return quickly genetically important. Thus, a system that allows foragers to return accurately to their place of refuge, thereby minimizing both energy expenditure and exposure to dangers, could arise through natural selection. Indeed, the general behavioral findings from species as distinct as ants (Müller & Wehner, 1988), hamsters (Etienne & Jeffrey, 2004; Etienne, Boulens, Maurer, Rowe & Siegrist, 2000) and humans (Wallace, Choudhry & Martin, 2006) seem to support this notion. Moreover, comparative brain and behavior research has found striking neuroanatomical commonalities in spatial-cognition mechanisms across mammals, birds, reptiles and teleost fish (Salas, Broglio & Rodríguez, 2003), supporting the assertion that dead reckoning might be a fundamental navigational process that arose early in our evolutionary history.

Due to the suggested bottom-up mechanics of dead reckoning, it has been suggested as a possible component of several phenomena. For example, dead reckoning has been suggested as an alternative to the cognitive-map explanation for novel shortcutting (Bennett, 1996; Gibson, 2001), as a basis for both egocentric and geocentric spatial representation hypotheses (Wang & Spelke, 2000), and as a possible basis for autobiographical memory (Whishaw & Wallace, 2003). In short, dead reckoning is an interesting phenomenon with implications for a wide variety of research areas.

One question that has received a great deal of attention addresses which sensory inputs underlie human dead reckoning. The sensory inputs that have been considered include vestibular sensations (balance), kinesthetic sensations, and visual self-motion cues. Vestibular sensations are derived from the displacement of hair cells found in the semicircular canals and otolith organs of the inner ear, which are sensitive to rotary and linear acceleration, respectively (Lackner & DiZio, 2005). The vestibular system, due to its anatomy, provides information about changes in rotary and linear self-motion, but not about sustained motion. The kinesthetic sense is the sensation of body position and movement. Kinesthetic sensations are derived from neurons in joints and muscle fibers, which provide information about the position and motion of the body (Lackner & DiZio, 2005). Compiling kinesthetic sensations across time could provide information about the distance and direction one has traveled.

Visual self-motion cues provide information about the speed, direction and rotation of self-motion, and are collectively referred to as optical flow (Gibson, 1950). Optical flow guides navigation because, as an observer travels along a forward path, a pattern of radial expansion is produced. Objects in front of the observer appear to radially expand as light reflected from the objects strikes the retina at increasing distance from the fovea. Objects farther from the observer produce less radial expansion than objects nearer the observer (i.e., motion parallax). As shown in Figure 1, the center of these differential radial expansions is called the focus of expansion, and marks the current direction of movement. Thus, an observer can control the direction of travel by directing the focus of expansion in the desired direction.

Figure 1. The flow of optical information during forward movement as projected on a spherical surface around the observer's head. The point directly in front of the observer's eyes, from which all expansion originates, is the focus of expansion. Reprinted from Gibson (1950).

Virtual Environment as Experimental Apparatus

One of the issues that has inhibited the investigation of human dead reckoning is the difficulty of experimental control. To test dead reckoning, landmarks and orienting stimuli must be removed. These confounding stimuli include visual landmarks, auditory cues, the feel of the sun or wind on the participants' skin, and the slope of the ground underfoot. The removal of these confounding stimuli is difficult, if not impossible, in natural-environment settings. The conventional preparation has been to use blindfolds and earmuffs (e.g., Klatzky et al., 1999; May & Klatzky, 2000; Wallace, Choudhry & Martin, 2006) to block visual and auditory stimuli. However, results obtained under conditions of sensory deprivation may not generalize to the normal human experience. For this reason, the virtual-environment apparatus has become a popular tool for spatial navigation research (e.g., Chance, Gaunet, Beall, & Loomis, 1998; Jansen-Osmann & Berendt, 2001; Waller, Bachman, Hodgson, & Beall, 2007). Virtual-environment systems present three-dimensional visual environments (and corresponding auditory stimuli) of varying fidelity to natural environments. Experimenters have complete control over these environments, allowing a level of experimental control that is unachievable in a natural environment.

The extent to which findings from virtual-environment research generalize to the natural environment is an important question. Sturz and colleagues found that landmarks exercised equivalent control over search behavior in natural (Sturz, Bodily, Katz & Kelly, under review) and virtual environments (Sturz, Bodily & Katz, 2006). Also, Jansen-Osmann & Berendt (2002) reproduced, in a virtual environment, the results of Sadalla & Magel (1980), which demonstrated that the length of a route with more turns is estimated as being longer than an equally long route with few turns. These, and other examples (some of which are reported below), demonstrate that virtual environments can tap into the same processes as natural environments.
Nevertheless, as with any experiment, the generalizability of virtual-environment research will always depend upon the specific research methods, and should be assessed for each experiment independently.

Differentiating Effects of Vestibular, Kinesthetic and Visual Stimuli

Warren and colleagues (Bruggeman, Zosh & Warren, 2007; Warren, Kay, Zosh, Duchon & Sahuc, 2001) pitted vestibular and kinesthetic sensations against optical flow. Participants walked toward a target in an immersive virtual environment wearing a head-mounted display (HMD) which provided visual feedback in response to lateral and rotational movements made by the participant. Warren et al. displaced the focus of expansion (optical flow) presented by the virtual environment to be 10° to one side (left or right) of the actual direction of motion in the physical environment. The amount of optical flow varied across groups from no optical flow (a red line marking the target) to high optical flow (highly textured floor, walls, ceiling, doorway-target and an array of posts). Participants received several adaptation trials to the virtual environment condition. As shown in Figure 2, under conditions of no optical flow, participants walked in a normal, straight-ahead fashion in the physical environment, producing a circular path that extended in the direction of the visual offset in the virtual environment. Under conditions of high optical flow, however, participants adjusted their physical movement to a sideways, "crab"-like walk, producing a much more direct trajectory in the virtual environment. These results suggest that humans are sensitive to optical flow as well as vestibular and kinesthetic cues, but that optical flow takes precedence over the other cues, and may serve to calibrate kinesthetic and vestibular cues to the current environment.

Rather than putting the sensory inputs in opposition, Rieser, Pick, Ashmead, and Garing (1995) and Mohler et al. (2007) found that optical flow calibrated kinesthetic and vestibular cues in a distance estimation task. In these studies, participants viewed a target, were blindfolded, and then walked to where they estimated the target to be. After obtaining initial accuracy measures, participants walked on a treadmill and were either pulled behind a small tractor that moved faster or slower than the treadmill speed (Rieser et al.) or viewed a virtual environment on a large projection screen that presented forward virtual motion down a hallway at speeds greater or less than the treadmill speed (Mohler et al.). Following this exposure, subjects were blindfolded, led to the distance estimation area, and retested in the same manner as before. Both experiments found that participants subjected to optical flow conditions that were faster than their walking speed significantly underestimated the target distance relative to their pre-exposure estimations, and participants subjected to optical flow conditions that were slower than their walking speed significantly overestimated the target distance. These findings support the argument that optical flow calibrates human kinesthetic and vestibular proprioception. It is possible, then, that if optical flow is the proprioceptive calibrator, optical flow alone may be sufficient for accurate distance estimation and dead reckoning.

Figure 2. The four virtual worlds and results from Warren et al. (2001, Exp. 1). (a) is the target line, (b) is the line with ground, (c) is the ground, ceiling and doorway, and (d) adds posts for motion parallax.
The solid line plots the mean path (column "Path") and mean virtual heading error (column "Heading"). The "+" and "--" marks indicate the kinesthetic rotational and optical flow hypotheses, respectively. Reprinted from Warren et al. (2001).

Assessing Dead Reckoning: The Triangle-Completion Task

Dead reckoning requires estimates of direction in addition to estimates of distance. Several tasks have been developed to investigate the distance and rotational estimation components of dead reckoning simultaneously. The triangle-completion task has been used to test a wide variety of animal species. In this task, as shown in the left panel of Figure 3, the subject is guided a pre-determined distance away from a point of origin (Leg A), rotated a pre-determined angular distance (Turn 1), then guided another pre-determined distance (Leg B). At the end of Leg B, the subjects are allowed to freely return to the point of origin. The return path (Leg C) is recorded, and the angular error, the difference between the obtained Turn 2 and the correct Turn 2, is calculated. The common finding, as shown in the right panel of Figure 3, is that such diverse species as ants, hamsters and humans tend to produce over-rotations at Turn 2. This over-rotation makes the return path cross over Leg A. In natural settings, the tendency to over-rotate may be more likely to put animals in contact with previously encountered external cues and/or self-produced cues (e.g., pheromone markers, footprints) that may guide them back to the point of origin (Wehner et al., 2006). Since there is variability in return paths, possibly due to inherent error in path integration, a tendency to produce a positive mean error may be a more successful behavioral strategy than to produce a mean error of zero.

Figure 3. Left panel: Diagram of the triangle-completion task. "S" represents the start location. Leg A and Leg B represent the lengths of the respective itinerary legs. Turn 1 represents the programmed turn angle in degrees. Leg C represents the response: the path of the estimated return to the start location. The difference between the Obtained Turn 2 and Turn 2 is the rotational error. The difference between the lengths of the Obtained Leg C and Leg C is the distance error. Right panel: Mean vectors of return paths from different species. The numbers represent the leg lengths (A & B) in meters. "S" marks the starting location. "P" marks the end of Leg B, from which subjects return to the start. Reprinted from Etienne & Jeffrey (2004).

Loomis et al. (1993) tested human performance on the triangle-completion task without vision. Blind (adventitiously and congenitally) and sighted subjects with blindfolds were led along the two legs and the intermediating rotation before being allowed to return to the start location. Leg lengths were 2, 4 or 6 m, and Turn 1 was 60°, 90°, or 120°. The visual condition of the participants had no effect on estimates, and the estimates were quite accurate. There was, however, a tendency to underestimate both the return rotation and the distance to travel. Also noted was a tendency for participants to produce stereotyped return paths across the different test itineraries. That is, distance and angular estimates regressed toward the mean of each. The top panel of Figure 4, which plots the obtained Turn 2 by the correct Turn 2, shows that participants over-rotated when Turn 2 was small, but under-rotated when Turn 2 was large.
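As a worked example (an illustration, not the analysis used in these studies), applying the home_vector sketch from the path-integration discussion above to one of the Loomis et al. (1993) itineraries, with Leg A = Leg B = 2 m and Turn 1 = 60°, gives a correct Turn 2 of 150° and a correct Leg C of about 3.46 m:

    # Correct Turn 2 and Leg C for a 2 m / 60-degree / 2 m itinerary, computed
    # with the home_vector sketch above (conventions as assumed there).
    leg_c, turn_2 = home_vector([(0, 2.0), (60, 2.0)])
    print(round(leg_c, 2), round(turn_2, 1))   # prints: 3.46 150.0

Obtained Turn 2 values larger than this correct value correspond to the over-rotation described above.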
Klatzky, Beall, Loomis, Golledge & Philbeck (1999) tested sighted participants when blindfolded or with vision limited to 1.5 m in front of them by a bicycle helmet fitted with an opaque visor. Participants were led through two legs of a triangle and the intermediating rotation before being allowed to return to the estimated start location. Leg A was 1, 2, 3, 4, 5, or 6 m long, and Leg B was 2 m long. The Turn 1 value was 10°, 40°, 60°, 70°, 90°, 110°, 120°, 140° or 170°. Similar to Loomis et al. (1993), participants produced stereotyped return paths which regressed toward the mean in their Turn 2 (bottom panel of Figure 4) and Leg C estimates. This stereotyping of responses occurred irrespective of visual condition, although access to limited vision reduced error.

Figure 4. Regression plots of the mean obtained Turn 2 vs. the correct Turn 2. Symbols mark mean rotation angle for each vision group for particular tests. Top panel reproduced from Loomis et al. (1993). Bottom panel from Klatzky et al. (1999).

The improved accuracy in the limited-vision condition, as compared to the blindfolded condition, suggests that visual deprivation contributed to the stereotyping of return paths. Perhaps if humans had full visual access to the environment, accuracy would improve and stereotypic responses would decrease. To control the visual environment, Kearns, Warren, Duchon, & Tarr (2002) tested humans in a virtual environment in which landmarks and orienting features that interfere with dead reckoning were removed. Participants wore an HMD and navigated the virtual environment either by walking or by manipulating a joystick. In the walking condition, participants physically walked about the laboratory while sensors in the HMD synchronized the participant's movement with the visual display of virtual space. In the joystick condition, participants sat in a chair and used a joystick to move forward, backward or turn left and right in virtual space. In both conditions, participants were guided through two legs of a triangle by poles that extended from the ground, marking the end of each leg. The first pole appeared in front of the participants at the beginning of each trial, at the distance that had been assigned to Leg A (2.25 or 4.25 m). Participants approached that pole. Upon making contact with it, the pole disappeared with a popping noise and a second pole appeared to the right (out of view) of the participant. Participants rotated in place, then approached the second pole. Upon making contact with the second pole, participants were free to rotate to the direction that they estimated the start position to be, and then move to that location. Legs A and B were 2.25 or 4.25 m long, and the Turn 1 value was 60°, 90°, or 120°.

Figure 5. Regression plots of the mean observed return rotation vs. the correct return rotation. Symbols mark mean rotation angle for each movement-control group. Data reproduced from Kearns, Warren, Duchon & Tarr (2002).
Turn 2 and Leg C accuracy were fairly high in both movement conditions, with a greater tendency to over-rotate in the walking condition than in the joystick condition. Figure 5 plots the obtained values against the correct values for Turn 2 (top panel) and Leg C (bottom panel). Similar to the findings from blindfolded and limited-vision testing, both movement conditions produced stereotyped responses. The similarity between the joystick and walking conditions is interesting. Both produced highly stereotyped responses for Turn 2 and Leg C, with the only difference being that the walking condition tended to produce a consistently greater Turn 2. Thus, while visual feedback alone (joystick condition) was sufficient to make fairly accurate estimates, the addition of vestibular and kinesthetic feedback was not sufficient to reduce the stereotypy of responses.

The stereotypy of return responses under varied motor and visual conditions and across the variety of test itineraries is troubling, as it suggests that humans are not sensitive to the itinerary changes. If humans are not sensitive to the changes in programmed distance and direction traveled, then what they may be doing could be qualitatively different from what nonhuman species do. That is, humans may be attempting to cognitively compute a return vector (a top-down process), rather than detect the correct return vector (a bottom-up process). Path integration, the mechanism of dead reckoning, is a bottom-up process which may elude humans more accustomed to reading maps or using visual landmarks for orientation.

Overview of the Present Research

The purpose of the present research is to investigate human sensitivity to optical flow when making distance and rotational estimates in a desktop-computer VE. Experiment 1 tested the adaptation of an experimental task designed for honeybees to the desktop virtual environment for human participants. Experiment 2 expanded upon Experiment 1 by increasing experimental control and testing whether optical flow is sufficient for accurate distance estimation. As a result of the finding that humans could use optical flow to accurately estimate distance, Experiments 3 and 4 examined whether participant-induced optical flow is sufficient for accurate dead reckoning (i.e., distance and rotational estimation). Experiment 3 replicated the triangle-completion task of Kearns et al. (2002) in order to make a direct comparison between the HMD and desktop-computer tasks. Having successfully reproduced the effects found by Kearns et al., Experiment 4 assessed the accuracy of rotational estimates across a wide range of rotations and a range of longer leg lengths than have previously been tested in the triangle-completion task.

II. EXPERIMENT 1

Experiment 1 tested whether humans can accurately estimate distance when provided exclusively with optical flow (i.e., when vestibular and kinesthetic cues are removed). Bigel & Ellard (2000) reported that blindfolded humans were inaccurate when returning to a location to which they had been previously led, but quite accurate if allowed to view the location prior to being blindfolded. The lack of accuracy following locomotor stimulation suggests that humans might rely on visual information to estimate distance.
Lappe and colleagues (Frenz, Lappe, Kolesnik & Bührmann, 2007; Lappe, Jenkin & Harris, 2007) tested distance estimation in an immersive virtual environment called the CAVE. The CAVE apparatus is a hollow cube. A computer-generated scene is projected upon the walls, ceiling and floor of the cube. Participants, standing inside the cube, were presented with various scenes of apparent movement. That is, the projected virtual environment textures scrolled toward the participant, providing optical flow and giving the impression of forward movement. Following the presentation of a scene, participants estimated the distance of apparent motion by moving a virtual line, perpendicular to their line of sight, as far into the distance as they estimated they had traveled. Although estimated distances correlated with correct distance, participants underestimated the distance by 25%, on average. Various subtle manipulations of the estimation component of the task produced similar underestimation, suggesting that optical flow alone does not provide sufficient information for accurate distance estimation.

However, Bremmer & Lappe (1999) found that humans can make accurate distance estimates based exclusively on optical flow. Participants sat at a desktop computer. Two sequences of apparent motion were presented on the monitor. The temporal duration, velocity of apparent motion, and apparent distance traveled varied across sequences. Following the presentation of the second sequence, participants responded whether they thought the second sequence was a longer or shorter distance than the first. Participants were accurate on 97% of the trials, suggesting that humans can discriminate between two different distances exclusively by optical flow.

In summary, humans can distinguish between two sequences of apparent motion, but are inaccurate when marking a distance traveled. That is, participants could accurately detect the difference between the optical flow of two sequences, but were inaccurate when having to project a marker into the distance to mark how far they had traveled. Perhaps participants would have been more accurate in the CAVE apparatus if the task had been to reproduce the distance traveled, as that would allow them to experience the optical flow of both the sample (the first sequence) and the comparison (the second sequence). Indeed, they may have been even more accurate if they had control over their movement. To date, however, no virtual-environment task has tested participants on a distance reproduction task in which participants have full control over their movements in virtual space.

In the present study, participants were tested in a virtual environment in which they sat at a desktop computer and "moved" about in virtual space. Participants had free access to move about the environment, and controlled their movement with a mouse and keypad. The task was inspired by Srinivasan, Zhang, & Bidwell (1997), who used a textured-tunnel task to test whether honeybee navigation was controlled by optical flow. In this task, honeybees flew down a tunnel to find a goal which was always found at a specific distance from the start. The tunnel was textured with alternating black-and-white cross-wise stripes. After bees learned to find the goal, it was removed and the texture was changed. Where the bees searched was recorded. Bees searched accurately, provided the texture produced optical flow (i.e., random dots).
However, when the texture did not produce optical flow (i.e., stripes that ran parallel to the tunnel axis), search no longer centered on the correct location. This task was adapted as directly as possible to a desktop virtual environment to assess whether humans can use optical flow to estimate distance.

Method

Participants. Eighteen undergraduate students, 8 males and 10 females, enrolled in psychology courses at Auburn University were recruited for this experiment. Participants were at least 19 years old, had normal or corrected-to-normal vision, and were not susceptible to motion sickness. Each participant received a research hour that could serve as extra credit in psychology courses.

Apparatus. Computer-generated, dynamic 3-D VEs were constructed and rendered using Valve Hammer Editor (Version 3.4) and run on the Half-Life Team Fortress Classic platform (Version 1.1.1.0). A custom-built personal computer with a 2.06-GHz processor (AMD 2600+), 64-MB video card (NVIDIA GeForce MX440), 19-inch flat-screen CRT monitor, optical mouse (Logitech), quiet-touch keyboard (Logitech), dual-analog gamepad (Logitech) and speakers (Sound Blaster) served as the human-computer interface. The monitor (1152 × 864 pixels) provided a first-person perspective of the digital environment. In first-person perspective, the monitor represents a view from the perspective of the participant within the virtual world; therefore, it represents a view of the digital environment that is analogous to an individual's view of the natural environment. Computer speakers provided auditory feedback. Participants interacted with the virtual environment using a number pad and mouse, as shown in Figure 6. The number-pad keys 4, 6, 8 and 5 served to move left, right, forward and backward, respectively. The speed of movement in the virtual environment was set at 6 m/s, roughly the speed of a quick jog. The mouse controlled the participant's view of the VE. Moving the mouse left, right, forward and backward rotated the view left, right, upward and downward, respectively. An identical second personal computer was the server for the virtual environment. This computer recorded first- and third-person perspectives of the participants' movements within the virtual environment and recorded pre-determined events to a text file. In third-person perspective, the monitor represents an overhead view of the virtual environment. All experimental events were controlled and recorded using Half-Life Dedicated Server (Version 1.1.1.0) and Half-Life Television (Version 1.1.1.0).

Computer-Generated Environments. Two distinct computer-generated 3D environments were used: one for training, and one for testing. As shown in the top panel of Figure 7, the VE-Training environment (73 m × 73 m × 13 m)¹ consisted of a grass-textured floor and black-textured walls and ceiling. In the center of the arena, 13 targets (white globes) were arranged in a "figure 8" design.

¹ Technically, the unit of measure is not in meters, per se, since the measurement is of a simulated environment. However, it is a common convention to convert the simulated units to meters for ease of reader comprehension and future replication. The units of measure in the virtual environment software are roughly equivalent to inches. These "virtual" inches were converted to meters.

Figure 6. Diagram of the movement and view controls for Experiment 1.
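To illustrate the conversion described in the footnote above, a minimal helper is sketched below. The footnote states only that the engine's map units are roughly inches; the exact factor used in the dissertation is not given, so the standard 0.0254 m per inch is an assumption.

    METERS_PER_UNIT = 0.0254  # assumes 1 engine unit ~ 1 inch, as the footnote suggests

    def units_to_meters(units: float) -> float:
        """Convert engine map units to (approximate) meters."""
        return units * METERS_PER_UNIT

    def meters_to_units(meters: float) -> float:
        """Convert meters back to engine map units (e.g., when building maps)."""
        return meters / METERS_PER_UNIT

    # Example: the 40-m target distance corresponds to roughly 1575 engine units.
    print(round(meters_to_units(40)))   # prints: 1575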
As shown in the bottom panels of Figure 7, the testing environment for Experiment 1 was a long tunnel (155 m × 5 m × 5 m). The floor and walls of the tunnel were textured with cross-wise black-and-white stripes, long-wise (axial) black-and-white stripes, or a black-and-white random-dot pattern. The ceiling was always black. On training trials, a target (red key card) appeared 40 m in front of the location where participants began each trial (near the center of the tunnel).

Procedure. Prior to participating in the experiment, participants completed a self-report assessment of their previous video-game experience and a training/familiarization task in the VE. The Previous Video Game Experience (PVE) questionnaire provided a self-report assessment of the video-game experience participants had prior to participating in the study. The PVE asked about the types of video games the participants had played, and specifically asked about experience with first-person video games. Participants were asked to list several first-person video games they had played, and to estimate the amount of time they had played first-person video games. These values were used to assess the possible role of video-game experience on the experimental tasks.

After completing the PVE, participants were familiarized with the virtual environment interface by completing a warm-up task. Participants began near the center of the VE-Training environment, looking directly at the arrangement of targets (white globes). On-screen text instructed participants to collect all of the targets, and stated that collected targets would reappear after 13 seconds had transpired. A target was collected by simply colliding with it. In order to complete this task, participants needed to collect all 13 targets before any of them had reappeared. If any targets reappeared before the last target had been collected, the participant would need to collect the targets that had reappeared. When all of the targets had been collected, the screen went black and on-screen text congratulated the participant on completing the task. The minimum time to complete the warm-up task was approximately 10 seconds. This task allowed participants to become familiar with the virtual environment interface while providing a measure of video-game expertise (time to complete the task). Time to complete the task was expected to correlate with self-reported video-game experience. All participants completed VE-Training prior to participating in any of the experiments.

Figure 7. Top panel: VE-Training environment from the participant's view and from above (inset). Bottom panel: The texture conditions (cross-stripe, random dot, axial-stripe) in the testing environment.

Experimental Task. Participants sat at a comfortable distance away from the computer monitor, approximately 0.5 m. The functions of the mouse and keypad were described, and participants were invited to arrange the controllers into a comfortable configuration on the desktop. Before beginning the task, participants read the following on-screen instructions: "You will move through a series of long, straight tunnels. You will be placed at a point near the middle of each tunnel. You must find and collect a keycard that is always located at the same distance away from the start location. The more quickly you find it, the more points you earn. Sometimes the keycard will be invisible, but you must still find it. You have 60 seconds to find the key card.
Collect the key card and then carry it back to the starting location to begin the next tunnel." In addition to the text, participants received verbal prompts and instructions from the experimenter on the first trial of the task to assure that they understood the task. Each of the points in the written instructions was repeated verbally by the experimenter so as to assure participants understood the task and to reduce variability.

All participants experienced three types of trials: training, hidden, and probe. On training trials, participants found a visible target (red key card) in a long tunnel (5 m × 155 m). The surface of the walls and floor of the tunnel were textured with alternating black-and-white stripes (1.625 m wide) that ran perpendicular to the tunnel axis. The target location varied along the perpendicular axis (i.e., center of tunnel or near wall), but was always 40 m from the start location. Participants collected the target by colliding with it. Collecting the target produced a distinct metallic clicking sound. After collecting the target, participants returned to the start location. The start location was unmarked throughout the search component of the trial, but was marked after the target had been found and participants were within 10 m of the start location.

Hidden trials were the same in all ways as training trials, except that the goal was not visible. Participants could find the invisible goal by searching in the same locations in which it had been found during training trials. Colliding with the invisible goal produced the same sound that colliding with a visible goal made during training trials. Upon collecting the invisible goal, participants returned to the start location to end the trial. Hidden trials served to provide a history of finding unseen goals, and set up the conditions necessary for probe trials.

Probe trials tested whether participants were able to correctly estimate the trained distance to the goal. The goal was absent and the distribution of search was recorded. Each probe trial lasted 60 s. To investigate whether optical flow can influence distance estimation, the tunnel texture was manipulated across conditions of baseline (cross-stripe), novel optical flow (random-dot) and no optical flow (axial-stripe). The bottom panel of Figure 7 shows screen captures of each of the texture conditions. One possibility is that participants might not be sensitive to their speed of movement in the task, but instead might use a timing strategy (e.g., counting seconds) to find the target. To test for this possibility, the velocity of movement was set to slow (3 m/s), normal (6 m/s), and fast (10 m/s) for each texture condition. Thus, probe trials consisted of texture/velocity combinations (e.g., random dot/fast, axial stripe/slow), producing nine different probe trials.

The experimental session consisted of 30 trials. The first 5 trials consisted of four training trials and one hidden trial. Of the remaining 25 trials, 6 were quasi-randomly assigned as hidden trials, 9 were quasi-randomly assigned as probe trials, and the remaining 10 were training trials. Each probe type was only presented once to prevent changes in search that may have occurred with repeated testing.

Results and Discussion

Exclusionary criterion. Ten of the 18 participants (4 males, 6 females) produced search distributions that were uninterpretable.
Specifically, a disproportionate amount of time was spent in the extreme ends of the tunnel, where the goal had never been found, rather than searching in the location where the goal had been found. For this reason, only the data of the 8 participants (4 males, 4 females) who produced a single peak in the search distribution and did not spend time in the extreme ends of the tunnel were analyzed.

Previous experience. The average PVE score of males (M = 45.5, Mdn = 22, SD = 49.7) and females (M = 2.25, Mdn = 1.5, SD = 2.6) did not statistically differ, t(6) = 1.737, p = .133, d = 1.23. The average number of seconds to complete VE-Training for males (M = 34.5, Mdn = 11, SD = 48.3) and females (M = 145.25, Mdn = 151, SD = 133.6) did not statistically differ, t(6) = 1.56, p = .17, d = 1.102. The low number of participants and the high variability between participants should be considered as factors in the lack of statistical significance, especially considering the large effect sizes, which suggest that there were meaningful differences between the genders on these measures.

Distance estimation. Figure 8 shows the mean of mean search distributions for all participants in each texture condition across levels of movement velocity. The correct search location was 40 m, as marked by the dotted line. The results were submitted to a three-way mixed ANOVA of Texture (Cross-Stripe, Random Dot, Axial-Stripe) × Velocity (Slow, Normal, Fast) × Gender (Male, Female). There were no main effects of texture, velocity or gender, Fs < 2.66, ps > .11. Only the Texture × Velocity interaction was significant, F(4, 24) = 4.65, p = .006, η² = .437, suggesting that velocity influenced search area differently across texture conditions. To determine which texture conditions were affected by velocity, the results for each texture condition were submitted to separate two-way ANOVAs of Velocity (slow, normal, fast) × Gender. In the cross-stripe condition, there was a main effect of velocity, F(2, 12) = 10.91, p = .002, η² = .645, but no effect of Gender or its interaction, Fs(1, 6) < 1.04. Mean search distance decreased with increased movement velocity, as confirmed by a linear trend analysis, F(1, 6) = 33.9, p < .01, η² = .85. Estimates did not differ from the expected search distance of 40 m under slow and normal velocities, but were significantly underestimated under fast velocity conditions, t(7) = 3.42, p = .011, d = 2.59. In the axial-stripe and random-dot conditions there was no main effect of velocity or gender, and no interaction.

The effect of velocity in the cross-stripe texture condition is interesting. If participants were simply using a timing strategy to make their estimates, they would be expected to underestimate the distance in the slow-velocity condition and overestimate the distance in the fast-velocity condition. However, these results demonstrated the opposite effect: slower movement produced larger estimates and faster movement produced an underestimate. This result suggests that participants detected the gross changes in optical flow (i.e., faster, same, slower), but were not sensitive to the precise amount of change. However, the failure to find a similar effect in the random-dot texture condition leaves open the question of sensitivity to novel optical flow fields.

The lack of an effect or interaction of gender is surprising. Females tend to do more poorly than males in spatial tasks (e.g., Feng, Spence & Pratt, 2007), and in video-game tasks specifically (Sturz, Bodily & Katz, under review).
Yet, despite the observed differences in reported video-game experience and time to complete the training task, gender did not affect distance estimates. The lack of an effect may suggest that humans, irrespective of gender, are equally sensitive to optical flow. However, it could also be an artifact of the small sample size or of this particular method.

One possible problem with the method used in Experiment 1 is the measure of search distributions produced during a fixed interval, rather than discrete estimates. The search distributions varied greatly across participants. Some participants, the ones analyzed here, produced search distributions with a single peak which presumably centered on the estimated goal location. Other participants, rather than focusing search in one area, roamed as much of the search space as possible in the 60 s that were allowed. For these participants, search patterns varied with movement velocity: they covered more space in the high-velocity condition than in the low-velocity condition. One way to remove this confound is to have participants produce a discrete choice rather than a search distribution.

A second potential problem is that participants' search behavior was not entirely without feedback. Specifically, participants may have initially searched where they expected to find the goal but failed to find it, as the goal was not available on probe trials. The negative feedback may have influenced participants to search in other areas for the goal. This limitation, similar to the limitation of measuring search distributions, may also be addressed by adopting a discrete-choice procedure. By making participants produce a discrete distance estimate, every response may be followed by the same outcome.

Figure 8. Mean search distance across velocity for each texture condition. Mean search distance decreased across velocity for the cross-stripe texture (filled dot), but did not change for the random-dot (open dot) and axial-stripe (triangle) textures. The error bars represent the SEM.

III. EXPERIMENT 2

The purpose of Experiment 2 was to improve upon the methodology of Experiment 1 by addressing the limitations introduced by using a search distribution measure, i.e., that velocity manipulations affected the search distribution and that failing to find the goal in the expected location provided negative feedback. Rather than measure a search distribution, Experiment 2 obtained a discrete measure of distance estimation across manipulations of movement speed and tunnel textures. Participants made their estimations by moving to the estimated location, then pressing a button. This discrete measure removed the possibility that movement speed manipulations influenced estimation strategy. Additionally, every estimate produced the same outcome, removing the possibility that feedback, positive or negative, influenced estimations.

Method

Participants. Twenty-six undergraduate students, 16 males and 10 females, enrolled in psychology courses at Auburn University were recruited for this experiment. Participants were at least 19 years old, had normal or corrected-to-normal vision, and were not susceptible to motion sickness. Each participant received a research hour that could serve as extra credit in psychology courses. These participants did not participate in Experiment 1.

Apparatus. The apparatus was the same as described in Experiment 1, with the exception of the control interface.
Participants interacted with the virtual environment using a gamepad (Logitech) with dual analog joysticks and multiple buttons, as shown in Figure 9. Pushing the left joystick forward or backward moved the participant forward or backward in virtual space. Pushing the joystick left or right rotated the view to the left or right, respectively. Participants indicated their distance estimate by pressing any of the four buttons on the right side of the gamepad.

Computer-Generated Environments. Two distinct computer-generated 3D environments were used. First, the VE-Training environment, the same as described in Experiment 1, was used to provide practice interacting with the virtual environment and to provide an assessment of video-gaming aptitude. The second environment was a long tunnel (185 m × 5 m × 5 m) modeled after the environment in Experiment 1. The floor and walls of the tunnel were textured with cross-wise black-and-white stripes, long-wise (axial) black-and-white stripes, or a black-and-white random-dot pattern. The ceiling was always black. On all trials, a target (i.e., a white globe) was visible in the tunnel. Participants began each trial near the center of the tunnel, directly facing the target.

Figure 9. Diagram of the movement controls for Experiment 2.

Procedure. After completing the PVE questionnaire and the VE-Training task, participants were given 3 to 5 practice trials on the current task. During practice trials, the walls and floor of the tunnel had the cross-stripe texture. Participants started, always facing the same direction, near the center of the tunnel environment. A target (white globe) was visible in the tunnel 40 m in front of the start location. Participants navigated toward the target. Upon colliding with the target, the target disappeared and a "click" sound signaled that the target had been "collected." Participants then turned around and navigated back to the starting location, which was unmarked. Upon reaching what was estimated to be the start location, participants pressed a button on the gamepad to mark their estimate and end the trial. A 2-s ITI, during which the screen was black, separated trials. Feedback, in the form of the number of points earned (1 point for estimates within 10 m, 3 points for estimates within 6 m, and 5 points for estimates within 2 m), was provided during the ITI. The purpose of the practice trials was not to attain accurate distance estimates, but to provide instruction on the task. Therefore, all participants received at least 3 practice trials. However, in the case of large errors, which signaled participant confusion about the task, more trials were allowed. No participant received more than 6 practice trials before testing.

The experimental session consisted of 45 trials. Each trial proceeded as described above, with four exceptions. First, no feedback was provided. Each estimate produced a 2-s black screen, and no information about points earned was presented. Participants did accumulate points for accurate estimates, and were informed that they would be able to see the total number of points earned at the end of the session. Second, in order to provide more variety in the task, the distance to the globe varied between 20, 40 and 60 m. The 20- and 60-m distances served only to reduce the monotony of the task and increase the likelihood that participants would pay attention to the distance traveled in each outward path.
Test trials only occurred at the 40-m distance. Test trials were trials in which the surface texture of the floor and walls, the movement speed during the return path, or both were manipulated. As in Experiment 1, the floor and wall textures varied between cross-stripes, axial-stripes, and random dots (see the bottom panel of Figure 7). The cross-stripe texture served as the baseline optical-flow condition. The random-dot texture was the probe optical-flow condition, and the axial-stripe texture was the probe no-optical-flow condition. Similar to Experiment 1, the velocity at which participants moved was also manipulated. In the current study, participants always approached the target at normal velocity (6 m/s). After collecting the target, the velocity was set to slow (3 m/s), normal, or fast (10 m/s). Thus, distance estimates during the return to the start location had to take velocity changes into consideration to be accurate. If participants simply adopted a timing strategy, e.g., counting the number of seconds it took to reach the target and then returning for the same number of seconds, then they would be expected to underestimate the return distance on slow-velocity trials and overestimate the return distance on fast-velocity trials.
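To illustrate that prediction with the task parameters just described (a hypothetical calculation, not data): the outbound leg is 40 m at 6 m/s, so a participant who simply walks back for the same amount of time would stop well short of the start at the slow return speed and well past it at the fast return speed.

    # Predicted return-distance errors under a pure timing strategy in Experiment 2.
    OUTBOUND_DISTANCE_M = 40.0
    OUTBOUND_SPEED_MPS = 6.0
    outbound_time_s = OUTBOUND_DISTANCE_M / OUTBOUND_SPEED_MPS   # about 6.67 s

    for label, return_speed in [("slow", 3.0), ("normal", 6.0), ("fast", 10.0)]:
        predicted_stop = return_speed * outbound_time_s           # distance walked back
        error = predicted_stop - OUTBOUND_DISTANCE_M              # + = overshoot, - = undershoot
        print(f"{label:>6}: stops at {predicted_stop:5.1f} m, error {error:+6.1f} m")
    # Output: slow stops at 20.0 m (-20.0), normal at 40.0 m (+0.0), fast at 66.7 m (+26.7).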
These significant effects were due to the effect of Velocity in the axial-texture condition. Separate two-way mixed ANOVAs of Velocity x Gender for each texture condition produced a main effect of Velocity only in the axial-stripe condition, F(2, 48) = 91.3, p < .01, η² = .792. Participants' estimates varied with return velocity, as shown by a significant linear component of a trend analysis, F(1, 24) = 134.74, p < .001, η² = .849. These results suggest that participants relied exclusively on timing (e.g., counting seconds) to make distance estimates in the absence of optical flow. However, when optical flow was available, participants made accurate distance estimates. The consistently accurate distance estimates across return velocities in the optical-flow texture conditions, but not in the no-optical-flow condition, clearly demonstrate that humans are highly sensitive to optical flow in a VE. Humans can derive, from optical-flow information, their relative speed of travel and make adjustments to the duration of travel to reach a specific distance. Unlike in previous tasks in which participant interaction with the virtual environment was limited (i.e., Frenz, Lappe, Kolesnik & Bührmann, 2007; Lappe, Jenkin & Harris, 2007), participants in the current experiment had complete control over their movement, and produced estimates which were very accurate in both optical-flow texture conditions.

The Role of Experience. Complementing these findings is the lack of any gender effects, especially given the difference in reported experience and in video-game aptitude. It is possible that the ease of interaction with the virtual environment and the simple nature of the task removed any impact that previous video-game experience may have had on task performance. To more directly assess the role experience may have had in the task, scores were categorized as being Low or High for the PVE (Low = less than 10, High = 10 or more; consistent with Sturz, Bodily & Katz, under review) and for VE-Training (Low = more than 15 s, High = 15 s or less). The roles of these factors (PVE-Level and VE-Training Level) were assessed in a four-way mixed ANOVA of Texture (cross-stripe, random dot, axial-stripe) x Velocity (slow, normal, fast) x PVE-Level (Low, High) x VE-Training Level (Low, High). There was no effect of PVE-Level, F(1, 22) = .149, p = .703, η² = .007, or VE-Training Level, F(1, 22) = .038, p = .848, η² = .002, and no interactions with each other or other variables, Fs < .1795, ps > .178, η²s < .075. These results suggest that previous video-game experience and video-game aptitude had no influence on the obtained results, and support the interpretation that optical flow is a common visual experience in natural and virtual environments that provides sufficient information for accurate distance estimation.

Distance estimation and dead reckoning. Distance estimation is only part of the process of dead reckoning. In addition to estimating the distance traveled, observers must also be able to estimate the general direction of travel to be able to return directly and accurately to the point of origin. Experiment 2 clearly demonstrated that optical flow in a desktop virtual environment is sufficient for human participants to make accurate distance judgments. Experiments 3 and 4 assessed the accuracy of dead-reckoning estimates based on optical flow derived from participant-controlled movements and rotations.
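Before turning to those experiments, the timing-strategy prediction invoked above can be made concrete. The following minimal sketch is an illustration only, not part of the original analysis; it simply plugs the task parameters reported above (a 40-m outbound path at 6 m/s, with return velocities of 3, 6, or 10 m/s) into the assumption that a participant reproduces the outbound travel time on the return leg.

```python
# Minimal sketch of the timing-strategy prediction for Experiment 2.
# Assumes the task parameters reported above; not the original analysis code.

OUTBOUND_DISTANCE_M = 40.0   # distance to the target on test trials
OUTBOUND_SPEED_MPS = 6.0     # approach velocity (always "normal")

def timing_strategy_error(return_speed_mps: float) -> float:
    """Predicted signed error (m) if the return leg simply reproduces the
    outbound travel time. Positive = walking too far (overestimate),
    negative = stopping short (underestimate), as in Figure 10."""
    outbound_time_s = OUTBOUND_DISTANCE_M / OUTBOUND_SPEED_MPS  # ~6.7 s
    distance_walked_back = return_speed_mps * outbound_time_s
    return distance_walked_back - OUTBOUND_DISTANCE_M

for label, speed in [("slow", 3.0), ("normal", 6.0), ("fast", 10.0)]:
    print(f"{label:>6}: {timing_strategy_error(speed):+.1f} m")
# slow: -20.0 m, normal: +0.0 m, fast: +26.7 m -- the velocity-dependent
# pattern expected only if optical flow is ignored.
```

Whether the observed errors in the axial-stripe condition approach these predicted values can be read off Figure 10.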
Figure 10. Mean distance estimates across return velocity (slow, normal, fast) for each texture condition. The correct distance is at 0 m. Positive values represent overestimations (walking too far), and negative values represent underestimations (not walking far enough). Mean search distance was equally accurate across velocities for the cross-stripe (filled dot) and random-dot (open dot) texture conditions. However, choice location depended upon return velocity in the axial-stripe texture condition (triangle), in which there was no optical flow.

IV. EXPERIMENT 3

Experiment 3 sought to test humans' ability to make distance and rotational estimates, i.e., to dead reckon, in a desktop VE. Previous research has consistently shown a tendency for humans to produce fairly accurate distance and direction estimates that positively correlate with the correct response, but there is a tendency for human estimates to show regression toward the mean. That is, humans tend to underestimate large distances and directions, and tend to overestimate small distances and directions. This finding has been reported in natural-environment testing with blindfolded and blind participants (Loomis et al., 1993; Klatzky et al., 1999) and in virtual-environment testing in which participants move about a virtual scene by physically walking or by manipulating a joystick (Kearns et al., 2002). There was an interesting difference between the walking and joystick groups in Kearns et al. (2002). The walking group produced more positive rotational error, which is similar to blindfolded participants, but the joystick group produced more negative rotational errors. This difference suggests that kinesthetic and vestibular cues provide the majority of the information used in dead reckoning, and that optical flow alone is insufficient for accurate rotational estimates. However, proprioceptive feedback was not the only difference between the two groups. Bakker, Werkhoven & Passenier (1999) suggested that the human-computer interface may also influence virtual-environment navigation. There are many different interface apparatuses, and some may leave out important navigational information, and may even disrupt natural navigational processes. For example, one possibly critical difference between the HMD and joystick experiences in Kearns et al. is that the HMD participants were free to look up-and-down in addition to side-to-side, whereas joystick participants were restricted to side-to-side rotations. This difference may influence the amount of optical flow experienced in each group (i.e., looking downward while walking increases optical flow). Additionally, since head movements did not correspond with changes in the scene in the joystick condition, there was a great deal of simulator sickness (i.e., motion sickness) that did not occur in the walking condition. Indeed, 50% of the females tested in the joystick condition had to be released from the study due to simulator sickness. It is possible that this greater incidence of simulator sickness is symptomatic of a problem with the apparatus. Experiment 3 adapted the triangle-completion task of Kearns et al. (2002) to the desktop virtual environment to test human dead reckoning. We sought to reproduce the results reported in the previous research, including the accuracy and the tendency to produce stereotypic responding.
Thus, the same environment measurements, similar surface textures, and the itineraries used in Kearns et al. were tested. Additionally, two different interface apparatuses were used. Participants used either a joystick (the left analog stick on a gamepad) or a mouse-and-keyboard configuration to navigate the VE. The mouse interface, like the HMD interface in Kearns et al., allowed participants to look up-and-down in addition to rotating side-to-side, whereas the joystick interface was limited to sideways rotation.

Method

Participants. 29 undergraduate students, 12 males and 17 females, enrolled in psychology courses at Auburn University were recruited for this experiment. Participants were at least 19 years old, had normal or corrected-to-normal vision, and were not susceptible to motion sickness. Each participant received a research hour that could serve as extra credit in psychology courses. Participants were quasi-randomly assigned to one of two interface groups (mouse-and-keyboard or gamepad), with each group having an equal number of each gender.

Apparatus. The apparatus was the same as described in Experiment 1, with the exception of the control interface. Participants were randomly assigned, matched for gender, to use the mouse-and-keyboard interface from Experiment 1 (see Figure 6) or the gamepad interface from Experiment 2 (see Figure 9).

Computer-Generated Environments. Two distinct computer-generated 3D environments were used. First, the VE-Training environment, the same as described in Experiment 1, was used to provide training with the apparatus. The second environment, as shown in Figure 11, was a large, circular arena (radius = 35 m). The surfaces of the walls and floor were textured with a blue-on-black random-dot pattern, and the ceiling was black. The only distinctive landmark was a target (white globe), which marked the locations to which participants should navigate. Apart from the targets, which appeared only one at a time, there were no orienting cues.

Figure 11. Testing environment for Experiment 3. The white globe marks where to move.

Procedure. Participants completed the PVE questionnaire and the VE-Training task in the same manner described in Experiment 1. Afterwards, participants were given practice trials on the current task in the large circular arena. The itinerary was the same on each practice trial. Participants started near the center of the arena and a target appeared 4.25 m in front of them. Participants moved forward (at 3 m/s) until they collided with the target. The collision made the target disappear with a clicking sound. Participants then stopped in that position. When the first target disappeared, a second target appeared 4.25 m to the right of the participant (a 90° rotation), which was out of view. (Only itineraries with right turns were tested, using the same methodology as Kearns et al., 2002.) Participants rotated in place until the second target was centered in their view, and then walked to that location. Upon colliding with the second target, which also disappeared upon contact, participants rotated in place until facing what they estimated to be the start location. Participants then walked to where they estimated the start location to be and, upon reaching that location, pressed the "action" button (the corresponding keyboard or gamepad button) to mark their choice.
Upon doing so, the screen went black and a text message on the screen provided feedback on estimate accuracy in the form of points earned and prompted participants to press the button again when ready to begin the next trial. Estimates within 3 meters of the correct location earned 1 point, within 2 meters earned 3 points, and within 1 meter earned 5 points. Participants practiced until they were able to earn 3 points on two consecutive trials, signifying that they had learned the task. Upon meeting this criterion, participants began the testing phase.

Testing trials proceeded in the same manner as training trials, with two exceptions. First, no feedback was provided. After each trial, even though the number of points earned was recorded, participants did not know how many points they had earned until the end of testing. Second, the leg-lengths and angle of turn differed from trial to trial. The tested leg-lengths, listed in meters as Leg A x Leg B, were 2.25 x 4.45, 4.45 x 2.25, and 4.45 x 4.45. These leg-lengths correspond to Leg A/Leg B ratios of .5, 2, and 1, respectively (rounded values). The Turn 1 value varied between 60°, 90°, and 120°. Figure 12 presents the 9 unique itineraries. Itineraries were presented in 4 randomized blocks, with no itinerary repeating until all had been presented.

Figure 12. The testing itineraries used in the triangle-completion task in Experiment 3. Itineraries are organized with increasing Leg A/Leg B ratio (.5, 1, 2) from left to right, and increasing values of Turn 1 (60°, 90°, 120°) from top to bottom.

Data analysis. Estimates were recorded to an external log in Cartesian coordinates. The mean search location was obtained for each itinerary for each participant. The direction and distance traveled from the end of Leg B to the mean search location were calculated, and the values of Obtained Leg C and Obtained Turn 2 were derived from those calculations. The differences between the Obtained Leg C and Correct Leg C, and between the Obtained Turn 2 and Correct Turn 2, were determined. Difference values that were positive represented overestimations, and differences that were negative represented underestimations. These error values for each itinerary for all participants were submitted to ANOVA for statistical analysis.

Results and Discussion

Previous experience. PVE score and seconds to complete VE-Training were submitted to a two-way MANOVA of Gender (Male, Female) x Input Device (Mouse, Joystick). The average PVE score of males (M = 246.9, SD = 361.4) was higher than for females (M = 0.381, SD = 1.02), F(1, 24) = 7.354, p = 0.012, η² = 0.235, but did not differ across input device, F(1, 24) = 0.772, p = 0.388, η² = 0.031. The average number of seconds to complete VE-Training was not significantly different for males (M = 30.667, SD = 45.49) and females (M = 55.812, SD = 50.257), F(1, 24) = 1.913, p = 0.179, η² = 0.074, and also did not differ across Input Device, F(1, 24) = 0.197, p = 0.661, η² = 0.008. The difference between males' and females' previous video-gaming experience was expected; however, the lack of a gender difference in VE-Training completion is surprising. Removing two male outliers (there were no female outliers) and re-running the analysis, however, revealed a difference in VE-Training time for males and females, F(1, 22) = 7.338, p = 0.013, η² = 0.250. Thus, males came to the task with, on average, more video-game experience than females.
However, since males and females were evenly split between Input Device conditions, the two interface groups did not differ in previous experience.

Contour plots. Figure 13 shows the contour-plot distribution of estimates for all participants in each itinerary. The white lines mark the itineraries and the colors represent the density of responses, with blue representing zero and red representing the highest density. As shown, estimates tended to group around the same area for each itinerary. Estimates were most accurate in the 90°-turn condition, and tended to over-rotate in the 120°-turn condition and under-rotate in the 60°-turn condition.

Direction estimation. Panel A of Figure 14 plots mean Turn 2 error across Turn 1 angles for each leg-length ratio. Mean Turn 2 errors on each itinerary for all participants were submitted to a 4-way mixed ANOVA of Turn 1 (60°, 90°, 120°) x Ratio (.5, 1, 2) x Gender (Male, Female) x Input Device (Mouse, Joystick). There was no main effect of Gender, F(1, 24) = 0.754, p = 0.394, η² = 0.03, and Gender did not interact with any other factor. There was a main effect of Input Device, F(1, 24) = 4.495, p = 0.045, η² = 0.158, and Input Device did not interact with any other factor. The average Turn 2 errors were more positive for the Mouse (M = 11.937, SD = 37.786) than for the Joystick (M = -5.625, SD = 23.609) condition. There was a main effect of Turn 1, F(2, 48) = 19.236, p < 0.001, η² = 0.445, a main effect of Ratio, F(2, 48) = 21.736, p < 0.001, η² = 0.475, and a significant Turn 1 x Ratio interaction, F(4, 96) = 7.049, p < .001, η² = 0.227. These findings suggest that estimates were not influenced by participants' gender. Consistent with Kearns et al. (2002), the joystick condition produced more negative estimates than the more free-look condition (i.e., mouse, head-mounted display). Finally, estimates were affected by the characteristics of the itinerary, Turn 1 and leg-length Ratio.

Figure 13. Contour plots of the distribution of estimates for each triangle-completion itinerary (rows: Turn 1 = 60°, 90°, 120°; columns: Ratio = .5, 1, 2). The diamond marks the start position and the white lines mark the routes of each itinerary. The blue shading marks the area where no estimates occurred. Red marks the area of the highest proportion of estimates.

Distance estimation. Panel B of Figure 14 plots mean Leg C error across Turn 1 angles for each leg-length ratio. Mean distance errors on each itinerary for each participant were submitted to a 4-way mixed ANOVA of Turn 1 (60°, 90°, 120°) x Ratio (.5, 1, 2) x Gender (Male, Female) x Input Device (Mouse, Joystick). There was no main effect of Gender, F(1, 24) = 0.034, p = 0.855, η² = 0.001, no main effect of Input Device, F(1, 24) = 2.71, p = 0.607, η² = .011, and neither factor interacted with any other factor. There was a main effect of Turn 1, F(2, 48) = 138.525, p < 0.001, η² = 0.852, a main effect of Ratio, F(2, 48) = 44.225, p < .001, η² = 0.648, and a significant Turn 1 x Ratio interaction, F(1, 24) = 6.188, p < 0.001, η² = 0.205. These findings suggest that Gender and assigned Input Device did not influence errors in distance estimates. However, the characteristics of the programmed itinerary, Turn 1 and Ratio, did influence errors in distance estimation. Moreover, the effect of Ratio changed over levels of Turn 1.
These findings are consistent with the existing literature (Kearns et al., 2002; Klatzky et al., 1999; Loomis et al., 1993), suggesting that human navigation in a desktop virtual environment is dependent upon the same processes as navigation in natural environments and head-mounted-display virtual environments.

Correct vs. Observed. The previous analyses found that error estimates varied across programmed itineraries. However, it is unclear from these analyses just how accurate the estimates were. Panels C and D of Figure 14 show mean observed Turn 2 and Leg C across correct Turn 2 and Leg C, respectively. If the mean estimates had all been correct, all of the symbols would fall on the diagonal line. Symbols that fall above the line represent overestimating the rotation (Panel C) or distance (Panel D). Symbols that fall below the diagonal line represent underestimating the rotation (Panel C) or distance (Panel D). The plots show a clear tendency for estimates to show regression toward the mean. That is, participants in all conditions tended to overestimate shorter rotations and distances, and underestimate larger rotations and distances. This tendency is evident in the slopes of the regression lines, found in the legend of each panel. This stereotypy of response patterns is consistent with the extant literature (Loomis et al., 1993; Klatzky et al., 1999; Kearns et al., 2002), but has yet to be critically addressed. That is, it is unknown whether the stereotypy of estimates represents human dead-reckoning ability, or whether it is a function of the testing conditions. Nevertheless, the present results clearly demonstrate the ability of common effects to be reproduced in a desktop virtual environment. Having successfully replicated Kearns et al. (2002) and reproduced the common effects that have been reported in the literature, Experiment 4 sought to build upon these findings by testing longer itineraries and a greater range of forced-turn values.

Figure 14. Panel A plots mean Turn 2 error across Turn 1 angle values for each leg-length ratio. Panel B plots mean Leg C error across Turn 1 angle values for each leg-length ratio. In both upper panels, positive values are overestimates, and negative values are underestimates. Panel C plots correct Turn 2 vs. obtained Turn 2 angles, comparing Experiment 3 (Joystick, b = .32; Mouse, b = .39) to the results obtained by Kearns et al. (2002; Joystick, b = .20; Walking, b = .38). Panel D plots correct Leg C vs. obtained Leg C distances, comparing Experiment 3 (Joystick, b = .41; Mouse, b = .32) to Kearns et al. (Joystick, b = .27; Walking, b = .09).

V. EXPERIMENT 4

Experiment 3 reproduced a common finding, that Turn 1 values and leg-length ratio influence Turn 2 and Leg C errors. The purpose of Experiment 4 was to investigate the roles of Turn 1 and the Leg A/Leg B ratio across wider ranges of manipulation.
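Because both Experiments 3 and 4 score errors against the correct Leg C and Turn 2 implied by each programmed itinerary, a minimal sketch of that geometry is given below. It is an illustration only (not the original analysis code) and assumes the conventions described above: right-turn itineraries, Cartesian logging, and Turn 2 defined as the rotation from the current heading to the heading that points back at the start.

```python
# Minimal sketch (illustration only): correct return distance (Leg C) and
# return rotation (Turn 2) for a right-turn triangle-completion itinerary.
import math

def correct_leg_c_and_turn2(leg_a: float, leg_b: float, turn1_deg: float):
    # Walk Leg A "north" from the start at the origin.
    x1, y1 = 0.0, leg_a
    # Turn right by Turn 1 and walk Leg B
    # (headings measured counterclockwise from +x; north = 90 degrees).
    heading = math.radians(90.0 - turn1_deg)
    x2 = x1 + leg_b * math.cos(heading)
    y2 = y1 + leg_b * math.sin(heading)
    # Correct Leg C is the straight-line distance back to the start.
    leg_c = math.hypot(x2, y2)
    # Correct Turn 2 is the clockwise rotation needed to face the start.
    return_heading = math.atan2(-y2, -x2)
    turn2_deg = math.degrees(heading - return_heading) % 360.0
    return leg_c, turn2_deg

# Example: the 4.45 m x 4.45 m, 90-degree itinerary of Experiment 3
# yields a correct Leg C of ~6.3 m and a correct Turn 2 of 135 degrees.
print(correct_leg_c_and_turn2(4.45, 4.45, 90.0))
```

The obtained values are derived the same way, using the vector from the end of Leg B to the participant's mean search location instead of the vector back to the true start; subtracting the correct values from the obtained values gives the signed error scores submitted to the ANOVAs.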
Klatzky et al. (1999) did test blindfolded participants across a range of Turn 1 values from 10° to 170°. The leg-lengths, however, were limited to distances of 1 to 6 m. Loomis et al. (1993) tested Turn 1 values of 60°, 90°, and 120°, and leg-lengths of 2 to 6 m. Testing a wider range of values is important to understanding how these variables influence rotational estimates. Also, given the ubiquity of stereotypic responding in the triangle-completion task, it is important to see whether a stereotypic function is maintained across a wider range of test itineraries. This experiment tested a broader range of Turn 1 values (i.e., 15° to 180°) and longer itinerary legs (i.e., 10, 20 and 40 m) than have been previously tested.

Method

Participants. 32 undergraduate students, 16 males and 16 females, enrolled in psychology courses at Auburn University were recruited for this experiment. Participants were at least 19 years old, had normal or corrected-to-normal vision, and were not susceptible to motion sickness. Each participant received a research hour that could serve as extra credit in psychology courses. Participants were randomly assigned, matched for gender, to one of two turning-side groups. Half of the participants completed left-turn itineraries, and the other half completed right-turn itineraries. By left- or right-turn itineraries, I mean that Turn 1 was a rotation to the left or right side, respectively, on every trial.

Apparatus. The apparatus was the same as described in Experiment 2. All participants used the gamepad interface.

Computer-Generated Environments. Two distinct computer-generated 3D environments were used. First, the VE-Training environment, the same as described in Experiment 1, was used. Second, the test environment, shown in Figure 15, was a large circular arena (radius = 80 m). The surface of the floor was textured with a blue-on-black random-dot pattern. The walls and ceiling were textured with white dots of varying size on a pure black background, giving the impression of a moonless night sky. The targets (white globes) were the same as described in Experiment 3.

Procedure. After completing the PVE questionnaire and the VE-Training task as described in Experiment 1, participants were trained on the current task. In the practice trials, participants started near the center of the arena and a target appeared 20 m in front of them. Participants moved forward (at 3 m/s) until they collided with the target. Touching the target made it disappear with a clicking noise, and a second target appeared 10 m away from the participant at 90° to the left or right (depending on the group assignment) of the participant. Participants stopped in the location of the first target, and rotated in place until the second target was centered in their view. They then navigated to the location of the second target. Upon colliding with the second target, which also disappeared upon contact, participants rotated in place until facing what they estimated to be the start location. Participants then walked to where they estimated the start location to be and, upon reaching that location, pressed the "action" button on the gamepad to mark their choice. Upon doing so, the screen went black and a text message on the screen provided feedback on estimate accuracy in the form of points earned and prompted participants to press the button again when ready to begin the next trial. Estimates within 3 meters of the correct location earned 1 point, within 2 meters earned 3 points, and within 1 meter earned 5 points.
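As a concrete illustration of this feedback rule (the same 1/3/5-point thresholds used during practice in Experiment 3), a minimal sketch is given below. The function name and structure are mine, not the original implementation, and "within" is treated as inclusive of the boundary.

```python
# Minimal sketch of the practice-trial feedback rule in Experiments 3 and 4:
# 1 point within 3 m of the correct location, 3 points within 2 m,
# 5 points within 1 m, 0 points otherwise. Not the original implementation;
# boundary cases are assumed to be inclusive.

def practice_points(error_m: float) -> int:
    error_m = abs(error_m)
    if error_m <= 1.0:
        return 5
    if error_m <= 2.0:
        return 3
    if error_m <= 3.0:
        return 1
    return 0

assert practice_points(0.4) == 5
assert practice_points(1.8) == 3
assert practice_points(2.9) == 1
assert practice_points(4.0) == 0
```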
Participants practiced until they were able to earn 3 points on two consecutive trials, signifying that they had learned the task. Upon meeting this criterion, participants began the testing phase.

Figure 15. Testing environment for Experiment 4. The white globe marks where to move.

During testing, participants walked 11 unique itineraries, divided into the two separate testing phases shown in Figure 16. In the Turn 1 Test phase, the leg-length ratio was held constant (a = 20 m, b = 10 m, a/b ratio = 2) and 6 different values of Turn 1 were presented (15°, 45°, 90°, 135°, 165°, and 180°). In the Ratio Test phase, Turn 1 was held constant (90°) and 5 different levels of the a/b ratio were presented (10/40, 20/40, 20/20, 40/20, and 40/10). The test order was counterbalanced across left- and right-turn itinerary groups. Within each test phase, itineraries were presented in 4 randomized blocks, such that each itinerary was presented once before any were repeated. The Turn 1 Test phase consisted of 24 trials, and the Ratio Test phase consisted of 20 trials.

Figure 16. The testing itineraries used in Experiment 4, organized by test phase (Turn 1 Test: 15°, 45°, 90°, 135°, 165°, 180°; Ratio Test: .25, .5, 1, 2, 4).

Data analysis. Data analysis was conducted in the same manner as described in Experiment 3. Data from left-turn itineraries were transformed to be analyzed together with data from right-turn itineraries.

Results and Discussion

Previous experience. PVE score and seconds to complete VE-Training were submitted to a three-way MANOVA of Gender (Male, Female) x Side (Left, Right) x Order (Turn 1 First, Turn 1 Second). PVE scores did not differ across Side, F(1, 19) = 0.053, p = 0.82, η² = 0.003, or Order, F(1, 19) = 4.09, p = 0.057, η² = 0.177. The average PVE score of males (M = 35.9, SD = 49.63) was higher than for females (M = .8, SD = 1.21), F(1, 19) = 6.1, p = .023, η² = .243. The average number of seconds to complete VE-Training did not differ across Side, F(1, 19) = 0.13, p = 0.72, η² = 0.007, or Order, F(1, 19) = 1.23, p = 0.28, η² = 0.061. The time to complete VE-Training for males (M = 13.5, SD = 10) was not statistically different from that for females (M = 145.6, SD = 242.35), F(1, 19) = 2.62, p = 0.122, η² = 0.121. The difference between males and females on PVE score was expected. However, the lack of a gender difference in VE-Training completion raises questions; it might suggest that the task can be learned easily even by video-game-naïve participants.

Direction estimation. The top-left panel of Figure 17 plots the mean Turn 2 error across leg-length ratios. The mean Turn 2 errors on each itinerary for the Ratio test were submitted to a 4-way ANOVA of Ratio (.25, .5, 1, 2, 4) x Side (left, right) x Phase Order (first, second) x Gender (male, female). There was a main effect of Ratio, F(4, 84) = 11.565, p < 0.001, η² = 0.355, and an interaction of Ratio x Gender, F(4, 84) = 5.9, p < .001, η² = 0.219. No other main effects or interactions were significant. Independent 3-way ANOVAs of Ratio x Phase Order x Side for males and females revealed a main effect of Ratio for females, F(4, 48) = 16.48, p < .001, η² = 0.58, and no other main effects or interactions. The linear component of the trend analysis across levels of Ratio for females was significant, F(1, 12) = 28.59, p < 0.001, η² = 0.704, suggesting that Turn 2 errors became more positive as leg-length ratio increased. The top-right panel of Figure 17 plots the mean Turn 2 error across values of Turn 1.
The mean Turn 2 errors on each itinerary of the Turn 1 test were submitted to a 4-way ANOVA of Turn 1 (15°, 45°, 90°, 135°, 165°, 180°) x Side (left, right) x Phase Order (first, second) x Gender (male, female). There was a main effect of Turn 1, F(5, 105) = 14.927, p < .001, η² = .415, and of Gender, F(1, 21) = 9.763, p = .005, η² = .317, and an interaction of Turn 1 x Gender, F(5, 105) = 9.31, p < .001, η² = .307. No other main effects or interactions were significant. Independent 3-way ANOVAs of Turn 1 x Phase Order x Side for males and females revealed a main effect of Turn 1 for each gender (Males: F(5, 45) = 2.48, p = .045, η² = .216; Females: F(5, 60) = 16.602, p < .001, η² = .58), and no other main effects or interactions. These results suggest that Turn 2 errors varied across levels of Turn 1 for both genders, but did not vary equally across genders.

Distance estimation. The mean return-distance errors on each itinerary for the Ratio test were submitted to a 4-way ANOVA of Ratio (.25, .5, 1, 2, 4) x Side (left, right) x Phase Order (first, second) x Gender (male, female). There was a main effect of Gender, F(1, 21) = 16.02, p = .001, η² = .433, and no other main effects or interactions were significant. Overall, males were more accurate than females, as females tended to underestimate the return distance.

The mean return-distance errors on each itinerary of the Turn 1 test were submitted to a 4-way ANOVA of Turn 1 (15°, 45°, 90°, 135°, 165°, 180°) x Side (left, right) x Phase Order (first, second) x Gender (male, female). There was a main effect of Turn 1, F(5, 105) = 154.07, p < .001, η² = .88, and a significant Turn 1 x Gender interaction, F(5, 105) = 3.42, p = .007, η² = .14. No other main effects or interactions were significant. Independent 3-way ANOVAs of Turn 1 x Order x Side for males and females revealed a main effect of Turn 1 for each gender (Males: F(5, 45) = 119.69, p < .001, η² = .93; Females: F(5, 60) = 54.452, p < .001, η² = .82), and no other main effects or interactions. These results suggest that distance error varied across levels of Turn 1 for both males and females, but did not vary equally across genders.

Correct vs. Observed. The bottom-left and bottom-right panels of Figure 17 show mean observed Turn 2 and Leg C across correct Turn 2 and Leg C, respectively. If the mean estimates had all been correct, all of the symbols would fall on the diagonal line. Symbols that fall above the line represent overestimating the rotation (left panel) or distance (right panel). Symbols that fall below the diagonal line represent underestimating the rotation or distance. The mean data from Experiment 3 are provided for comparison. Mean male estimates were very accurate across all values of Turn 2 (left panel). Mean male estimates were also accurate across values of Leg C, showing only a tendency to underestimate the return distance at longer distances (right panel). The high accuracy on these longer itineraries stands in stark contrast to the poor accuracy and highly stereotyped responses produced in Experiment 3. Mean female estimates also improved relative to Experiment 3. Thus, perhaps counter-intuitively, male and female dead-reckoning estimates improved when tested on longer itineraries.
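The regression slopes reported in the bottom panels of Figure 17 (and in Table 1 of the General Discussion) summarize stereotypy as the slope of observed estimates regressed on correct values: a slope near 1 indicates accurate scaling across itineraries, whereas a slope near 0 indicates that much the same response is given regardless of the itinerary. A minimal sketch of that calculation is shown below; it assumes an ordinary least-squares fit with an intercept, and the example arrays are hypothetical placeholders, not data from the experiments.

```python
# Minimal sketch: stereotypy index as the slope of observed estimates
# regressed on correct values (OLS with an intercept is assumed here).
# The example values below are hypothetical placeholders, not real data.
import numpy as np

def accuracy_slope(correct, observed) -> float:
    """Slope b of observed ~ correct. b = 1 means perfect scaling;
    b near 0 means stereotyped responding across itineraries."""
    correct = np.asarray(correct, dtype=float)
    observed = np.asarray(observed, dtype=float)
    slope, _intercept = np.polyfit(correct, observed, deg=1)
    return slope

# Hypothetical per-itinerary means (degrees) illustrating regression toward
# the mean: small correct turns overestimated, large ones underestimated.
correct_turn2 = [120.0, 135.0, 150.0]
observed_turn2 = [128.0, 134.0, 141.0]
print(round(accuracy_slope(correct_turn2, observed_turn2), 2))  # ~0.43
```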
Figure 17. Top-left panel plots mean angular error across itinerary leg-length ratios for male (circles) and female (triangles) participants. Top-right panel plots mean angular error across values of Turn 1 for male and female participants. Bottom-left and bottom-right panels plot correct vs. obtained return rotation and return distance, respectively. Filled symbols represent mean data from males (circles; rotation b = .96, distance b = .88) and females (triangles; rotation b = .55, distance b = .55) in Experiment 4. Open symbols represent mean data of both genders from Experiment 3 (rotation b = .35, distance b = .37).

VI. GENERAL DISCUSSION

The purpose of this series of studies was to test whether humans can make accurate dead-reckoning estimates (distance and rotation estimates) when limited to the use of optical flow in a desktop VE. Experiment 1 was a pilot study which adapted a task used to test whether honeybees use optical flow to estimate distance (Srinivasan, Zhang, & Bidwell, 1997) to a desktop virtual environment to test human participants. Although the variability in the results limited the conclusions that could be drawn, the observations gave rise to the more controlled methodology of Experiment 2. Experiment 2 clearly demonstrated that humans can accurately estimate distance when limited to optical-flow information. Even when the movement velocity was manipulated mid-trial, participants were sensitive to the relative changes in optical flow and were able to make appropriate adjustments. Experiment 3 demonstrated that optical flow alone is sufficient for accurate dead reckoning, and successfully reproduced the common effects of stereotypic responding found in previous natural- and virtual-environment triangle-completion tasks (Kearns et al., 2002; Klatzky et al., 1999; Loomis et al., 1993). Extending the programmed itineraries in Experiment 4 reduced the stereotypy of responses and improved dead-reckoning accuracy for both males and females. These results demonstrate that the desktop virtual environment is a viable apparatus with which to test spatial cognition. Additionally, these results provide insights on the limits and extent to which optical flow is sufficient for humans to dead reckon in a VE.

The results of Experiment 4 may help explain why previous research produced stereotypic responses, and inform our thinking on how humans dead reckon. Table 1 lists the ranges of Turn 1 and leg lengths, the sensory inputs available to the participant, and the regression slopes (a measure of accuracy) for Experiments 3 and 4 and other triangle-completion studies. The closer the slopes are to a value of 1, the more accurate, and less stereotypic, the responses. The closer the slopes are to a value of 0, the less accurate and more stereotypic the responses. The common feature among all of the studies that produced stereotypic responses was the small range of leg-lengths and turning angles of the tested itineraries.
The study with the smallest range of leg-lengths and turning angles (Kearns et al., 2002) produced the most stereotypic responses. These short distances provide little information, whether visual, vestibular, or kinesthetic, that can serve to distinguish one itinerary from another. Indeed, in many cases, the difference between the itineraries was a matter of two or three steps. If highly similar leg lengths and rotations are difficult to differentiate, it should be no surprise that humans produced stereotypic responses when confronted with very similar itineraries. In contrast to the low-variation itinerary tests, there was less stereotypy in the cases in which a wider range of turns was tested (e.g., Experiment 4 of the present study; Klatzky et al., 1999).

Another finding in this series of studies was that previous video-game experience had no effect on estimate accuracy. Participants reporting no previous video-game experience were just as accurate as participants reporting a lot of previous experience. One possible reason for this null effect was that each participant was given time to practice with the apparatus, and the simple nature of the tasks made them accessible to all participants. Another possible reason is that all sighted humans are sensitive to optical flow. Gauging relative optical flow in a video game may not depend on previous experience with video games, but may be a basic process that all humans bring with them to the task. Optical-flow sensitivity may be as basic as monocular depth cues, motion parallax, and other basic visual cues common to natural and virtual environments. These results suggest that humans should be able to process stimuli provided by any virtual-environment software that produces a dynamic visual experience which reproduces the naturally occurring depth and motion cues.

In addition to finding that the desktop virtual environment is viable for dead-reckoning research, these findings demonstrate the unique utility of desktop virtual-environment software. Natural-environment studies are limited to the space over which an experimenter may exercise experimental control, and they often require that participants' vision be impeded. Immersive virtual environments which utilize an HMD and allow participants to walk in the virtual scene address the problem of removing participant vision, but are severely limited in space. The equipment necessary for the HMD to function limits this research to a laboratory room. Even the newest immersive apparatus, which is a great achievement by any measure, is still limited to an area too small to test participants on the itineraries used in Experiment 4 (Waller, Bachmann, Hodgson & Beall, 2007). The desktop virtual environment, however, is not subject to these limitations of space. The virtual scene can be as large as needed, and the stimuli remain completely under experimental control. The results of the present experiments suggest that the limitations of the desktop VE, i.e., the lack of vestibular and kinesthetic feedback, do not disrupt dead reckoning. Clearly, the generalizability of desktop virtual-environment research to natural settings will always be an important issue, but the present results provide a solid basis for an argument that similar processes are at work in natural and virtual settings.

One of the implications of finding evidence for dead reckoning in a virtual environment is that future research in spatial cognition must take these results into consideration.
For example, research in cognitive mapping generally tests for a cognitive map using a novel-shortcut test. That is, participants may be taught how to travel from Point A to Point B, and from Point A to Point C. Upon learning to travel both routes, participants are tested with the task of traveling from Point B to Point C, to see whether they will pass through Point A or traverse a more direct, novel shortcut between Points B and C. However, participants may take a novel shortcut for a variety of reasons (Bennett, 1996), one of which is dead reckoning. Thus, knowing that humans can dead reckon in a desktop VE, future work in cognitive mapping must design experiments that rule out dead reckoning as a possible alternative explanation.

The following are several questions that arise from the present research.

How do changes in the focus of expansion affect distance estimation? Experiment 2 demonstrated that surface textures on the walls of a tunnel provide sufficient optical flow for humans to accurately estimate distance. The width of the tunnel was held constant at 5 m in the present study. Changing the width of the tunnel changes the size of the focus of expansion. A follow-up study which manipulated the width of the tunnel would inform our thinking on how the focus of expansion affects distance estimates.

Are humans as sensitive to motion parallax as they are to surface textures? Motion parallax can be tested by removing the walls of the tunnel and lining the area to the side of the pathway with random arrangements of objects (e.g., cylinders). These objects can be set at different distances from the edge of the path, changing the speed of motion parallax and the focus of expansion. Distance-estimation accuracy on motion-parallax trials can be compared to accuracy on tunnel trials to assess the informational contribution of each.

Does feedback improve performance? In the present studies, feedback was not provided throughout testing. This was done to remove the possibility that feedback would influence performance. However, the extent to which feedback influences distance estimation and dead reckoning is unknown. How consistently accurate can human estimates be? Is there a limit to the accuracy that can be obtained?

Can timing account for the results of the triangle-completion task? In Experiments 3 and 4, the velocity of movement was held constant while participants engaged in the triangle-completion task. Because of this, it is possible that timing (e.g., counting seconds) could account for the accuracy of their responses. To rule out this possibility, the velocity of movement should be manipulated throughout the trials. Accuracy on velocity-manipulated trials could be compared to velocity-constant trials to assess the possible role of timing.

Does the interface apparatus affect long-distance dead reckoning? In the current study, the influence of the interface apparatus (i.e., mouse-and-keyboard vs. joystick) was tested only on the short itineraries of Experiment 3. Perhaps the short itineraries did not provide enough contrast between itineraries to draw out the differences between the interfaces. The interface apparatuses should be compared on longer itineraries to assess the possible role that they might have on navigation and control in virtual environments.
Table 1

Triangle-Completion Studies, the Sensory Inputs, Range of Turn 1, Range of Leg Lengths, and Slope of the Regression Line for Correct vs. Observed Rotations and Distances

Study                                  Sensory inputs   Turn 1 range (deg)   Rotation slope   Leg range (m)   Distance slope
Experiment 4 (Males) - Joystick        O                165                  0.96             30              0.88
Experiment 4 (Females) - Joystick      O                165                  0.55             30              0.55
Experiment 3 - Joystick                O                60                   0.32             2               0.41
Experiment 3 - Mouse                   O                60                   0.39             2               0.32
Kearns et al. (2002) - Joystick        O                60                   0.20             2               0.27
Kearns et al. (2002) - Walking         O, V, K          60                   0.38             2               0.09
Loomis et al. (1993) - Blindfold       V, K             60                   0.56             4               0.63
Loomis et al. (1993) - Adv. Blind      V, K             60                   0.62             4               0.57
Loomis et al. (1993) - Con. Blind      V, K             60                   0.45             4               0.51
Klatzky et al. (1999) - Blindfold      V, K             160                  0.69             5               0.54
Klatzky et al. (1999) - Partial Vis.   O, V, K          160                  0.73             5               0.64

Note. O = Optical Flow, V = Vestibular, K = Kinesthetic.

REFERENCES

Bakker, N. H., Werkhoven, P. J., & Passenier, P. O. (1999). The effects of proprioceptive and visual feedback on geographical orientation in virtual environments. Presence, 8, 36-53.

Bennett, A. T. D. (1996). Do animals have cognitive maps? The Journal of Experimental Biology, 199, 219-224.

Bruggeman, H., Zosh, W., & Warren, W. H. (2007). Optic flow drives human visuo-locomotor adaptation. Current Biology, 17, 2035-2040.

Chance, S. S., Gaunet, F., Beall, A. C., & Loomis, J. M. (1998). Locomotion mode affects the updating of objects encountered during travel: The contribution of vestibular and proprioceptive inputs to path integration. Presence, 7, 168-178.

Cheng, K., Shettleworth, S. J., Huttenlocher, J., & Rieser, J. J. (2007). Bayesian integration of spatial information. Psychological Bulletin, 133, 625-637.

Collett, T. S., & Collett, M. (2000). Path integration in insects. Current Opinion in Neurobiology, 10, 757-762.

Etienne, A. S., Boulens, V., Maurer, R., Rowe, T., & Siegrist, C. (2000). A brief view of known landmarks reorientates path integration in hamsters. Naturwissenschaften, 87, 494-498.

Etienne, A. S., & Jeffery, K. J. (2004). Path integration in mammals. Hippocampus, 14, 180-192.

Gibson, B. M. (2001). Cognitive maps not used by humans (Homo sapiens) during a dynamic navigational task. Journal of Comparative Psychology, 115, 397-402.

Jansen-Osmann, P., & Berendt, B. (2002). Investigating distance knowledge using virtual environments. Environment and Behavior, 34, 178-193.

Kearns, M. J., Warren, W. H., Duchon, A. P., & Tarr, M. J. (2002). Path integration from optical flow and body senses in a homing task. Perception, 31, 349-374.

Klatzky, R. L., Loomis, J. M., Golledge, R. G., Cicinelli, J. G., Doherty, S., & Pellegrino, J. W. (1990). Acquisition of route and survey knowledge in the absence of vision. Journal of Motor Behavior, 22, 19-43.

Klatzky, R. L., Beall, A. C., Loomis, J. M., Golledge, R. G., & Philbeck, J. W. (1999). Human navigation ability: Tests of the encoding-error model of path integration. Spatial Cognition and Computation, 1, 31-65.

Lackner, J. R., & DiZio, P. (2005). Vestibular, proprioceptive, and haptic contributions to spatial orientation. Annual Review of Psychology, 56, 115-147.

Loomis, J. M., Klatzky, R. L., Golledge, R. G., Cicinelli, J. G., Pellegrino, J. W., & Fry, P. A. (1993). Nonvisual navigation by blind and sighted: Assessment of path integration ability. Journal of Experimental Psychology: General, 122, 73-91.

Maurer, R., & Séguinot, V. (1995). What is modelling for? A critical review of the models of path integration. The Journal of Theoretical Biology, 175, 457-475.

May, M., & Klatzky, R. L. (2000). Path integration while ignoring irrelevant movement.
Journal of Experimental Psychology: Learning, Memory, and Cognition, 26(1), 169-186.

Mittelstaedt, M.-L., & Mittelstaedt, H. (1980). Homing by path integration in a mammal. Naturwissenschaften, 67, 566-567.

Mohler, B. J., Thompson, W. B., Creem-Regehr, S. H., Willemsen, P., Pick, H. L., Jr., & Rieser, J. J. (2007). Calibration of locomotion resulting from visual motion in a treadmill-based virtual environment. Journal of Experimental Psychology: Human Perception and Performance, 21, 1-15.

Müller, M., & Wehner, R. (1988). Path integration in desert ants, Cataglyphis fortis. Proceedings of the National Academy of Sciences, 85, 5287-5290.

Rieser, J. J., Pick, H. L., Jr., Ashmead, D., & Garing, A. (1995). Calibration of human locomotion and models of perceptual-motor organization. Journal of Experimental Psychology: Human Perception and Performance, 21, 480-497.

Ruddle, R. A., Payne, S. J., & Jones, M. J. (1998). Navigating large-scale "desk-top" virtual buildings: Effects of orientation aids and familiarity. Presence, 7, 179-192.

Salas, C., Broglio, C., & Rodríguez, F. (2003). Evolution of forebrain and spatial cognition in vertebrates: Conservation across diversity. Brain, Behavior & Evolution, 62, 72-82.

Séguinot, V., Maurer, R., & Etienne, A. S. (1993). Dead reckoning in a small mammal: The evaluation of distance. Journal of Comparative Physiology A, 173, 103-113.

Shettleworth, S. (1998). Cognition, evolution, and behavior. New York: Oxford University Press.

Shettleworth, S., & Sutton, J. E. (2005). Multiple systems for spatial learning: Dead reckoning and beacon homing in rats. Journal of Experimental Psychology: Animal Behavior Processes, 31, 125-141.

Srinivasan, M. V., Zhang, S. W., & Bidwell, N. J. (1997). Visually mediated odometry in honeybees. The Journal of Experimental Biology, 200, 2513-2522.

Sturz, B. R., Bodily, K. D., & Katz, J. S. (2006). Evidence against integration of spatial maps in humans. Animal Cognition, 9, 207-217.

Sturz, B. R., Bodily, K. D., & Katz, J. S. (under review). Problem solving in a desktop virtual environment.

Sturz, B. R., Bodily, K. D., Katz, J. S., & Kelly, D. M. (under review). Evidence against integration of spatial maps by humans (Homo sapiens): Generality across search tasks.

Wallace, D. G., Choudhry, S., & Martin, M. M. (2006). Comparative analysis of movement characteristics during dead-reckoning-based navigation in humans and rats. Journal of Comparative Psychology, 120, 331-344.

Waller, D., Bachmann, E., Hodgson, E., & Beall, A. C. (2007). The HIVE: A huge immersive virtual environment for research in spatial cognition. Behavior Research Methods, 39, 835-843.

Wang, R. F., & Spelke, E. S. (2000). Updating egocentric representations in human navigation. Cognition, 77, 215-250.

Warren, W. H., Jr., Kay, B. A., Zosh, W. D., Duchon, A. P., & Sahuc, S. (2001). Optic flow is used to control human walking. Nature Neuroscience, 4, 213-216.

Wehner, R., Boyer, M., Loertscher, F., Sommer, S., & Menzi, U. (2006). Ant navigation: One-way routes rather than maps. Current Biology, 16, 75-79.

Whishaw, I. Q., & Wallace, D. G. (2003). On the origins of autobiographical memory. Behavioral Brain Research, 138, 113-119.

Wittlinger, M., Wehner, R., & Wolf, H. (2006). The ant odometer: Stepping on stilts and stumps. Science, 312, 1965-1967.