GPS and Inertial Sensor Enhancements for Vision-based Highway Lane Tracking Except where reference is made to the work of others, the work described in this thesis is my own or was done in collaboration with my advisory committee. This thesis does not include proprietary or classified information. Joshua M. Clanton Certificate of Approval: David M. Bevly, Co-chair Assistant Professor Mechanical Engineering A. Scottedward Hodel, Co-chair Associate Professor Electrical and Computer Engineering John Hung Associate Professor Electrical and Computer Engineering Stephen L. McFarland Dean Graduate School GPS and Inertial Sensor Enhancements for Vision-based Highway Lane Tracking Joshua M. Clanton A Thesis Submitted to the Graduate Faculty of Auburn University in Partial Fulfillment of the Requirements for the Degree of Master of Science Auburn, Alabama August 7, 2006 GPS and Inertial Sensor Enhancements for Vision-based Highway Lane Tracking Joshua M. Clanton Permission is granted to Auburn University to make copies of this thesis at its discretion, upon the request of individuals or institutions and at their expense. The author reserves all publication rights. Signature of Author Date of Graduation iii Vita Joshua Michael Clanton was born on October 19th, 1981 in Spartanburg, South Carolina, the first and only child of James Michael and Mary Leigh Clanton. Josh grew up in East-central Alabama, attending school in Alexander City. At a young age, Josh developed an interest in math and science. Josh attended Benjamin Russell High School in Alexander City, where he met and dated Gina Rae Thomas, who thankfully put up with him long enough to become his wife in March of 2006. After high school, Josh (and Gina) attended Auburn University, where he first majored in Computer Engineering, but then saw the light and changed majors to Electrical Engineering. As an undergraduate, Josh garnered an interest in control systems and decided to pursue a Master?s degree in Electrical Engineering. Currently, Josh is doing research work for Dr. David M. Bevly?s GPS and Vehicle Dynamics Lab at Auburn University. Upon completion of his Master?s degree, Josh and Gina will be moving to Orlando, Florida where he has accepted a position with Lockheed Martin as a Systems Engineer. In his free time (which is minimal), Josh?s enjoys playing golf, deer hunting and Auburn football. iv Thesis Abstract GPS and Inertial Sensor Enhancements for Vision-based Highway Lane Tracking Joshua M. Clanton Master of Science, August 7, 2006 (B.E.E., Auburn University, 2004) 140 Typed Pages Directed by A. Scottedward Hodel & David M. Bevly For the past decade much research in the Intelligent Transportation Systems (ITS) community has been devoted to the topic of lane departure warning (LDW). A significant portion of highway fatalities each year are attributed to vehicle lane departure. Many automobile manufacturers are developing advanced driver assistance systems, many of which include subsystems that help prevent un-intended lane departure. A consistent approach among these systems is to alert the driver when an un-intended lane departure is predicted. To predict a possible lane departure, a vision system mounted on the vehi- cle detects the lane markings on the road and determines the vehicle?s orientation and position with respect to the detected lane lines. These vision-based systems suffer from performance limitations that are brought forth by environmental constraints. 
Therefore, it is desirable to add support from additional sensors to compensate when the vision system loses its ability to perform lane departure warning. The first goal of this research is to present current methods of vision-based LDW v systems and to explore methods of sensor enhancement to assist the vision system. Sec- ond, several image processing and computer vision algorithms will be implemented as demonstration of how they could be used in a vision-based LDW system. Finally, the main goal of this research is to develop a method using additional sensors such as GPS and inertial sensors to enhance vision-based lane detection. The combination of GPS and the vision system together with a high accuracy lane- level map will allow a more robust highway lane tracking system. Kalman filtering is employed to incorporate inertial sensor inputs and measurements from the GPS receiver and vision system to estimate the vehicle states relative to highway lane tracking. Vehicle lateral offset, lateral velocity and heading angle are estimated to provide lane tracking when the vision-based lateral offset measurement fails. vi Acknowledgments First and foremost I would like to thank God for His presence in my life and His guidance throughout the last 24 years. I would also like to thank my parents, Mike and Judy, and my aunt Linda, for their emotional (and financial) support. They have nurtured me and led me to be the person I am today. I am extremely lucky to have family who care so much, and have always gone out of their way to give me everything I have needed to become a successful person. To my wife, Gina, I am so thankful to have you in my life. Your love and patience over the last 8 years is deeply appreciated, and I look forward to starting a new adventure in life with you. Last but certainly not least, I would like to thank my co-advisors, Dr. Hodel and Dr. Bevly. I have learned a great deal from both of you, not just how to approach engineering problems but how to approach issues in life. Both of you have been great mentors and I feel extremely privileged to have worked and learned under both of you. vii Style manual or journal used Journal of Approximation Theory (together with the style known as ?aums?). Bibliography follows van Leunen?s A Handbook for Scholars. Computer software used The document preparation package TEX (specifically LATEX) together with the departmental style-file aums.sty. viii Table of Contents List of Figures xii 1 Introduction 1 1.1 Research Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 1.2 Research Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 1.3 Thesis Overview and Organization . . . . . . . . . . . . . . . . . . . . . . 4 2 Previous/Related Work 6 2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 2.2 Lane departure warning with computer vision/image processing . . . . . . 6 2.2.1 Tasks of a lane departure warning system . . . . . . . . . . . . . . 6 2.2.2 Lane boundary detection . . . . . . . . . . . . . . . . . . . . . . . 7 2.2.3 Lane boundary modeling . . . . . . . . . . . . . . . . . . . . . . . 9 2.2.4 Parameters for lane departure warning . . . . . . . . . . . . . . . . 11 2.2.5 Performance limitations . . . . . . . . . . . . . . . . . . . . . . . . 13 2.3 Vision based LDW with additional sensor enhancement . . . . . . . . . . 14 2.3.1 Vison-based LDW with INS enhancements . . . . . . . . . . . . . 
14 2.3.2 Vision-based LDW enhancements with GPS/INS and high accu- racy mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 2.3.3 High accuracy highway mapping . . . . . . . . . . . . . . . . . . . 17 2.4 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 3 Vision Research 20 3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 3.2 Optical Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 3.2.1 Optical Flow - Horn & Schunck?s method . . . . . . . . . . . . . . 21 3.2.2 Experimental results . . . . . . . . . . . . . . . . . . . . . . . . . . 28 3.2.3 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38 3.3 Harris corner detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39 3.3.1 Moravec?s method . . . . . . . . . . . . . . . . . . . . . . . . . . . 39 3.3.2 Auto-correlation enhancement . . . . . . . . . . . . . . . . . . . . 40 3.3.3 Corner response function . . . . . . . . . . . . . . . . . . . . . . . 43 3.3.4 Implementation and testing . . . . . . . . . . . . . . . . . . . . . . 43 4 GPS Bias Measurement using Vision 47 4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47 4.2 Offset measurement from camera system . . . . . . . . . . . . . . . . . . . 47 4.3 Offset measurement from GPS/MAP . . . . . . . . . . . . . . . . . . . . . 48 ix 4.4 Map Validation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48 4.5 Error calculation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49 4.5.1 Heading angle and offset between GPS and LDW . . . . . . . . . . 51 4.6 Experimental data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53 4.6.1 GPS/LDW lane estimate . . . . . . . . . . . . . . . . . . . . . . . 54 4.6.2 Data with DGPS/LDW/MAP . . . . . . . . . . . . . . . . . . . . 56 4.6.3 Data with GPS/LDW/MAP . . . . . . . . . . . . . . . . . . . . . 56 5 GPS/INS/Vision for Highway Lane Tracking 62 5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62 5.2 Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63 5.2.1 Simple vehicle model . . . . . . . . . . . . . . . . . . . . . . . . . . 63 5.2.2 Inputs and measurements . . . . . . . . . . . . . . . . . . . . . . . 64 5.2.3 Estimation model . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68 5.2.4 Simulation results . . . . . . . . . . . . . . . . . . . . . . . . . . . 73 5.2.5 Simulating a lane detection failure . . . . . . . . . . . . . . . . . . 73 5.2.6 Addition of a vision-based velocity sensor . . . . . . . . . . . . . . 78 5.3 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80 6 Conclusions 81 6.1 Successes of this Research . . . . . . . . . . . . . . . . . . . . . . . . . . . 81 6.2 Suggestions for further research . . . . . . . . . . . . . . . . . . . . . . . . 82 Bibliography 84 Appendices 86 A Experimental Data for Lane Tracking Estimation 87 A.1 Experimental Data - Starfire DGPS receiver & LDW . . . . . . . . . . . . 87 A.1.1 Lane Tracking Estimation with Starfire (DGPS) /LDW . . . . . . 93 A.2 Experimental Data - GPS receiver & LDW . . . . . . . . . . . . . . . . . 96 A.2.1 Lane Tracking Estimation with UBLOX (GPS) /LDW . . . . . . . 99 A.3 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105 B A Lane-level Map Database 109 B.1 Data Collection . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . 109 B.2 Coordinate System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109 B.3 Straight Sections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111 B.3.1 Data Conditioning and Curve Fitting . . . . . . . . . . . . . . . . 111 B.3.2 Receiver offset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112 x C Test Setup and Instrumentation 115 C.1 Test Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115 C.1.1 Test Bed/Equipment . . . . . . . . . . . . . . . . . . . . . . . . . . 115 C.1.2 Experimental System . . . . . . . . . . . . . . . . . . . . . . . . . 115 C.2 Instrumentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116 C.2.1 Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116 C.2.2 Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117 C.2.3 Vehicle Instrumentation . . . . . . . . . . . . . . . . . . . . . . . . 117 xi List of Figures 3.1 4x4x2 image slice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25 3.2 Original synthetic image used to create sequence for optical flow calculation 29 3.3 First four frames of the synthetic blimp image sequence and the computed optical flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 3.4 Next four frames of the synthetic blimp image sequence and the computed optical flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 3.5 Last four frames of the synthetic blimp image sequence and the computed optical flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 3.6 First four images in original balloon sequence . . . . . . . . . . . . . . . . 34 3.7 First four frames of the balloon image sequence and the computed optical flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35 3.8 Next four frames of the balloon image sequence and the computed optical flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36 3.9 Last four frames of the balloon image sequence and the computed optical flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37 3.10 Detected corners with Harris method . . . . . . . . . . . . . . . . . . . . . 46 4.1 LDW lateral offset diagram . . . . . . . . . . . . . . . . . . . . . . . . . . 48 4.2 GPS lateral offset diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . 49 4.3 Map validation for North side of track . . . . . . . . . . . . . . . . . . . . 50 4.4 Map validation for South side of track . . . . . . . . . . . . . . . . . . . . 50 4.5 Error between LDW lateral offset and GPS lateral offset . . . . . . . . . . 51 4.6 GPS and LDW offset relative to vehicle angle within the lane . . . . . . . 52 xii 4.7 Estimating lane position from LDW/GPS . . . . . . . . . . . . . . . . . . 55 4.8 DGPS/LDW Estimated Lanes vs Mapped Lanes (South) . . . . . . . . . 57 4.9 Left and right error between DGPS/MAP offset and LDW offset . . . . . 57 4.10 DGPS/LDW Estimated Lanes vs Mapped Lanes (North) . . . . . . . . . 58 4.11 Left and right error between DGPS/MAP offset and LDW offset . . . . . 58 4.12 GPS/LDW Estimated Lanes vs Mapped Lanes (South) . . . . . . . . . . 59 4.13 Left and right error between GPS/MAP offset and LDW offset . . . . . . 60 4.14 GPS/LDW Estimated Lanes vs Mapped Lanes (North) . . . . . . . . . . 60 4.15 Left and right error between GPS/MAP offset and LDW offset . 
. . . . . 61 5.1 Simulated vehicle path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65 5.2 Actual lateral acceleration and yaw rate . . . . . . . . . . . . . . . . . . . 66 5.3 Simulated accelerometer and gyro readings . . . . . . . . . . . . . . . . . 67 5.4 Simulated LDW and GPS measurements . . . . . . . . . . . . . . . . . . . 68 5.5 Estimates for states 1 through 3 . . . . . . . . . . . . . . . . . . . . . . . 74 5.6 Estimates for states 4 through 6 . . . . . . . . . . . . . . . . . . . . . . . 75 5.7 State estimation residuals . . . . . . . . . . . . . . . . . . . . . . . . . . . 76 5.8 Lateral offset estimation with simulated camera outage . . . . . . . . . . . 77 5.9 Lateral offset estimation with simulated camera outage (with vision- velocity sensor) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79 5.10 Estimation error of lateral offset during lane detection failure . . . . . . . 79 A.1 Vehicle track, NCAT South Straight section . . . . . . . . . . . . . . . . . 88 A.2 Respective offset measurements . . . . . . . . . . . . . . . . . . . . . . . . 88 A.3 (DGPS/LDW, South straightaway) . . . . . . . . . . . . . . . . . . . . . . 89 xiii A.4 Accelerometer and Gyro data . . . . . . . . . . . . . . . . . . . . . . . . . 90 A.5 Vehicle track, NCAT North Straight section . . . . . . . . . . . . . . . . . 90 A.6 LDW and GPS/MAP offset measurements . . . . . . . . . . . . . . . . . . 91 A.7 Respective offset measurements . . . . . . . . . . . . . . . . . . . . . . . . 91 A.8 (DGPS/LDW, North straightaway) . . . . . . . . . . . . . . . . . . . . . . 92 A.9 Accelerometer and Gyro data . . . . . . . . . . . . . . . . . . . . . . . . . 92 A.10 Sensor inputs and LDW & GPS/MAP offset measurements . . . . . . . . 94 A.11 Estimated and measured GPS bias . . . . . . . . . . . . . . . . . . . . . . 95 A.12 Estimated and measured lateral offset . . . . . . . . . . . . . . . . . . . . 95 A.13 Estimation error during camera outage . . . . . . . . . . . . . . . . . . . . 96 A.14 Sensor inputs and LDW & GPS/MAP offset measurements . . . . . . . . 97 A.15 Estimated and measured GPS bias . . . . . . . . . . . . . . . . . . . . . . 98 A.16 Estimated and measured lateral offset . . . . . . . . . . . . . . . . . . . . 98 A.17 Vehicle track, NCAT South Straight section . . . . . . . . . . . . . . . . . 99 A.18 Respective offset measurements . . . . . . . . . . . . . . . . . . . . . . . . 100 A.19 (GPS/LDW, South straightaway) . . . . . . . . . . . . . . . . . . . . . . . 100 A.20 Vehicle track, NCAT North Straight section . . . . . . . . . . . . . . . . . 101 A.21 LDW and GPS/MAP offset measurements . . . . . . . . . . . . . . . . . . 101 A.22 Respective offset measurements . . . . . . . . . . . . . . . . . . . . . . . . 102 A.23 Bias Measurement (GPS/LDW, North straightaway) . . . . . . . . . . . . 102 A.24 Sensor inputs and LDW & GPS/MAP offset measurements . . . . . . . . 103 A.25 Estimated and measured GPS bias . . . . . . . . . . . . . . . . . . . . . . 104 A.26 Estimated and measured lateral offset . . . . . . . . . . . . . . . . . . . . 104 xiv A.27 Estimation error during camera outage . . . . . . . . . . . . . . . . . . . . 105 A.28 Sensor inputs and LDW & GPS/MAP offset measurements . . . . . . . . 107 A.29 Estimated and measured GPS bias . . . . . . . . . . . . . . . . . . . . . . 108 A.30 Estimated and measured lateral offset . . . . . . . . . . . . . . . . . . . . 108 B.1 Data collection setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
110 B.2 NCAT Track - Looking down from sky . . . . . . . . . . . . . . . . . . . . 110 B.3 NCAT Track - Looking up from inside Earth . . . . . . . . . . . . . . . . 111 B.4 DGPS Receiver to tire offset (north) . . . . . . . . . . . . . . . . . . . . . 112 B.5 DGPS Receiver to tire offset (south) . . . . . . . . . . . . . . . . . . . . . 114 C.1 Experimental System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116 C.2 Main Screen - LDW . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118 C.3 Logged Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118 C.4 GPS Setup/Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119 C.5 CAN Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119 C.6 M45 Data Acquisition Computer . . . . . . . . . . . . . . . . . . . . . . . 120 C.7 12V power sources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120 C.8 DC/AC power inverter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121 C.9 Starfire GPS receiver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121 C.10 Starfire on M45 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122 C.11 Starfire on M45 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122 C.12 CANbus interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123 C.13 User I/O for M45 computer . . . . . . . . . . . . . . . . . . . . . . . . . . 123 C.14 Beeline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124 C.15 Beeline and Starfire . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124 C.16 Lane Departure Warning camera . . . . . . . . . . . . . . . . . . . . . . . 125 xv Chapter 1 Introduction For the past decade much research in the Intelligent Transportation Systems (ITS) community has been devoted to the topic of lane departure warning (LDW). A significant portion of highway fatalities each year are attributed to vehicle lane departure. Many automobile manufacturers are developing advanced driver assistance systems, many of which include subsystems that help prevent un-intended lane departure. A consistent approach among these systems is to alert the driver when an un-intended lane departure is predicted. To predict a possible lane departure, a vision system mounted on the vehicle detects the lane markings on the road and determines the vehicle?s orientation and position with respect to the detected lane lines. These vision-based systems suffer from performance limitations that are brought forth by environmental constraints. Therefore, it is desirable to add support from additional sensors to compensate when the vision system loses its ability to perform lane departure warning. The first goal of this research is to present current methods of vision-based LDW systems and to explore methods of sensor enhancement to assist the vision system. Second, several image processing and computer vision algorithms will be implemented as demonstration of how they could be used in a vision-based LDW systems. Finally, the main goal of this research is to develop a method using additional sensors such as GPS and inertial sensors to enhance vision-based lane detection. The combination of GPS and the vision system together with a high accuracy lane- level map will allow a more robust highway lane tracking system. 
Kalman filtering is employed to incorporate inertial sensor inputs and measurements from the GPS receiver and vision system to estimate the vehicle states relative to highway lane tracking. Vehicle lateral offset, lateral velocity and heading angle are estimated to provide lane tracking when the vision-based lateral offset measurement fails. A simulation of this lane tracking estimation is performed using data generated from a simple vehicle model. The approach is then tested on an experimental system which includes a vehicle test bed equipped with inertial sensors, a GPS receiver and a lane departure warning camera system. A lane-level map is created at a test-track facility to allow the GPS to assist in lane tracking.

1.1 Research Motivation

The motivation behind lane departure warning as a whole is the high number of highway fatalities attributed to vehicle lane or road departure. Keeping vehicles on the roadway and in the proper lane is therefore a primary goal for those developing driver safety and assistance systems. The following are some lane departure fatality statistics collected over the last decade.

- 1/3 of highway deaths in 1991 were a result of vehicle lane departure [10]
- 13,000 fatalities in 1999 resulted from vehicle lane departure [6]
- 21,000 roadway fatalities in 2004 resulted from vehicle lane departure (NHTSA)
- Studies have shown a 20%-40% reduction in accidents for vehicles with Active Stability Control (ESP, ESC, VDC, Lane Keeping, etc.) (NHTSA)

The motivation behind this research in particular is to improve on the vision-based lane departure warning systems that are currently in production. Several companies have vision-based LDW systems commercially available. Most of these LDW systems use a vision sensor to perform lane departure warning. These vision-based systems are susceptible to failures and inaccuracies that result from environmental constraints. These constraints are discussed in detail in Chapter 2.

The intent of this research is to add additional sensor capability to the LDW vision system to form a more robust and reliable lane tracking system. A global positioning system (GPS), inertial sensors and a high accuracy map are fused with the LDW vision system to accomplish this goal. The goal of the sensor enhancement is to serve as a backup method of lane tracking when the vision system undergoes failures due to some type of environmental condition.

1.2 Research Contributions

The main contribution of this research is that it provides a more robust lane tracking method that could be implemented to assist a vision-based lane departure warning system. This method utilizes sensors, such as GPS, inertial sensors and LDW vision systems, that already come standard or are available as options on many automobile models. These sensors can be fused together to perform lane departure warning, even when the LDW camera fails to detect lane lines. This method is validated in simulation, and an experimental test bed has been put in place for further experimental validation. The proposed method supports the overall advancement of vehicle safety systems.

Aside from this lane tracking method, other contributions are given by this research. As a component of the lane tracking system, this research shows how vision can be used to correct GPS errors using a high accuracy map. The bias of the GPS receiver can be directly measured, thereby increasing the accuracy of the receiver to a much higher level.
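The idea of directly measuring the receiver bias can be illustrated with a short MATLAB sketch. This is only a minimal illustration under assumed inputs (time-aligned lateral-offset vectors y_ldw and y_gpsmap and a simple moving-average smoother); it is not the calculation developed in Chapter 4.

    function b_hat = gps_bias_sketch(y_ldw, y_gpsmap, window)
    % Minimal sketch (not the thesis implementation): treat the difference between
    % the GPS/map lateral offset and the vision (LDW) lateral offset as a slowly
    % varying receiver bias and smooth it with a moving average.
    % y_ldw, y_gpsmap: time-aligned lateral offsets [m]; window: samples to average.
        if nargin < 3
            window = 50;                      % assumed smoothing window
        end
        raw_bias = y_gpsmap - y_ldw;          % instantaneous bias samples
        kernel   = ones(window, 1) / window;  % moving-average weights
        b_hat    = filter(kernel, 1, raw_bias);
    end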
Also, a high accuracy map has been created as part of the research experimental test bed. This map is a partial map of a test track, which is used for various vehicle testing. The high accuracy map is available for further research in this area, as well as for other research projects using navigation or location information.

1.3 Thesis Overview and Organization

In Chapter 2 of this thesis an overview of previously developed lane departure warning techniques is given. First, several techniques of vision based lane tracking are given, along with the environmental constraints that limit their performance. This chapter will also present other research that attempts to use multi-sensor lane tracking. Various methods have been proposed in recent years on using vision along with other sensors such as GPS (Differential GPS and Real-time kinematic (RTK) GPS) and inertial measurement units (IMU) for lane tracking. The strengths and weaknesses of these methods are discussed.

The test-bed for this research includes a pre-installed LDW vision system. The method of lane detection used by this system is proprietary and is not known. To understand how the lane detection might work, Chapter 3 gives an overview of some basic computer vision algorithms that could represent the functionality of this vision based lane detection system. Edge detection is most likely a key component of the lane detection method, but since the methods of edge detection are widely researched and employed, slightly more advanced algorithms are discussed. Methods such as optical flow and corner detection, which are dependent on edge detection, are presented in an effort to explore how these algorithms could be employed in an LDW vision system. The methods are discussed as well as simulation results showing their functionality.

In Chapters 4 and 5 of this thesis the proposed lane tracking system using GPS/INS/Vision and a high accuracy map is presented. In Chapter 4, a discussion of the LDW lateral offset measurement and the GPS/MAP lateral offset measurement is given. Then a method for measuring the GPS receiver bias using the vision measurement and the high accuracy map is demonstrated. Experimental data showing the validity of this calculation is the final discussion in Chapter 4.

In Chapter 5 the proposed GPS/INS/Vision lane tracking system is developed. A Kalman filter is used to fuse the multi-sensor data and to provide optimal estimation of the vehicle states that are required for highway lane tracking. The method is validated in a lane tracking estimation with input and measurement data generated from a vehicle model. Lastly, experimental data taken from the test-bed is presented to give some preliminary analysis on how the system performs in a real-world setting.

Conclusions about the proposed method will be discussed in Chapter 6. A discussion of the strengths and weaknesses of the approach will also be given. Finally, suggestions for further research in this area will conclude the work presented in this thesis.

Chapter 2
Previous/Related Work

2.1 Introduction

In this chapter a detailed description is given of current lane departure warning (LDW) systems. The lane departure warning problem is described later in this chapter. These methods range from systems that rely solely on image processing to systems that combine vision technology with GPS/INS sensors for increased performance. The strengths and weaknesses of these methods will be given, as well as the need for further research in the field of lane departure warning.
2.2 Lane departure warning with computer vision/image processing

Much research has been devoted to the study of lane departure warning. Many LDW systems rely on camera systems that use computer vision and image processing techniques to detect lane markers on the road. This section highlights some of the technologies that have been developed in this area.

2.2.1 Tasks of a lane departure warning system

A vision-based lane departure warning (LDW) system must perform three tasks. First, an LDW system must detect the relevant lane boundary. In nearly all cases this boundary is a lane marker. Second, an LDW system must perform calculations based on the detected boundary. Finally, a vision-based LDW system must warn the driver of a probable lane departure. The warning must be issued according to logic based on vision calculations from the lane detection [11].

2.2.2 Lane boundary detection

The most common approach for vision-based detection of a lane boundary is the use of gradient based methods. Images captured from a vehicle camera are run through edge-detection filters such as the Canny [2] operator to pull out contrast edges. These methods use first order differences in the x and y directions to compute gradients. These gradients are then compared to pre-defined thresholds to determine if the computed gradient is in fact an edge, or in this case a lane boundary. However, these methods cannot be the sole method of lane boundary detection. A captured view of the road ahead will contain many edges, so additional methods must be used to separate the desired edges from others that are of no interest.

In [6], Hsu and Cheng propose an adaptive thresholding approach combined with a modified gradient filtering technique. Portions of the captured image are extracted for computation. These sub-windows are extracted where the lane marker is most likely to be detected. Because lane markers will appear more diagonal than horizontal or vertical within these windows, a modified gradient filter is used to extract diagonal edges, i.e.

    left window:  [ -3  -3   6
                    -3   6  -3
                     6  -3  -3 ]      (2.1)

    right window: [  6  -3  -3
                    -3   6  -3
                    -3  -3   6 ]      (2.2)

A two-dimensional convolution is performed between these filter matrices and the extracted windows of the image. Hsu and Cheng then use a low-pass smoothing filter to remove noise,

    filter: [ 1/9  1/9  1/9
              1/9  1/9  1/9
              1/9  1/9  1/9 ]         (2.3)

After the images have been filtered and smoothed, an adaptive threshold is used to determine whether or not the detected edge is a lane marker. Edges within an image are sensitive to the brightness present when the image is captured. Hsu and Cheng's adaptive threshold is adjusted according to a photo-resistor that measures the amount of light available when the image is taken.

Other researchers such as Gern and Moebus attempt to use more advanced computer vision techniques to detect lane boundaries. In [3] the authors present a horizontal optical flow calculation to detect the presence of lane boundaries. Optical flow can best be thought of as the movement of brightness patterns within an image sequence [5]. An optical flow calculation will formally be defined in Chapter 3. Gern and Moebus use a technique that correlates rows at equal distances from the camera. This correlation gives the movement of objects within the image over discrete instances of time.
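The filtering step described above can be sketched in a few lines of MATLAB. The sketch below is only an illustration of the directional filtering and smoothing idea, not Hsu and Cheng's implementation; the sub-window win and the fixed threshold thresh (standing in for their light-adaptive threshold) are assumptions.

    function mask = diagonal_edge_sketch(win, thresh)
    % Illustrative diagonal-edge filtering for a left-side sub-window (not Hsu
    % and Cheng's code). win: grayscale sub-window (double); thresh: assumed threshold.
        left_kernel = [-3 -3  6; -3  6 -3;  6 -3 -3];   % diagonal gradient filter, eq. (2.1)
        smooth      = ones(3) / 9;                      % low-pass smoothing filter, eq. (2.3)
        response    = conv2(win, left_kernel, 'same');  % two-dimensional convolution
        response    = conv2(response, smooth, 'same');  % noise smoothing
        mask        = abs(response) > thresh;           % candidate lane-marker edges
    end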
2.2.3 Lane boundary modeling

A common constraint used in modeling lane boundaries is slowly changing curvature. Roadways are constructed under this constraint [8]. The constraint states that the curvature of a road and its corresponding lane markers will vary slowly over the distance of the road [3],[8],[9],[10]. All but one lane boundary modeling technique discussed in this chapter are constructed under this constraint.

A common model for lane boundaries is the clothoid model [3] given by

    c(L) = c_0 + c_1 L      (2.4)

where c(L) is the curvature at length L, c_0 is the initial curvature and the curvature rate is denoted by c_1. Curvature is defined as

    c = 1/r      (2.5)

where r is the radius of the curve [3]. In [8], Wang uses a third order polynomial as an approximation to the clothoid

    X = -W/2 + X_off + ψY + C_0 Y^2/2 + C_1 Y^3/6      (2.6)

where (X,Y) denotes a point on the lane border, W is the lane width, X_off is the lateral offset between the vehicle and the lane boundary and ψ is the yaw angle that relates the vehicle frame to the road frame. The parameters C_0 and C_1 are the curvature and curvature rate, respectively.

Another lane modeling method is to use piecewise defined polynomials. In [9] Jung uses a piecewise polynomial in which a linear equation represents the lane boundary in the near field and a parabolic representation is used for the far field, i.e.

    f(x) = a + bx               if x > x_m
           c + dx + ex^2        if x <= x_m      (2.7)

Under the constraint of slowly changing curvature, the model becomes

    f(x) = a + bx                                              if x > x_m
           [2a + x_m(b - d)]/2 + dx + [(b - d)/(2x_m)] x^2     if x <= x_m      (2.8)

which allows the lane to be completely modeled with 3 parameters, a, b and d. These parameters are found using a weighted least squares approach with image data acquired from the vehicle's camera.

Other lane boundary models include circular equations (constant curvature radii) or Euler spiral equations [10]. In the spiral equations, the curvature of the lane boundary varies linearly with distance travelled along the road from a specified point. In [10] LeBlanc uses a Kalman filter approach to estimate the parameters of the Euler spirals.

Aside from the curvature/smoothness constraint, other lane boundary modeling techniques have been explored. In [11], Lee models the roadway ahead of the vehicle as a series of connected plates. Each plate consists of vehicle and plate state variables as well as linearized models of the lane boundary. Vehicle state variables such as yaw angle and lateral offset are estimated. Plate variables such as plate pose and lane width are also estimated.
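To make the boundary models above concrete, the short MATLAB sketch below evaluates the clothoid curvature (2.4) and the constrained linear-parabolic boundary (2.8) for assumed, purely illustrative parameter values; it is not tied to any of the cited systems.

    % Illustrative evaluation of the lane models above (assumed parameter values).
    c0 = 1e-4;  c1 = 1e-6;                   % initial curvature [1/m] and curvature rate [1/m^2]
    L  = 0:1:200;                            % distance along the road [m]
    c  = c0 + c1*L;                          % clothoid curvature, eq. (2.4)

    a = 0.5;  b = 0.02;  d = 0.01;  xm = 20; % linear-parabolic parameters, eq. (2.8)
    x = 0:0.5:60;
    f = zeros(size(x));
    lin = x > xm;                            % linear (near-field) portion
    f(lin) = a + b*x(lin);
    par = ~lin;                              % parabolic (far-field) portion, continuous at x = xm
    f(par) = (2*a + xm*(b - d))/2 + d*x(par) + ((b - d)/(2*xm))*x(par).^2;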
2.2.4 Parameters for lane departure warning

The final step in a LDW system is to warn the driver of a possible unintentional lane departure. The most typical parameters used to determine lane departure are:

1. Vehicle lateral offset - the distance between the vehicle and the lane boundary [9]
2. Time-to-lane-crossing (TLC) - the time until the vehicle's current path crosses the lane boundary [10],[11]

Lateral offset can be observed from the captured image alone. Calculation of TLC requires the use of additional sensors. It requires some knowledge of current vehicle states and cannot be computed solely with a vision sensor.

Determining the lateral offset

The fundamental parameter of lane departure warning is the lateral offset of a vehicle with respect to the lane boundary. This offset is a direct computation of how far a vehicle is from a right or left lane boundary. The offset can be used as the sole indicator of lane departure, but it is usually combined with other parameters to indicate a possible lane departure [3],[9],[10].

One method presented by Jung in [9] uses an uncalibrated camera to determine the lateral offset. Jung models the lane borders as hyperbolas on the image plane

    y - y_0 = A_i (x - x_0) + B/(x - x_0),  for i ∈ {l, r},      (2.9)

where i = l, r denotes the left and right lane boundaries and (x_0, y_0) is the vanishing point of the road borders next to the vehicle. The linear portion of the equation corresponds to the tangents of the left and right borders at the vehicle. The lateral offset is then

    l_o / W = A_r / (A_r - A_l)      (2.10)

where l_o is the lateral offset and W is the lane width. Jung then differentiates the offset over a defined set of image frames. This rate of change of the lateral offset is combined with the offset calculation itself to determine a lane departure.

Time-to-lane-crossing (TLC)

Time-to-lane-crossing is a widely used warning criterion to determine if a vehicle is departing a lane unintentionally. LeBlanc formally defines TLC as "the time until the vehicle CG will cross either edge of the roadway, assuming both the vehicle speed and front wheel steering angle remain unchanged" [10]. In his method for computing TLC, LeBlanc suggests an LMS (Lane Mark Sensor) to detect the lane lines. Lane markers are first located in the near field (6-20m) and then followed out to 100m, producing an estimated roadway geometry. A "near-range" Kalman filter takes near range vision data and vehicle inertial measurements (yaw rate, vehicle speed and steering angle) and estimates vehicle orientation with respect to the road edge and curvature. A second Kalman filter for the "far range" is used to estimate the roadway geometry. The filter takes state estimates from the near-range filter as well as far-range vision data as inputs. The path of the vehicle is computed by taking the state estimates from the first Kalman filter and propagating them forward with a linear bicycle model. Propagation continues until the vehicle position crosses the roadway boundary predicted by the second Kalman filter. The elapsed time of propagation is the TLC.

In [11], Lee also uses TLC as a lane departure warning criterion. A distance between the front of the vehicle and a detected lane marker directly in front of the vehicle is calculated. This distance to lane crossing is then divided by the current vehicle speed to get the TLC.
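A much-simplified TLC calculation can be sketched as follows: propagate the vehicle's lateral position at constant speed and yaw rate until it reaches the lane boundary. This is only an illustration of the idea; it is not LeBlanc's two-filter method, and all inputs (y0, psi0, v, r, y_bound) are assumed quantities.

    function tlc = simple_tlc_sketch(y0, psi0, v, r, y_bound)
    % Simplified time-to-lane-crossing sketch (not LeBlanc's two-filter method).
    % y0: lateral offset [m]; psi0: heading error [rad]; v: speed [m/s];
    % r: yaw rate [rad/s]; y_bound: lateral position of the lane boundary [m].
        dt = 0.01;  t = 0;  y = y0;  psi = psi0;
        tlc = Inf;                        % returned if no crossing within the horizon
        while t < 10                      % 10 s prediction horizon (assumed)
            psi = psi + r*dt;             % propagate heading error
            y   = y + v*sin(psi)*dt;      % propagate lateral position
            t   = t + dt;
            if y >= y_bound               % predicted crossing of the boundary
                tlc = t;
                return
            end
        end
    end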
2.2.5 Performance limitations

All vision based lane departure warning systems face performance constraints. These constraints limit the performance of the LDW systems by making the lane boundary markings undetectable by the vision system and their supporting algorithms. Some of these limitations are listed below.

- Environmental conditions
  1. Road surface occlusions - rain, snow, mud, etc. [3][6][10][11]
  2. Lighting - nighttime, tunnels, glare from the sun [10]
- Road/highway condition
  1. Poor/nonexistent lane markings [11]
  2. Road construction/inconsistent markings [11]
- Other marker occlusions - a vehicle blocking the lane markings [9][10]

2.3 Vision based LDW with additional sensor enhancement

Other researchers in the intelligent transportation system (ITS) community are looking at ways to enhance LDW systems with other technologies. These technologies are utilized in an attempt to aid lane detection systems when the vision-based lane detection fails. These enhancements include global positioning systems (GPS), inertial navigation systems (INS) and also high accuracy road and highway maps. This section discusses some of these methods and gives an analysis of their strengths and weaknesses.

2.3.1 Vision-based LDW with INS enhancements

In [10] LeBlanc presents a road departure prevention system called CAPC, which stands for "Crewman's Associate for Path Control". This prototype system has three functions: 1) detect the road edge, 2) warn the driver if a road departure is forthcoming, and 3) take control of the vehicle and prevent road departure if the driver does nothing. The key responsibility of the CAPC is to compute a TLC ("time to lane crossing"). TLC is computed by detecting the road path ahead of the vehicle, projecting the vehicle path and determining when the two will intersect. A warning will be issued when TLC goes below a certain threshold. If the driver does not respond to the warning, a control system takes over. The control system uses differential braking to provide certain yaw moments that will keep the vehicle in the lane.

Section 2.2.4 outlines LeBlanc's process for computing TLC. LeBlanc then discusses the warning logic used to determine if a lane departure warning should be issued to the driver or if the system needs to take control of the vehicle to avoid lane departure. A threshold TLC_W is set for a lane departure warning. Another threshold TLC_I is set for intervention control of the vehicle. If TLC passes below one of these thresholds, the appropriate action is taken.

2.3.2 Vision-based LDW enhancements with GPS/INS and high accuracy mapping

The current trend in lane departure warning research is to use GPS/INS technology to assist with lane departure detection. Zhang proposes a method that does not use vision but rather a GPS/INS system combined with a high accuracy roadway map [15]. Wang suggests a system that uses the classical vision based LDW but enhances its performance with GPS/INS and a map. These two approaches are described in greater detail in the following sections.

GPS/INS/MAP LDW

In [15] Zhang presents a system that uses differential GPS (DGPS), dead reckoning (DR) and a map for various vehicle safety applications. The primary goal of this system is to locate a vehicle within a specific lane on a roadway. Primary applications for this system are lane keeping, vehicle control, collision avoidance and precision road mapping. Zhang suggests a DGPS system that locates a vehicle to within centimeter accuracy. By combining DGPS with a highly accurate lane-level map of a roadway, a cross-track offset can be computed from the vehicle position (DGPS) and the centerline of the roadway. However, a GPS signal is not always present. Zhang suggests integrating the DGPS with DR sensors (accelerometers and gyros) in a Kalman filter to assist in vehicle positioning. Under a GPS outage, the DR sensors can be used to position the vehicle accurately for a certain amount of time.

Zhang makes the target application of this technique lane departure warning. If an accurate instantaneous position of the vehicle is available, it can be compared to the map database to compute a relative cross-track offset. This offset can be used to warn the driver if the vehicle is approaching lane departure. As a test bed Zhang uses a Mercedes E240 with a DGPS receiver, a fiber optic gyro and a high accuracy map database. An on-board PC104 computer reads in the DGPS, the gyro and wheel speed via the CAN bus. The computer performs real-time data acquisition and processing. A 1 pulse-per-second (PPS) signal from the DGPS allows the measurements to be synchronized with GPS time. A six state Kalman filter is used to estimate DR errors. A map-matching algorithm is employed to compute the vehicle offset.
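The cross-track offset that such a GPS/map system computes can be illustrated with a generic MATLAB sketch: given a position fix and a polyline of lane centerline points, both assumed to be expressed in the same local East-North frame, the signed lateral distance to the nearest centerline segment is returned. This is a generic map-matching illustration, not Zhang's algorithm.

    function d = cross_track_sketch(pos, line_pts)
    % Generic signed cross-track offset from a lane centerline polyline
    % (illustrative, not Zhang's map-matching algorithm).
    % pos: 1x2 [E N] position fix; line_pts: Mx2 centerline points, same frame.
    % Positive offset = left of the direction of travel along the polyline.
        d = Inf;
        for k = 1:size(line_pts,1) - 1
            p1  = line_pts(k,:);
            p2  = line_pts(k+1,:);
            seg = p2 - p1;
            t   = dot(pos - p1, seg) / dot(seg, seg);   % projection onto the segment
            t   = min(max(t, 0), 1);                    % clamp to the segment ends
            foot = p1 + t*seg;                          % closest point on the segment
            s = sign(seg(1)*(pos(2) - p1(2)) - seg(2)*(pos(1) - p1(1)));  % left/right sense
            if norm(pos - foot) < abs(d)
                d = s * norm(pos - foot);
            end
        end
    end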
GPS/INS/MAP/VB LDW

In [8] Wang presents a system that combines a vision-based lane recognition system (VBLR) with DGPS/INS and a high accuracy lane level map to perform lane keeping. Map matching algorithms are performed on the position provided by the GPS/INS system to allow the vehicle offset to be computed. This offset is compared with the offset computed by the VBLR subsystem. The system presented by the author requires a vehicle to be fitted with a GPS receiver that is capable of receiving differential corrections and also one that has carrier phase capability. This sort of GPS system will sometimes experience outages, so GPS/INS sensor fusion is necessary.

Like most other methods for lane boundary modeling, Wang uses the constraint of slowly changing road curvature. The lane boundaries are modeled with a third order polynomial, which is a close approximation to a clothoid. A point on the lane boundary is defined as P(X,Y), where X is the axis perpendicular to the direction of travel (see [8] for a formal definition of X). Through pinhole camera geometry, the point P(X,Y) can be determined from the image point p(x,y). A Kalman filter or least squares can be used to estimate the vehicle offset and road curvature parameters, which are factors that go into the computation of X (again, see [8]). In order for the VBLR to work, a lane boundary must be successfully detected by the camera.

The proposed system has three main components: positioning, map matching and VBLR. For positioning, Wang uses a medium grade IMU and a RTK GPS receiver. Wheel speeds and steering angle are taken from the CAN bus of the test vehicle (Mercedes S240). A four tire vehicle model is used to estimate vehicle parameters. Wang also suggests an integrity monitoring system to determine which subsystems are behaving properly.
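As a rough illustration of the least-squares option mentioned above, the MATLAB sketch below fits the offset, yaw and curvature parameters of the cubic lane model (2.6) to synthetic boundary points; the data and lane width are assumptions, and this is not Wang's estimator.

    % Illustrative least-squares fit of the cubic lane model (2.6) to synthetic
    % boundary points (not Wang's Kalman-filter/least-squares estimator).
    W  = 3.6;                                      % assumed lane width [m]
    Y  = (5:5:60)';                                % look-ahead distances [m]
    Xo = 0.4;  psi = 0.02;  C0 = 1e-4;  C1 = 0;    % "true" parameters for the synthetic data
    X  = -W/2 + Xo + psi*Y + C0*Y.^2/2 + C1*Y.^3/6 + 0.02*randn(size(Y));   % noisy points
    H  = [ones(size(Y)), Y, Y.^2/2, Y.^3/6];       % regressor for [Xoff; psi; C0; C1]
    theta = H \ (X + W/2);                         % least-squares parameter estimates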
2.3.3 High accuracy highway mapping

The drawback to these methods is that they require high accuracy lane level mapping of roadways. Currently, there are a few digital maps available that supply 5 m accuracy in some metropolitan areas and 50 m accuracy in less populated areas [8]. This mapping must be on the sub-meter level to correctly locate a vehicle within a certain lane on a highway [15],[8]. Such maps are not presently in existence, but there are on-going discussions between the auto industry and US government on the feasibility of high accuracy mapping [8].

In order to achieve these lane level maps, Wang suggests a mapping on the fly technique. Typically, highway mapping would require surveying vehicles and specialized personnel. The author suggests using data collected from specially outfitted commercial and private vehicles. These vehicles should be equipped with low to intermediate accuracy positioning. Data can be collected from multiple drivers going about daily routines. Through statistical methods, maps can be formulated from the multitude of collected data [8].

This approach is also highlighted in [14]. Wilson suggests employing a high number of probe vehicles on the highway system. Each vehicle is position aware (GPS with differential corrections), and position and other available information is reported back to a server. The server will then correlate the received data to form a map. The map is formed by finding a statistically ideal centerline of a lane that the vehicles are trying to follow. Wilson presents data showing that after 22 passes down a lane, a map has been fitted to the lane with 15 centimeter accuracy.

2.4 Conclusions

In this chapter some of the current methods of lane departure warning are discussed. First, methods that rely solely on image processing are presented. Then other techniques that combine vision with other sensors such as GPS/INS and high accuracy roadway maps are given.

All of the lane departure warning technologies presented in this chapter suffer from some type of performance limitation. In the cases where vision is used to detect lane boundaries, environmental and road conditions can limit performance. For other methods that use alternative sensors as enhancements to vision based lane detection, technological constraints are a major dilemma. In proposed systems such as [8] and [15], high accuracy GPS receivers (DGPS and RTK) are used for positioning. In the current market, these receivers are not cost effective enough to be widely deployed.

Chapter 3
Vision Research

3.1 Introduction

In this section an overview of several computer vision techniques will be given. These techniques are established methods that provide a foundation for future research. They also provide some general understanding of current lane departure warning systems and their functionality.

The vision techniques discussed in this chapter include optical flow and corner detection. Optical flow calculations estimate the optical plane motion vectors of objects in a sequence of images. Corner detection is the identification of distinct corners in individual images. Corner detection relies heavily on edge detection.

3.2 Optical Flow

Optical flow can best be thought of as the movement of objects in the observer's field of view. Imagine riding down an interstate highway and looking at a large road sign up ahead. As the observer in the vehicle approaches the sign, the object itself appears larger and larger, and its edges tend to move outward in the field of view. As the vehicle passes it and the observer looks back, the sign appears smaller and smaller, with the object's edges appearing to shrink. By imagining the edges of the object becoming larger or smaller, the observer is projecting a three dimensional world onto a two dimensional plane. As the edges move, they have apparent velocities relative to the other objects in the field of view. Optical flow techniques compute estimates of the direction and magnitude of these velocity vectors.

3.2.1 Optical Flow - Horn & Schunck's method

The first major milestone in the calculation of optical flow was the work of Berthold K.P. Horn and Brian G. Schunck [5]. Horn and Schunck define optical flow as "the distribution of apparent velocities of movement of brightness patterns in an image". If the velocities of the brightness patterns (objects within the image) are known, then a robot or vision system using the optical flow techniques will have some knowledge of how its surroundings are changing.

Horn and Schunck do point out some difficulties before setting up their optical flow calculations. The connection between optical flow in the image plane and object velocity in three dimensions is not always accurate. For example, if a sphere of constant color and texture is rotating in a sequence of images and the lighting upon the object does not change, no optical flow can be calculated.
The object will appear the same when projected onto a two dimensional plane [5].

Assumptions

Horn and Schunck's method for the calculation of optical flow is based on three assumptions. First, it is assumed that the object being imaged is a flat surface. Second, it is assumed that the illumination on the image is constant and uniform. It is also assumed that the reflectance of the object varies smoothly and has no spatial discontinuities. These assumptions assure that the image brightness or intensity is differentiable [5].

Constraints

Horn and Schunck derive equations that relate the change in image brightness at a particular point to the motion of the brightness pattern. To do this, a few constraints are established.

Constant brightness constraint: The first constraint says that the intensity at a particular point in an object does not change over time, i.e.

    dE/dt = 0      (3.1)

We expand equation 3.1 by using the chain rule, to obtain

    (∂E/∂x)(dx/dt) + (∂E/∂y)(dy/dt) + ∂E/∂t = 0      (3.2)

or, equivalently,

    E_x u + E_y v + E_t = 0      (3.3)

Rearranging equation 3.3 into vector form,

    [E_x  E_y] [u  v]^T = -E_t      (3.4)

where

    u = dx/dt      (3.5)
    v = dy/dt      (3.6)

are the components of the velocity vector that reflect the motion of a pixel in the optical plane. It can be seen in equation 3.4 that the component of the velocity that lies in the direction of the brightness gradient is

    -E_t / sqrt(E_x^2 + E_y^2)      (3.7)

This conclusion is very important in that it demonstrates the aperture effect. Because of the brightness constraint, the only component of the pixel velocity that can be observed is the portion that lies in the direction of the brightness gradient [5]. A classic example of this is the "barber shop pole" example. Imagine a barber pole rotating. The lines on the pole are actually moving left to right; however, the lines appear to be moving upward. If an optical flow calculation were performed on a sequence of images of a moving barber shop pole, the motion vectors would point upward as well. This occurs because the brightness gradient lies in the vertical direction, not in the horizontal direction.

Smoothness constraint: Another constraint placed on the optical flow problem is the smoothness constraint. Horn and Schunck note that neighboring points on a moving object have similar velocities, therefore the brightness patterns in the image vary smoothly [5]. Horn and Schunck implement this constraint by the minimization of the square of the gradient magnitudes, i.e.

    (∂u/∂x)^2 + (∂u/∂y)^2   and   (∂v/∂x)^2 + (∂v/∂y)^2      (3.8)

The smoothness of the optical flow field can also be measured by the Laplacians of u and v, i.e.

    ∇^2 u = ∂^2 u/∂x^2 + ∂^2 u/∂y^2      (3.9)
    ∇^2 v = ∂^2 v/∂x^2 + ∂^2 v/∂y^2      (3.10)

Partial derivative estimation

Horn and Schunck estimate the partial derivatives E_x, E_y and E_t by computing the average of four first differences in two adjacent slices of the image. Figure 3.1 shows an example 4x4x2 image slice. The index i represents the pixel row and j represents the pixel column. The index k represents the image number in a sequence of images. Horn and Schunck use a row index that increases from the bottom row upward. Because simulations for this experiment are run in MATLAB, Horn and Schunck's partial derivative estimation equations have been modified to match matrix indexing in MATLAB, in which an increasing row index goes from the top of the matrix downward. Equations 3.11 through 3.13 show these equations.
    E_x ≈ (1/4)[E_{i+1,j+1,k} - E_{i+1,j,k} + E_{i,j+1,k} - E_{i,j,k}
              + E_{i+1,j+1,k+1} - E_{i+1,j,k+1} + E_{i,j+1,k+1} - E_{i,j,k+1}]      (3.11)

    E_y ≈ (1/4)[E_{i,j,k} - E_{i+1,j,k} + E_{i,j+1,k} - E_{i+1,j+1,k}
              + E_{i,j,k+1} - E_{i+1,j,k+1} + E_{i,j+1,k+1} - E_{i+1,j+1,k+1}]      (3.12)

    E_t ≈ (1/4)[E_{i+1,j,k+1} - E_{i+1,j,k} + E_{i,j,k+1} - E_{i,j,k}
              + E_{i+1,j+1,k+1} - E_{i+1,j+1,k} + E_{i,j+1,k+1} - E_{i,j+1,k}]      (3.13)

Figure 3.1: 4x4x2 image slice

Laplacians

For the Laplacian estimates, Horn and Schunck use the following approximation:

    ∇^2 u ≈ κ(ū_{i,j,k} - u_{i,j,k})   and   ∇^2 v ≈ κ(v̄_{i,j,k} - v_{i,j,k})      (3.14)

where ū and v̄ are local averages of the velocity components. They are estimated by subtracting the value at a point from a weighted average of neighboring values. Equations for the calculations are

    ū_{i,j,k} = (1/6)[u_{i-1,j,k} + u_{i,j+1,k} + u_{i+1,j,k} + u_{i,j-1,k}]
              + (1/12)[u_{i-1,j-1,k} + u_{i-1,j+1,k} + u_{i+1,j+1,k} + u_{i+1,j-1,k}]      (3.15)

    v̄_{i,j,k} = (1/6)[v_{i-1,j,k} + v_{i,j+1,k} + v_{i+1,j,k} + v_{i,j-1,k}]
              + (1/12)[v_{i-1,j-1,k} + v_{i-1,j+1,k} + v_{i+1,j+1,k} + v_{i+1,j-1,k}]      (3.16)

The H&S algorithm

Minimization: Given the above definitions (equations 3.1 through 3.16) Horn and Schunck minimize both the error in the brightness equation and the error in the departure from smoothness. Define the brightness error for each pixel as

    ε_b = E_x u + E_y v + E_t      (3.18)

and the error in the departure from smoothness as

    ε_c^2 = (∂u/∂x)^2 + (∂u/∂y)^2 + (∂v/∂x)^2 + (∂v/∂y)^2      (3.19)

The total error to be minimized is

    ε^2 = ∫∫ (α^2 ε_c^2 + ε_b^2) dx dy      (3.20)

where α^2 is a weighting factor that takes into account noise in the brightness measurement. Horn and Schunck define the minimization

    min_{u,v} ε^2      (3.22)

(see [5] for a detailed derivation).

Iterative solution: A direct solution to the minimization is computationally expensive, therefore an iterative solution is suggested. This method computes a new set of velocity estimates (u^{n+1}, v^{n+1}) based on the estimated derivatives and the average of previously estimated velocities [5]. The iterative solution is expressed in the equations below.

    u^{n+1} = ū^n - E_x (E_x ū^n + E_y v̄^n + E_t) / (α^2 + E_x^2 + E_y^2)      (3.23)

    v^{n+1} = v̄^n - E_y (E_x ū^n + E_y v̄^n + E_t) / (α^2 + E_x^2 + E_y^2)      (3.24)

The algorithm: With the iterative solution in place, a generic algorithm can be set up to perform the optical flow calculation. The actual MATLAB code can be seen in hsof.m in Appendix A.

1. Obtain the image sequence.
2. Get the size of the image sequence: M (x), N (y), num_imgs (t).
3. Set alpha.
4. Choose the number of iterations.
5. For kk = 1:num_imgs
   (a) For ii = 1:M and jj = 1:N, compute E_x, E_y and E_t.
   (b) For nn = 1:num_its, ii = 1:M and jj = 2:N, compute ū and v̄, then update u and v.
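The procedure above can be condensed into a short MATLAB function. The sketch below is a simplified stand-in for the thesis's hsof.m (which is not reproduced here): it approximates the derivative estimates of equations 3.11-3.13 with MATLAB's gradient function and a frame difference, and then applies the iterative update (3.23)-(3.24) with the neighborhood weights of equations 3.15-3.16.

    function [u, v] = hs_flow_sketch(E1, E2, alpha, num_its)
    % Condensed Horn & Schunck sketch (simplified stand-in, not the thesis's hsof.m).
    % E1, E2: consecutive grayscale frames (double); alpha: smoothness weighting
    % factor; num_its: number of iterations.
        [Ex1, Ey1] = gradient(E1);          % central-difference spatial gradients
        [Ex2, Ey2] = gradient(E2);          % (approximate eqs. 3.11-3.12)
        Ex = 0.5*(Ex1 + Ex2);
        Ey = 0.5*(Ey1 + Ey2);
        Et = E2 - E1;                       % temporal difference (approximates eq. 3.13)
        avg = [1/12 1/6 1/12; 1/6 0 1/6; 1/12 1/6 1/12];   % neighborhood weights, eqs. 3.15-3.16
        u = zeros(size(E1));
        v = zeros(size(E1));
        for n = 1:num_its
            ubar = conv2(u, avg, 'same');   % local average of u
            vbar = conv2(v, avg, 'same');   % local average of v
            common = (Ex.*ubar + Ey.*vbar + Et) ./ (alpha^2 + Ex.^2 + Ey.^2);
            u = ubar - Ex.*common;          % update (3.23)
            v = vbar - Ey.*common;          % update (3.24)
        end
    end

With the settings used later in this section (alpha = 20 and 32 iterations), a call would look like [u, v] = hs_flow_sketch(frame1, frame2, 20, 32), where frame1 and frame2 are hypothetical variables holding two consecutive gray-scale frames.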
Figures 3.3(a) through 3.5(a) show the frames of the synthetic image sequence. Experiment setup: The synthetic sequence created consists of twelve successive im- age frames. For simplicity, the sequence is broken into chunks of four frames. Each chunk will undergoes a separate optical flow calculation. By trial and error, the weight- ing factor ? is chosen to be 20, and the number of iterations for each frame is 32. The frame chunks and the corresponding optical flow calculations can be seen in figures 3.3 through 3.5 show 29 frame 1 50 100 150 200 250 300 50 100 150 200 250 frame 2 50 100 150 200 250 300 50 100 150 200 250 frame 3 50 100 150 200 250 300 50 100 150 200 250 frame 4 50 100 150 200 250 300 50 100 150 200 250 (a) Frames 1 - 4 0 50 100 150 200 250 300 350?300 ?250 ?200 ?150 ?100 ?50 0 (b) Optical Flow Figure 3.3: First four frames of the synthetic blimp image sequence and the computed optical flow 30 frame 1 50 100 150 200 250 300 50 100 150 200 250 frame 2 50 100 150 200 250 300 50 100 150 200 250 frame 3 50 100 150 200 250 300 50 100 150 200 250 frame 4 50 100 150 200 250 300 50 100 150 200 250 (a) Frames 5 - 8 0 50 100 150 200 250 300 350?300 ?250 ?200 ?150 ?100 ?50 0 (b) optical Flow Figure 3.4: Next four frames of the synthetic blimp image sequence and the computed optical flow 31 frame 1 50 100 150 200 250 300 50 100 150 200 250 frame 2 50 100 150 200 250 300 50 100 150 200 250 frame 3 50 100 150 200 250 300 50 100 150 200 250 frame 4 50 100 150 200 250 300 50 100 150 200 250 (a) Frames 9 - 12 0 50 100 150 200 250 300 350?300 ?250 ?200 ?150 ?100 ?50 0 (b) Optical Flow Figure 3.5: Last four frames of the synthetic blimp image sequence and the computed optical flow 32 Results: Examination of the motion vectors computed by the optical flow algorithm reveals that the vectors are in fact pointing in the true direction of motion of the blimp. The grouping of vectors that exists around the trailing end of the object correspond to the field of motion. However, the ?empty? area that exists in front of the vector field constitutes a problem. This empty area is a result of the structure of the original synthetic image. Because the object was created in a paint program, the color of the object is constant throughout the image. Therefore, there is no intensity gradient within the object. This empty area was completely covered by the object over the four frames. For this reason, the small white stripes in the original image were added to produce some gradient content within the object. Another issue observed in this experiment is the presence of vectors that exist where no motion was occurring. This, in part, is due to the smoothness constraint that Horn and Schunck used to formulate this optical flow method. For occluding objects and objects moving in different direction within an image, the Horn and Schunck algorithm does a poor job of calculating optical flow in the regions where these situations occur. Real image sequence The next step is testing the Horn and Schunck algorithm on a sequence of real images. Experiment setup: Images for this experiment are of a green balloon attached to a living room fan, which is turned on at its medium setting. Again, twelve sequential images were extracted from approximately 100 that were taken during the experiment. Figure 3.6 shows the first four of the selected images. 
Figure 3.6: First four images in original balloon sequence

Like the synthetic image, the region of motion was extracted from the image sequence, simply for the sake of saving processing time. Again, the sequence was broken into three separate chunks, each with four images, all of which are converted to gray-scale. The same settings for alpha (20) and the number of iterations (32) are used in this experiment. Figures 3.7 through 3.9 show the frame chunks and the corresponding optical flow calculations.

Figure 3.7: First four frames of the balloon image sequence and the computed optical flow
Figure 3.8: Next four frames of the balloon image sequence and the computed optical flow
Figure 3.9: Last four frames of the balloon image sequence and the computed optical flow

Results: The real image sequence produces mixed results. The motion vectors for Frames 1 - 4 appear to be the most accurate. The outline of the balloon is very visible, and its motion vectors point in the right direction. The motion of the fan blades is also visible, and the corresponding motion vectors appear to be accurate. Frames 5 - 8 produce moderate results, as the motion of the balloon and the fan blade is detected, but not as accurately as in the first four frames. The optical flow algorithm has difficulty detecting the motion in the last set of image frames.

One possible reason for the difficulty encountered in the last frames is the aperture effect. In the last set of images, the balloon is beginning to move away from the camera. Due to the close proximity of the camera to the balloon, as the balloon moves away, its size relative to the surroundings in the projected image plane does not change. Therefore, the gradient in all three dimensions is small, so the motion of the balloon will not be detected.

3.2.3 Conclusions

The Horn and Schunck algorithm for computing optical flow was one of the first methods used to compute motion vectors within an image sequence. Many of the newer and more complex optical flow algorithms have stemmed from modifications of this method. In order to test the newer methods against this benchmark, the strengths and weaknesses of the method must be analyzed.

Strengths

The major strength of the Horn and Schunck method is that it works well when the constraints used to derive the method are met. For example, in the synthetic image sequence used in this project, when the intensity of the object within an image varies smoothly, the computed motion vectors are more reliable.
Another strength lies in the method's simplicity. The derivative estimates of intensity are intuitive, in the sense that they are obtained by taking differences between intensity values in the x, y, and time dimensions. Thus, the algorithm is asking: what is different about this image when compared to the last one?

Weaknesses

In contrast, the strengths of the Horn and Schunck method can also be weaknesses. The simple approximation of motion using the estimated derivatives places a restriction on performance (the aperture effect). Also, the smoothness constraint is not a good model of objects in motion within an image plane. For most applications, a vision system would encounter multiple moving objects, including the background if the vision system itself is mounted on some type of vehicle. In this situation moving objects would occlude one another, creating a severe problem for the Horn and Schunck algorithm.

Final remarks

The Horn and Schunck algorithm for computing optical flow has proven to be the benchmark against which all other optical flow methods are tested. Its ease of implementation makes it a good option for learning the concepts of computing optical flow. However, for real time applications, the method's constraints reduce its effectiveness.

3.3 Harris corner detection

3.3.1 Moravec's method

The Harris corner detector is quite simple in operation. It is an expansion of an earlier corner detection algorithm developed by Moravec [12]. This method takes a small window within an image and calculates the average intensity change resulting from shifting the window in various directions [4]. There are three cases that result from this shifting.

1. Small change in both directions: implies a flat (continuous) area within the image.
2. Small change in one direction, large in the other: implies an edge.
3. Large change in both directions: implies a corner.

Moravec's method can be represented mathematically by

E_{x,y} = \sum_{u,v} w_{u,v}\,|I_{x+u,y+v} - I_{u,v}|^2 \quad (3.26)

where I is the image intensity and w is the window (a square unit step function). Moravec's approach is simply to look for local maxima in \min(E) that lie above some threshold [4].

3.3.2 Auto-correlation enhancement

Harris states there are three main issues with the Moravec method:

1. The response is anisotropic (it does not respond equally in all directions) because only a discrete set of shifts is used. Harris corrects this by performing a Taylor series expansion about the shift origin that covers all possible shifts [4].

I_{x+u,y+v} = I_{u,v} + \frac{\partial I}{\partial x}(u)\,x + \frac{\partial I}{\partial y}(v)\,y + O(x^2,y^2) \quad (3.27)

I_{x+u,y+v} - I_{u,v} = Xx + Yy + O(x^2,y^2) \quad (3.28)

where

X = I \otimes (-1,0,1) = \frac{\partial I}{\partial x}(u), \qquad Y = I \otimes (-1,0,1)^T = \frac{\partial I}{\partial y}(v)

and O(x^2,y^2) denotes the omitted higher order terms. Now, Moravec's change E becomes

E_{x,y} = \sum_{u,v} w_{u,v}\,[I_{x+u,y+v} - I_{u,v}]^2 \quad (3.29)
        = \sum_{u,v} w_{u,v}\,\bigl[xX + yY + O(x^2,y^2)\bigr]^2 \quad (3.30)

For small shifts the higher order terms O(x^2,y^2) can be ignored. Expanding the squared portion yields

E_{x,y} = \sum_{u,v} w_{u,v}\,\bigl((xX)^2 + 2xXyY + (yY)^2\bigr) \quad (3.31)

A convolution can replace the summation, so the equation can be simplified to

E(x,y) = Ax^2 + 2Cxy + By^2 \quad (3.32)

where

A = X^2 \otimes w, \qquad B = Y^2 \otimes w, \qquad C = (XY) \otimes w

2. The response is noisy because w is a 2-D unit step. Harris suggests using a Gaussian for w instead of a step [4].

w_{u,v} = \exp\bigl(-(u^2+v^2)/2\sigma^2\bigr) \quad (3.33)
3. The change function responds to edges because only the minimization of E is taken into account. Harris changes this by reformulating the corner measure to use the variation of E with the direction of shift [4].

Harris re-writes the equation for E as

E(x,y) = (x,y)\,M\,(x,y)^T \quad (3.34)

where M is a symmetric matrix

M = \begin{bmatrix} A & C \\ C & B \end{bmatrix}

With this new structure, E is closely related to the local autocorrelation function. Harris denotes \alpha and \beta as the eigenvalues of M, which are proportional to the principal curvatures of the local autocorrelation. The nature of \alpha and \beta gives rise to three possible outcomes [4].

(a) If both \alpha and \beta are small, then there is little change in E, indicating a flat region.
(b) If one is small and one is large, this indicates an edge.
(c) If both are large, this indicates a corner.

3.3.3 Corner response function

Harris formulates a response function in order to test the strength/quality of the corners that have been detected [4]. First, two functions are introduced:

\mathrm{Tr}(M) = \alpha + \beta = A + B \quad (3.35)
\mathrm{Det}(M) = \alpha\beta = AB - C^2 \quad (3.36)

Then a corner response function is given as

R = \mathrm{Det} - k\,\mathrm{Tr}^2 \quad (3.37)

where R is large and positive in corner regions, negative in edge regions, and small in flat regions.

3.3.4 Implementation and testing

The Harris corner detection is tested on two images captured from a web-cam/stereo-vision setup. Code for the Harris corner detection experiment is taken from the MATLAB Structure from Motion toolkit [13] and modified slightly.

Testing algorithm: The pseudo-code below describes the testing algorithm.

1. I = imread('image.jpeg')
2. num = number of corners
3. wi = width of Gaussian smoothing filter
4. sigma = standard deviation of Gaussian filter
5. corners = harris(I, num, wi, sigma)
6. X (Y) = conv2(I, gradient mask)
7. A = conv2(X.^2, w)
8. B = conv2(Y.^2, w)
9. C = conv2((X.*Y).^2, w)
10. M = [A, C; C, B]
11. alpha, beta = eig(M)
12. compute R
13. find local maxima (torr_max3x3.dll)
14. display results

Code discrepancies: The m-file harris.m is a modified version of the m-file torr_charris.m which is included in [13]. Examining this modified m-file reveals two discrepancies from the method proposed by Harris. The first occurs in the definition of the matrix C. Harris defines C by

C = (XY) \otimes w

In torr_charris.m, C is defined as

C = (XY)^2 \otimes w

The reason for this change is ambiguous at this point. The second difference in the toolbox code is in the corner response function. In torr_charris.m, the function R (denoted by c) is

R = \mathrm{Tr}(M) - k\,\mathrm{Det}

where Harris proposes

R = \mathrm{Det} - k\,\mathrm{Tr}^2

Figure 3.10: Detected corners with Harris method

According to the author of torr_charris.m, the reason for this is that, for Harris's version, R can be strictly negative for some images. Reversing the roles of Det and Tr prevents this from occurring.

Results: Figure 3.10 shows the results for the two web-cam images with a 3x3 Gaussian smoothing filter with a standard deviation of 0.01. The strongest 25 corners are displayed.

Chapter 4
GPS Bias Measurement using Vision

4.1 Introduction

If a lane-level map of a highway is available, a vehicle equipped with a lane departure warning (LDW) system and a GPS receiver has the capability of producing a measurement of the bias that is present in the GPS receiver. In this chapter an approach for computing this measurement is presented. Experimental data from a test bed is presented to demonstrate this concept.
4.2 Offset measurement from camera system

A vision system used for lane departure warning has the ability to detect lane boundary markers to the left and right of a vehicle. The LDW system is also capable of computing the lateral offset between the vehicle and the detected lane marker. The method of offset calculation employed on the test-bed, an Infiniti M-45, is currently unknown due to confidentiality constraints. However, a general idea of how such systems perform these calculations is given in Chapters 2 and 3.

A diagram of a vehicle equipped with an LDW system is shown in Figure 4.1. The diagonal lines emanating from the camera represent the field of view. Detected lane lines are shown to the left and right of the vehicle. The LDW computes the lateral offset between the centerline of the vehicle and the detected lane marker at the front of the vehicle. This offset is assumed to be constant throughout the length of the car, as long as the heading angle of the car with respect to the lane is small [8].

Figure 4.1: LDW lateral offset diagram

4.3 Offset measurement from GPS/MAP

The vehicle's GPS receiver and a corresponding lane-level map allow another offset calculation. The vehicle's position received from GPS is projected onto the map. The distance between the vehicle's centerline (given by GPS) and the mapped lane marker is the vehicle's lateral offset. A diagram of how the offset is calculated can be seen in Figure 4.2. Map generation for this experiment is described in detail in Appendix B.

Figure 4.2: GPS lateral offset diagram

4.4 Map Validation

Validation of the mapping method described in Appendix B is shown in Figures 4.3 and 4.4. For validation, GPS data collected from NCAT testing trucks is used. A linear best fit of the NCAT truck data is computed for both the North and South straight sections of the track. The trucks use only the outer lane at the NCAT facility. If the mapping has been formed correctly, the centerline of the data should lie directly in the center of the middle and outer mapped lane markings. Comparing Figures 4.4 and 4.3 suggests that the South side is mapped more accurately than the North side.

Figure 4.3: Map validation for North side of track
Figure 4.4: Map validation for South side of track

4.5 Error calculation

If the vehicle's GPS receiver and the map database were perfectly accurate, the offset calculation of the LDW and the offset between the GPS position and the mapped lane marker would be the same. However, a typical (non-differential) GPS receiver is only guaranteed to be accurate to within about ten meters. This low accuracy will not allow a vehicle to be located within a particular lane, and consequently lane keeping cannot be performed with this type of GPS receiver.

Figure 4.5: Error between LDW lateral offset and GPS lateral offset

However, using the offset calculation from the LDW, the bias in the GPS receiver can be directly measured. This bias is the main contributor to the low accuracy of the GPS receiver.
If this bias is measured and then removed, the accuracy of the GPS receiver is increased to a level matching that of the receiver used to create the lane map. If a roadway and its lanes have been precisely mapped with a differential GPS receiver, then the regular GPS receiver effectively has the accuracy of a high-end differential receiver. A diagram of the proposed error calculation can be seen in Figure 4.5. The bias measurement is formally defined as

b_{gps} = y_{gps} - y_{cam} \quad (4.1)

4.5.1 Heading angle and offset between GPS and LDW

In most driving conditions, a vehicle's angle relative to the centerline of the lane is close to zero. However, if the GPS receiver and the LDW camera are not placed close together, small angles can cause error between the LDW lateral offset and the GPS/MAP lateral offset. Therefore, these errors must be accounted for. This can be done by incorporating the angle measurement from the LDW camera system. The camera system computes the angle of the vehicle relative to the lane of travel. Knowing this angle and the spacing between the GPS receiver and the camera allows these errors to be removed.

Figure 4.6: GPS and LDW offset relative to vehicle angle within the lane

A diagram of a vehicle traveling within a lane with a heading angle different from that of the lane centerline is shown in Figure 4.6. The offset measurement y_{GPS} must be rotated to be parallel to the y_{LDW} measurement to create a new GPS offset, y'_{GPS}. The difference between y_{LDW} and y'_{GPS} is the distance l. Removing l yields a final GPS offset measurement, y''_{GPS}, that is directly comparable to y_{LDW}. The following calculations show how to compute y''_{GPS}.

y'_{GPS} = y_{GPS}\cos\psi \quad (4.2)
y''_{GPS} = y'_{GPS} - l \quad (4.3)
d = y_{GPS}\sin\psi \quad (4.4)
p = h - d \quad (4.5)
l = p\tan\psi \quad (4.6)
  = (h - d)\tan\psi \quad (4.7)
  = (h - y_{GPS}\sin\psi)\tan\psi \quad (4.8)
y''_{GPS} = y_{GPS}\cos\psi - (h - y_{GPS}\sin\psi)\tan\psi \quad (4.9)

A similar calculation can be performed on the right lateral offset. The new GPS bias measurement is now

b_{gps} = y''_{GPS} - y_{cam} \quad (4.10)

4.6 Experimental data

In this section, experimental data showing that the bias measurement is feasible is presented. The test bed is equipped with both GPS and differential GPS (DGPS) capability and a lane departure warning (LDW) system. The GPS/DGPS data is received via an RS-232 serial link by an on-board data acquisition PC. The LDW data is logged via a PCI card through the vehicle's CAN (controller area network) bus. The test-bed and experimental hardware and software are described in more detail in Appendix C.

Some of the plots presented in this section show GPS/LDW data overlaid on the mapped lane lines of the test track. GPS data and LDW data are combined to produce an estimated lane marker position. The true (mapped) lane marker positions are placed on the same graphs. The difference between the LDW/GPS right (or left) lane marker and the true right (or left) lane marker represents the error described in the previous section. Following each of these plots are graphs showing the error between the LDW offset and the GPS/MAP offset. This calculation was described in the previous section. First, the method for producing the LDW/GPS estimated lane marker is presented.

4.6.1 GPS/LDW lane estimate

A diagram of the car as it would appear driving around the northeast corner of the NCAT track can be seen in Figure 4.7. The measurements o_l and o_r (offset-left and offset-right) are provided by the LDW camera. The angles \alpha and \beta can be computed from the GPS course reading (given by Starfire or Sapphire).
From these two measurements, the points (a_r, b_r) and (a_l, b_l) can be computed.

\alpha = 360^\circ - \text{course} \quad (4.11)
\beta = \text{course} - 270^\circ \quad (4.12)
p_l = o_l \sin\alpha \quad (4.13)
q_l = o_l \cos\alpha \quad (4.14)

Figure 4.7: Estimating lane position from LDW/GPS

a_l = x + p_l \quad (4.15)
b_l = y - q_l \quad (4.16)

and for (a_r, b_r)

p_r = o_r \sin\beta \quad (4.17)
q_r = o_r \cos\beta \quad (4.18)
a_r = x - p_r \quad (4.19)
b_r = y + q_r \quad (4.20)

4.6.2 Data with DGPS/LDW/MAP

Test runs were performed on both the North and South sides of the NCAT test track. The South outside lane has consistent and visible lane markings. The North side inside lane has sections with missing lane markings.

Figure 4.8 shows DGPS/LDW data logged on the South outside lane. The true lane markers are plotted along with the estimated lane markers computed from the DGPS and LDW data. The LDW and DGPS/MAP offset measurements and the associated error are shown in Figure 4.9. The error between the two measurements is low due to the high accuracy of the DGPS receiver and the quality of the lane lines.

Data logged on the North inside lane can be seen in Figure 4.10. The offset measurements and associated error can be seen in Figure 4.11. There are several cases where the camera system does not pick up any lane lines. This failure is indicated by an offset measurement of 8 meters. Also, on the inside lane marker, there is an incorrect LDW measurement of the lane marker. In this section of track there is no lane marker, but the camera is outputting an offset measurement. This indicates that the LDW is detecting the road edge and considering it the lane boundary.

4.6.3 Data with GPS/LDW/MAP

GPS/LDW data logged on the same sections of track that were used in the previous section is presented in Figures 4.12 through 4.15. The LDW failures that occurred in the previous test with DGPS/LDW occurred again in this test. The error with the GPS/LDW is much larger than that of the DGPS/LDW test run. Also, there are sudden jumps in the GPS position. The jumps range from 2-5 meters. This is representative of the low accuracy of an ordinary GPS receiver versus one with differential corrections.
This type of receiver is what would typically be installed on a vehicle equipped with a navigation system. The large error verifies that the vehicle could not be located to within a particular highway lane with this type of GPS receiver.

Figure 4.8: DGPS/LDW Estimated Lanes vs Mapped Lanes (South)
Figure 4.9: Left and right error between DGPS/MAP offset and LDW offset
Figure 4.10: DGPS/LDW Estimated Lanes vs Mapped Lanes (North)
Figure 4.11: Left and right error between DGPS/MAP offset and LDW offset
Figure 4.12: GPS/LDW Estimated Lanes vs Mapped Lanes (South)
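The two geometric computations used in this chapter, the heading-corrected GPS offset and bias measurement of equations (4.9) and (4.10) and the lane-marker position estimate of equations (4.11) through (4.20), can be collected into a short MATLAB sketch. This is illustrative only: the function and variable names are not taken from the test-bed software, and angles are assumed here to be supplied in degrees.

function [b_gps, markers] = bias_and_lane_sketch(y_gps, y_cam, psi, h, ...
                                                 E, N, course, o_l, o_r)
% Illustrative sketch of the Chapter 4 geometry (not the test-bed code).
%   y_gps    : lateral offset computed from the GPS position and the map (m)
%   y_cam    : lateral offset reported by the LDW camera (m)
%   psi      : vehicle angle relative to the lane, from the LDW camera (deg)
%   h        : spacing between the GPS antenna and the camera (m)
%   E, N     : GPS East/North position of the vehicle, x and y in (4.15)-(4.20) (m)
%   course   : GPS course reading (deg)
%   o_l, o_r : left and right LDW offset measurements (m)

% Heading-corrected GPS offset, eq. (4.9), and bias measurement, eq. (4.10)
y_gps_pp = y_gps*cosd(psi) - (h - y_gps*sind(psi))*tand(psi);
b_gps    = y_gps_pp - y_cam;

% Estimated lane-marker positions from GPS position, course and LDW
% offsets, eqs. (4.11)-(4.20)
alpha = 360 - course;
beta  = course - 270;
a_l = E + o_l*sind(alpha);   b_l = N - o_l*cosd(alpha);
a_r = E - o_r*sind(beta);    b_r = N + o_r*cosd(beta);
markers = [a_l b_l; a_r b_r];   % rows: [left marker; right marker] in East/North
end

Applied sample by sample to logged GPS, course and LDW data, a sketch of this form would produce the estimated lane markers and bias measurements plotted in the figures of this section.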
Figure 4.13: Left and right error between GPS/MAP offset and LDW offset
Figure 4.14: GPS/LDW Estimated Lanes vs Mapped Lanes (North)
Figure 4.15: Left and right error between GPS/MAP offset and LDW offset

Chapter 5
GPS/INS/Vision for Highway Lane Tracking

5.1 Introduction

In this chapter a vehicle advanced driver assistance system that uses a global positioning system (GPS), inertial navigation sensors (INS), a highway map and a camera system for highway lane tracking is presented. The primary goal of this system is to support traditional vision-based lane boundary detection systems by adding additional sensors and algorithms to estimate the vehicle's lateral offset with respect to the lane of travel. Vision-based lane departure warning (LDW) systems experience many performance limitations (highlighted in Section 2.2.5). The addition of GPS, inertial sensors and a high accuracy map will allow lane tracking to continue when the primary camera system fails.

The proposed system uses a Kalman filter to estimate the lateral offset of the vehicle with respect to the lane. When the camera system is active, a direct measurement of this offset is available. Another offset measurement is available by comparing the GPS position of the vehicle to a lane-level roadway map. The difference between these two measurements represents the bias in the GPS receiver. With the GPS bias known, the GPS position error is reduced significantly, allowing the vehicle to be located within a particular lane. A state for the GPS bias is added to the Kalman filter to allow the bias to be estimated when the camera system fails. This allows lane tracking to be continued under camera failure.

5.2 Simulation

In this section, a simulation of a GPS/INS/MAP assisted lane departure warning system is performed. A simple vehicle model is used to generate simulation data. A Kalman filter is used to estimate critical parameters for the lane tracking simulation. First, a simulation is run under good conditions for the camera to detect the lane lines. A second simulation is then run in which the camera experiences typical failures.
5.2.1 Simple vehicle model

The vehicle model used for this simulation takes into account the vehicle's velocity, heading and steering angles. Using the steering angle as the input, the vehicle's East coordinate, North coordinate and heading angle can be determined. The differential equations for the model are

\dot{E} = v\sin(\psi) \quad (5.1)
\dot{N} = v\cos(\psi) \quad (5.2)
\dot{\psi} = \frac{v}{l}\tan(\delta) \quad (5.3)

where v is the vehicle velocity, \psi is the vehicle heading, \delta is the steering angle and l is the vehicle wheelbase. Under the assumption of small steering and heading angles, the model becomes

\dot{E} = v\psi \quad (5.4)
\dot{N} = v \quad (5.5)
\dot{\psi} = \frac{v}{l}\delta \quad (5.6)

and the corresponding state space model is

\begin{bmatrix} \dot{\psi} \\ \dot{E} \\ \dot{N} \end{bmatrix} =
\begin{bmatrix} 0 & 0 & 0 \\ v & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}
\begin{bmatrix} \psi \\ E \\ N \end{bmatrix} +
\begin{bmatrix} \frac{v}{l} & 0 \\ 0 & 0 \\ 0 & v \end{bmatrix}
\begin{bmatrix} \delta \\ 1 \end{bmatrix} \quad (5.7)

Model data is generated for a vehicle driving on a north-bound highway at 50 mph (80 kph) for 180 seconds. The vehicle drives approximately 4 km over the simulation. A small amplitude sinusoidal steer angle \delta is used to generate a slow oscillation within the lane. Figure 5.1 shows the vehicle track within a simulated highway lane. The red lines represent the lane markers and the blue lines represent the vehicle path. The simulated vehicle data will be used to derive Kalman filter input data and measurement data.

Figure 5.1: Simulated vehicle path

5.2.2 Inputs and measurements

The Kalman filter estimation will use inputs generated from the simulated vehicle model. Two inputs, lateral acceleration (a_y) and yaw rate (r), will be used. These inputs represent actual sensor inputs from an accelerometer and gyro; the lateral accelerometer and yaw gyro are the only inertial sensors necessary for this experiment. To accurately simulate a system that includes the lateral accelerometer and yaw gyro, the inputs must be modeled as sensors which include the true value, a constant offset, a walking bias and Gaussian noise [7], i.e.

a_{accel} = a_y + c_a + b_a + w_a \quad (5.8)
r_{gyro} = r + c_r + b_r + w_r \quad (5.9)

The true lateral acceleration, a_y, and yaw rate, r, can be derived from the simulated vehicle data. Because the vehicle is driving on a due north course, the lateral acceleration can be computed by

a_y = \frac{d^2 E}{dt^2} \quad (5.10)

Similarly, the vehicle's yaw rate can be derived by

r = \frac{d\psi}{dt} \quad (5.11)

Figure 5.2: Actual lateral acceleration and yaw rate

The actual lateral acceleration and yaw rate of the vehicle are shown in Figure 5.2. Simulated accelerometer and gyro sensor readings, as they would appear measuring the true values in Figure 5.2, are presented in Figure 5.3. A constant offset, a walking bias and noise have been added.

The Kalman filter uses three measurements in the estimation. The first measurement is the lateral offset reading from the LDW. Second, the filter has another lateral offset measurement from the GPS comparison with the map database; this includes the true offset and the GPS position bias. The final measurement is heading, which is taken from the GPS course reading.
The corresponding measurement equations are

y_{ldw} = y + \nu_{ldw} \quad (5.12)
y_{gps} = y + b_{gps} + \nu_{gps_y} \quad (5.13)
\psi_{gps} = \psi + \nu_{gps_\psi} \quad (5.14)

Figure 5.3: Simulated accelerometer and gyro readings

The simulated LDW and GPS measurements can be seen in Figure 5.4.

Figure 5.4: Simulated LDW and GPS measurements

5.2.3 Estimation model

For the Kalman filter estimation, a six state model is chosen. The states of interest are lateral velocity, lateral offset, heading and the sensor biases. The state vector is

x = \begin{bmatrix} V_y & y & \psi & b_a & b_r & b_g \end{bmatrix}^T \quad (5.15)

which has the dynamics

\begin{bmatrix} \dot{V}_y \\ \dot{y} \\ \dot{\psi} \\ \dot{b}_a \\ \dot{b}_r \\ \dot{b}_g \end{bmatrix} =
\begin{bmatrix} a_y - b_a - c_a + w_a \\ V_y \\ r - b_r - c_r + w_r \\ w_{b_a} \\ w_{b_r} \\ -\frac{1}{\tau}\,b_g + w_{gps} \end{bmatrix} \quad (5.16)

The gyro and accelerometer biases are modeled as random walks, and the GPS receiver bias is modeled as a first order Markov process with time constant \tau. The constant offsets for the accelerometer and gyro must be known in this model. To determine these offsets in a real world scenario, static data would be taken and the mean value of the data would be considered the constant offset. The system can be expressed in the state space form

\dot{x} = Ax + B_u u + B_w w \quad (5.17)
y = Cx + \nu \quad (5.18)

as

\begin{bmatrix} \dot{V}_y \\ \dot{y} \\ \dot{\psi} \\ \dot{b}_a \\ \dot{b}_r \\ \dot{b}_g \end{bmatrix} =
\begin{bmatrix}
0 & 0 & 0 & -1 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & -1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & -\frac{1}{\tau}
\end{bmatrix}
\begin{bmatrix} V_y \\ y \\ \psi \\ b_a \\ b_r \\ b_g \end{bmatrix} +
\begin{bmatrix}
1 & 0 & -c_a \\
0 & 0 & 0 \\
0 & 1 & -c_r \\
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 0
\end{bmatrix}
\begin{bmatrix} a_y \\ r \\ 1 \end{bmatrix} +
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix} w_a \\ w_r \\ w_{b_a} \\ w_{b_r} \\ w_g \end{bmatrix} \quad (5.19)

\begin{bmatrix} y_{ldw} \\ y_{gps} \\ \psi_{gps} \end{bmatrix} =
\begin{bmatrix}
0 & 1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0 & 0 & 0
\end{bmatrix}
\begin{bmatrix} V_y \\ y \\ \psi \\ b_a \\ b_r \\ b_g \end{bmatrix} +
\begin{bmatrix} \nu_{ldw} \\ \nu_{gps_y} \\ \nu_{gps_\psi} \end{bmatrix} \quad (5.20)

For this experiment, the constant offset for the accelerometer is chosen to be 0.5 m/s^2, the constant offset for the gyro is chosen to be 0.1 deg/s, and the time constant for the GPS bias is chosen to be 1 hour. Variances for the process noise are chosen to be representative of typical automotive grade accelerometers and gyros [7]:

\sigma^2_{a_y} = 0.001 \quad (5.21)
\sigma^2_{r} = 0.05 \quad (5.22)
\sigma^2_{b_a} = 0.0012 \quad (5.23)
\sigma^2_{b_r} = 0.005 \quad (5.24)
\sigma^2_{b_{gps}} = 0.01 \quad (5.25)

The standard deviation of the error on the test-bed LDW system is not known exactly; it is assumed to operate at the minimum requirements given by the Federal Motor Carrier Safety Administration's required specifications for LDW systems [1]. The approximate standard deviations of the GPS position measurement and course measurement are known from static testing:

\sigma^2_{ldw} = 0.01 \quad (5.26)
\sigma^2_{gps_y} = 1 \quad (5.27)
\sigma^2_{gps_\psi} = 0.05 \quad (5.28)

The Q_c and R_d matrices are

Q_c = \mathrm{diag}\bigl(\sigma^2_{a_y},\ \sigma^2_{r},\ \sigma^2_{b_a},\ \sigma^2_{b_r},\ \sigma^2_{b_{gps}}\bigr) \quad (5.29)

R_d = \mathrm{diag}\bigl(\sigma^2_{LDW},\ \sigma^2_{gps_y},\ \sigma^2_{gps_\psi}\bigr) \quad (5.30)
Observability check

Before the estimation can be performed, the observability of the proposed estimation model must be checked. The eigenvalues of the system matrix are

\mathrm{eig}(A) = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & -0.0003 \end{bmatrix}^T \quad (5.31)

For the system to be observable, it must pass the Hautus test, i.e.

\mathrm{rank}\begin{bmatrix} sI - A \\ C \end{bmatrix} = 6 \quad (5.32)

for all eigenvalues s. For each eigenvalue, the matrix is full rank; therefore the system is observable.

5.2.4 Simulation results

The results of the estimation simulation are shown in Figures 5.5 and 5.6, and the state estimation residuals can be seen in Figure 5.7. All states are estimated with only a small amount of estimation error present. The most important state estimate is the GPS bias. With this bias estimated well, the accuracy of the GPS receiver has been improved to the level of accuracy used to create the map database.

5.2.5 Simulating a lane detection failure

Lane detection failures are a problem in vision based lane departure warning systems. Many factors can contribute to a lane detection failure; these conditions are highlighted in Section 2.2.5. A lane detection failure occurs when the LDW wrongly classifies an edge as a lane marker, or when no lane is detected at all. Therefore, the simulation is modified to include periods where the LDW has a lane detection failure. The 180 second simulation performed in the previous section is modified to undergo a 30 second period where no lane is detected and a 30 second period where the road edge is classified as the lane boundary instead of the lane marker.

During the first 30 second outage, the offset measurement defaults to the maximum offset value, which is eight meters. Many systems, such as the LDW used on the experimental test bed, also have a warning flag that indicates when a lane is not detected. Either of these indicators can be used to alert the GPS/INS/Vision LDW that a lane is not detected. When the system is alerted, the C matrix is augmented so the Kalman filter will not use the offset measurement to update the state estimation. However, in the second outage, where the lane is detected incorrectly, the system cannot be directly alerted that the lateral offset measurement is bad.
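The estimation model of Section 5.2.3 and the checks described above can be assembled in a few lines of MATLAB. The sketch below is illustrative only: the matrices and numerical values follow equations (5.19) through (5.30), the Hautus test follows equation (5.32), and the final lines show one way to realize the C-matrix augmentation used when the LDW reports that no lane is detected.

% Six-state lane tracking model of Section 5.2.3 (illustrative sketch).
% State vector x = [Vy; y; psi; b_a; b_r; b_g]
tau = 3600;                 % GPS bias time constant: 1 hour (s)
c_a = 0.5;                  % accelerometer constant offset (m/s^2)
c_r = 0.1;                  % gyro constant offset (deg/s)

A  = [0 0 0 -1  0  0;
      1 0 0  0  0  0;
      0 0 0  0 -1  0;
      0 0 0  0  0  0;
      0 0 0  0  0  0;
      0 0 0  0  0 -1/tau];
Bu = [1 0 -c_a;
      0 0  0;
      0 1 -c_r;
      0 0  0;
      0 0  0;
      0 0  0];              % input u = [a_y; r; 1]
C  = [0 1 0 0 0 0;          % y_ldw   = y
      0 1 0 0 0 1;          % y_gps   = y + b_g
      0 0 1 0 0 0];         % psi_gps = psi

Qc = diag([0.001 0.05 0.0012 0.005 0.01]);   % eqs. (5.21)-(5.25)
Rd = diag([0.01 1 0.05]);                    % eqs. (5.26)-(5.28)

% Observability: Hautus test at every eigenvalue of A, eq. (5.32)
n = size(A, 1);
for s = eig(A).'
    assert(rank([s*eye(n) - A; C]) == n, 'system not observable');
end

% When the LDW flags a lane-detection failure, one way to keep the filter
% from using the offset measurement is to zero the corresponding row of C
% before computing the measurement update:
C_outage = C;
C_outage(1, :) = 0;

Discretization of A, Bu and Qc at the sensor sample rate would still be required before running the filter; that step is not detailed in the text and is omitted from the sketch.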
Figure 5.5: Estimates for states 1 through 3 (actual and estimated lateral velocity, lateral offset and heading)
Figure 5.6: Estimates for states 4 through 6 (actual and estimated accelerometer, gyro and GPS biases)
Figure 5.7: State estimation residuals
Figure 5.8: Lateral offset estimation with simulated camera outage

The results from the system simulation that undergoes the vision-based lane tracking failures are shown in Figure 5.8. During the first 30 second outage, the GPS/INS/MAP/Vision LDW continues to estimate the vehicle lateral offset even though the camera is not detecting the lane boundary marker. However, when the camera system detects the lane boundary incorrectly, the estimated lateral offset follows the incorrect offset measurement. This behavior is uncorrectable as long as the C matrix is not modified; the direct measurement of the lateral offset causes the estimation to fail. One possible solution for this problem is to place an external algorithm on the offset measurement that checks for sudden changes in the lane offset. Another possibility is to use an algorithm that uses the map database to cross-check the offset, to make sure that the deviation between the camera lateral offset and the GPS/MAP lateral offset is not above some threshold. However, this requires that the GPS bias has been estimated and removed a priori.

5.2.6 Addition of a vision-based velocity sensor

A possible enhancement to the GPS/INS/MAP/Vision LDW is the addition of a vision-based velocity sensor. This sensor gives the Kalman filter a direct measurement of lateral velocity. Optical flow methods are used to provide a measurement of vehicle velocity; the horizontal component is extracted and used as a direct measurement of the lateral velocity state. This lateral velocity computation is independent of the lane detection algorithm, so the velocity measurement is still available under a lane detection failure. The C matrix is now
C = \begin{bmatrix}
0 & 1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 & 0
\end{bmatrix} \quad (5.33)

The results from the simulation with the added vision-based velocity sensor can be seen in Figure 5.9. Under the first 30 second outage, where the camera does not detect the lane boundary marker, the estimated lateral offset is much more accurate than the estimated offset without the vision-based velocity sensor. The estimation during the 30 second outage for the system with and without the vision velocity sensor is presented in Figure 5.10. The standard deviation of the estimation error during the camera outage (\sigma_1 = 0.2278) has been reduced significantly (\sigma_2 = 0.0611). The vision based velocity sensor does not affect the estimated lateral offset during the second outage; the Kalman filter is still being updated by the inaccurate offset measurement.

Figure 5.9: Lateral offset estimation with simulated camera outage (with vision-velocity sensor)
Figure 5.10: Estimation error of lateral offset during lane detection failure, without and with the added velocity sensor

5.3 Conclusions

In this chapter, a lane tracking system utilizing GPS/INS, a map database and a lane departure warning system has been presented. The system uses a Kalman filter to estimate the lateral offset of a vehicle with respect to a highway lane boundary marker. As traditional vision-based lane detection systems fail due to environmental and highway conditions, the proposed system can keep track of a vehicle's relative lateral offset despite camera failure.

Chapter 6
Conclusions

6.1 Successes of this Research

A major success of this research is the demonstration of how the inherent bias in a GPS receiver can be measured and removed easily. This measurement is made possible by using the offset measurement of an LDW vision system together with a GPS receiver and a high accuracy map. With this bias removed, the accuracy of the GPS receiver is increased to the accuracy level used in creating the map. This research has demonstrated the feasibility of this bias estimation with experimental data taken from a test-bed. In this case the test-bed is an actual consumer-available automobile, further demonstrating that this method can be implemented on current automobiles that already include all of the required hardware.

Building on the success of the GPS bias measurement, the performance of the proposed lane tracking system is also a major success. The suggested system adds significant performance to the vision-based lane departure warning systems that are currently being deployed in the automobile industry. The environmental constraints that limit these vision-based systems will only affect the vision sensor itself. When the vision offset measurement fails, the Kalman filter continues to estimate the lateral offset. The lateral offset estimation is a sufficient backup to the LDW vision system until it resumes vision-based lane tracking.

6.2 Suggestions for further research

The next step that should be taken in this research is experimental validation.
Multiple data collection experiments should be conducted that simulate real-world highway conditions. Test-bed data collected from the vehicle must be run through the lane tracking estimation. Assessing how the system performs under LDW lane detection failure is the key issue in establishing the approach's validity.

In the proposed lane tracking method, only one lateral offset is estimated and only one lateral offset measurement from the LDW camera is used. The left and right offset measurements do not always fail simultaneously, so making use of the other measurement would enhance performance. The first step would be to add both a left and a right lateral offset state. Next, using the lane width known from the map, a redundant LDW measurement can be taken for each state: if the lane width is known, then the left offset can be determined from the measurement of the right offset and vice versa.

Another area for further research is utilizing the available angle measurement from the LDW system. The LDW camera computes the angle between the vehicle's heading and the centerline of the detected lane. This measurement could be combined with the high accuracy map to create a vehicle heading in East, North and Up (ENU) coordinates that would be a backup to the GPS course measurement. Under a GPS outage, the vehicle's heading could still be estimated by using the angle measurement from the LDW system.

One particular LDW failure that is not corrected by the proposed lane tracking system is the case when the LDW camera system falsely detects a lane marker and the resulting offset measurement is inaccurate. In the current setup, the Kalman filter has no way of determining whether the measurement is good or bad; it will therefore continue to use the inaccurate measurement to update the estimated states. Since this is a typical LDW failure, research must be done to develop a method to compensate for this situation. One possible solution is to watch for sudden jumps in the lateral offset. If the offset measurement changes quickly, this could indicate that the LDW is not detecting the lane marker accurately. Under this condition, the offset measurement can be disabled until it is determined that the LDW is computing the offset correctly. This method could possibly involve a map matching algorithm between GPS (with the estimated bias removed) and the high accuracy map.

Another area of possible further interest is the actual implementation of the vision-based velocity sensor. It was demonstrated in Chapter 5 that this sensor improved performance significantly when the LDW system failed to detect a lane marker. This would require adding a different vision system to the test bed that would give access to image and video data. This would also allow testing of in-house lane detection algorithms (using edge or corner detection and optical flow) and comparing their performance to the LDW vision system.

On a larger scale, this research could become part of a larger experiment in which the goal is autonomous lane keeping. The lane tracking estimation could be combined with a control system to keep the vehicle in the lane when the driver fails to do so. This research could therefore bridge the gap between lane departure warning and lane departure prevention.

Bibliography

[1] Concept of operations and voluntary operational requirements for lane departure warning systems (LDWS) on-board commercial motor vehicles. Technical report, Federal Motor Carrier Safety Administration, July 2005.

[2] J. Canny. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8(6):679-698, November 1986.
[3] Axel Gern, Rainer Moebus, and Uwe Franke. Vision-based lane recognition under adverse weather conditions using optical flow. IEEE Intelligent Vehicle Symposium, 2:652-657, June 2002.

[4] C. J. Harris and M. Stephens. A combined corner and edge detector. 4th Alvey Vision Conference, Manchester, pages 147-151, 1988.

[5] B. K. P. Horn and B. G. Schunck. Determining optical flow. Artificial Intelligence, 17:185-203, 1981.

[6] Pau-Lo Hsu, Hsu-Yuan Cheng, Bol-Yi Tsuei, and Wen-Jing Huang. The adaptive lane-departure warning system. Proceedings of the 41st SICE Annual Conference, 5:2867-2872, August 2002.

[7] Warren S. Flenniken IV. Modeling inertial measurement units and analyzing the effect of their errors in navigation applications. Master's thesis, Auburn University, December 2005.

[8] Jin Wang, Stefan Schroedl, Klaus Mezger, Roland Ortloff, Armin Joos, and Thomas Passegger. Lane keeping based on location technology. IEEE Transactions on Intelligent Transportation Systems, 6(3):351-356, September 2005.

[9] Claudio Rosito Jung and Christian Roberto Kelber. A lane departure warning system using lateral offset with uncalibrated camera. Proceedings of the 8th International IEEE Conference on Intelligent Transportation Systems, pages 348-353, September 13-16, 2005.

[10] David J. LeBlanc, Gregory E. Johnson, and Raul J. Th. Venhovens. CAPC: A road-departure prevention system. Control Systems Magazine, IEEE, 16(6):61-71, December 1996.

[11] Sukhan Lee, Woong Kwon, and Jae-Won Lee. A vision based lane departure warning system. Proceedings of the 1999 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 160-165, 1999.

[12] H. Moravec. Obstacle avoidance and navigation in the real world by a seeing robot rover. Tech report CMU-RI-TR-3, Carnegie-Mellon University, Robotics Institute, September 1980.

[13] Philip Torr. MATLAB Structure from Motion Toolkit. http://www.mathtools.net/matlab/image_processing/, June 2002.

[14] Christopher K. H. Wilson, Seth Rogers, and Shawn Weisenburger. The potential of precision maps in intelligent vehicles. IEEE Intelligent Transportation Systems, October 1998.

[15] Gengsheng Zhang and Christopher Wilson. An integrated DGPS/DR/map system for vehicle safety applications. Proceedings of the 2000 ION National Technical Meeting, pages 253-257, January 26-28, 2000.

Appendices

Appendix A
Experimental Data for Lane Tracking Estimation

In this appendix, data collected from the test-bed is presented. Test-bed data collected using the LDW system and a differential GPS (DGPS) receiver is presented in Section A.1. The data is then used for experimental testing of the lane tracking estimation method proposed in Chapter 5. Test-bed data collected using the LDW system and a regular GPS receiver is presented in Section A.2, along with results from the lane tracking estimation using this GPS/LDW data.

A.1 Experimental Data - Starfire DGPS receiver & LDW

The path of the vehicle along the South side straightaway in the outside lane is shown in Figure A.1. The mapped left and right lane markers are also plotted. A sinusoidal path was driven to create more dynamic lateral offset data than would normally be created by driving on a straight path. This allows the changes in the lateral offset to be viewed more easily.
The left and right lateral offset readings from the LDW system, as well as the computed left and right lateral offsets from the GPS receiver and map, can be seen in Figure A.2. The error between the two offsets represents the bias measurement that is available for the lane tracking estimation, which can be seen in Figure A.3. Sensor data from the vehicle's lateral accelerometer and yaw gyro can be seen in Figure A.4. These sensors are used as inputs to the lane tracking Kalman filter. These measurements appear very quantized. This is a result of the low-level vehicle dynamics during data generation. The vehicle is driving relatively straight, with only a very slow oscillation within the lane. Therefore, the small changes in lateral acceleration and yaw rate are not visible because of the quantization levels in the analog to digital conversion.

Figure A.1: Vehicle track, NCAT South Straight section
Figure A.2: Respective offset measurements
Figure A.3: Bias measurement (DGPS/LDW, South straightaway)

Data collected on the inside lane of the North straightaway of the NCAT test track is presented in Figures A.5 through A.8. The same data collected on the South side is also collected on the North side. However, on the North side the lane markings are inconsistent, and the LDW system experiences several instances where the lanes are not detected. For a significant portion of this track section, the left lane markings are not present, which results in the LDW system detecting the road edge.
Figure A.4: Accelerometer and gyro data
Figure A.5: Vehicle track, NCAT North Straight section
Figure A.6: LDW and GPS/MAP offset measurements
Figure A.7: Respective offset measurements
Figure A.8: Bias measurement (DGPS/LDW, North straightaway)
Figure A.9: Accelerometer and gyro data

A.1.1 Lane Tracking Estimation with Starfire (DGPS)/LDW

Using the data collected in the previous section, the lane tracking Kalman filter presented in Chapter 5 is used for lateral offset estimation with actual sensor data taken from the test bed. Data from the North and South sides of the test track are run separately as two different experiments.

LTE: South side of NCAT

During data collection on the South section of the track, the LDW system experienced no lane detection failures or false detections. Therefore, a lane detection failure is simulated by augmenting the offset measurement between 115 and 120 seconds to 8 meters, emulating an instance where the LDW system has not detected a lane marker. For this experiment, the left lateral offset is used.

Figure A.10 shows the input and measurement data for the Kalman filter. The measured and estimated GPS bias is shown in Figure A.11. The measured, estimated and actual lateral offset is plotted in Figure A.12. Since the actual camera measurement has been augmented to simulate a lane detection failure, the actual measurement is considered to be the true lateral offset of the vehicle. Under the simulated failure, the Kalman filter estimates the lateral offset until the LDW system resumes lane detection. The estimation error during the LDW lane detection outage is displayed in Figure A.13.

LTE: North side of NCAT

As discussed previously, the LDW system experienced several lane detection failures during data collection on the North side of the test track. For this experiment the data is
For this experiment the data is 93 110 112 114 116 118 120 122 124 126 128 130?1 ?0.5 0 0.5 1 1.5 time (s) lateral acceleration (m/s 2 ) KF input ? lateral accel (a) Lateral accelerometer 110 112 114 116 118 120 122 124 126 128 130?1.5 ?1 ?0.5 0 0.5 1 1.5 2 2.5 time (s) yaw rate (deg/s) KF input ? yaw gyro (b) Yaw gyro 112 114 116 118 120 122 124 126 128 130?2 0 2 4 6 8 10 time (s) lateral offset (m) KF Measurement ? lateral offset LDW GPS/MAP (c) LDW & GPS/MAP offsets Figure A.10: Sensor inputs and LDW & GPS/MAP offset measurements 94 110 112 114 116 118 120 122 124 126 128 130?7 ?6 ?5 ?4 ?3 ?2 ?1 0 1 time (s) gps bias (m) Estimated and measured gps bias bgmeas bghat Figure A.11: Estimated and measured GPS bias 112 114 116 118 120 122 124 126 128 130?2 0 2 4 6 8 10 time (s) lateral offset (m) Actual, measured and estimated lateral offset true offset (LDW) camera measurement estimated offset Figure A.12: Estimated and measured lateral offset 95 115 115.5 116 116.5 117 117.5 118 118.5 119 119.5 120?0.5 ?0.4 ?0.3 ?0.2 ?0.1 0 0.1 0.2 0.3 0.4 0.5 time (s) error (m) Estimation error (sigma = 4.43e?002) during LDW outage Figure A.13: Estimation error during camera outage run through the lane tracking Kalman Filter as it is collected, with actual lane detection failures. The right lateral offset is used in this experiment. The sensor inputs and measurements are presented in Figure A.14. The GPS bias measurement and estimate can be seen in Figure A.15. The portion of collected data used in this experiment captures two lane detection failures. A 1 second loss of the lane markers occurs at approximately 177s and a larger 3 second loss occurs between 183 and 186 seconds. Under both lane detection outages, the Kalman Filter continues to estimate the lateral offset with only a small amount of estimation noise. A.2 Experimental Data - GPS receiver & LDW Data collected on the South side of NCAT using a UBLOX GPS receiver is shown in Figures A.17 through A.19. Data collected on the North side of NCAT can be seen in 96 170 172 174 176 178 180 182 184 186 188 190?1 ?0.8 ?0.6 ?0.4 ?0.2 0 0.2 0.4 0.6 0.8 1 time (s) lateral acceleration (m/s 2 ) KF input ? lateral accel (a) Lateral accelerometer 170 172 174 176 178 180 182 184 186 188 190?1 ?0.5 0 0.5 1 1.5 2 2.5 3 time (s) yaw rate (deg/s) KF input ? yaw gyro (b) Yaw gyro 170 172 174 176 178 180 182 184 186 188 190?2 0 2 4 6 8 10 time (s) lateral offset (m) KF Measurement ? lateral offset LDW GPS/MAP (c) LDW & GPS/MAP offsets Figure A.14: Sensor inputs and LDW & GPS/MAP offset measurements 97 170 172 174 176 178 180 182 184 186 188 190?8 ?7 ?6 ?5 ?4 ?3 ?2 ?1 0 1 time (s) gps bias (m) Estimated and measured gps bias bgmeas bghat Figure A.15: Estimated and measured GPS bias 170 172 174 176 178 180 182 184 186 188 190?2 0 2 4 6 8 10 time (s) lateral offset (m) Measured and estimated lateral offset camera measurement estimated offset Figure A.16: Estimated and measured lateral offset 98 ?800 ?700 ?600 ?500 ?400 ?300 ?200 ?100 0 100?320 ?315 ?310 ?305 ?300 ?295 East (m) North (m) Vehicle track, South straightaway at NCAT, outside lane Vehicle track Mapped lane markings Figure A.17: Vehicle track, NCAT South Straight section Figures A.20 through A.23. As seen in the previous experiment, no LDW failures were recorded on the South side, while several failures along with an incorrect lane detection were recorded on the North side. Yaw gyro data and lateral accelerometer data are not shown due to the similarity to data presented in the previous section. 
A.2.1 Lane Tracking Estimation with UBLOX (GPS)/LDW

As in the previous section, this GPS/LDW data is run through the lane tracking estimation method developed in Chapter 5. First, the LTE is run with data collected on the South side of the test track; then data collected on the North side is used for lane tracking estimation.

[Figure A.18: Respective offset measurements. (a) Left offsets; (b) Right offsets.]
[Figure A.19: Offset error (GPS bias measurement) (GPS/LDW, South straightaway). Left and right error.]
[Figure A.20: Vehicle track, NCAT North Straight section. Vehicle track and mapped lane markings, East (m) vs. North (m).]
[Figure A.21: LDW and GPS/MAP offset measurements. (a) LDW left and right offsets; (b) GPS/MAP left and right offsets.]
[Figure A.22: Respective offset measurements. (a) Left offsets; (b) Right offsets.]
[Figure A.23: Bias Measurement (GPS/LDW, North straightaway). Left and right error.]
[Figure A.24: Sensor inputs and LDW & GPS/MAP offset measurements. (a) Lateral accelerometer; (b) Yaw gyro; (c) LDW & GPS/MAP offsets.]

LTE: South side of NCAT

No lane detection failures occurred during this run, so a camera outage is simulated by augmenting the camera measurement as before; the actual camera measurement is treated as the true lateral offset. The sensor data used in the Kalman Filter is shown in Figure A.24. The estimated and measured GPS bias can be seen in Figure A.25, the actual and estimated lateral offset in Figure A.26, and the estimation error during the simulated camera outage in Figure A.27.
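The sigma value quoted in the captions of Figures A.13 and A.27 summarizes the estimation error over the simulated outage window. A short MATLAB sketch of that calculation is shown below; the window limits and variable names are illustrative, and the figure-caption value is assumed here to be the sample standard deviation of the error.

    % Estimation error statistics over the simulated camera outage.
    %   t      - time vector (s)
    %   y_true - unaltered camera offset, treated as truth (m)
    %   y_hat  - Kalman Filter lateral offset estimate (m)
    outage = (t >= 160) & (t <= 165);          % example outage window
    err    = y_true(outage) - y_hat(outage);   % estimation error during the outage
    sigma  = std(err);                         % assumed definition of the reported sigma
    fprintf('Estimation error sigma during outage: %.3e m\n', sigma);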
[Figure A.25: Estimated and measured GPS bias.]
[Figure A.26: Estimated and measured lateral offset. True offset (LDW), camera measurement, and estimated offset.]
[Figure A.27: Estimation error during camera outage (sigma = 1.26e-001).]

LTE: North side of NCAT

During data collection on the North side of NCAT, the LDW system experienced two lane detection failures and one false lane detection when attempting to detect the left lane marker and determine the left lateral offset. The sensor data collected in this experiment is plotted in Figure A.28; the LDW system failures can be seen in sub-figure A.28(c). The estimated GPS bias from the lane tracking Kalman filter is shown in Figure A.29, and the estimated and measured lateral offset can be seen in Figure A.30. The key observation from this experiment is that the Kalman filter provides a reliable lateral offset estimate during the LDW lane detection outages.

[Figure A.28: Sensor inputs and LDW & GPS/MAP offset measurements. (a) Lateral accelerometer; (b) Yaw gyro; (c) LDW & GPS/MAP offsets.]
[Figure A.29: Estimated and measured GPS bias.]
[Figure A.30: Estimated and measured lateral offset. Camera measurement and estimated offset.]

A.3 Conclusions

This appendix has presented experimental data validating the lane tracking estimation approach of Chapter 5, with data sets for both DGPS/LDW and GPS/LDW. Each data set consisted of two test runs: one with good lane markings, and another with poor lane markings that resulted in lane detection failures. A camera outage was simulated in the cases where no lane detection failures occurred. Using the collected inertial sensor data from the yaw gyro and lateral accelerometer as inputs, and the LDW offset, GPS/MAP offset and GPS heading as measurements, the Kalman filter provides a reliable lateral offset estimate. This estimate remains available even under an LDW lane detection failure; therefore, the estimated lateral offset provides a backup measurement that could be used to alert a driver of an unintended lane departure.
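To make the structure of such a filter concrete, the MATLAB sketch below shows one possible discrete-time implementation of a lateral-offset estimator of this kind, including the gating of the camera measurement during a detection outage. The state vector, model matrices, noise values and all variable names (ay, r, y_ldw, y_map, psi_gps, ldw_valid) are placeholders chosen for illustration; they are not the filter of Chapter 5, which defines the actual model and tuning used in these experiments.

    % Generic Kalman filter skeleton for lateral offset estimation (illustrative).
    % Assumed state x = [lateral offset; lateral velocity; heading; GPS bias].
    % Inputs: lateral acceleration ay and yaw rate r (length-N arrays).
    % Measurements: LDW offset, GPS/MAP offset and GPS heading (length-N arrays).
    dt = 0.1;                                   % sample period (s), assumed
    A  = [1 dt 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1]; % simple kinematic model (placeholder)
    B  = [0 0; dt 0; 0 dt; 0 0];                % maps [ay; r] into the state (placeholder)
    C  = [1 0 0 0;                              % LDW lateral offset
          1 0 0 1;                              % GPS/MAP offset, corrupted by GPS bias
          0 0 1 0];                             % GPS heading
    Q  = diag([1e-3 1e-2 1e-3 1e-4]);           % process noise (placeholder tuning)
    R  = diag([0.05 0.5 0.1]);                  % measurement noise (placeholder tuning)

    x = zeros(4,1);  P = eye(4);
    offset_est = zeros(1,N);
    for k = 1:N
        % Time update driven by the inertial sensors
        x = A*x + B*[ay(k); r(k)];
        P = A*P*A' + Q;

        % Measurement update; drop the LDW row when no lane is detected
        z    = [y_ldw(k); y_map(k); psi_gps(k)];
        rows = true(3,1);
        if ~ldw_valid(k)
            rows(1) = false;
        end
        Ck = C(rows,:);  Rk = R(rows,rows);  zk = z(rows);
        K  = P*Ck' / (Ck*P*Ck' + Rk);
        x  = x + K*(zk - Ck*x);
        P  = (eye(4) - K*Ck)*P;

        offset_est(k) = x(1);                   % lateral offset estimate
    end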
Appendix B
A Lane-level Map Database

B.1 Data Collection

To gather GPS data for the NCAT track map, the M45 was outfitted with the Starfire differential GPS receiver. Data points were taken at several different locations along each section of the track. The track is marked every 200 ft; these markers indicate different sections of asphalt. Data points were taken with the middle of the front left tire parked on the intersection of the asphalt section line and the lane marker (see Figure B.1). The collected data is therefore not the exact GPS position of the lane marker; each point must be offset accordingly in order to map the lane marker correctly.

[Figure B.1: Data collection setup]

B.2 Coordinate System

Before generating the map and curve-fitting the data, a coordinate system must be chosen. The NCAT track as it would appear from the sky is shown in Figure B.2. Increasing lines of longitude run from right to left, and increasing latitude (north of the equator) runs from the bottom up; therefore the longitude coordinates (x axis data) also increase from right to left. In order to view the data in a typical (x,y) coordinate system, the track and corresponding data must be viewed as they would appear from inside the Earth (Figure B.3). This creates a normal (x,y) coordinate system that allows for easier curve fitting.

[Figure B.2: NCAT Track - Looking down from sky]
[Figure B.3: NCAT Track - Looking up from inside Earth]

B.3 Straight Sections

The North and South sides of the NCAT track are straightaways, so each lane marker (inside, middle and outside) is modeled as a straight line. Data points were taken for each lane marker at four distinct locations, and a line was then fit to each data set that constitutes a lane marker. Each point along the line must be offset accordingly to match the true GPS location of the lane marker.

B.3.1 Data Conditioning and Curve Fitting

Initial data logged from the Starfire receiver is in a (DDMM.MMMMM) format (D = degrees, M = minutes). Using MATLAB, these coordinates are converted to decimal degrees and then to a measurement in meters, which denotes a distance from a reference point. The reference point for the track was chosen to be the inside lane marker at the N1 asphalt marker. After each logged GPS data point is converted to meters, the set of points for each lane marker is fit to a line. This was done using MATLAB's cftool; a linear fit on the (x,y) data returns a corresponding slope and y-intercept. It is important that the curve fitting is done before the offset is applied, because the slope is used in determining the proper offset.

[Figure B.4: DGPS Receiver to tire offset (north)]

B.3.2 Receiver offset

Figure B.4 is a diagram of the data collection setup as it would appear in the map coordinate system (looking from the ground up). Point (x, y) is the location of the Starfire receiver, and point (x′, y′) is the desired point to be logged (directly under the front left tire). Parameters h and w are known measurements, and the angle α is determined from the known slope of the measured line.

α = tan⁻¹(|slope|)    (B.1)
β = tan⁻¹(h/w)    (B.2)
γ = 90° - α    (B.3)
θ = β + γ    (B.4)

With all of the angles defined, the desired location (x′, y′) can be computed:

d = √(h² + w²)    (B.5)
p = d cos θ    (B.6)
q = d sin θ    (B.7)
x′ = x + p    (B.8)
y′ = y - q    (B.9)

On the south side of the track, the desired line is on the opposite side of the car; Figure B.5 shows the diagram for this case. All angles and lengths are identical to those of the first case, but (x′, y′) now becomes

x′ = x - p    (B.10)
y′ = y + q    (B.11)

[Figure B.5: DGPS Receiver to tire offset (south)]
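The offset of equations (B.1) through (B.11) is straightforward to apply in MATLAB once the fitted slope of each lane-marker line is available. The function below is a sketch using the angle names reconstructed above; the function name, argument order and side flag are illustrative and are not taken from the original map-generation scripts.

    function [xp, yp] = receiver_to_tire(x, y, slope, h, w, southSide)
    % Shift a logged Starfire position (x, y) in map coordinates to the point
    % directly under the front left tire, following equations (B.1)-(B.11).
    %   slope     - slope of the line fitted to the logged lane-marker points
    %   h, w      - measured receiver-to-tire offsets (m)
    %   southSide - true for the South straightaway (marker on the other side)
    alpha = atand(abs(slope));        % (B.1), degrees
    beta  = atand(h / w);             % (B.2)
    gamma = 90 - alpha;               % (B.3)
    theta = beta + gamma;             % (B.4)

    d = sqrt(h^2 + w^2);              % (B.5)
    p = d * cosd(theta);              % (B.6)
    q = d * sind(theta);              % (B.7)

    if ~southSide
        xp = x + p;                   % (B.8)
        yp = y - q;                   % (B.9)
    else
        xp = x - p;                   % (B.10)
        yp = y + q;                   % (B.11)
    end
    end

Calling receiver_to_tire(x, y, slope, h, w, true) for the South straightaway applies equations (B.10) and (B.11) in place of (B.8) and (B.9).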
Appendix C
Test Setup and Instrumentation

C.1 Test Setup

C.1.1 Test Bed/Equipment

The test vehicle for this research is an Infiniti M45 sedan. The car is equipped with a data logging system that reads a multitude of sensors, including the vehicle's on-board sensors over the Controller Area Network (CAN). A Navcom Starfire receiver is utilized for its sub-10-centimeter position accuracy at 5 Hz, and it serves as a truth measurement when analyzing the algorithms and GPS data against experimental data. A Novatel Beeline receiver is used for regular GPS measurements (accuracy < 10 m), which also updates at 5 Hz. Section C.2.3 shows pictures of the vehicle instrumentation.

The M45 was driven on a 1.7 mile oval test track (NCAT) with 8 degree banked turns. The straight sections are crowned 2 degrees in each lane.

C.1.2 Experimental System

Figure C.1 is a diagram of the experimental system. The M45 GUI reads in LDW data from the CAN bus and GPS data from either the Beeline or the Starfire receiver, and it is also set up to receive data from the DAQ board. Data is displayed through the rear LCD screen in the M45, and the important experimental data is logged to a data file.

[Figure C.1: Experimental System]

C.2 Instrumentation

C.2.1 Hardware

Computer

The M45 is instrumented with a PC to serve as a data acquisition and data display machine (Figure C.6). Listed below is the operational configuration of the computer, as well as additional hardware necessary for the GPS/LDW experiment. Pictures of all instrumentation added to the M45 are presented in Section C.2.3.

Configuration:
- Processor: 2.66 GHz Intel Celeron
- Memory: 1 GB RAM
- Storage: 80 GB

Additional components:
- CANBoardXL from Vector-CANtech
- PCIDAS1002 Data Acquisition Board from Measurement Computing
- Beeline GPS (RS-232) (Figures C.14 and C.15)
- Starfire DGPS (RS-232) (Figures C.9, C.10 and C.11)

Power

To power the computer, a 600 W DC/AC power inverter is used. The inverter is wired directly to the car's battery and provides standard 60 Hz AC power. The inverter and 12 volt wiring are shown in Figures C.8 and C.7.

C.2.2 Software

The operating system for the M45 on-board computer is Windows XP Professional. Microsoft Visual Studio (VisualBasic.NET) was used to develop the graphical user interface (GUI), which serves as both a data display and data logger. All experiment hardware (CAN, DAQ and GPS) is initialized and controlled through the M45 GUI. Several screen shots of the M45 GUI can be seen in Figures C.2 through C.5.

C.2.3 Vehicle Instrumentation

[Figure C.2: Main Screen - LDW]
[Figure C.3: Logged Data]
[Figure C.4: GPS Setup/Data]
[Figure C.5: CAN Interface]
[Figure C.6: M45 Data Acquisition Computer]
[Figure C.7: 12V power sources]
[Figure C.8: DC/AC power inverter]
[Figure C.9: Starfire GPS receiver]
[Figure C.10: Starfire on M45]
[Figure C.11: Starfire on M45]
[Figure C.12: CANbus interface]
[Figure C.13: User I/O for M45 computer]
[Figure C.14: Beeline]
[Figure C.15: Beeline and Starfire]
[Figure C.16: Lane Departure Warning camera]