Multi-Modal Computer Vision System for Red Light Running Detection
Date
2025-12-08
Type of Degree
Master's Thesis
Department
Computer Science and Software Engineering
Abstract
Ensuring safety at urban intersections remains a critical challenge, as collisions there can result in severe injuries and fatalities. A primary contributor to these incidents is vehicles running red signals. Current automated systems for detecting this behavior often suffer from high false-positive rates, frequently stemming from imprecise, zone-based detection methods. Furthermore, the efficacy of model-based approaches is typically hindered by the scarcity of specialized datasets capturing red-light-running events. To address these shortcomings, this thesis proposes a novel, dual-model approach that decouples vehicle detection from violation inference. The system runs two models in parallel: a YOLOv10 model for vehicle tracking and a YOLOv11 model for traffic signal recognition. The vehicle model is pretrained, while the traffic signal model is fine-tuned on a custom-annotated dataset (derived from public ALGO traffic cameras) to monitor the state of traffic signals (Red, Yellow, or Green). These models are integrated with a custom logic framework that employs configurable "tripwire" lines at the entry and exit points of each lane. This logic precisely correlates a vehicle's trajectory with the concurrent traffic signal state, allowing for an accurate distinction between vehicles adhering to traffic laws and those committing violations. Because tripwire pairs are saved to an external configuration file, the system can be rapidly deployed and adapted to any intersection, ensuring both scalability and ease of setup. This methodology eliminates the dependency on rare, violation-specific training data and instead provides a robust, effective solution aimed at improving safety for all road users in urban environments.
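
To make the tripwire mechanism concrete, the following is a minimal, illustrative Python sketch rather than the thesis's actual implementation; the configuration layout, the Tripwire fields, the signal_state_at callback, and the crosses/check_violation helpers are all assumptions introduced for illustration.

    # Illustrative sketch of tripwire-based red-light-running logic (hypothetical,
    # not the thesis code). A vehicle that crosses a lane's entry line while the
    # signal is red and later crosses the paired exit line is flagged as a violator.
    import json
    from dataclasses import dataclass

    @dataclass
    class Tripwire:
        lane: str
        entry: tuple   # ((x1, y1), (x2, y2)) line at the stop bar
        exit: tuple    # ((x1, y1), (x2, y2)) line past the far side of the intersection

    def load_tripwires(path):
        # Read per-lane entry/exit line pairs from an external configuration file,
        # so the same code can be redeployed at a new intersection by editing the file.
        with open(path) as f:
            cfg = json.load(f)
        return [Tripwire(lane=t["lane"],
                         entry=tuple(map(tuple, t["entry"])),
                         exit=tuple(map(tuple, t["exit"])))
                for t in cfg["tripwires"]]

    def _ccw(p, q, r):
        # Orientation test used for segment intersection.
        return (r[1] - p[1]) * (q[0] - p[0]) > (q[1] - p[1]) * (r[0] - p[0])

    def crosses(p_prev, p_curr, line):
        # True if the track segment p_prev -> p_curr intersects the tripwire line.
        c, d = line
        return (_ccw(p_prev, c, d) != _ccw(p_curr, c, d)
                and _ccw(p_prev, p_curr, c) != _ccw(p_prev, p_curr, d))

    def check_violation(track, tripwire, signal_state_at):
        # track: list of (timestamp, (x, y)) centroids from the vehicle tracker.
        # signal_state_at: callable returning "red" / "yellow" / "green" at a timestamp,
        # standing in for the output of the traffic-signal model.
        entered_on_red = False
        for (_, p0), (t1, p1) in zip(track, track[1:]):
            if not entered_on_red and crosses(p0, p1, tripwire.entry):
                entered_on_red = (signal_state_at(t1) == "red")
            elif entered_on_red and crosses(p0, p1, tripwire.exit):
                return True   # entered on red and cleared the exit line: violation
        return False

Under these assumptions, adapting the system to a new intersection would only require editing the configuration file with the pixel coordinates of each lane's entry and exit lines, which mirrors the scalability and ease-of-setup argument made in the abstract.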
