Equine Gait Analysis: Body Part Tracking Using DeepLabCut and Mask R-CNN, and Biomechanical Parameter Extraction
Type of Degree: Master's Thesis
Computer Science and Software Engineering
Gait analysis plays a pivotal role in quantitatively defining equine biomechanical parameters for lameness detection and performance evaluation. Current equine gait analysis requires attaching markers or IMU sensors to the horse for motion capture, which is low-throughput and may affect the quality of the horse's movement. This study evaluated the feasibility and utility of deep learning-based video processing as a marker-less motion capture method for equine gait analysis. For that purpose, a video dataset of horses performing their natural locomotion patterns was annotated, comprising 4075 images, each with 21 tagged body landmarks. DeepLabCut and Mask R-CNN models were trained to detect the landmarks in each video, and their performance was compared to determine which landmark detection paradigm achieved higher accuracy. A fine-tuned Mask R-CNN model achieved a lower overall RMSE (30.6) than DeepLabCut (128.4), whereas DeepLabCut achieved a lower RMSE for the x coordinates of the landmark keypoints (20.6 vs. 26.1 for Mask R-CNN). Based on DeepLabCut's x-axis body landmark tracking results, algorithms were developed to extract two biomechanical parameters: stride length and stance time. The proposed post-processing pipeline correctly detected 92% of strides and 95% of stances. The automatically extracted stride length (R2 = 0.80, RMSE = 0.31) and stance time (R2 = 0.81, RMSE = 0.03) showed good agreement with reference measurements. The developed video processing pipeline has promising potential to become a convenient and efficient analytics tool for animal scientists studying the genetic control of equine locomotion and for veterinarians diagnosing musculoskeletal problems.
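The idea of deriving stride length and stance time from a hoof's x-coordinate trajectory can be sketched as follows. This is a minimal illustration, not the thesis's actual post-processing pipeline: it assumes per-frame hoof x-coordinates are available from the tracker, treats a stance as a run of frames where the hoof is nearly stationary, and measures a stride as the x-displacement between consecutive stance onsets. The function name and the stillness threshold are illustrative choices.

```python
import numpy as np

def extract_stride_parameters(hoof_x, fps, still_thresh=1.0):
    """Estimate stance durations and stride lengths from a hoof's
    x-coordinate trajectory (one value per video frame, in pixels).

    A stance is a maximal run of frames where the per-frame x-motion
    stays below still_thresh pixels; a stride spans two consecutive
    stance onsets. Hypothetical sketch, not the thesis's algorithm.
    """
    vx = np.abs(np.diff(hoof_x))          # per-frame x-speed, px/frame
    still = vx < still_thresh             # True while the hoof is grounded
    # Locate onsets (False->True) and offsets (True->False) of stance runs.
    edges = np.diff(still.astype(int))
    onsets = np.where(edges == 1)[0] + 1
    offsets = np.where(edges == -1)[0] + 1
    if still[0]:                          # trajectory starts mid-stance
        onsets = np.insert(onsets, 0, 0)
    if still[-1]:                         # trajectory ends mid-stance
        offsets = np.append(offsets, len(still))
    stance_times = (offsets - onsets) / fps            # seconds per stance
    stride_lengths = np.abs(np.diff(hoof_x[onsets]))   # px between stance onsets
    return onsets, stance_times, stride_lengths

# Illustrative synthetic trajectory: three stances separated by two swing phases.
hoof_x = np.concatenate([
    np.zeros(10), np.linspace(0, 100, 10),
    np.full(10, 100.0), np.linspace(100, 200, 10), np.full(10, 200.0)])
onsets, stance_times, stride_lengths = extract_stride_parameters(hoof_x, fps=30)
# stride_lengths -> two strides of 100 px each
```

Real tracked trajectories would additionally need smoothing and confidence filtering of the keypoint estimates before such thresholding is reliable, and a pixel-to-metre calibration to report stride length in physical units.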