|dc.description.abstract||As agricultural technologies have advanced rapidly over the past decades, animal and plant phenotyping has become a primary research focus. Conventional phenotyping relies heavily on manual measurements, which are labor-intensive, time-consuming, and error-prone. This thesis presents an innovative solution for estimating animal pose and quantifying pine tree architecture traits using 3D stereo machine vision and deep learning techniques.
Equine kinematic gait analysis (EKGA), a central procedure in equine locomotion research, currently requires a complex, costly, and labor-intensive workflow. To measure equine biomechanical parameters automatically, a stereo video processing pipeline was developed and evaluated. DeepLabCut (DLC) was trained on stereo videos of 40 walking horses to detect body landmarks. The detected landmark trajectories were refined with an ARIMA filter, yielding RMSE and MAE values of 5.14 and 4.87 pixels, respectively. As a case study, stride length (SL) and stance duration (SD) were analyzed. A Faster R-CNN model combined with a mode filter was applied to detect the gait phase of individual hooves, yielding precision and recall of 0.83 and 0.95, respectively. A semi-global block matching (SGBM) algorithm was used to estimate depth maps, and accuracy was assessed by comparing estimated head lengths with measurements taken in the field. Bland-Altman analysis of head lengths derived from DLC-detected coordinates combined with SGBM yielded a bias of -0.014 m, with upper and lower limits of agreement (LoA) of 0.03 m and -0.061 m, respectively. Bland-Altman analysis of SD and SL against image-level manual measurements revealed biases of -0.02 s and -0.042 m, with upper and lower LoA of 0.019 s and -0.24 s for SD and 0.04 m and -0.12 m for SL. In summary, the proposed method shows promising potential for performing EKGA in an automated, cost-efficient, and rapid manner.
A similar approach was used to provide a high-throughput 3D phenotyping solution for pine trees. Loblolly pine has long been one of the most important forest trees for saw timber production in the southern United States, and the yield potential of a tree is strongly influenced by its stem and branch characteristics. Economically important traits such as stem straightness, branch angle, and branch diameter are currently measured in progeny trials by visual grading, a low-throughput technique. To phenotype pine architecture, stereo 3D imaging and deep learning were combined. The stem diameter, branch angle, and branch diameter of ten loblolly pine trees from different families were measured manually in a progeny test. An annotated dataset was created by drawing contour polygons on the branches and trunks of each tree of interest, and a pre-trained Mask R-CNN model was fine-tuned and tested on this dataset to segment branches and trunks. The SGBM algorithm was employed to reconstruct the 3D shapes of small trunks and thin branches. The extracted 3D point clouds were further processed using principal component analysis (PCA), random sample consensus (RANSAC), and statistical outlier removal. Compared with manual measurements, the three system-derived parameters had RMSEs of 0.05 m, 5.0 deg, and 5.6 mm. Bland-Altman analysis showed that stem diameter, branch angle, and branch diameter had standard deviations of 0.005 m, 5.0 deg, and 5.4 mm, and biases of 0.011 m, -0.4 deg, and -1.4 mm, respectively. The proposed system shows promising potential as a precision phenotyping tool for characterizing loblolly pine architecture, supporting the selection of trees that are highly productive and resilient to severe weather events and climatic variability.||en_US