Authors:
Supawadee Chaivivatrakul, Jednipat Moonrinta and Matthew N. Dailey
Affiliation:
Asian Institute of Technology, Thailand
Keyword(s):
Object detection, Keypoint detection, Keypoint descriptors, Keypoint classification, Image segmentation, Structure from motion, 3D reconstruction, Ellipsoid estimation, Pineapple, Mobile field robot, Agricultural automation.
Related Ontology Subjects/Areas/Topics:
Computer Vision, Visualization and Computer Graphics; Image Formation and Preprocessing; Implementation of Image and Video Processing Systems
Abstract:
Towards automation of crop yield estimation for pineapple fields, we present a method for detection and 3D reconstruction of pineapples from a video sequence acquired, for example, by a mobile field robot. The detection process incorporates the Harris corner detector, the SIFT keypoint descriptor, and keypoint classification using an SVM. The 3D reconstruction process incorporates structure from motion to obtain a 3D point cloud representing patches of the fruit's surface, followed by least squares estimation of the quadric (in this case an ellipsoid) best fitting the 3D point cloud. We performed three experiments to establish the feasibility of the method. Experiments 1 and 2 tested the performance of the Harris, SIFT, and SVM method on indoor and outdoor data. The method achieved a keypoint classification accuracy of 87.79% on indoor data and 76.81% on outdoor data, against base rates of 81.42% and 53.83%, respectively. In Experiment 3, we performed 3D reconstruction from indoor data. The method achieved an average error of 34.96% in estimating the ratio of the fruits' major axis length to minor axis length. Future work will focus on increasing the robustness and accuracy of the 3D reconstruction method as well as resolving the 3D scale ambiguity.
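The detection pipeline begins with Harris corner detection. As a rough illustration of that first step (a minimal pure-NumPy sketch, not the authors' implementation, which would normally use library routines with Gaussian windowing), the Harris corner response can be computed as:

```python
import numpy as np

def harris_response(img, k=0.04, radius=1):
    """Harris corner response map for a grayscale float image.

    Minimal sketch: gradients by central differences and a uniform
    box window instead of the usual Gaussian weighting.
    """
    Iy, Ix = np.gradient(img)                 # image gradients (rows, cols)
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box_sum(a):
        # Sum each gradient product over a (2*radius+1)^2 window
        # using shifted copies of the array.
        out = np.zeros_like(a)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
        return out

    Sxx, Syy, Sxy = box_sum(Ixx), box_sum(Iyy), box_sum(Ixy)
    det = Sxx * Syy - Sxy * Sxy               # det(M) of the structure tensor
    trace = Sxx + Syy                         # tr(M)
    return det - k * trace * trace            # Harris corner measure
```

Keypoints would then be taken at local maxima of the response above a threshold; in the method described above, each keypoint is subsequently described with SIFT and classified as fruit or non-fruit by the SVM.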
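The reconstruction step fits an ellipsoid to the structure-from-motion point cloud by least squares. A minimal NumPy sketch of one standard direct approach (an assumption for illustration; the abstract does not specify the exact formulation) fits a general quadric to the points and recovers the center and semi-axis lengths from it:

```python
import numpy as np

def fit_ellipsoid(points):
    """Least-squares quadric fit to an Nx3 point cloud.

    Fits a*x^2 + b*y^2 + c*z^2 + 2*d*xy + 2*e*xz + 2*f*yz
    + 2*g*x + 2*h*y + 2*i*z = 1, then recovers the ellipsoid
    center and semi-axis lengths from the quadric matrix.
    """
    x, y, z = points.T
    D = np.column_stack([x * x, y * y, z * z,
                         2 * x * y, 2 * x * z, 2 * y * z,
                         2 * x, 2 * y, 2 * z])
    v, *_ = np.linalg.lstsq(D, np.ones(len(points)), rcond=None)

    # 4x4 homogeneous quadric matrix.
    A = np.array([[v[0], v[3], v[4], v[6]],
                  [v[3], v[1], v[5], v[7]],
                  [v[4], v[5], v[2], v[8]],
                  [v[6], v[7], v[8], -1.0]])
    center = np.linalg.solve(-A[:3, :3], v[6:9])

    # Translate the quadric to the center; eigenvalues of the
    # normalized 3x3 block are 1 / semi-axis^2.
    T = np.eye(4)
    T[3, :3] = center
    R = T @ A @ T.T
    evals, _ = np.linalg.eigh(R[:3, :3] / -R[3, 3])
    radii = np.sqrt(1.0 / evals)
    return center, radii
```

The ratio of the largest to smallest recovered semi-axis corresponds to the major-to-minor axis ratio evaluated in Experiment 3; note that, as with any monocular SfM reconstruction, the absolute scale of the axes remains ambiguous.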