Comparison of Pose Estimation Algorithms for a Deleafing Greenhouse Robot


Abstract

For autonomous navigation of mobile robots in an unknown environment, mapping the environment and localizing the robot's position within it are of utmost importance. In a known and controlled environment, however, being able to localize the robot's position at a given time instant is sufficient for autonomous navigation. Visual Odometry (VO) was therefore the best-suited approach to the problem of pose estimation for a mobile greenhouse robot. This thesis focuses on estimating the pose of a greenhouse robot, moving on guided rails, using only images from an onboard camera. A review of the literature revealed that most current state-of-the-art VO techniques were developed for autonomous navigation in urban environments or indoor spaces. Although publicly available benchmark datasets for VO evaluation exist, such as the KITTI and TUM datasets, no VO dataset was available for a greenhouse environment. A key challenge in collecting visual datasets in a greenhouse is obtaining the ground-truth poses associated with all the images. Therefore, an experimental setup was developed to collect VO datasets along with ground truth, using store-bought table plants, tomatoes, a mobile camera, and a stereo camera. The experimental design focused on emulating the greenhouse environment as closely as possible, and the camera's motion during the experiments was chosen to emulate the robot's motion on the guided rails. Subsequently, multiple pose estimation algorithms were run on the collected datasets for a comparative study.
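To illustrate what frame-to-frame pose estimation with VO involves, the sketch below shows a minimal monocular pipeline using OpenCV: ORB features are matched between two consecutive frames, an essential matrix is estimated with RANSAC, and the relative rotation and (scale-ambiguous) translation are recovered. This is not the thesis's actual method or any specific algorithm from the comparison study; the camera intrinsics and image file names are hypothetical placeholders for illustration only.

```python
import cv2
import numpy as np

def relative_pose(img_prev, img_curr, K):
    """Estimate relative rotation R and unit-scale translation t between
    two consecutive grayscale frames via feature matching and
    essential-matrix decomposition."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img_prev, None)
    kp2, des2 = orb.detectAndCompute(img_curr, None)

    # Brute-force Hamming matching suits binary ORB descriptors
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Essential matrix with RANSAC to reject outlier correspondences
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    # Recover R and t; t is only defined up to scale for a monocular camera
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t

if __name__ == "__main__":
    # Hypothetical intrinsics and frame filenames, for illustration only
    K = np.array([[700.0,   0.0, 320.0],
                  [  0.0, 700.0, 240.0],
                  [  0.0,   0.0,   1.0]])
    f0 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
    f1 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
    R, t = relative_pose(f0, f1, K)
    print("Rotation:\n", R, "\nTranslation direction:\n", t.ravel())
```

In a full VO system, such per-frame relative poses are chained over time to obtain the robot's trajectory; with a stereo camera, as used in the data collection described above, the translation scale can be recovered directly from depth rather than remaining ambiguous.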

Files

Thesis.pdf (PDF, 30.6 MB)