Efficient Visual Ego-Motion Estimation for Agile Flying Robots


Abstract

Micro air vehicles (MAVs) have shown significant potential in modern society. Advances in robotics and automation are changing the role of MAVs from remotely controlled machines requiring human pilots to autonomous, intelligent robots. An increasing number of autonomous MAVs are involved in outdoor operations; in contrast, deployment in GPS-denied environments remains comparatively rare. Indoor flight speeds are often low, partly because MAVs are surrounded by obstacles, but also because ego-motion estimation becomes harder to keep reliable during faster flight. Fast motion challenges both the robustness and the computational efficiency of ego-motion estimation solutions that rely on limited onboard sensing and processing capacities. Robustness suffers because the motion blur induced by agile maneuvers reduces the visual information available to current mainstream ego-motion estimation solutions, given that frame-based cameras are the primary sensor on most lightweight MAVs. Computational efficiency is challenged by the push toward ever-smaller MAVs that better fit cluttered environments; moreover, compensating for the loss of robustness demands additional computational power for detecting known landmarks or for visual processing that copes better with motion blur. This dissertation responds to these challenges by investigating novel ego-motion estimation approaches that combine robustness and efficiency. First, higher efficiency is pursued in the context of traditional visual feature points, albeit at the cost of reduced accuracy. The targeted scenarios are those where known landmarks exist, such as gates in autonomous drone racing. The proposed velocity estimator's task is to navigate the MAV until the next landmark appears in the field of view and corrects the accumulated drift in the position estimate. To prevent drift over time, a simple linear drag force model is used to estimate the MAV's pitch and roll angles with respect to the gravity vector and its velocity within the horizontal plane of the propellers. The translational motion direction and the relative yaw angle are efficiently computed from feature point correspondences using a RANSAC-based linear algorithm...
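To make the linear drag force idea concrete, the sketch below (in Python) shows the core of such an estimator under simplifying assumptions: a near-hover quadrotor whose lateral body-frame drag force is proportional to its lateral body velocity, so the accelerometer's x/y specific-force components act as a drift-free velocity pseudo-measurement, and the specific-force direction gives a rough pitch/roll estimate. The coefficient K_DRAG and the example readings are illustrative assumptions, not values or code from the thesis.

```python
# Minimal sketch of a linear-drag velocity/attitude estimate (assumptions noted above).
import numpy as np

K_DRAG = 0.55  # assumed drag-to-mass coefficient [1/s], identified per vehicle

def lateral_velocity_from_drag(specific_force_body):
    """Velocity in the propeller (body x-y) plane from accelerometer data.

    With a linear drag model, the specific force measured along the body
    x and y axes is approximately -K_DRAG * v, so the accelerometer acts
    as a drift-free velocity pseudo-sensor for those two axes.
    """
    fx, fy, _ = specific_force_body
    return np.array([-fx / K_DRAG, -fy / K_DRAG])

def tilt_from_specific_force(specific_force_body):
    """Rough pitch and roll w.r.t. gravity from the same accelerometer reading.

    Near-hover simplification: the specific-force vector is treated as the
    (negated) gravity direction expressed in the body frame.
    """
    f = np.asarray(specific_force_body, dtype=float)
    f = f / np.linalg.norm(f)
    pitch = np.arcsin(np.clip(-f[0], -1.0, 1.0))
    roll = np.arctan2(f[1], f[2])
    return pitch, roll

if __name__ == "__main__":
    # Hypothetical accelerometer reading (body frame) during a gentle forward glide.
    f_body = np.array([-0.6, 0.1, 9.7])
    print(lateral_velocity_from_drag(f_body))  # approx. [1.09, -0.18] m/s
    print(tilt_from_specific_force(f_body))    # small pitch/roll angles [rad]
```

The appeal of this kind of model is that the velocity pseudo-measurement comes directly from the accelerometer rather than from integrating it, so it does not accumulate drift between landmark detections; the remaining unknowns (translation direction and relative yaw) are then recovered from feature correspondences, as described above.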
