Inclined quadrotor landing using on-board sensors and computing
Abstract
Achieving autonomous inclined landing would be an important step towards quadrotors that can land anywhere and under any conditions. A quadrotor that can land safely even when no landing platform is available would enable many useful applications. Firstly, the quadrotor could land safely whenever its battery runs low or contact with the operator is lost. Secondly, it would broaden the range of missions quadrotors can perform, such as reconnaissance, search and rescue, infrastructure inspection, or delivery services.
Autonomous inclined quadrotor landing poses several challenges. Firstly, landing safely on an inclined surface without a perching mechanism requires the quadrotor to approach the surface at low velocity. The quadrotor must also touch down with its attitude matched to the inclination of the landing surface. Meeting these constraints requires an agile landing maneuver. Furthermore, the quadrotor has to estimate its attitude, position and velocity relative to the platform throughout the maneuver. The agility of the maneuver makes landmark-based localization more difficult, since such methods require the quadrotor's on-board camera to stay pointed at a landmark that can be used for guidance. Current methods for autonomous inclined landing rely on either external sensors or a perching mechanism to deal with these challenges.
In this project, we develop an algorithm that estimates the quadrotor's attitude, position and velocity during the landing maneuver using only on-board sensors. State estimates are generated from two sources: a landmark-based localization algorithm and a Visual-Inertial Odometry (VIO) algorithm. The landmark-based localization algorithm uses markers placed near the landing surface to determine the quadrotor's attitude and position relative to the landing platform. Estimates from these two sources are fused by an Extended Kalman Filter (EKF). Furthermore, we use deep reinforcement learning to train a policy network that controls the quadrotor during the landing maneuver. During training of this policy network, we impose a field-of-view constraint so that the markers used by the localization algorithm remain in sight of the quadrotor's on-board camera throughout the maneuver.
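To illustrate how such a field-of-view constraint can enter the training of the policy network, the sketch below computes a reward penalty that is zero while a marker lies inside the camera's field of view and grows as it drifts outside. This is a minimal illustration under assumed conventions, not the thesis implementation; all names (fov_penalty, cam_axis_body, fov_half_angle, the choice of a forward-facing optical axis) are illustrative assumptions.

```python
# Minimal sketch of a field-of-view penalty term for policy training.
# Assumptions (not from the original work): the camera's optical axis is
# the body-frame x-axis, and the penalty grows linearly once the angle
# between that axis and the marker direction exceeds the half field of view.
import numpy as np


def fov_penalty(p_quad, R_wb, p_marker,
                cam_axis_body=np.array([1.0, 0.0, 0.0]),
                fov_half_angle=np.deg2rad(40.0),
                weight=1.0):
    """Return a non-positive reward term penalizing loss of marker visibility.

    p_quad        : quadrotor position in the world frame, shape (3,)
    R_wb          : body-to-world rotation matrix, shape (3, 3)
    p_marker      : marker position in the world frame, shape (3,)
    cam_axis_body : camera optical axis expressed in the body frame
    fov_half_angle: half of the camera's field-of-view angle (rad)
    """
    # Optical axis expressed in the world frame.
    cam_axis_world = R_wb @ cam_axis_body
    # Unit vector from the quadrotor towards the marker.
    to_marker = p_marker - p_quad
    to_marker = to_marker / np.linalg.norm(to_marker)
    # Angle between the optical axis and the marker direction.
    angle = np.arccos(np.clip(cam_axis_world @ to_marker, -1.0, 1.0))
    # Zero penalty inside the field of view, linear penalty outside it.
    return -weight * max(0.0, angle - fov_half_angle)
```

A term of this form can simply be added to the reward used by the reinforcement learning algorithm, so that policies which rotate the camera away from the markers during the maneuver are discouraged.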
In a series of experiments in the Gazebo simulator, we validate the performance of the state estimation system during the inclined landing maneuver. We show that the field-of-view constraint used during training of the policy network improves the performance of the marker localization algorithm. We also show that the EKF's fused estimates outperform those of the two individual state estimation algorithms. Using this state estimation system, the quadrotor is able to land on the inclined surface in the Gazebo simulator without external sensors.