MAVRL

Learn to Fly in Cluttered Environments With Varying Speed


Abstract

Autonomous flight in unknown, cluttered environments remains a major challenge in robotics. Existing obstacle avoidance algorithms typically adopt a fixed flight velocity, overlooking the crucial balance between safety and agility. We propose a reinforcement learning algorithm that learns an adaptive flight speed policy tailored to varying environment complexities, enhancing obstacle avoidance safety. A downside of learning-based obstacle avoidance algorithms is that the lack of a mapping module can cause the drone to get stuck in complex scenarios. To address this, we introduce a novel training setup in which the latent space retains memory of previous depth map observations: the latent space is explicitly trained to predict both past and current depth maps. Our findings confirm that varying the flight speed yields a superior balance of success rate and agility in cluttered environments. Additionally, our memory-augmented latent representation outperforms the latent representation commonly used in reinforcement learning. Furthermore, an extensive comparison of our method with the existing state-of-the-art approaches Agile Autonomy and EGO-Planner shows the superior performance of our approach, especially in highly cluttered environments. Finally, after minimal fine-tuning, we successfully deployed our network on a real drone for enhanced obstacle avoidance.
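The memory-augmented latent idea in the abstract, where a recurrent latent state is trained to reconstruct both the current and previous depth maps, can be illustrated with a minimal sketch. All names, dimensions, and the plain linear/tanh maps below are illustrative assumptions, not the paper's actual architecture; they only show the shape of the auxiliary reconstruction objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper).
DEPTH_DIM = 64 * 64   # flattened depth map
LATENT_DIM = 128

# Randomly initialised linear maps standing in for learned networks.
W_enc = rng.normal(scale=0.01, size=(LATENT_DIM, DEPTH_DIM))
W_rec = rng.normal(scale=0.01, size=(LATENT_DIM, LATENT_DIM))
W_dec_now = rng.normal(scale=0.01, size=(DEPTH_DIM, LATENT_DIM))
W_dec_past = rng.normal(scale=0.01, size=(DEPTH_DIM, LATENT_DIM))

def step(h, depth):
    """Fuse the new depth observation into the recurrent latent state."""
    return np.tanh(W_rec @ h + W_enc @ depth)

def memory_loss(depth_seq):
    """Auxiliary objective: the latent must reconstruct the current
    depth map and the previous one, forcing it to retain memory."""
    h = np.zeros(LATENT_DIM)
    total, prev = 0.0, None
    for depth in depth_seq:
        h = step(h, depth)
        total += np.mean((W_dec_now @ h - depth) ** 2)   # predict current
        if prev is not None:
            total += np.mean((W_dec_past @ h - prev) ** 2)  # predict past
        prev = depth
    return total / len(depth_seq)

# Toy sequence of three depth maps.
seq = [rng.random(DEPTH_DIM) for _ in range(3)]
loss = memory_loss(seq)
```

Minimising such a loss alongside the RL objective would encourage the latent state to carry information about previously seen obstacles, which is the role the mapping module plays in classical pipelines.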

Files

MAVRL_Learn_to_Fly_in_Cluttere... (pdf, 2.56 MB)
Unknown license

File under embargo until 30-06-2025