Unmanned Aerial Vehicles (UAVs) play a crucial role in various applications, including disaster response, infrastructure inspection, and search-and-rescue missions. To maximise their effectiveness, UAVs must achieve a high level of autonomy, particularly when navigating cluttered environments. This requires real-time collision avoidance algorithms that make efficient use of on-board computational resources. Existing approaches either maintain explicit map representations, which provide memory of past observations but require significant computation, or operate directly on depth images, which is computationally efficient but restricts flight paths to the sensor's field of view (FOV). This thesis presents a novel, lightweight collision avoidance algorithm that operates directly on depth images while retaining memory of past observations. By aggregating previous depth data, the algorithm enables UAVs to plan trajectories beyond the FOV of their on-board sensors without the computational overhead of maintaining an explicit map. Experimental results demonstrate that the proposed method outperforms state-of-the-art methods in terms of computational efficiency, flight speed, energy cost, and path length until collision.
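To make the underlying idea concrete, the sketch below illustrates one plausible way to retain memory of past depth observations without an explicit map: recent depth frames are back-projected into a world frame and kept in a fixed-size buffer, so collision checks can consider obstacles that have left the current FOV. This is only an illustrative sketch, not the thesis's implementation; the class name DepthMemory, the camera intrinsics fx, fy, cx, cy, the pose T_world_cam, and all parameter values are assumptions made for the example.

```python
# Illustrative sketch only (assumed names and parameters, not the thesis method):
# keep a bounded memory of back-projected depth points instead of a full map.
from collections import deque
import numpy as np


class DepthMemory:
    """Fixed-size buffer of world-frame obstacle points from past depth images."""

    def __init__(self, fx, fy, cx, cy, max_frames=20, stride=8):
        self.fx, self.fy, self.cx, self.cy = fx, fy, cx, cy
        self.frames = deque(maxlen=max_frames)  # bounded memory of past observations
        self.stride = stride                    # subsample pixels to stay lightweight

    def add_frame(self, depth, T_world_cam):
        """Back-project a depth image (H x W, metres) using the 4x4 camera pose."""
        h, w = depth.shape
        vs, us = np.mgrid[0:h:self.stride, 0:w:self.stride]
        z = depth[vs, us]
        valid = np.isfinite(z) & (z > 0.1)
        u, v, z = us[valid], vs[valid], z[valid]
        # Pinhole back-projection into the camera frame, then transform to world frame.
        pts_cam = np.stack([(u - self.cx) * z / self.fx,
                            (v - self.cy) * z / self.fy,
                            z,
                            np.ones_like(z)], axis=0)
        self.frames.append((T_world_cam @ pts_cam)[:3].T)

    def collision_free(self, waypoints, radius=0.5):
        """True if no remembered point lies within `radius` of any waypoint."""
        if not self.frames:
            return True
        pts = np.vstack(self.frames)
        for wp in waypoints:
            if np.any(np.linalg.norm(pts - np.asarray(wp), axis=1) < radius):
                return False
        return True
```

Because the buffer is bounded and stores only subsampled points, memory and collision-check cost stay roughly constant per frame, which is the trade-off the abstract contrasts with maintaining a full map representation.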