Flying insects are capable of autonomous vision-based navigation in cluttered environments, reliably avoiding obstacles through fast and agile manoeuvres. Insect-scale micro air vehicles, by contrast, still lag far behind their biological counterparts, displaying inferior performance at a fraction of the energy efficiency. It is therefore of interest to mimic the vision-based navigation capabilities of flying insects, and to apply the gained knowledge to a manoeuvre of practical relevance. This thesis does so by evolving spiking neural networks that control divergence-based landings of micro air vehicles while minimising the network's spike rate. We demonstrate vision-based neuromorphic control for a real-world, continuous problem, and show the feasibility of extending this controller to one that is learnt end-to-end and can work with an event-based camera. Furthermore, we provide insight into the resources required to solve the problem of divergence-based landing, showing that high-resolution control can be learnt with only a single spiking neuron. Finally, we examine evolving only a subset of the spiking neural network's available hyperparameters, finding that the best results are obtained when all parameters are affected by the learning process.