Enabling Safe and Efficient Separation through Multi-Agent Reinforcement Learning

Abstract

Over the next decades, the number of unmanned aerial vehicles (UAVs) operating in the airspace is expected to grow rapidly. Both the FAA (Federal Aviation Administration) and ICAO (International Civil Aviation Organization) have already stated that aircraft operating autonomously or beyond their operators’ line of sight are required to have detect-and-avoid capabilities. At higher traffic densities, however, these avoidance manoeuvres can lead to instabilities within the airspace, causing emergent patterns and knock-on effects that can harm the safety of operations. It might be possible to formulate a cost function that encapsulates global safety, rather than individual safety, promoting both safety and stability. One method that lends itself to optimizing such a cost function is cooperative Multi-Agent Reinforcement Learning (MARL). It has been demonstrated that MARL can be used for optimization in competitive, cooperative, and even mixed environments; however, when applied in a completely decentralized manner, stability issues often arise. It is therefore proposed to investigate the application of MARL to a well-known centralized domain: air traffic control (ATC) for manned aviation. This doctoral paper breaks the proposed research project down into four independent phases, each of which contributes to the knowledge of applying MARL to ATC.
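To illustrate the idea of a cost function that encapsulates global rather than individual safety, the sketch below shows one possible shared reward for a cooperative MARL setting. It is a minimal illustration, not the formulation used in the paper: the separation minimum, the stability weight, and the use of commanded heading changes as a proxy for airspace stability are all assumptions introduced here.

```python
# Hypothetical sketch of a shared ("global safety") reward for cooperative MARL.
# All names and constants (SEPARATION_MIN, STABILITY_WEIGHT, global_safety_reward)
# are illustrative assumptions, not taken from the paper.
import numpy as np

SEPARATION_MIN = 5.0    # assumed horizontal separation minimum (e.g. nautical miles)
STABILITY_WEIGHT = 0.1  # assumed trade-off between safety and manoeuvring effort


def global_safety_reward(positions: np.ndarray, heading_changes: np.ndarray) -> float:
    """Return a single scalar reward shared by all agents.

    positions:       (n_aircraft, 2) array of horizontal positions.
    heading_changes: (n_aircraft,) array of commanded heading changes [rad],
                     used here as a crude proxy for airspace stability
                     (fewer and smaller avoidance manoeuvres -> more stable traffic).
    """
    n = len(positions)

    # Penalise every pairwise loss of separation: safety is evaluated over the
    # whole airspace, not per individual aircraft.
    separation_penalty = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            dist = np.linalg.norm(positions[i] - positions[j])
            if dist < SEPARATION_MIN:
                separation_penalty += (SEPARATION_MIN - dist) / SEPARATION_MIN

    # Penalise total manoeuvring effort to discourage destabilising knock-on effects.
    stability_penalty = STABILITY_WEIGHT * np.abs(heading_changes).sum()

    return -(separation_penalty + stability_penalty)


# Example usage with three aircraft, two of which are in conflict.
positions = np.array([[0.0, 0.0], [3.0, 0.0], [50.0, 50.0]])
heading_changes = np.array([0.1, -0.1, 0.0])
print(global_safety_reward(positions, heading_changes))
```

Because every agent receives the same scalar reward, avoidance manoeuvres that resolve one conflict but trigger new ones elsewhere are penalised, which is the intended contrast with rewards based purely on each aircraft's own separation.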

Files

ICRAT2022_paper_25.pdf
(pdf | 0.206 Mb)
Unknown license