Anticipatory Airline Disruption Management
A model-based reinforcement learning approach to anticipatory aircraft recovery under disruption uncertainty
Abstract
Disruptive events pose a significant challenge to airlines’ everyday operations due to the highly optimized nature of their schedules. Unforeseen events force airlines to rapidly reschedule and adjust their operations. Current disruption management methods rely mostly on reactive and static models that fail to capture the dynamic and probabilistic nature of airline recovery. This study presents a model-based Reinforcement Learning (RL) method for aircraft recovery under disruption uncertainty that anticipates potential future disruptions. The Aircraft Recovery Problem (ARP) is formulated as a Markov Decision Process (MDP), and a framework is proposed in which an Approximate Dynamic Programming (ADP) algorithm relying on Value Function Approximation (VFA) determines optimal recovery actions by considering both the immediate and the future impact of each action. The uncertain disruptions are modelled as aircraft unavailabilities with a fixed probability of occurring. The aim of the model is to keep flight delays and cancellations to a minimum while exploiting stochastic information on potential aircraft unavailabilities. The model is tested on multiple scenarios with different objectives and levels of disruption and is benchmarked against an exact optimization algorithm. Results indicate that a proactive approach outperforms reactive models, particularly in high-disruption scenarios with high aircraft utilization. The comparison with the exact benchmark shows that the RL method can achieve near-optimal solutions with considerably fewer corrective actions. This framework offers a decision support tool that allows airline operators to find more resilient solutions in uncertain environments by incorporating probabilistic predictions of disruptions into the decision-making process.
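The anticipatory idea described above can be illustrated with a minimal sketch: each candidate recovery action is scored by its immediate cost plus the expected future cost under the stochastic aircraft unavailability, where the future cost is estimated by a simple value function approximation. All names, cost values, the linear VFA form, and the unavailability probability below are illustrative assumptions, not the thesis model itself.

```python
# Toy sketch of anticipatory recovery via one-step lookahead with a VFA.
# Costs, probabilities, and the VFA form are illustrative assumptions only.

P_UNAVAILABLE = 0.2           # assumed fixed probability an aircraft becomes unavailable
DELAY_COST, CANCEL_COST = 1.0, 10.0  # hypothetical per-action costs

def value_approx(num_available, pending_flights):
    """Linear VFA sketch: expected future cost grows with workload per aircraft."""
    if num_available == 0:
        return CANCEL_COST * pending_flights  # no aircraft left: everything cancels
    return DELAY_COST * pending_flights / num_available

def q_value(action, num_available, pending):
    """Immediate cost plus expected VFA over the unavailability outcome."""
    if action == "cancel":
        immediate, pending_next = CANCEL_COST, pending - 1
    else:  # "delay": flight stays in the queue
        immediate, pending_next = DELAY_COST, pending
    # Expectation over the stochastic disruption (aircraft lost with prob p)
    future = (P_UNAVAILABLE * value_approx(max(num_available - 1, 0), pending_next)
              + (1 - P_UNAVAILABLE) * value_approx(num_available, pending_next))
    return immediate + future

def best_action(num_available, pending):
    """Pick the recovery action minimizing immediate + expected future cost."""
    return min(("delay", "cancel"), key=lambda a: q_value(a, num_available, pending))
```

With these illustrative costs, delaying a flight typically scores better than cancelling it unless the fleet is nearly exhausted, which mirrors the abstract's point that weighing expected future disruption cost changes which action looks cheapest.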