Online intelligent flight control using reinforcement learning

Abstract

Advancements in aerial vehicles have presented new challenges for flight control system design. The disturbed airflow caused by rotors and flapping wings, together with the added nonlinearity and uncertainty introduced by morphing components, impedes the identification of a globally accurate model of these vehicles. Additionally, complex components can increase the risk of structural damage and faults, and the pursuit of large wingspans and high aspect ratios can lead to unstable flight dynamics due to structural flexibility. Moreover, current control systems are limited in their ability to adapt to unanticipated or changing circumstances.
Given these challenges, the aerospace community has turned to intelligent methods to enhance the autonomy of flight controllers. Reinforcement learning (RL) has been popular in recent years because it offers self-learning approaches that improve an agent's policy through interaction with the environment. The fruitful cross-fertilization of RL and control theory produced adaptive dynamic programming (ADP), which has been successfully applied to a wide range of nonlinear control systems, including aerial vehicles. ADP inherits the optimality of dynamic programming while enabling adaptability. Furthermore, aided by artificial neural networks (ANNs), ADP can handle complex control demands and learn online. Consequently, the main research goal of this dissertation is: to improve the adaptability and online learning capability of flight control systems by designing nonlinear ADP approaches.
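To make the connection between dynamic programming and ADP concrete, the sketch below runs the exact Bellman (Riccati) recursion for a scalar linear-quadratic problem; this is the recursion that ADP methods approximate online with function approximators such as ANNs when the model is unknown or nonlinear. The plant parameters and cost weights are illustrative choices, not taken from the dissertation.

```python
# Value-iteration sketch for a scalar LQR problem: the dynamic-programming
# backbone that ADP approximates online. Hypothetical plant and weights.
a, b = 0.9, 0.5       # linear plant x_{k+1} = a*x_k + b*u_k (illustrative)
q, r = 1.0, 1.0       # state and control cost weights (illustrative)

P = 0.0               # quadratic value function V(x) = P * x^2
for _ in range(100):
    # Riccati recursion: Bellman backup for the quadratic cost-to-go
    P = q + a**2 * P - (a * b * P) ** 2 / (r + b**2 * P)

K = a * b * P / (r + b**2 * P)   # resulting feedback gain, u = -K*x
print(round(P, 3), round(K, 3))  # → 2.124 0.624
```

With these numbers the iteration converges to P ≈ 2.124 and K ≈ 0.624, giving a stable closed loop (|a − bK| < 1). An online ADP scheme replaces this model-based backup with updates of a critic (value estimate) and an actor (policy) driven by measured data, which is what enables adaptation on nonlinear, uncertain flight dynamics.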

Files

Thesis_BoSun.pdf
(pdf | 19.5 MB)
Unknown license