Y. Zhou
18 records found
Hierarchical Reinforcement Learning (HRL) offers a way to solve complex guidance and navigation problems with high-dimensional spaces, multiple objectives, and a large number of states and actions. Current HRL methods often use the same or similar reinforcement learning
...
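As a hypothetical illustration of the two-level structure such HRL methods build on, the sketch below lets a high-level policy pick sub-goals on a slow time scale while a low-level policy issues primitive actions. The toy navigation task, horizon, and both placeholder policies are assumptions for illustration, not details of the cited work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D navigation task: a high-level policy picks a sub-goal every H steps,
# while a low-level policy issues primitive actions towards that sub-goal.
# Both policies are heuristic placeholders standing in for the RL algorithms
# that an HRL agent would learn at each level of the hierarchy.
H = 10                        # decision interval of the high level
state = np.zeros(2)           # agent position
goal = np.array([10.0, 5.0])  # final navigation goal

def high_level_policy(state, goal):
    """Pick an intermediate sub-goal part-way between the state and the goal."""
    return state + (goal - state) * rng.uniform(0.2, 0.5)

def low_level_policy(state, subgoal):
    """Primitive action: step towards the current sub-goal, at most 0.5 per step."""
    direction = subgoal - state
    norm = np.linalg.norm(direction)
    return direction * min(1.0, 0.5 / norm) if norm > 1e-6 else np.zeros(2)

for t in range(200):
    if t % H == 0:            # the high level acts on a slower time scale
        subgoal = high_level_policy(state, goal)
    state = state + low_level_policy(state, subgoal)
    if np.linalg.norm(goal - state) < 0.5:
        break

print(f"reached {np.round(state, 2)} after {t + 1} steps")
```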
Urban air mobility is a relatively new concept, proposed in recent years as a means of transporting passengers and goods in urban areas. It encompasses a diverse range of Vertical Take-Off and Landing (VTOL) vehicles that function more like passenger-carrying drones f
...
Globalized dual heuristic programming (GDHP) is the most comprehensive adaptive critic design: its critic is trained to minimize the error with respect to both the cost-to-go and its derivatives simultaneously. Its implementation, however, confronts a dilemma of either introdu
...
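To make the dual objective of the GDHP critic concrete, here is a minimal PyTorch sketch of one critic update that penalises both the cost-to-go error and the error of its state derivatives. The network size, weighting factors, quadratic cost, and the placeholder linear model used to propagate derivatives are all assumptions, not details of the cited work.

```python
import torch

# Assumed dimensions, placeholder identified model, and trade-off weights.
n_x, n_u, beta1, beta2, gamma = 4, 1, 1.0, 0.5, 0.95
A = torch.eye(n_x) * 0.98            # stand-in linear model x_{t+1} = A x_t + B u_t
B = torch.ones(n_x, n_u) * 0.1
Q = torch.eye(n_x)                   # quadratic stage-cost weight

critic = torch.nn.Sequential(torch.nn.Linear(n_x, 32), torch.nn.Tanh(),
                             torch.nn.Linear(32, 1))
opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def gdhp_critic_step(x_t, u_t):
    """One GDHP-style update: penalise the error of the cost-to-go J and of its
    state derivatives dJ/dx against the one-step target cost + gamma * J(x_next)."""
    x_t = x_t.clone().requires_grad_(True)
    x_next = x_t @ A.T + u_t @ B.T                       # keeps the graph back to x_t
    cost = (x_t @ Q * x_t).sum(dim=1, keepdim=True)

    J_t = critic(x_t)
    target = cost + gamma * critic(x_next)
    e_J = J_t - target.detach()                          # cost-to-go error (HDP part)

    dJ_dx = torch.autograd.grad(J_t.sum(), x_t, create_graph=True)[0]
    dtarget_dx = torch.autograd.grad(target.sum(), x_t, retain_graph=True)[0]
    e_lam = dJ_dx - dtarget_dx                           # derivative error (DHP part)

    loss = beta1 * (e_J ** 2).mean() + beta2 * (e_lam ** 2).sum(dim=1).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Example usage on a random batch of states and actions.
print(gdhp_critic_step(torch.randn(8, n_x), torch.randn(8, n_u)))
```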
Optical flow-based control strategies have long inspired robotics researchers, especially those working on Micro Air Vehicles (MAVs), thanks to their computational efficiency and relative simplicity. A major problem is that the success of optical flow control is governed by
...
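The record refers to optical-flow-based control in general terms; as one concrete and commonly used example (an assumption here, not necessarily the strategy studied in the cited work), the sketch below simulates a constant-divergence landing, where the only feedback is the flow divergence D = v/h that a downward-looking camera can observe without knowing speed or height separately. The gains, set-point, time step, and vehicle model are assumptions.

```python
# Toy constant-divergence landing driven purely by optical-flow divergence.
dt, D_ref, k_p = 0.02, 0.3, 2.0
h, v = 20.0, 0.5              # height [m] and descent rate [m/s]

t = 0.0
while h > 0.1:
    D = v / h                 # divergence as observed from the expanding flow field
    a = k_p * (D_ref - D)     # drive the divergence towards the constant set-point
    v += a * dt               # descent-rate dynamics (thrust command, gravity trimmed)
    h -= v * dt
    t += dt

print(f"touched down after {t:.1f} s at {v:.3f} m/s")
```

Holding the divergence constant makes the height decay exponentially, so the descent rate shrinks automatically as the ground approaches.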
Deep Learning-based Monocular Obstacle Avoidance for Unmanned Aerial Vehicle Navigation in Tree Plantations: Faster Region-based Convolutional Neural Network Approach
In recent years, Unmanned Aerial Vehicles (UAVs) have been widely utilized in precision agriculture, for example in tree plantations. Due to their limited onboard intelligence, these UAVs can only operate at high altitudes, leading to the use of expensive and heavy sensors for obtaining important health in
...
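As a hedged sketch of the detection side, the snippet below queries torchvision's off-the-shelf Faster R-CNN as a stand-in for a detector fine-tuned on tree-trunk imagery; the score threshold and the simple steer-away rule are assumptions for illustration only.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Pretrained COCO detector as a placeholder; the cited setting would use
# weights fine-tuned on tree-trunk images and plantation-specific classes.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

@torch.no_grad()
def detect_obstacles(frame, score_thr=0.6):
    """Run the detector on one RGB frame (H, W, 3) scaled to [0, 1]; return boxes."""
    image = torch.as_tensor(frame).permute(2, 0, 1).float()
    pred = model([image])[0]              # dict with 'boxes', 'labels', 'scores'
    return pred["boxes"][pred["scores"] > score_thr]

def steering_command(boxes, image_width, gain=1.0):
    """Hypothetical rule: steer away from the horizontal centre of the largest
    (assumed nearest) detection; zero command when nothing is detected."""
    if len(boxes) == 0:
        return 0.0
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    x0, _, x1, _ = boxes[areas.argmax()]
    centre = 0.5 * (x0 + x1) / image_width          # 0 = left edge, 1 = right edge
    return gain * (0.5 - centre).item()

# Example on a random frame; a real pipeline would feed camera images.
frame = torch.rand(480, 640, 3).numpy()
print(steering_command(detect_obstacles(frame), image_width=640))
```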
Heuristic dynamic programming is a class of reinforcement learning methods that has been introduced to aerospace engineering to solve nonlinear, optimal adaptive control problems. However, it requires an off-line learning stage to train a global system model to represent the system dyn
...
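To make that off-line model-learning stage concrete, the sketch below fits a small neural network x_{t+1} ≈ f(x_t, u_t) on pre-collected transitions. The dimensions, synthetic data, and network size are assumptions standing in for logged flight data.

```python
import torch

# Synthetic transition data standing in for pre-recorded flight data.
n_x, n_u, N = 4, 2, 4096
X, U = torch.randn(N, n_x), torch.randn(N, n_u)
A_true, B_true = 0.1 * torch.randn(n_x, n_x), 0.1 * torch.randn(n_x, n_u)
X_next = X @ A_true.T + U @ B_true.T + 0.01 * torch.randn(N, n_x)

# Global system model trained off-line, before the on-line HDP stage runs.
model = torch.nn.Sequential(torch.nn.Linear(n_x + n_u, 64), torch.nn.Tanh(),
                            torch.nn.Linear(64, n_x))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(200):
    pred = model(torch.cat([X, U], dim=1))        # predicted next state
    loss = torch.nn.functional.mse_loss(pred, X_next)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"off-line model fit, final MSE {loss.item():.4f}")
```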
The use of Reinforcement Learning (RL) methods in Adaptive Flight Control has been an active research field over the past few years. Controllers that autonomously learn by interacting with the surrounding environment are of great interest to the aerospace domain due to their ada
...
Autonomous guidance and navigation problems often have high-dimensional spaces, multiple objectives, and consequently a large number of states and actions, which is known as the ‘curse of dimensionality’. Furthermore, systems often have partial observability instead of a perfect
...
Reinforcement Learning (RL) methods are relatively new in the field of aerospace guidance, navigation, and control. This dissertation aims to exploit RL methods to improve the autonomy and online learning of aerospace systems with respect to the a priori unknown system and enviro
...
Approximate dynamic programming is a class of reinforcement learning methods that solves adaptive, optimal control problems and tackles the curse of dimensionality with function approximators. Within this category, linear approximate dynamic programming provides a model-free control me
...
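A minimal sketch of the value-function side of linear approximate dynamic programming: the cost-to-go is represented as a linear combination of quadratic state features and fitted from sampled transitions and observed costs only. The toy system, feature choice, and least-squares temporal-difference fit are assumptions, and only policy evaluation is shown, not the full control loop.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stable system used only to generate data; the fit below touches nothing
# but the sampled states, next states, and observed quadratic costs.
A_sys = np.array([[0.9, 0.1],
                  [0.0, 0.8]])
gamma = 0.95

def features(x):
    """Quadratic features, so V(x) = w . phi(x) can represent a form x^T P x."""
    x1, x2 = x
    return np.array([x1 * x1, x1 * x2, x2 * x2])

Phi, Phi_next, C = [], [], []
x = rng.normal(size=2)
for t in range(2000):
    x_next = A_sys @ x + 0.01 * rng.normal(size=2)
    Phi.append(features(x))
    Phi_next.append(features(x_next))
    C.append(x @ x)                                      # observed stage cost
    x = rng.normal(size=2) if t % 50 == 49 else x_next   # re-excite periodically

Phi, Phi_next, C = map(np.array, (Phi, Phi_next, C))
# Least-squares temporal-difference fit: Phi^T (Phi - gamma Phi_next) w = Phi^T c
w = np.linalg.solve(Phi.T @ (Phi - gamma * Phi_next), Phi.T @ C)
print("value-function weights:", w.round(3))
```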
A self-learning controller that adapts quickly and reliably to new conditions can considerably benefit autonomous operations of launch vehicles. To provide a model-free, adaptive process for optimal control, approximate dynamic programming has been introduced to aeros
...
This paper presents an adaptive control technique for spacecraft attitude tracking and disturbance rejection in the presence of model uncertainties. Approximate dynamic programming has been proposed to solve adaptive, optimal control problems without using accur
...
Flapping-wing MAVs represent an attractive alternative to conventional designs with rotary wings, since they promise a much higher efficiency in forward flight. However, further insight into the flapping-wing aerodynamics is still needed to get closer to the flight performance ob
...