Reinforcement learning (RL) has grown tremendously over the past decade and a half and is increasingly being adopted in real-world applications. However, the application of RL is still limited by its low training efficiency and excessive training cost. Sampling and computation complexity typically depend on the size of the state space, and splitting the state space can distribute computation and accelerate learning. State abstraction, a data-centric method, shrinks the state space and reduces learning time; however, abstraction discards information and can result in a sub-optimal solution. In this thesis, we propose the hierarchical clustering-based state grouping (HCSG) method, which splits the ground state space into clusters and trains a separate agent for each cluster without changing the dimension of the state space. This approach distributes computation and improves training efficiency without sacrificing overall performance, and it is shown to outperform the baseline and other state-of-the-art data-centric methods.
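
To make the grouping idea concrete, the following is a minimal sketch, assuming states are fixed-length feature vectors: agglomerative (hierarchical) clustering from scikit-learn partitions a sample of ground states, a nearest-centroid rule routes states to clusters, and one tabular Q-learning agent is trained per cluster. The names QAgent and group_states, the routing rule, and the agent design are illustrative assumptions, not the thesis's actual HCSG implementation.

# A minimal sketch of the state-grouping idea, assuming states are
# fixed-length feature vectors. The clustering step, the nearest-centroid
# routing, and the QAgent class are illustrative assumptions, not the
# thesis's actual implementation.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.neighbors import NearestCentroid


class QAgent:
    """Tabular Q-learning agent responsible for a single state cluster."""

    def __init__(self, n_states, n_actions, lr=0.1, gamma=0.99):
        self.q = np.zeros((n_states, n_actions))
        self.lr = lr
        self.gamma = gamma

    def act(self, state, eps=0.1):
        # Epsilon-greedy action selection over this cluster's Q-table.
        if np.random.rand() < eps:
            return np.random.randint(self.q.shape[1])
        return int(np.argmax(self.q[state]))

    def update(self, state, action, reward, next_state):
        # One-step Q-learning update on a transition owned by this cluster.
        target = reward + self.gamma * self.q[next_state].max()
        self.q[state, action] += self.lr * (target - self.q[state, action])


def group_states(state_features, n_clusters):
    # Hierarchically cluster sampled ground states; the state dimension is
    # untouched -- only the training workload is partitioned across agents.
    labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(state_features)
    # A nearest-centroid classifier routes unseen states to their cluster.
    router = NearestCentroid().fit(state_features, labels)
    return labels, router


# Usage: cluster the sampled states, create one agent per cluster, and
# dispatch each transition to the agent owning the current state's cluster.
rng = np.random.default_rng(0)
states = rng.random((500, 4))  # hypothetical sampled state features
labels, router = group_states(states, n_clusters=4)
agents = [QAgent(n_states=500, n_actions=3) for _ in range(4)]
cluster = int(router.predict(states[:1])[0])
action = agents[cluster].act(0)

Because each agent sees only the transitions within its own cluster, the per-agent workload shrinks and the clusters can be trained in parallel, which is the source of the distributed-computation benefit described above.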