Effects of action space discretization and DQN extensions on algorithm robustness and efficiency

How do discretization of the action space and various extensions to the well-known DQN algorithm influence training and the robustness of the final policies under different testing conditions?


Abstract

Reinforcement Learning (RL) has gained attention as a way of creating autonomous agents for self-driving cars. This paper explores the adaptation of the Deep Q Network (DQN), a popular deep RL algorithm, in the CARLA traffic simulator for autonomous driving. It investigates the influence of action space discretization and DQN extensions on training performance and robustness. Results show that action space discretization enhances behaviour consistency but negatively affects Q-values, training performance, and robustness. Double Q-Learning decreases training performance and leads to suboptimal convergence, reducing robustness. Prioritized Experience Replay also performs worse during training, but consistently outperforms in robustness testing, reward estimation and generalization.
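The two central techniques named in the abstract can be sketched briefly. The following is a minimal illustration, not the paper's implementation: the number of steering bins is an assumption, and `double_dqn_target` shows the standard Double Q-Learning update rule (online network selects the next action, target network evaluates it), which the paper's Double Q-Learning extension is based on.

```python
import numpy as np

# Discretize a continuous steering command into a fixed set of bins.
# Seven bins is an illustrative choice, not the paper's actual action space.
STEERING_BINS = np.linspace(-1.0, 1.0, num=7)

def discretize_steering(steer: float) -> int:
    """Map a continuous steering value in [-1, 1] to the nearest bin index."""
    return int(np.argmin(np.abs(STEERING_BINS - steer)))

def double_dqn_target(reward, q_online_next, q_target_next, gamma=0.99, done=False):
    """Double Q-Learning target: select the next action with the online
    network, evaluate it with the target network (standard Double DQN rule)."""
    if done:
        return reward
    best_action = int(np.argmax(q_online_next))        # selection: online net
    return reward + gamma * q_target_next[best_action]  # evaluation: target net
```

Decoupling action selection from evaluation in this way is what counteracts the Q-value overestimation of vanilla DQN; the abstract's finding is that, in this driving setting, the extension nonetheless slowed training and hurt robustness.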

Files

Research.pdf
(PDF | 0.679 MB)
Unknown license