Safer reinforcement learning for robotics

Abstract

Reinforcement learning is an active research area in the fields of artificial intelligence and machine learning, with applications in control. Its most important feature is the ability to learn without prior knowledge about the system. In the real world, however, actions taken in the absence of any prior knowledge may cause serious damage to the controlled robot or its surroundings. Safety, a factor often neglected in the reinforcement learning community, deserves greater attention from researchers.
Prior knowledge can increase safety during learning. At the same time, it can severely restrict the set of possible solutions and hamper learning performance. This thesis examines how different forms of prior knowledge influence learning performance and the risk of damaging the robot, where prior knowledge ranges from physics-based assumptions, such as the robot construction and material properties, to knowledge of the task curriculum, or an approximate model, possibly coupled with a nominal controller.
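The last form of prior knowledge mentioned above, an approximate model coupled with a nominal controller, can be pictured as a safety layer wrapped around an exploring learner. The sketch below is a hypothetical, minimal illustration and not the method developed in the thesis: a toy 1-D double-integrator robot in which a hand-tuned PD law, standing in for the nominal controller, overrides random exploratory actions whenever the state drifts toward a safety limit.

import numpy as np

# Minimal sketch (hypothetical, not the thesis' actual method): a 1-D
# double-integrator "robot" where random exploratory actions are overridden
# by a nominal PD controller whenever the state approaches a safety limit.

DT = 0.05           # integration step [s]
POS_LIMIT = 1.0     # |position| beyond this counts as damage
SAFE_MARGIN = 0.8   # hand over to the nominal controller past this point

def nominal_controller(pos, vel, kp=8.0, kd=4.0):
    """Prior knowledge: an approximate model suggests a stabilising PD law."""
    return -kp * pos - kd * vel

def exploratory_action(rng):
    """Stand-in for the learning agent's (initially uninformed) action."""
    return rng.uniform(-5.0, 5.0)

def step(pos, vel, force):
    """Approximate double-integrator dynamics of the toy robot."""
    vel = vel + force * DT
    pos = pos + vel * DT
    return pos, vel

def run_episode(rng, steps=200):
    pos, vel = 0.0, 0.0
    interventions = 0
    for _ in range(steps):
        action = exploratory_action(rng)
        # Safety layer built from prior knowledge: near the limit,
        # the nominal controller takes over from the exploratory action.
        if abs(pos) > SAFE_MARGIN:
            action = nominal_controller(pos, vel)
            interventions += 1
        pos, vel = step(pos, vel, np.clip(action, -5.0, 5.0))
        if abs(pos) > POS_LIMIT:
            return False, interventions   # the "robot" was damaged
    return True, interventions

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    safe, n = run_episode(rng)
    print(f"episode safe: {safe}, nominal-controller interventions: {n}")

The trade-off discussed in the abstract shows up directly in such a scheme: a wide safety margin keeps the robot intact but constrains exploration, while a narrow margin leaves more room for learning at a higher risk of damage.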

Files

Dissertation.pdf (pdf, 9.12 MB)
Propositions.pdf (pdf, 0.155 MB)