CEM: Constrained Entropy Maximization for Task-Agnostic Safe Exploration

Abstract

Without an assigned task, a suitable intrinsic objective for an agent is to explore the environment efficiently. However, pursuing exploration inevitably brings additional safety risks.
An under-explored aspect of reinforcement learning is how to achieve safe and efficient exploration when the task is unknown.
In this paper, we propose a practical Constrained Entropy Maximization (CEM) algorithm to solve task-agnostic safe exploration problems, which naturally call for finite-horizon, undiscounted constraints on safety costs.
The CEM algorithm aims to learn a policy that maximizes state entropy subject to safety constraints.
To avoid approximating the state density in complex domains, CEM leverages a $k$-nearest neighbor entropy estimator to evaluate the efficiency of exploration.
In terms of safety, CEM minimizes the safety costs and adaptively trades off safety and exploration based on the current degree of constraint satisfaction. We empirically show that CEM enables learning a safe exploration policy in complex continuous-control domains, and that the learned policy benefits downstream tasks in both safety and sample efficiency.
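
As a rough illustration of the kind of $k$-nearest neighbor entropy estimator the abstract refers to, the sketch below shows a generic particle-based entropy proxy in the style of Kozachenko-Leonenko estimators. It is a minimal sketch, not the paper's implementation; the function name `knn_entropy_reward`, the choice `k=12`, and the `log(1 + d)` stabilization are illustrative assumptions.

```python
import numpy as np

def knn_entropy_reward(states, k=12):
    """Particle-based entropy proxy from k-nearest-neighbor distances.

    Illustrative sketch: each state's intrinsic reward grows with the
    distance to its k-th nearest neighbor in the batch, so states in
    sparsely visited regions are rewarded more. `states` is an (N, d)
    array of (possibly embedded) states; k=12 is an arbitrary choice.
    """
    # Pairwise Euclidean distances between all states in the batch.
    diffs = states[:, None, :] - states[None, :, :]      # (N, N, d)
    dists = np.linalg.norm(diffs, axis=-1)               # (N, N)
    # Distance to the k-th nearest neighbor; index 0 is the point itself.
    knn_dists = np.sort(dists, axis=-1)[:, k]            # (N,)
    # log(1 + d_k) is a common stabilization of the raw log-distance
    # term in Kozachenko-Leonenko-style estimators.
    rewards = np.log(1.0 + knn_dists)
    # The batch mean serves as the entropy estimate up to constants.
    return rewards, rewards.mean()

# Example: per-state rewards and an entropy estimate for random 2-D states.
rng = np.random.default_rng(0)
rewards, entropy_estimate = knn_entropy_reward(rng.normal(size=(256, 2)))
```

Avoiding an explicit density model matters here because fitting and normalizing a density over high-dimensional continuous state spaces is itself a hard estimation problem; nearest-neighbor distances sidestep it entirely.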
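The adaptive safety/exploration trade-off can be read as a constrained-optimization scheme. One standard realization, assumed here for illustration and not necessarily the paper's exact update, is a Lagrange multiplier that grows while the undiscounted episode cost exceeds its budget and decays once the constraint is satisfied:

```python
def update_lagrange_multiplier(lam, episode_cost, cost_limit, lr=0.05):
    """One gradient step on a Lagrange multiplier (illustrative).

    The multiplier rises when the finite-horizon, undiscounted episode
    cost exceeds the budget, tilting the objective toward safety, and
    decays toward zero while the constraint is satisfied. All names
    and the learning rate are hypothetical.
    """
    lam += lr * (episode_cost - cost_limit)
    return max(lam, 0.0)  # the multiplier must stay non-negative

def cem_objective(entropy_reward, safety_cost, lam):
    # Entropy objective penalized by the weighted safety cost; dividing
    # by (1 + lam) keeps the combined objective on a bounded scale.
    return (entropy_reward - lam * safety_cost) / (1.0 + lam)
```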
