Safe Policies for Reinforcement Learning via Primal-Dual Methods
November 20, 2019 · Declared Dead · IEEE Transactions on Automatic Control
"No code URL or promise found in abstract"
Evidence collected by the PWNC Scanner
Authors
Santiago Paternain, Miguel Calvo-Fullana, Luiz F. O. Chamon, Alejandro Ribeiro
arXiv ID
1911.09101
Category
eess.SY: Systems & Control (EE)
Cross-listed
cs.LG, math.OC
Citations
126
Venue
IEEE Transactions on Automatic Control
Last Checked
2 months ago
Abstract
In this paper, we study the learning of safe policies in the setting of reinforcement learning problems. That is, we aim to control a Markov Decision Process (MDP) whose transition probabilities we do not know, but for which we have access to sample trajectories through experience. We define safety as the agent remaining in a desired safe set with high probability during the operation time. We therefore consider a constrained MDP where the constraints are probabilistic. Since there is no straightforward way to optimize the policy with respect to the probabilistic constraint in a reinforcement learning framework, we propose an ergodic relaxation of the problem. The advantages of the proposed relaxation are threefold. (i) The safety guarantees are maintained in the case of episodic tasks, and they are kept up to a given time horizon for continuing tasks. (ii) The constrained optimization problem, despite its non-convexity, has an arbitrarily small duality gap if the parametrization of the policy is rich enough. (iii) The gradients of the Lagrangian associated with the safe-learning problem can be easily computed using standard policy gradient results and stochastic approximation tools. Leveraging these advantages, we establish that primal-dual algorithms are able to find policies that are safe and optimal. We test the proposed approach in a navigation task in a continuous domain. The numerical results show that our algorithm is capable of dynamically adapting the policy to the environment and the required safety levels.
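To make the primal-dual idea in the abstract concrete, here is a rough sketch of the general scheme, not the paper's algorithm: a hypothetical one-step constrained problem (a toy bandit standing in for the MDP) with a softmax policy, where the primal variable ascends the policy gradient of the Lagrangian and the dual variable (Lagrange multiplier) descends on the constraint slack. All numbers, names, and the toy problem itself are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem (not from the paper): three actions, each with
# a stochastic reward and a binary "stayed safe" indicator. The highest-
# reward action is also the least safe, so the constraint binds.
REWARD_MEAN = np.array([1.0, 2.0, 3.0])
SAFE_PROB = np.array([0.99, 0.90, 0.50])
SAFETY_LEVEL = 0.85  # require P(safe) >= 0.85

def sample(action):
    """Draw one reward and one safety indicator for the chosen action."""
    reward = REWARD_MEAN[action] + 0.1 * rng.standard_normal()
    safe = float(rng.random() < SAFE_PROB[action])
    return reward, safe

def policy(theta):
    """Softmax policy over the three actions."""
    z = np.exp(theta - theta.max())
    return z / z.sum()

theta = np.zeros(3)  # primal variable: policy parameters
lam = 0.0            # dual variable: multiplier for the safety constraint
lr_theta, lr_lam = 0.05, 0.05

for _ in range(20000):
    p = policy(theta)
    a = rng.choice(3, p=p)
    reward, safe = sample(a)
    # REINFORCE estimate of the Lagrangian gradient: the score function
    # weighted by reward + lam * (constraint slack).
    grad_log = -p
    grad_log[a] += 1.0
    theta += lr_theta * (reward + lam * (safe - SAFETY_LEVEL)) * grad_log
    # Dual descent: lam grows while the constraint is violated on average,
    # projected back onto lam >= 0.
    lam = max(0.0, lam - lr_lam * (safe - SAFETY_LEVEL))

p_final = policy(theta)
print("policy:", p_final.round(3))
print("lambda:", round(lam, 3))
print("expected safety:", round(float(p_final @ SAFE_PROB), 3))
```

The multiplier acts as an adaptive penalty: when trajectories violate the safety level, lam rises and shifts the effective reward toward safer actions; when the constraint holds with slack, lam decays toward zero and the objective reverts to plain reward maximization.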
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt – Systems & Control (EE)
👻 Ghosted · Incremental Gradient, Subgradient, and Proximal Methods for Convex Optimization: A Survey
👻 Ghosted · Wireless Network Design for Control Systems: A Survey
👻 Ghosted · Learning-based Model Predictive Control for Safe Exploration
👻 Ghosted · Safety-Critical Model Predictive Control with Discrete-Time Control Barrier Function
👻 Ghosted · Novel Multidimensional Models of Opinion Dynamics in Social Networks
Died the same way – 👻 Ghosted
👻 Ghosted · Language Models are Few-Shot Learners
👻 Ghosted · PyTorch: An Imperative Style, High-Performance Deep Learning Library
👻 Ghosted · XGBoost: A Scalable Tree Boosting System