Self-Supervised Policy Adaptation during Deployment
July 08, 2020 · Entered Twilight · International Conference on Learning Representations
"Last commit was 5.0 years ago (β₯5 year threshold)"
Evidence collected by the PWNC Scanner
Repo contents: .gitignore, README.md, images, logs, scripts, setup, src
Authors
Nicklas Hansen, Rishabh Jangir, Yu Sun, Guillem Alenyà, Pieter Abbeel, Alexei A. Efros, Lerrel Pinto, Xiaolong Wang
arXiv ID
2007.04309
Category
cs.LG: Machine Learning
Cross-listed
cs.CV,
cs.RO,
stat.ML
Citations
183
Venue
International Conference on Learning Representations
Repository
https://github.com/nicklashansen/policy-adaptation-during-deployment
★ 114
Last Checked
2 months ago
Abstract
In most real-world scenarios, a policy trained by reinforcement learning in one environment needs to be deployed in another, potentially quite different environment. However, generalization across different environments is known to be hard. A natural solution would be to keep training after deployment in the new environment, but this cannot be done if the new environment offers no reward signal. Our work explores the use of self-supervision to allow the policy to continue training after deployment without using any rewards. While previous methods explicitly anticipate changes in the new environment, we assume no prior knowledge of those changes yet still obtain significant improvements. Empirical evaluations are performed on diverse simulation environments from the DeepMind Control Suite and ViZDoom, as well as real robotic manipulation tasks in continuously changing environments, taking observations from an uncalibrated camera. Our method improves generalization in 31 out of 36 environments across various tasks and outperforms domain randomization on a majority of environments.
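The core idea in the abstract, continuing to train at deployment time from a self-supervised signal rather than rewards, can be illustrated with a minimal sketch. The linear encoder, the frozen policy head, and the inverse-dynamics auxiliary objective below are illustrative assumptions, not the paper's actual architecture or loss; they only show the mechanism of adapting a shared encoder with a reward-free loss while acting.

```python
import numpy as np

# Hedged sketch: test-time adaptation of a shared encoder via a
# self-supervised inverse-dynamics objective (predict the action taken
# between two consecutive observations). All shapes and the linear
# parameterization are illustrative assumptions.
rng = np.random.default_rng(0)
obs_dim, feat_dim, act_dim = 8, 4, 2

W_enc = rng.normal(scale=0.1, size=(feat_dim, obs_dim))      # shared encoder (adapted at deployment)
W_pi  = rng.normal(scale=0.1, size=(act_dim, feat_dim))      # policy head (frozen at deployment)
W_inv = rng.normal(scale=0.1, size=(act_dim, 2 * feat_dim))  # inverse-dynamics head

def encode(W, o):
    return W @ o

def policy(o):
    return W_pi @ encode(W_enc, o)

def inv_dyn_loss(W, o_t, o_t1, a_t):
    """Self-supervised loss: squared error in predicting the action
    that led from observation o_t to o_t1. Requires no reward."""
    z = np.concatenate([encode(W, o_t), encode(W, o_t1)])
    a_hat = W_inv @ z
    return 0.5 * np.sum((a_hat - a_t) ** 2)

def adapt_step(W, o_t, o_t1, a_t, lr=1e-2):
    """One gradient step on the encoder only (heads stay fixed).
    Gradient is computed analytically for this linear toy case."""
    a_hat = W_inv @ np.concatenate([encode(W, o_t), encode(W, o_t1)])
    err = a_hat - a_t
    g_zt  = W_inv[:, :feat_dim].T @ err                 # dL/dz_t
    g_zt1 = W_inv[:, feat_dim:].T @ err                 # dL/dz_{t+1}
    grad = np.outer(g_zt, o_t) + np.outer(g_zt1, o_t1)  # dL/dW_enc
    return W - lr * grad

# Deployment loop: act with the frozen policy head while the encoder
# adapts to a (possibly shifted) observation distribution, reward-free.
o_t, o_t1 = rng.normal(size=obs_dim), rng.normal(size=obs_dim)
a_t = policy(o_t)
before = inv_dyn_loss(W_enc, o_t, o_t1, a_t)
for _ in range(50):
    W_enc = adapt_step(W_enc, o_t, o_t1, a_t)
after = inv_dyn_loss(W_enc, o_t, o_t1, a_t)
```

Note the design choice this illustrates: only the encoder receives test-time gradients, so the policy's behavior shifts only through the representation, and the self-supervised loss keeps decreasing without any environment reward.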
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt: Machine Learning
XGBoost: A Scalable Tree Boosting System
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Semi-Supervised Classification with Graph Convolutional Networks
Proximal Policy Optimization Algorithms