A Divergence Minimization Perspective on Imitation Learning Methods

November 06, 2019 · Entered Twilight · 🏛 Conference on Robot Learning

🌅 TWILIGHT: Old Age
Predates the code-sharing era: a pioneer of its time

"Last commit was 5.0 years ago (≥ 5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .gitignore, LICENSE, MUJOCO_LOG.TXT, README.md, all_the_run_experiment_variants_using_multiprocessing.py, balls.py, data_gen, debug_pixels_viewer.py, debug_scripts, eval_scripts, examples, exp_pool_fns, exp_specs, expert_demos_listing.yaml, fix_npv1_csv_files.py, gen_models, latex, might_need_refactor.txt, neural_processes, plotting_scripts, reproducing, rl_swiss.yaml, rl_swiss_conda_env.yaml, rlkit, run_experiment.py, run_policy_and_render.py, run_scripts, scp_local_params.sh, scripts, test_pickup_env.py, test_pusher_env.py, test_pusher_task_env.py, test_slide_env.py, things_running.txt, todo.txt, wrapped_goal_envs.py

Authors: Seyed Kamyar Seyed Ghasemipour, Richard Zemel, Shixiang Gu
arXiv ID: 1911.02256
Category: cs.LG (Machine Learning)
Cross-listed: stat.ML
Citations: 277
Venue: Conference on Robot Learning
Repository: https://github.com/KamyarGh/rl_swiss/blob/master/reproducing/fmax_paper.md ⭐ 66
Last Checked: 2 months ago
Abstract
In many settings, it is desirable to learn decision-making and control policies through learning or bootstrapping from expert demonstrations. The most common approaches under this Imitation Learning (IL) framework are Behavioural Cloning (BC), and Inverse Reinforcement Learning (IRL). Recent methods for IRL have demonstrated the capacity to learn effective policies with access to a very limited set of demonstrations, a scenario in which BC methods often fail. Unfortunately, due to multiple factors of variation, directly comparing these methods does not provide adequate intuition for understanding this difference in performance. In this work, we present a unified probabilistic perspective on IL algorithms based on divergence minimization. We present $f$-MAX, an $f$-divergence generalization of AIRL [Fu et al., 2018], a state-of-the-art IRL method. $f$-MAX enables us to relate prior IRL methods such as GAIL [Ho & Ermon, 2016] and AIRL [Fu et al., 2018], and understand their algorithmic properties. Through the lens of divergence minimization we tease apart the differences between BC and successful IRL approaches, and empirically evaluate these nuances on simulated high-dimensional continuous control domains. Our findings conclusively identify that IRL's state-marginal matching objective contributes most to its superior performance. Lastly, we apply our new understanding of IL methods to the problem of state-marginal matching, where we demonstrate that in simulated arm pushing environments we can teach agents a diverse range of behaviours using simply hand-specified state distributions and no reward functions or expert demonstrations. For datasets and reproducing results please refer to https://github.com/KamyarGh/rl_swiss/blob/master/reproducing/fmax_paper.md .
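The abstract's central object is the f-divergence: a family of divergences D_f(P ∥ Q) = E_Q[f(P/Q)], where different choices of the convex generator f recover familiar divergences (forward KL, reverse KL, Jensen-Shannon), which is what lets f-MAX relate BC, GAIL, and AIRL under one objective. As a minimal illustrative sketch of that definition only — not the paper's rl_swiss implementation, and using toy hand-picked distributions — one can compute several divergences from the same formula:

```python
import math

def f_divergence(p, q, f):
    """D_f(P || Q) = sum_x q(x) * f(p(x) / q(x)) for discrete
    distributions with full support (all q(x) > 0)."""
    return sum(qi * f(pi / qi) for pi, qi in zip(p, q))

# Different generator functions f recover familiar divergences.
kl_forward = lambda t: t * math.log(t)   # yields KL(P || Q), "mode-covering"
kl_reverse = lambda t: -math.log(t)      # yields KL(Q || P), "mode-seeking"

# Toy stand-ins for an expert marginal P and a policy marginal Q.
p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]

print(f_divergence(p, q, kl_forward))
print(f_divergence(p, q, kl_reverse))
```

Both prints are small positive numbers, and either divergence is exactly zero when p equals q — matching the intuition that minimizing D_f drives the policy's state(-action) marginal toward the expert's, which the abstract identifies as the ingredient behind IRL's advantage over BC.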
Community shame:
Not yet rated

📜 Similar Papers

In the same crypt: Machine Learning