Efficient Parallel Methods for Deep Reinforcement Learning

May 13, 2017 · Entered Twilight · 🏛 arXiv.org

🌅 TWILIGHT: Old Age
Predates the code-sharing era: a pioneer of its time

"Last commit was 8.0 years ago (≥5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .gitignore, LICENSE.txt, README.md, actor_learner.py, atari_emulator.py, atari_roms, emulator_runner.py, environment.py, environment_creator.py, logger_utils.py, networks.py, paac.py, policy_v_network.py, pretrained, readme_files, runners.py, test.py, train.py

Authors: Alfredo V. Clemente, Humberto N. Castejón, Arjun Chandra
arXiv ID: 1705.04862
Category: cs.LG (Machine Learning)
Citations: 118
Venue: arXiv.org
Repository: https://github.com/alfredvc/paac ⭐ 201
Last Checked: 2 months ago
Abstract
We propose a novel framework for efficient parallelization of deep reinforcement learning algorithms, enabling these algorithms to learn from multiple actors on a single machine. The framework is algorithm agnostic and can be applied to on-policy, off-policy, value-based, and policy-gradient-based algorithms. Given its inherent parallelism, the framework can be efficiently implemented on a GPU, allowing the usage of powerful models while significantly reducing training time. We demonstrate the effectiveness of our framework by implementing an advantage actor-critic algorithm on a GPU, using on-policy experiences and employing synchronous updates. Our algorithm achieves state-of-the-art performance on the Atari domain after only a few hours of training. Our framework thus opens the door for much faster experimentation on demanding problem domains. Our implementation is open-source and publicly available at https://github.com/alfredvc/paac
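The core idea the abstract describes — many actors advanced in lock-step so that a single batched policy evaluation and a single synchronous update serve all environments at once — can be sketched in a few lines. This is a hypothetical illustration, not code from the paac repository: `ToyEnv`, the linear policy, and all names below are stand-ins, and the actual gradient update is only indicated by a comment.

```python
import math
import random

class ToyEnv:
    """Stand-in environment with random observations and a toy reward."""
    def __init__(self, obs_dim, seed):
        self.obs_dim = obs_dim
        self.rng = random.Random(seed)

    def reset(self):
        return [self.rng.gauss(0, 1) for _ in range(self.obs_dim)]

    def step(self, action):
        obs = [self.rng.gauss(0, 1) for _ in range(self.obs_dim)]
        reward = 1.0 if action == 0 else 0.0  # toy reward signal
        return obs, reward

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def policy(obs, weights):
    """Linear stand-in for the policy network: logits = obs . W."""
    n_actions = len(weights[0])
    logits = [sum(o * w[a] for o, w in zip(obs, weights)) for a in range(n_actions)]
    return softmax(logits)

def run_synchronous_rollout(envs, weights, t_max=5, seed=0):
    """Advance all environments together for t_max steps (one rollout)."""
    rng = random.Random(seed)
    observations = [env.reset() for env in envs]
    returns = [0.0] * len(envs)
    for _ in range(t_max):
        # One "batched" policy evaluation covering every environment.
        action_probs = [policy(obs, weights) for obs in observations]
        actions = [rng.choices(range(len(p)), weights=p)[0] for p in action_probs]
        stepped = [env.step(a) for env, a in zip(envs, actions)]
        observations = [s[0] for s in stepped]
        returns = [r + s[1] for r, s in zip(returns, stepped)]
        # A real implementation would accumulate (obs, action, reward) here
        # and apply one synchronous actor-critic update per rollout.
    return returns

envs = [ToyEnv(obs_dim=4, seed=i) for i in range(8)]
weights = [[0.0, 0.0] for _ in range(4)]  # 4 obs dims x 2 actions
returns = run_synchronous_rollout(envs, weights)
print(len(returns))  # one accumulated return per parallel environment
```

Because every environment steps in lock-step, the per-step policy evaluation is one batched forward pass rather than one call per actor — which is what makes the GPU implementation efficient in the paper's framework.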
