Meta-Learning Representations for Continual Learning

May 29, 2019 · Entered Twilight · 🏛 Neural Information Processing Systems

🌅 TWILIGHT: Old Age
Predates the code-sharing era — a pioneer of its time

"Last commit was 5.0 years ago (≥5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .gitignore, README.md, classification_plots.py, configs, datasets, evaluate_imagenet.py, evaluate_omniglot.py, evaluate_regression.py, experiment, maml-rep_omniglot.py, model, oml, oml_imagenet.py, oml_omniglot.py, oml_omniglot_paper.py, oml_regression.py, plotting_scripts, pretraining_imagenet.py, pretraining_omniglot.py, requirements.txt, srnn_imagenet.py, srnn_omniglot.py, utils, visualize_representations.py

Authors: Khurram Javed, Martha White
arXiv ID: 1905.12588
Category: cs.LG (Machine Learning)
Cross-listed: cs.AI, stat.ML
Citations: 360
Venue: Neural Information Processing Systems
Repository: https://github.com/khurramjaved96/mrcl (⭐ 205)
Last Checked: 2 months ago
Abstract
A continual learning agent should be able to build on top of existing knowledge to learn on new data quickly while minimizing forgetting. Current intelligent systems based on neural network function approximators arguably do the opposite: they are highly prone to forgetting and rarely trained to facilitate future learning. One reason for this poor behavior is that they learn from a representation that is not explicitly trained for these two goals. In this paper, we propose OML, an objective that directly minimizes catastrophic interference by learning representations that accelerate future learning and are robust to forgetting under online updates in continual learning. We show that it is possible to learn naturally sparse representations that are more effective for online updating. Moreover, our algorithm is complementary to existing continual learning strategies, such as MER and GEM. Finally, we demonstrate that a basic online updating strategy on representations learned by OML is competitive with rehearsal-based methods for continual learning. We release an implementation of our method at https://github.com/khurramjaved96/mrcl.
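The core structure the abstract describes — a representation network that is frozen during fast online (inner-loop) updates and meta-trained in an outer loop so those updates cause little interference — can be sketched in toy form. Below is a minimal, hypothetical first-order illustration on a linear regression task, not the paper's actual code: the real OML implementation backpropagates through the inner loop (MAML-style second-order gradients) over Omniglot trajectories, whereas this sketch holds the adapted head fixed and takes only a first-order meta-gradient. All names, dimensions, and learning rates here are illustrative assumptions.

```python
import random

random.seed(0)
d, k = 8, 4  # input dim, representation dim (toy sizes)

# W plays the role of the representation network (meta-learned);
# true_w defines the ground-truth linear task to be learned online.
W = [[random.gauss(0, 0.1) for _ in range(d)] for _ in range(k)]
true_w = [random.gauss(0, 1) for _ in range(d)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def matvec(M, x):
    return [dot(row, x) for row in M]

def trajectory(n=10):
    """A short stream of (x, y) samples, seen one at a time."""
    X = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n)]
    return X, [dot(x, true_w) for x in X]

inner_lr, outer_lr = 0.02, 0.005
losses = []
for step in range(300):
    X, y = trajectory()
    # Inner loop: online SGD on the prediction head w only, one sample
    # at a time; the representation W stays frozen (the RLN/PLN split
    # from the paper, in toy linear form).
    w = [0.0] * k
    for x_i, y_i in zip(X, y):
        r = matvec(W, x_i)
        err = dot(w, r) - y_i
        w = [wa - inner_lr * 2 * err * ra for wa, ra in zip(w, r)]
    # Outer loop: first-order meta-update of the representation W on
    # the whole trajectory, holding the adapted head fixed.
    errs = [dot(w, matvec(W, x_i)) - y_i for x_i, y_i in zip(X, y)]
    g = [sum(e * x_i[b] for e, x_i in zip(errs, X)) / len(X)
         for b in range(d)]
    for a in range(k):
        for b in range(d):
            W[a][b] -= outer_lr * 2 * w[a] * g[b]
    losses.append(sum(e * e for e in errs) / len(errs))
```

Over meta-training steps, the post-adaptation loss on fresh trajectories falls: the representation is being shaped so that a few online head updates suffice, which is the quantity OML optimizes.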
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Machine Learning