Continual Learning of Generative Models with Maximum Entropy Generative Replay

Abstract

We study continual learning of generative models. We focus on improving generative replay, a technique that generates samples from previous tasks and has been shown to combat catastrophic forgetting effectively. A major challenge for generative replay is the accumulation of errors over time, which can cause the generative model to forget previous tasks. As a remedy, we propose a method to estimate and maximize the entropy of the marginal distribution over tasks. Our empirical results suggest that the proposed regularizer yields significant improvements in generative modeling performance. We demonstrate the effectiveness of our approach on an MNIST-to-Fashion-MNIST task sequence and on class-incremental versions of the rotated MNIST and rotated Fashion-MNIST datasets.
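To make the idea concrete, below is a minimal sketch of generative replay with an entropy-style regularizer over the task marginal, assuming a VAE-like generator. The names `model.loss`, `model.decode`, the frozen copy `prev_model`, the auxiliary task classifier `task_clf`, and the specific entropy estimator (batch-averaged task posteriors of generated samples) are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of generative replay with a task-entropy regularizer.
# Assumes a VAE-style generator `model` exposing `loss(x)` and `decode(z)`,
# a frozen copy `prev_model` trained on earlier tasks, and a task classifier
# `task_clf` used to estimate the marginal distribution over tasks.

import torch
import torch.nn.functional as F


def replay_batch(prev_model, batch_size, latent_dim, device):
    """Draw a batch of replayed samples from the frozen previous-task generator."""
    with torch.no_grad():
        z = torch.randn(batch_size, latent_dim, device=device)
        return prev_model.decode(z)


def task_marginal_entropy(task_clf, samples):
    """Entropy of the batch-averaged task distribution of generated samples.

    Maximizing this term encourages the generator to keep covering all tasks
    seen so far instead of collapsing onto the most recent one.
    """
    probs = F.softmax(task_clf(samples), dim=1)  # per-sample task posteriors
    marginal = probs.mean(dim=0)                 # estimate of p(task)
    return -(marginal * marginal.clamp_min(1e-8).log()).sum()


def training_step(model, prev_model, task_clf, real_x, optimizer,
                  latent_dim, lambda_ent=1.0, device="cpu"):
    """One training step: current-task loss + replay loss - entropy bonus."""
    optimizer.zero_grad()

    # Loss on current-task data (e.g. negative ELBO for a VAE).
    loss_current = model.loss(real_x)

    # Loss on samples replayed from the frozen generator of previous tasks.
    replayed = replay_batch(prev_model, real_x.size(0), latent_dim, device)
    loss_replay = model.loss(replayed)

    # Entropy of the task marginal of fresh samples from the current model;
    # it is maximized, so it enters the loss with a negative sign.
    z = torch.randn(real_x.size(0), latent_dim, device=device)
    entropy = task_marginal_entropy(task_clf, model.decode(z))

    loss = loss_current + loss_replay - lambda_ent * entropy
    loss.backward()
    optimizer.step()
    return loss.item()
```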
