Catastrophic Forgetting

Resources

Main Idea

Training a model on a second task/dataset can severely degrade its performance on the first task. For instance, training on a strongly Convex loss (with an optimizer that converges) yields a final model that is entirely independent of the initial model: a strongly Convex loss has a unique minimizer, so the optimizer ends up at the same point regardless of initialization, and whatever was learned on the first task is overwritten. For Neural Networks, Dropout may help combat this. Catastrophic forgetting is also a concern in Online Machine Learning, where later updates may erase what was learned from earlier data.
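
Below is a minimal sketch of the phenomenon, assuming PyTorch and two synthetic, deliberately conflicting classification tasks (the tasks, network, and hyperparameters are illustrative choices, not from the note): a small MLP is trained on task A, then on task B with no replay of task A, and task-A accuracy collapses.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(shift):
    # Two-class problem on 2D Gaussian inputs; the decision boundary
    # depends on `shift`, so task A and task B demand conflicting rules.
    x = torch.randn(2000, 2)
    y = (x[:, 0] + shift * x[:, 1] > 0).long()
    return x, y

def accuracy(model, x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

def train(model, x, y, epochs=200):
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))

xa, ya = make_task(shift=+3.0)   # task A
xb, yb = make_task(shift=-3.0)   # task B, conflicting with A

train(model, xa, ya)
print("task A acc after training on A:", accuracy(model, xa, ya))

train(model, xb, yb)             # sequential training, no replay of task A
print("task A acc after training on B:", accuracy(model, xa, ya))
print("task B acc after training on B:", accuracy(model, xb, yb))
```

Running this typically shows task-A accuracy near 1.0 after the first phase and far below chance after the second, since the two tasks pull the shared weights toward incompatible solutions.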