Catastrophic Forgetting
Resources
- Goodfellow, Ian J., Mehdi Mirza, Da Xiao, Aaron Courville, and Yoshua Bengio. 2015. “An Empirical Investigation of Catastrophic Forgetting in Gradient-Based Neural Networks.” arXiv. https://doi.org/10.48550/arXiv.1312.6211.
Related
Main Idea
Training a model on a second task or dataset can greatly degrade its performance on the first task. In the extreme case, training on a strongly Convex loss (with a suitable optimizer) converges to the unique minimizer, so the final model is entirely independent of the initial one and retains nothing of the first task. Neural Networks do not share that guarantee but still forget in practice; for them, Dropout may help combat this. Forgetting is also a concern for Online Machine Learning, where later samples can overwrite what was learned from earlier ones. Rough sketches of both points follow.
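A minimal sketch of the convex case (the quadratic loss, target, step size, and initializations are illustrative assumptions, not from the paper): gradient descent on a strongly convex loss reaches the same unique minimizer from any starting point, so nothing of the "first task" initialization survives.

```python
import numpy as np

# Gradient descent on the strongly convex loss f(w) = 0.5 * ||w - w_star||^2.
# Two very different initializations converge to the same minimizer, so the
# final model is entirely independent of where training started.
w_star = np.array([3.0, -1.0])  # arbitrary illustrative minimizer

def train(w, steps=500, lr=0.1):
    for _ in range(steps):
        grad = w - w_star          # gradient of 0.5 * ||w - w_star||^2
        w = w - lr * grad
    return w

w_a = train(np.zeros(2))               # "pretrained" initialization
w_b = train(np.array([100.0, 100.0]))  # wildly different initialization

print(w_a, w_b)  # both ~ [3, -1]; the starting point is forgotten entirely
```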
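And a minimal numpy sketch of forgetting itself, assuming two toy linear tasks whose labels depend on different input features (the tasks, sample sizes, and hyperparameters are made up for illustration): training sequentially on task B erodes the weight task A relied on, and task-A accuracy falls back toward chance.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(n, axis):
    """Synthetic binary task: the label is the sign of one input feature."""
    X = rng.normal(size=(n, 2))
    y = (X[:, axis] > 0).astype(float)
    return X, y

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(w, X, y, steps=2000, lr=0.1):
    """Plain logistic-regression gradient descent, no regularization."""
    for _ in range(steps):
        p = sigmoid(X @ w)
        w = w - lr * (X.T @ (p - y)) / len(y)
    return w

def accuracy(w, X, y):
    return ((sigmoid(X @ w) > 0.5) == y).mean()

X_a, y_a = make_task(1000, axis=0)  # task A: label = sign of feature 0
X_b, y_b = make_task(1000, axis=1)  # task B: label = sign of feature 1

w = train(np.zeros(2), X_a, y_a)
print("task A accuracy after training on A:", accuracy(w, X_a, y_a))

w = train(w, X_b, y_b)              # continue training on task B only
print("task A accuracy after training on B:", accuracy(w, X_a, y_a))
print("task B accuracy:", accuracy(w, X_b, y_b))
```

Nothing here explicitly "erases" task A; the weight on feature 0 simply decays because it no longer helps on task B's data, which is the essence of the phenomenon.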