Abstract and 1 Introduction
Related works
Problem setting
Methodology
4.1. Decision boundary-aware distillation
4.2. Knowledge consolidation
Experimental results and 5.1. Experiment Setup
5.2. Comparison with SOTA methods
5.3. Ablation study
Conclusion and future work and References
\
Supplementary Material
Tab. 1 shows the test performance of different methods on CIFAR-100 and ImageNet-100. The proposed method achieves the best performance promotion after ten consecutive IIL tasks by a large margin, together with a low forgetting rate. Although ISL [13], which was proposed for the similar setting of learning from new sub-categories, also has a low forgetting rate, it fails to meet the new requirement of model enhancement. Attaining better performance on the test data matters more than avoiding forgetting on any particular data.
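\ The two metrics are not defined formally in this excerpt. The following minimal sketch shows one common way to compute them, assuming accuracy is measured on the same held-out test set after every learning phase; the numbers are hypothetical, not results from Tab. 1.

```python
# Hypothetical accuracies on the fixed test set after the base phase
# and four IIL phases (real values would come from Tab. 1).
phase_acc = [70.0, 70.6, 71.4, 70.9, 71.2]

base_acc = phase_acc[0]
promotion = phase_acc[-1] - base_acc         # gain over the base model
forgetting = max(phase_acc) - phase_acc[-1]  # drop from the best phase

print(f"promotion: {promotion:+.1f} pts, forgetting: {forgetting:.1f} pts")
```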
\ In the new IIL setting, none of the rehearsal-based methods, including iCaRL [22], PODNet [4], DER [31], and OnPro [29], performs well. Old exemplars can cause memory overfitting and model bias [35]. Hence, a limited set of old exemplars does not always benefit stability and plasticity [26], especially in the IIL task. The forgetting rate of rehearsal-based methods is also high compared to other methods, which explains their performance degradation on the test data. Detailed performance at each learning phase is shown in Fig. 4. Compared to other methods, which struggle to resist forgetting, ours is the only one that stably improves the existing model on both datasets.
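\ To make the memory-overfitting point concrete, the sketch below shows a generic rehearsal update that mixes a few stored exemplars into every new-data batch. It is an illustrative PyTorch sketch, not the actual procedure of iCaRL, PODNet, DER, or OnPro; the function name and buffer layout are our own assumptions.

```python
import torch
import torch.nn.functional as F

def rehearsal_step(model, optimizer, new_batch, buffer, k=32):
    """One generic rehearsal update (illustrative only)."""
    x_new, y_new = new_batch
    buf_x, buf_y = buffer
    # The same small buffer is re-sampled at every step, so the model
    # can overfit these few exemplars and drift toward them (model bias).
    idx = torch.randint(len(buf_x), (k,))
    x = torch.cat([x_new, buf_x[idx]])
    y = torch.cat([y_new, buf_y[idx]])
    loss = F.cross_entropy(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```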
\ Following ISL [13], we further apply our method to incremental sub-population learning, as shown in Tab. 2. Sub-population incremental learning is a special case of IIL in which the new knowledge comes from new subclasses. Compared to the SOTA ISL [13], our method is notably superior at learning new subclasses over long incremental steps, with a comparably small forgetting rate. Notably, ISL [13] uses the Continual Hyperparameter Framework (CHF) [3] to search for the best learning rate for each setting (as low as 0.005 in the 15-step task), whereas our method starts from the ISL-pretrained base model and keeps a fixed learning rate (0.05). The low learning rate in ISL reduces forgetting but hinders the learning of new knowledge. The proposed method strikes a good balance between learning from unseen subclasses and resisting forgetting on seen classes.
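\ For intuition, the sketch below caricatures a per-setting learning-rate search in the spirit of CHF [3], next to the fixed rate used by our method. The function, its grid, and the two callbacks are hypothetical; the real CHF also tunes stability-plasticity trade-off parameters.

```python
def chf_like_lr_search(train_with_lr, validate,
                       grid=(0.05, 0.02, 0.01, 0.005)):
    """Try each learning rate on the new task and keep the one with the
    best validation score (a caricature of CHF-style search [3])."""
    best_lr, best_score = None, float("-inf")
    for lr in grid:
        score = validate(train_with_lr(lr))  # train a copy, then score it
        if score > best_score:
            best_lr, best_score = lr, score
    return best_lr

FIXED_LR = 0.05  # our method: one rate for every incremental step
```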
\
:::info Authors:
(1) Qiang Nie, Hong Kong University of Science and Technology (Guangzhou);
(2) Weifu Fu, Tencent Youtu Lab;
(3) Yuhuan Lin, Tencent Youtu Lab;
(4) Jialin Li, Tencent Youtu Lab;
(5) Yifeng Zhou, Tencent Youtu Lab;
(6) Yong Liu, Tencent Youtu Lab;
(7) Chengjie Wang, Tencent Youtu Lab.
:::
:::info This paper is available on arxiv under CC BY-NC-ND 4.0 Deed (Attribution-Noncommercial-Noderivs 4.0 International) license.
:::
\

