Auxiliary Modality Learning with Generalized Curriculum Distillation


Abstract

Driven by the needs of real-world applications, Auxiliary Modality Learning (AML) offers the possibility to utilize more information from auxiliary data during training, while requiring data from only one or a few modalities at test time, saving overall computational cost and reducing the amount of input data needed for inference. In this work, we formally define Auxiliary Modality Learning (AML), systematically classify types of auxiliary modalities (in visual computing) and architectures for AML, and analyze their performance. We also analyze the conditions under which AML works well from the optimization and data-distribution perspectives. To guide the choices that achieve optimal performance with AML, we propose a novel method for selecting the best auxiliary modality and estimating an upper-bound performance before executing AML. In addition, we propose a new AML method using generalized curriculum distillation to enable more effective curriculum learning. Our method achieves the best performance compared to other state-of-the-art (SOTA) methods.
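To make the AML setting concrete, below is a minimal, hypothetical PyTorch sketch of the general train-with-auxiliary / test-with-primary pattern via knowledge distillation: a teacher that sees both the primary modality (e.g., RGB) and an auxiliary modality (e.g., depth) guides a student that sees only the primary modality, so the auxiliary data is not needed at inference. The network names, modality choices, and loss weighting here are illustrative assumptions and do not reproduce the paper's generalized curriculum distillation method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TeacherNet(nn.Module):
    """Teacher sees the primary modality (e.g., RGB) plus an auxiliary
    modality (e.g., depth), concatenated along the channel dimension."""
    def __init__(self, primary_ch=3, aux_ch=1, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(primary_ch + aux_ch, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_classes),
        )

    def forward(self, primary, aux):
        return self.backbone(torch.cat([primary, aux], dim=1))


class StudentNet(nn.Module):
    """Student sees only the primary modality; this is the network used
    at test time, so no auxiliary data is required after training."""
    def __init__(self, primary_ch=3, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(primary_ch, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_classes),
        )

    def forward(self, primary):
        return self.backbone(primary)


def distillation_step(student, teacher, primary, aux, labels,
                      optimizer, alpha=0.5, temperature=4.0):
    """One AML-style training step: the student is supervised by the task
    labels and by the soft predictions of the auxiliary-modality teacher."""
    with torch.no_grad():
        teacher_logits = teacher(primary, aux)
    student_logits = student(primary)
    task_loss = F.cross_entropy(student_logits, labels)
    distill_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    loss = (1 - alpha) * task_loss + alpha * distill_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    teacher, student = TeacherNet(), StudentNet()
    optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
    rgb = torch.randn(8, 3, 64, 64)    # primary modality
    depth = torch.randn(8, 1, 64, 64)  # auxiliary modality (training only)
    labels = torch.randint(0, 10, (8,))
    print(distillation_step(student, teacher, rgb, depth, labels, optimizer))
    # At test time only the student and the primary modality are used:
    preds = student(rgb).argmax(dim=1)
```

In the paper's method, the distillation signal is additionally organized as a curriculum; this sketch only shows the plain cross-modal distillation baseline that such a curriculum would build on.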

Paper

Auxiliary Modality Learning with Generalized Curriculum Distillation (ICML 2023)
Yu Shen, Xijun Wang, Peng Gao, Ming C. Lin.

Video

The demo video can be found here

Slides

Slides can be found here

Code

The GitHub repository will be posted here soon.