Abstract
In this paper, we propose the ``small-shot auxiliary modality distillation network (AMD-S-Net)'', a novel learning framework for autonomous systems that uses a small amount of ``auxiliary information'' to complement learning from the main modality. AMD-S-Net contains a two-stream framework design that can fully extract information from different types of data (i.e., paired/unpaired multi-modality data) to distill knowledge more effectively.
We also propose a novel training paradigm based on a ``reset operation'' that lets the teacher iteratively explore the local loss landscape near the student's domain, providing local landscape information and potential directions for the student to discover better solutions, thus achieving higher learning performance. Our experiments show that AMD-S-Net and our training paradigm outperform other SOTA methods in autonomous steering by up to 12.7% and 18.1%, respectively.
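To make the reset-based training paradigm concrete, here is a minimal, illustrative sketch using scalar linear models on toy data. The model form, reset schedule, learning rate, and distillation loss below are all assumptions for illustration, not the paper's actual implementation or hyperparameters.

```python
# Toy 1-D data: the steering target is a linear function of the input feature.
# (Assumed stand-in for real main-modality inputs.)
data = [(x, 3.0 * x) for x in (-2.0, -1.0, 0.5, 1.0, 2.0)]

w_student = 0.0   # student: trained on the main modality only
w_teacher = 0.0   # teacher: stands in for the auxiliary-modality model
lr, reset_every, distill_weight = 0.02, 10, 0.5  # assumed hyperparameters

def task_grad(w):
    # Gradient of the mean-squared task loss over the toy dataset.
    return sum(2 * (w * x - y) * x for x, y in data) / len(data)

for step in range(200):
    if step % reset_every == 0:
        # Reset operation: the teacher restarts from the student's current
        # parameters, so its subsequent updates explore the loss landscape
        # in the student's local neighborhood.
        w_teacher = w_student
    w_teacher -= lr * task_grad(w_teacher)
    # Student follows its own task gradient plus a distillation pull
    # toward the (locally explored) teacher.
    w_student -= lr * (task_grad(w_student)
                       + distill_weight * 2 * (w_student - w_teacher))

err = sum((w_student * x - y) ** 2 for x, y in data) / len(data)
```

The key design choice sketched here is that the teacher never drifts far from the student: each reset re-anchors it to the student's parameters, so the distillation signal reflects the student's own local loss landscape rather than a distant teacher optimum.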
Paper
Small-shot Multi-modal Distillation for Vision-based Autonomous Steering (ICRA 2023)
Yu Shen, Luyu Yang, Xijun Wang, Ming C. Lin.
Appendix
Appendix can be found here
Video
Demo video can be found here
Slides
Slides can be found here
Code
The GitHub repository is coming soon.