Improving Generalization of Transfer Learning Across Domains Using Latent Features in Autonomous Driving


Abstract

Transfer learning for autonomous driving can be limited by the similarity of features between the source and target domains. The idea of transfer learning is to convey the most important aspects learned in the source domain to the target domain, which is especially crucial in an application like autonomous driving, where training in the real world is inefficient and impractical. In autonomous driving, the source and target domains share many features: for example, images generally contain a common structure of sky, background, and road outlines. While these features are important for learning to drive, they are common to both domains and should not be prioritized in the flow of information during transfer. We hypothesize that certain factors important to human decision making, such as the projected distance to other vehicles and rotational/optical flow, can be emphasized in the transfer learning framework to improve generalization across domains. First, we conduct an ablation study on these features to quantitatively capture their significance in the classifier's decision making using a cosine similarity metric. We then propose a CNN+LSTM transfer learning framework that is complemented by the extracted hidden-state representations of these factors. Our experiments show that the proposed framework generalizes better to unseen domains than other transfer learning baselines on a binary classification task.
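As a rough illustration of the kind of setup described above, the sketch below shows a CNN+LSTM binary classifier whose per-frame image embedding is concatenated with externally extracted latent features (e.g., projected distance to other vehicles, optical flow), plus a cosine-similarity comparison of hidden states that could serve as an ablation-style significance check. This is a minimal sketch assuming PyTorch; the class name LatentAugmentedCNNLSTM, the layer sizes, latent_dim, and the feature_significance helper are illustrative assumptions, not the implementation from the paper.

```python
import torch
import torch.nn as nn

class LatentAugmentedCNNLSTM(nn.Module):
    """CNN+LSTM binary classifier whose recurrent input is complemented by
    externally extracted latent factors. Dimensions are illustrative."""

    def __init__(self, latent_dim=8, hidden_dim=128):
        super().__init__()
        # Small convolutional encoder applied to each frame.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),  # -> 32 * 4 * 4 = 512
        )
        # The LSTM consumes the frame embedding concatenated with latent features.
        self.lstm = nn.LSTM(512 + latent_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)  # binary decision logit

    def forward(self, frames, latents):
        # frames:  (batch, time, 3, H, W)   camera frames
        # latents: (batch, time, latent_dim) extracted factors per frame
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(torch.cat([feats, latents], dim=-1))
        hidden = out[:, -1]                   # final hidden state
        return self.head(hidden), hidden


def feature_significance(h_full, h_ablated):
    # Cosine similarity between hidden states computed with and without a
    # latent factor; lower similarity suggests the factor matters more.
    return torch.cosine_similarity(h_full, h_ablated, dim=-1).mean()
```

In this sketch, running the model once with all latent factors and once with one factor zeroed out, then comparing the two hidden states with feature_significance, gives a simple cosine-similarity proxy for how much that factor influences the classifier's decision.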

Proceedings
International Conference on Robotics and Automation, 2021 (under review)