Overview of BoMuDA: The input consists of N sources, from which the Best-Source is selected by the Alt-Inc algorithm. Alt-Inc proceeds in an unsupervised fashion to generate the final set of pseudo-labels used to perform boundless DA. The final output is the segmentation map of an image in the target domain.
Paper | Code |
---|---|
BoMuDA | GitHub Code |
We present an unsupervised adaptation approach for visual scene understanding in unstructured traffic environments. Our method is designed for unstructured real-world scenarios with dense and heterogeneous traffic consisting of cars, trucks, two- and three-wheelers, and pedestrians. We describe a new semantic segmentation technique based on unsupervised domain adaptation (DA) that can identify the class or category of each region in RGB images or videos. We also present a novel self-training algorithm (Alt-Inc) for multi-source DA that improves accuracy. Our overall approach is a deep learning-based technique and consists of an unsupervised neural network that achieves 87.18% accuracy on the challenging India Driving Dataset. Our method works well on roads that may not be well-marked or may include dirt, unidentifiable debris, potholes, etc. A key aspect of our approach is that it can also identify objects that are encountered by the model for the first time during the testing phase. We compare our method against state-of-the-art methods and show an improvement of 5.17% - 42.9%. Furthermore, we conduct user studies that qualitatively validate the improvements in visual scene understanding of unstructured driving environments.
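The alternating best-source selection and pseudo-labeling loop described above can be sketched roughly as follows. This is a minimal illustrative sketch in NumPy, not the released implementation: the function name `alt_inc_sketch`, the mean-max-probability confidence heuristic, and the threshold value are all assumptions made for exposition.

```python
import numpy as np

def alt_inc_sketch(source_probs, n_rounds=3, conf_thresh=0.9):
    """Hypothetical sketch of an Alt-Inc-style loop: alternate between
    selecting the best source model and refining pseudo-labels.

    source_probs: list of (H, W, C) softmax maps, one per source model,
                  all predicted on the same target image.
    Returns the index of the selected best source and a pseudo-label
    map where -1 marks pixels below the confidence threshold.
    """
    best_src, pseudo = 0, None
    for _ in range(n_rounds):
        # Best-source step (assumed heuristic): pick the source whose
        # predictions are most confident on the target image.
        confidences = [p.max(axis=-1).mean() for p in source_probs]
        best_src = int(np.argmax(confidences))

        # Incremental step: threshold the best source's predictions to
        # form pseudo-labels; uncertain pixels stay unlabeled (-1).
        probs = source_probs[best_src]
        labels = probs.argmax(axis=-1)
        labels[probs.max(axis=-1) < conf_thresh] = -1
        pseudo = labels
        # A real self-training round would retrain the segmentation
        # network on `pseudo` here, updating source_probs before the
        # next iteration; that step is omitted in this sketch.
    return best_src, pseudo
```

In this toy form the loop converges immediately since the predictions are fixed; the point is only to show the alternation between source selection and pseudo-label generation that drives the unsupervised adaptation.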
Please cite our work if you find it useful:
@inproceedings{kothandaraman2021bomudanet,
title={BoMuDANet: Unsupervised Adaptation for Visual Scene Understanding in Unstructured Driving Environments},
author={Kothandaraman, Divya and Chandra, Rohan and Manocha, Dinesh},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages={3966--3975},
year={2021}
}