T-UDA: Temporal Unsupervised Domain Adaptation in Sequential Point Clouds

September 15, 2023 · Entered Twilight · IEEE/RSJ International Conference on Intelligent Robots and Systems

💀 TWILIGHT: Eternal Rest
Repo abandoned since publication

Repo contents: .gitignore, LICENSE, README.md, assets, configs, core, evaluate_uda.py, model_zoo.py, script, setup.cfg, tools, train_uda.py, weights

Authors Awet Haileslassie Gebrehiwot, David Hurych, Karel Zimmermann, Patrick Pérez, Tomáš Svoboda arXiv ID 2309.08302 Category cs.CV: Computer Vision Cross-listed cs.RO Citations 5 Venue IEEE/RSJ International Conference on Intelligent Robots and Systems Repository https://github.com/ctu-vras/T-UDA ⭐ 2 Last Checked 1 month ago
Abstract
Deep perception models have to reliably cope with an open-world setting of domain shifts induced by different geographic regions, sensor properties, mounting positions, and several other reasons. Since covering all domains with annotated data is technically intractable due to the endless possible variations, researchers focus on unsupervised domain adaptation (UDA) methods that adapt models trained on one (source) domain with annotations available to another (target) domain for which only unannotated data are available. Current predominant methods either leverage semi-supervised approaches, e.g., teacher-student setup, or exploit privileged data, such as other sensor modalities or temporal data consistency. We introduce a novel domain adaptation method that leverages the best of both trends. Our approach combines input data's temporal and cross-sensor geometric consistency with the mean teacher method. Dubbed T-UDA for "temporal UDA", such a combination yields massive performance gains for the task of 3D semantic segmentation of driving scenes. Experiments are conducted on Waymo Open Dataset, nuScenes and SemanticKITTI, for two popular 3D point cloud architectures, Cylinder3D and MinkowskiNet. Our codes are publicly available at https://github.com/ctu-vras/T-UDA.
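The mean-teacher component the abstract refers to can be sketched as an exponential moving average (EMA) of the student's weights: a slowly updated teacher produces more stable pseudo-labels for the unannotated target domain. The snippet below is a minimal illustration using plain Python lists as stand-in parameters; the names are illustrative and not taken from the T-UDA repository.

```python
# Minimal sketch of the mean-teacher weight update (exponential moving
# average). Plain Python lists stand in for model parameters; in a real
# pipeline these would be the tensors of the student and teacher networks.

def ema_update(teacher, student, alpha=0.999):
    """teacher <- alpha * teacher + (1 - alpha) * student, elementwise."""
    return [alpha * t + (1.0 - alpha) * s for t, s in zip(teacher, student)]

teacher = [0.0, 0.0]
student = [1.0, 1.0]
teacher = ema_update(teacher, student, alpha=0.9)
# teacher is now approximately [0.1, 0.1]: it tracks the student slowly,
# smoothing out the pseudo-labels used to supervise the target domain.
```

With alpha close to 1, the teacher changes little per step, which is what makes its predictions a stable supervision signal in teacher-student UDA setups.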
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Computer Vision