Self-Improving SLAM in Dynamic Environments: Learning When to Mask

October 15, 2022 · Declared Dead · 🏛 British Machine Vision Conference

🦴 CAUSE OF DEATH: Skeleton Repo
Boilerplate only, no real code

Repo contents: README.md, consinv_dataset.png

Authors: Adrian Bojko, Romain Dupont, Mohamed Tamaazousti, Hervé Le Borgne
arXiv ID: 2210.08350
Category: cs.CV (Computer Vision)
Cross-listed: cs.AI
Citations: 4
Venue: British Machine Vision Conference
Repository: https://github.com/adrianbojko/consinv-dataset ⭐ 5
Last Checked: 1 month ago
Abstract
Visual SLAM (Simultaneous Localization and Mapping) in dynamic environments typically relies on identifying and masking image features on moving objects to prevent them from degrading performance. Current approaches are suboptimal: they either fail to mask objects when needed or, on the contrary, mask objects needlessly. Thus, we propose a novel SLAM method that learns when masking objects improves its performance in dynamic scenarios. Given a segmentation method and a SLAM system, we give the latter the ability of Temporal Masking, i.e., to infer when certain classes of objects should be masked to maximize any given SLAM metric. We do not assume any motion priors: our method learns by itself to mask moving objects. To avoid high annotation costs, we created an automatic annotation method for self-supervised training. We also constructed a new dataset, named ConsInv, which includes challenging real-world dynamic sequences both indoors and outdoors. Our method reaches the state of the art on the TUM RGB-D dataset and outperforms it on the KITTI and ConsInv datasets.
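The core idea in the abstract can be sketched as a selection problem: for each segmentable object class, keep its mask only if masking it actually improves the chosen SLAM metric. This is a minimal illustrative sketch, not the paper's implementation (the repo has no code); all names, the greedy selection strategy, and the toy error model are assumptions.

```python
# Hypothetical sketch of the "learn when to mask" idea: try masking each
# object class and keep the mask only when it lowers the absolute
# trajectory error (ATE, lower is better). All names are illustrative.
from typing import Callable, Set


def select_classes_to_mask(
    classes: Set[str],
    run_slam_ate: Callable[[Set[str]], float],
) -> Set[str]:
    """Greedily keep a class mask only if it reduces trajectory error."""
    masked: Set[str] = set()
    baseline = run_slam_ate(masked)
    for cls in sorted(classes):
        candidate = masked | {cls}
        ate = run_slam_ate(candidate)
        if ate < baseline:  # masking this class helped; keep it
            masked = candidate
            baseline = ate
    return masked


# Toy stand-in for a SLAM evaluation: masking moving classes ("car",
# "person") reduces error; masking a static class ("building") hurts,
# since it needlessly removes useful static features.
def toy_ate(masked: Set[str]) -> float:
    error = 1.0
    if "car" in masked:
        error -= 0.3
    if "person" in masked:
        error -= 0.2
    if "building" in masked:
        error += 0.1
    return error


print(sorted(select_classes_to_mask({"car", "person", "building"}, toy_ate)))
```

In this toy setup the selector keeps the masks for the moving classes and rejects the static one, which mirrors the abstract's claim that current methods fail by masking either too little or too much.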
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Computer Vision

Died the same way — 🦴 Skeleton Repo