Learning a Local Feature Descriptor for 3D LiDAR Scans

September 20, 2018 · Entered Twilight · 🏛 IEEE/RSJ International Conference on Intelligent Robots and Systems

🌅 TWILIGHT: Old Age
Predates the code-sharing era – a pioneer of its time

"No code URL or promise found in abstract"
"Code repo scraped from project page (backfill)"

Evidence collected by the PWNC Scanner

Repo contents: CMakeLists.txt, LICENSE, README.md, download_dataset.sh, download_models.sh, download_test_pcd.sh, include, python_cpp, python_scripts, src

Authors: Ayush Dewan, Tim Caselitz, Wolfram Burgard
arXiv ID: 1809.07494
Category: cs.CV (Computer Vision)
Cross-listed: cs.RO
Citations: 26
Venue: IEEE/RSJ International Conference on Intelligent Robots and Systems
Repository: https://github.com/ayushais/deep_3d_descriptor (⭐ 19)
Last checked: 1 month ago
Abstract
Robust data association is necessary for virtually every SLAM system, and finding corresponding points is typically a preprocessing step for scan alignment algorithms. Traditionally, handcrafted feature descriptors were used for these problems, but recently learned descriptors have been shown to perform more robustly. In this work, we propose a local feature descriptor for 3D LiDAR scans. The descriptor is learned using a Convolutional Neural Network (CNN). Our proposed architecture consists of a Siamese network for learning a feature descriptor and a metric learning network for matching the descriptors. We also present a method for estimating local surface patches and obtaining ground-truth correspondences. In extensive experiments, we compare our learned feature descriptor with existing 3D local descriptors and report highly competitive results for multiple experiments in terms of matching accuracy and computation time.
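The pipeline the abstract describes, a shared-weight Siamese encoder that maps two surface patches to descriptors, followed by a small metric network that scores whether they match, can be sketched in plain NumPy. All sizes, weights, and function names here are illustrative stand-ins, not the authors' trained CNN; the point is only the structure: one set of encoder weights shared by both branches, and a learned scoring head on the descriptor difference.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a 64x64 patch flattened, mapped to a 32-D descriptor.
PATCH_DIM, DESC_DIM = 64 * 64, 32

# One shared weight matrix -- the "Siamese" part: both branches use W_enc.
W_enc = rng.normal(scale=0.01, size=(PATCH_DIM, DESC_DIM))

# Toy metric head: a linear layer on the element-wise descriptor difference.
W_metric = rng.normal(scale=0.1, size=DESC_DIM)

def describe(patch):
    """Map a flattened surface patch to a feature descriptor (shared weights)."""
    return np.tanh(patch @ W_enc)

def match_score(patch_a, patch_b):
    """Sigmoid score in (0, 1) that the two patches correspond."""
    d = np.abs(describe(patch_a) - describe(patch_b))
    return 1.0 / (1.0 + np.exp(-(d @ W_metric)))

a = rng.normal(size=PATCH_DIM)
b = rng.normal(size=PATCH_DIM)

s_same = match_score(a, a)  # identical patches: zero difference -> exactly 0.5
s_diff = match_score(a, b)
print(s_same, s_diff)
```

In the real system both parts are trained jointly from the ground-truth correspondences the paper's patch-extraction method provides, so the untrained random weights above only illustrate the data flow, not the learned behavior.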
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt – Computer Vision