One-Shot Informed Robotic Visual Search in the Wild

March 22, 2020 · Entered Twilight · 🏛 IEEE/RSJ International Conference on Intelligent Robots and Systems

🌅 TWILIGHT: Old Age
Predates the code-sharing era – a pioneer of its time

"Last commit was 5.0 years ago (โ‰ฅ5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .gitignore, README.md, assets, clustering, generators, lists, network, requirements.txt, tracking, utils

Authors: Karim Koreitem, Florian Shkurti, Travis Manderson, Wei-Di Chang, Juan Camilo Gamboa Higuera, Gregory Dudek
arXiv ID: 2003.10010
Category: cs.RO (Robotics)
Cross-listed: cs.CV, cs.LG
Citations: 13
Venue: IEEE/RSJ International Conference on Intelligent Robots and Systems
Repository: https://github.com/rvl-lab-utoronto/visual_search_in_the_wild ⭐ 11
Last checked: 1 month ago
Abstract
We consider the task of underwater robot navigation for the purpose of collecting scientifically relevant video data for environmental monitoring. The majority of field robots that currently perform monitoring tasks in unstructured natural environments navigate via path-tracking a pre-specified sequence of waypoints. Although this navigation method is often necessary, it is limiting because the robot does not have a model of what the scientist deems to be relevant visual observations. Thus, the robot can neither visually search for particular types of objects, nor focus its attention on parts of the scene that might be more relevant than the pre-specified waypoints and viewpoints. In this paper we propose a method that enables informed visual navigation via a learned visual similarity operator that guides the robot's visual search towards parts of the scene that look like an exemplar image, which is given by the user as a high-level specification for data collection. We propose and evaluate a weakly supervised video representation learning method that outperforms ImageNet embeddings for similarity tasks in the underwater domain. We also demonstrate the deployment of this similarity operator during informed visual navigation in collaborative environmental monitoring scenarios, in large-scale field trials, where the robot and a human scientist collaboratively search for relevant visual content.
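To make the abstract's core idea concrete, here is a minimal illustrative sketch of exemplar-guided visual search: score regions of a camera frame against a user-provided exemplar image using cosine similarity over feature embeddings, then attend to the highest-scoring region. The embedding function, patch size, stride, and scoring rule below are assumptions for illustration, not the authors' pipeline; in the paper the features come from a weakly supervised video representation rather than the trivial stand-in used here.

```python
# Illustrative sketch only: score image patches against an exemplar image
# via cosine similarity of feature embeddings. The embed() stand-in, patch
# size, and stride are assumptions, not the paper's exact method.
import numpy as np

def embed(image_patch: np.ndarray) -> np.ndarray:
    """Placeholder for a learned feature extractor (e.g., a weakly
    supervised video representation). Here it simply flattens the patch."""
    return image_patch.reshape(-1).astype(np.float64)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def similarity_map(frame: np.ndarray, exemplar: np.ndarray,
                   patch: int = 64, stride: int = 32):
    """Slide a window over the frame and score each patch against the exemplar."""
    exemplar_feat = embed(exemplar)
    h, w = frame.shape[:2]
    scores = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            feat = embed(frame[y:y + patch, x:x + patch])
            scores.append(((y, x), cosine_similarity(feat, exemplar_feat)))
    return scores

if __name__ == "__main__":
    # Random stand-ins for a camera frame and a user-provided exemplar.
    frame = np.random.rand(256, 256, 3)
    exemplar = np.random.rand(64, 64, 3)
    # The highest-scoring patch is the candidate region the robot would
    # steer its attention (and navigation) toward.
    best = max(similarity_map(frame, exemplar), key=lambda s: s[1])
    print("Most exemplar-like patch at (y, x) =", best[0], "score =", round(best[1], 3))
```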
Community shame: Not yet rated

📜 Similar Papers

In the same crypt – Robotics