SMIT: Stochastic Multi-Label Image-to-Image Translation

December 10, 2018 · Entered Twilight · 🏛 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)

🌅 TWILIGHT: Old Age
Predates the code-sharing era – a pioneer of its time

"Last commit was 6.0 years ago (β‰₯5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .drone.yml, .gitignore, Figures, LICENSE, README.md, azure-pipelines.yml, config.py, data, data_loader.py, datasets, generate_data, main.py, misc, models, solver.py, style_python.sh, test.py, train.py

Authors: Andrés Romero, Pablo Arbeláez, Luc Van Gool, Radu Timofte
arXiv ID: 1812.03704
Category: cs.CV (Computer Vision)
Citations: 67
Venue: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)
Repository: https://github.com/BCV-Uniandes/SMIT ⭐ 38
Last Checked: 2 months ago
Abstract
Cross-domain mapping has been a very active topic in recent years. Given one image, the goal is to translate it to a desired target domain, or to multiple domains in the multi-label case. The problem is highly challenging for three main reasons: (i) unpaired datasets, (ii) multiple attributes, and (iii) the multimodality (e.g., style) associated with the translation. Most existing state-of-the-art methods address only two of these, i.e., either (i) and (ii) or (i) and (iii). In this work, we propose a joint framework covering (i), (ii), and (iii): diverse, multi-mapping image-to-image translation that uses a single generator to conditionally produce countless unique fake images preserving the underlying characteristics of the source image. Instead of style regularization, our system uses an embedding representation, which we call the domain embedding, for both domain and style. Extensive experiments on different datasets demonstrate the effectiveness of our approach compared with the state of the art on both multi-label and multimodal problems. Additionally, our method generalizes to different scenarios: continuous style interpolation, continuous label interpolation, and fine-grained mapping. Code and pretrained models are available at https://github.com/BCV-Uniandes/SMIT.
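The core idea in the abstract is that a single conditioning vector, the domain embedding, fuses the multi-hot target labels with a random style code, so one generator can produce many diverse outputs per target domain. A minimal NumPy sketch of that fusion step follows; the dimensions, the random linear projection `W`, and the function name `domain_embedding` are illustrative assumptions, not the paper's architecture (in SMIT the projection is learned end-to-end inside the generator).

```python
import numpy as np

rng = np.random.default_rng(seed=0)

STYLE_DIM = 20   # dimensionality of the sampled style code (assumed)
EMBED_DIM = 64   # size of the joint domain embedding (assumed)
N_LABELS = 5     # e.g. five binary attributes such as hair color or age

# Illustrative fixed projection; in the real model this mapping is learned.
W = rng.standard_normal((N_LABELS + STYLE_DIM, EMBED_DIM))
W /= np.sqrt(N_LABELS + STYLE_DIM)

def domain_embedding(labels, style=None):
    """Fuse multi-hot target labels and a random style code into one
    conditioning vector: concatenate, then project linearly."""
    if style is None:
        # Stochastic part: a fresh style draw each call yields a new output mode.
        style = rng.standard_normal(STYLE_DIM)
    z = np.concatenate([labels.astype(float), style])
    return z @ W

target = np.array([1, 0, 0, 1, 0])   # same target domain for both calls
e1 = domain_embedding(target)        # two draws with identical labels...
e2 = domain_embedding(target)        # ...give distinct embeddings (multimodality)
print(e1.shape)
print(np.allclose(e1, e2))
```

Feeding such an embedding to the generator at every translation explains the "countless and unique fake images" claim: the label part pins the target domain while the resampled style part varies the output.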