DAE-Net: Deforming Auto-Encoder for fine-grained shape co-segmentation

November 22, 2023 · Entered Twilight · 🏛 International Conference on Computer Graphics and Interactive Techniques

💤 TWILIGHT: Eternal Rest
Repo abandoned since publication

Repo contents: LICENSE, README.md, bae_net_data, checkpoint, dataset.py, main.py, model.py, render_and_get_mp4.py, teaser.jpg, train.sh, utils.py

Authors: Zhiqin Chen, Qimin Chen, Hang Zhou, Hao Zhang
arXiv ID: 2311.13125
Category: cs.CV (Computer Vision)
Cross-listed: cs.GR
Citations: 10
Venue: International Conference on Computer Graphics and Interactive Techniques
Repository: https://github.com/czq142857/DAE-Net ⭐ 38
Last Checked: 2 months ago
Abstract
We present an unsupervised 3D shape co-segmentation method which learns a set of deformable part templates from a shape collection. To accommodate structural variations in the collection, our network composes each shape by a selected subset of template parts which are affine-transformed. To maximize the expressive power of the part templates, we introduce a per-part deformation network to enable the modeling of diverse parts with substantial geometry variations, while imposing constraints on the deformation capacity to ensure fidelity to the originally represented parts. We also propose a training scheme to effectively overcome local minima. Architecturally, our network is a branched autoencoder, with a CNN encoder taking a voxel shape as input and producing per-part transformation matrices, latent codes, and part existence scores, and the decoder outputting point occupancies to define the reconstruction loss. Our network, coined DAE-Net for Deforming Auto-Encoder, can achieve unsupervised 3D shape co-segmentation that yields fine-grained, compact, and meaningful parts that are consistent across diverse shapes. We conduct extensive experiments on the ShapeNet Part dataset, DFAUST, and an animal subset of Objaverse to show superior performance over prior methods. Code and data are available at https://github.com/czq142857/DAE-Net.
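The abstract's composition idea — each shape is a union of affine-transformed template parts, gated by existence scores, evaluated as point occupancies — can be sketched in a few lines. This is an illustrative toy, not the paper's learned networks: the per-part deformation and occupancy networks are replaced here with a hypothetical sphere template, and the function names (`part_occupancy`, `compose_shape`) are my own.

```python
import numpy as np

def part_occupancy(points, center, radius):
    # Stand-in for the learned per-part implicit: a sphere template.
    # (The real DAE-Net decoder is a deformation-aware neural implicit.)
    return (np.linalg.norm(points - center, axis=-1) < radius).astype(float)

def compose_shape(points, affines, existences, centers, radii):
    """Compose per-part occupancies into a single shape occupancy.

    points:     (N, 3) query points
    affines:    (P, 3, 4) per-part affine transforms (linear part + translation)
    existences: (P,) part existence scores in [0, 1]
    centers, radii: parameters of the toy sphere templates
    """
    homog = np.concatenate([points, np.ones((len(points), 1))], axis=-1)  # (N, 4)
    occs = []
    for A, e, c, r in zip(affines, existences, centers, radii):
        local = homog @ A.T             # map query points into the part's template frame
        occs.append(e * part_occupancy(local, c, r))  # gate part by existence score
    return np.max(occs, axis=0)         # union over the selected subset of parts
```

Dropping a part's existence score to zero removes it from the union, which is how the network composes each shape from only a selected subset of template parts.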
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Computer Vision