Spatial Context-Aware Self-Attention Model For Multi-Organ Segmentation
December 16, 2020 · Declared Dead · IEEE Workshop/Winter Conference on Applications of Computer Vision
"No code URL or promise found in abstract"
Evidence collected by the PWNC Scanner
Authors
Hao Tang, Xingwei Liu, Kun Han, Shanlin Sun, Narisu Bai, Xuming Chen, Huang Qian, Yong Liu, Xiaohui Xie
arXiv ID
2012.09279
Category
eess.IV: Image & Video Processing
Cross-listed
cs.CV, cs.LG
Citations
30
Venue
IEEE Workshop/Winter Conference on Applications of Computer Vision
Last Checked
2 months ago
Abstract
Multi-organ segmentation is one of the most successful applications of deep learning in medical image analysis. Deep convolutional neural nets (CNNs) have shown great promise in achieving clinically applicable image segmentation performance on CT or MRI images. State-of-the-art CNN segmentation models apply either 2D or 3D convolutions on input images, with pros and cons associated with each method: 2D convolution is fast and less memory-intensive but inadequate for extracting 3D contextual information from volumetric images, while the opposite is true for 3D convolution. To fit a 3D CNN model on CT or MRI images on commodity GPUs, one usually has to either downsample input images or use cropped local regions as inputs, which limits the utility of 3D models for multi-organ segmentation. In this work, we propose a new framework for combining 3D and 2D models, in which the segmentation is realized through high-resolution 2D convolutions, but guided by spatial contextual information extracted from a low-resolution 3D model. We implement a self-attention mechanism to control which 3D features should be used to guide 2D segmentation. Our model is light on memory usage but fully equipped to take 3D contextual information into account. Experiments on multiple organ segmentation datasets demonstrate that by taking advantage of both 2D and 3D models, our method consistently outperforms existing 2D and 3D models in organ segmentation accuracy, while being able to directly take raw whole-volume image data as inputs.
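The fusion mechanism the abstract describes can be sketched in a few lines: high-resolution 2D features act as attention queries over low-resolution 3D context features, and the attended context is added back to the 2D stream. This is a minimal NumPy sketch of that idea under our own assumptions; the function name, weight matrices, and shapes are illustrative, not the authors' implementation (the paper does not release code, which is why it is listed here).

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_guided_fusion(feat2d, feat3d, w_q, w_k, w_v):
    """Fuse high-res 2D features with low-res 3D context via attention.

    feat2d : (H*W, C) flattened 2D features for one slice (queries)
    feat3d : (N, C)   flattened low-resolution 3D context (keys/values)
    w_q, w_k, w_v : (C, C) projection matrices (learned in a real model)
    Returns fused features of shape (H*W, C).
    """
    q = feat2d @ w_q                     # queries from the 2D stream
    k = feat3d @ w_k                     # keys from the 3D context
    v = feat3d @ w_v                     # values from the 3D context
    # Each 2D location attends over all 3D context positions.
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]), axis=-1)  # (H*W, N)
    context = attn @ v                   # aggregated 3D context per location
    return feat2d + context              # residual fusion into the 2D stream
```

Because the attention map is computed against the small low-resolution 3D feature set, the memory cost stays close to that of a plain 2D model, which is the trade-off the abstract emphasizes.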
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt · Image & Video Processing
- Kvasir-SEG: A Segmented Polyp Dataset · 👻 Ghosted
- Deep Learning for Hyperspectral Image Classification: An Overview · 👻 Ghosted
- U-Net and its variants for medical image segmentation: theory and applications · 👻 Ghosted
- Algorithm Unrolling: Interpretable, Efficient Deep Learning for Signal and Image Processing · 👻 Ghosted
- ResUNet++: An Advanced Architecture for Medical Image Segmentation · 👻 Ghosted
Died the same way · 👻 Ghosted
- Language Models are Few-Shot Learners
- PyTorch: An Imperative Style, High-Performance Deep Learning Library
- XGBoost: A Scalable Tree Boosting System