Precision Learning: Towards Use of Known Operators in Neural Networks

December 01, 2017 · Declared Dead · 🏛 International Conference on Pattern Recognition

👻 CAUSE OF DEATH: Ghosted
No code link whatsoever

"No code URL or promise found in abstract"

Evidence collected by the PWNC Scanner

Authors: Andreas Maier, Frank Schebesch, Christopher Syben, Tobias Würfl, Stefan Steidl, Jang-Hwan Choi, Rebecca Fahrig
arXiv ID: 1712.00374
Category: cs.CV (Computer Vision)
Citations: 34
Venue: International Conference on Pattern Recognition
Last Checked: 2 months ago
Abstract
In this paper, we consider the use of prior knowledge within neural networks. In particular, we investigate the effect of a known transform within the mapping from input data space to the output domain. We demonstrate that the use of known transforms can change maximal error bounds. To explore the effect further, we consider the problem of X-ray material decomposition as an example of incorporating additional prior knowledge. We demonstrate that including a non-linear function known from the physical properties of the system reduces prediction errors, thereby improving prediction quality from SSIM values of 0.54 to 0.88. This approach is applicable to a wide set of applications in physics and signal processing that provide prior knowledge on such transforms. Maximal error estimation and network understanding could also be facilitated within the context of precision learning.
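Since no code was ever released, the following is only a minimal sketch of the idea the abstract describes: a known operator is embedded as a fixed, untrainable layer between learned layers, and gradients flow through it via the chain rule. The `exp(-z)` operator, the layer sizes, and the toy data are all illustrative assumptions, not the authors' setup.

```python
import numpy as np

# Hedged sketch (not the authors' code): "precision learning" keeps a known
# operator fixed inside the network and learns only the surrounding layers.
# Here the known operator is exp(-z), loosely evoking X-ray attenuation;
# everything below is an illustrative assumption.

rng = np.random.default_rng(0)

def known_op(z):
    """Known physical transform: kept fixed, carries no trainable weights."""
    return np.exp(-z)

def known_op_grad(z):
    """Analytic derivative of the fixed transform, used during backprop."""
    return -np.exp(-z)

# Toy data: 4 input features, 64 samples, 2 regression outputs.
X = rng.uniform(0.0, 1.0, size=(4, 64))
T = rng.normal(size=(2, 64))

# Only W1 and W2 are trainable; known_op sits between them.
W1 = rng.normal(scale=0.1, size=(3, 4))
W2 = rng.normal(scale=0.1, size=(2, 3))

def forward(W1, W2, X):
    Z = W1 @ X
    A = known_op(Z)                     # fixed, known layer
    Y = W2 @ A
    return Z, A, Y

def loss(Y, T):
    return 0.5 * np.mean(np.sum((Y - T) ** 2, axis=0))

# Backpropagation through the fixed operator via the chain rule.
n = X.shape[1]
Z, A, Y = forward(W1, W2, X)
dY = (Y - T) / n
dW2 = dY @ A.T                          # gradient of the learned output layer
dA = W2.T @ dY
dZ = dA * known_op_grad(Z)              # chain rule through known_op
dW1 = dZ @ X.T                          # gradient of the learned input layer

# Finite-difference check that the analytic W1 gradient is correct.
eps = 1e-5
max_diff = 0.0
for i in range(W1.shape[0]):
    for j in range(W1.shape[1]):
        Wp, Wm = W1.copy(), W1.copy()
        Wp[i, j] += eps
        Wm[i, j] -= eps
        num = (loss(forward(Wp, W2, X)[2], T)
               - loss(forward(Wm, W2, X)[2], T)) / (2 * eps)
        max_diff = max(max_diff, abs(num - dW1[i, j]) / (1.0 + abs(num)))

print(f"max normalized gradient error: {max_diff:.2e}")
```

A training loop would then update only `W1` and `W2` with these gradients; `known_op` itself never changes, which is the point of fixing a known transform inside the network.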
Community shame: Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!
