R.I.P. 👻 Ghosted
OmniLens++: Blind Lens Aberration Correction via Large LensLib Pre-Training and Latent PSF Representation
November 21, 2025 · Declared Dead · 🏛 arXiv.org
Authors: Qi Jiang, Xiaolong Qian, Yao Gao, Lei Sun, Kailun Yang, Zhonghua Yi, Wenyong Li, Ming-Hsuan Yang, Luc Van Gool, Kaiwei Wang
arXiv ID: 2511.17126
Category: eess.IV (Image & Video Processing)
Cross-listed: cs.CV, cs.LG, physics.optics
Citations: 0
Venue: arXiv.org
Repository: https://github.com/zju-jiangqi/OmniLens2 (⭐ 4)
Last Checked: 2 months ago
Abstract
The emerging deep-learning-based lens library pre-training (LensLib-PT) pipeline offers a new avenue for blind lens aberration correction: a universal neural network is trained to handle diverse unknown optical degradations. This work proposes the OmniLens++ framework, which resolves two challenges that hinder the generalization ability of existing pipelines: the difficulty of scaling data and the absence of prior guidance characterizing optical degradation. To improve data scalability, we expand the design specifications to increase the degradation diversity of the lens source, and we sample a more uniform distribution by quantifying the spatial-variation patterns and severity of optical degradation. On the model side, to leverage Point Spread Functions (PSFs), which intuitively describe optical degradation, as guidance within a blind paradigm, we propose the Latent PSF Representation (LPR). A VQVAE framework learns latent features of the LensLib's PSFs, and modeling the optical degradation process further constrains the learning of degradation priors. Experiments on diverse aberrations of real-world lenses and a synthetic LensLib show that OmniLens++ exhibits state-of-the-art generalization capacity in blind aberration correction. Beyond performance, AODLibpro is verified as a scalable foundation for more effective training across diverse aberrations, and LPR further taps the potential of the large-scale LensLib. The source code and datasets will be made publicly available at https://github.com/zju-jiangqi/OmniLens2.
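The abstract's "modeling the optical degradation process" refers to simulating how a lens PSF blurs a sharp image. A minimal illustrative sketch of that forward model, assuming a spatially invariant PSF and a synthetic Gaussian kernel as a stand-in for a real lens PSF (the paper's LensLib uses spatially varying PSFs from real lens designs; the function name `degrade_with_psf` is hypothetical):

```python
import numpy as np

def degrade_with_psf(image, psf, noise_std=0.01):
    """Simulate aberration degradation: convolve a sharp image with a
    lens PSF (spatially invariant here for simplicity) and add noise."""
    h, w = image.shape
    ph, pw = psf.shape
    # embed the normalized PSF in an image-sized kernel for FFT convolution
    kernel = np.zeros_like(image)
    kernel[:ph, :pw] = psf / psf.sum()
    # center the PSF at the origin so the blur is not spatially shifted
    kernel = np.roll(kernel, (-(ph // 2), -(pw // 2)), axis=(0, 1))
    blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kernel)))
    noisy = blurred + np.random.default_rng(0).normal(0, noise_std, image.shape)
    return np.clip(noisy, 0.0, 1.0)

# a Gaussian PSF as a toy stand-in for a real aberrated lens kernel
y, x = np.mgrid[-7:8, -7:8]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))

sharp = np.zeros((64, 64))
sharp[28:36, 28:36] = 1.0          # a bright square target
degraded = degrade_with_psf(sharp, psf)
```

In the LensLib-PT setting, many such (sharp, degraded) pairs generated from a diverse bank of lens PSFs form the training data for the universal restoration network.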
📜 Similar Papers
In the same crypt — Image & Video Processing
Kvasir-SEG: A Segmented Polyp Dataset
Deep Learning for Hyperspectral Image Classification: An Overview
U-Net and its variants for medical image segmentation: theory and applications
Algorithm Unrolling: Interpretable, Efficient Deep Learning for Signal and Image Processing
ResUNet++: An Advanced Architecture for Medical Image Segmentation
Died the same way — ⚰️ The Empty Tomb
DSFD: Dual Shot Face Detector
InstanceCut: from Edges to Instances with MultiCut
FLNet: Landmark Driven Fetching and Learning Network for Faithful Talking Facial Animation Synthesis