Solving Orthogonal Group Synchronization via Convex and Low-Rank Optimization: Tightness and Landscape Analysis
June 01, 2020 · Declared Dead · Mathematical Programming
"No code URL or promise found in abstract"
Evidence collected by the PWNC Scanner
Authors
Shuyang Ling
arXiv ID
2006.00902
Category
math.OC (Optimization and Control)
Cross-listed
cs.IT
Citations
30
Venue
Mathematical Programming
Last Checked
2 months ago
Abstract
Group synchronization aims to recover group elements from their noisy pairwise measurements. It has found many applications in community detection, clock synchronization, and joint alignment problems. This paper focuses on orthogonal group synchronization, which often arises in cryo-EM and computer vision. However, retrieving the group elements by finding the least squares estimator is generally NP-hard. In this work, we first study the semidefinite programming (SDP) relaxation of orthogonal group synchronization and its tightness, i.e., when the SDP estimator is exactly equal to the least squares estimator. Moreover, we investigate the performance of the Burer-Monteiro factorization in solving the SDP relaxation by analyzing its corresponding optimization landscape. We provide deterministic sufficient conditions which guarantee: (i) the tightness of the SDP relaxation; (ii) that the optimization landscape arising from the Burer-Monteiro approach is benign, i.e., the global optimum is exactly the least squares estimator and no other spurious local optima exist. Our results provide a solid theoretical justification for why the Burer-Monteiro approach is remarkably efficient and effective in solving the large-scale SDPs arising from orthogonal group synchronization. We perform numerical experiments to complement our theoretical analysis, which gives insights into future research directions.
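As an illustrative sketch only (this is not the paper's algorithm or notation): in orthogonal group synchronization one observes an nd × nd measurement matrix C whose d × d blocks C_ij ≈ O_i O_jᵀ, and the nonconvex problem that the SDP relaxes keeps every block variable orthogonal, which corresponds to a rank-d Burer-Monteiro factorization of the SDP variable. A simple way to search that feasible set is a block power iteration O_i ← polar(Σ_j C_ij O_j); all names and parameters below are mine.

```python
import numpy as np

def polar(M):
    # Orthogonal factor of the polar decomposition M = UP, via SVD.
    U, _, Vt = np.linalg.svd(M)
    return U @ Vt

def random_orthogonal(d, rng):
    # Orthogonal matrix from the QR factorization of a Gaussian matrix.
    Q, R = np.linalg.qr(rng.standard_normal((d, d)))
    return Q * np.sign(np.diag(R))  # fix column signs

def synchronize(C, n, d, iters=50, seed=0):
    # Block power iteration: O_i <- polar(sum_j C_ij O_j).
    # Every iterate keeps each d x d block orthogonal, i.e. it stays
    # feasible for the nonconvex (rank-d Burer-Monteiro-style) problem.
    rng = np.random.default_rng(seed)
    O = [random_orthogonal(d, rng) for _ in range(n)]
    for _ in range(iters):
        O = [polar(sum(C[i*d:(i+1)*d, j*d:(j+1)*d] @ O[j]
                       for j in range(n)))
             for i in range(n)]
    return O

# Demo on noiseless measurements C_ij = O_i O_j^T.
rng = np.random.default_rng(1)
n, d = 10, 3
truth = [random_orthogonal(d, rng) for _ in range(n)]
Z = np.vstack(truth)   # (n*d) x d stacked ground truth
C = Z @ Z.T            # block (i, j) equals O_i O_j^T
est = synchronize(C, n, d)

# Solutions are identified only up to one global orthogonal factor,
# so align the estimate to the truth before measuring the error.
Galign = polar(sum(E.T @ T for E, T in zip(est, truth)))
err = max(np.linalg.norm(E @ Galign - T) for E, T in zip(est, truth))
print(f"max block error after alignment: {err:.2e}")
```

In the noiseless demo the iteration recovers the ground truth (up to the unavoidable global orthogonal transform) essentially to machine precision; with noise added to C, the same iteration illustrates the regime the paper's deterministic conditions are about.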
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
👻 Similar Papers

In the same crypt · Optimization & Control
- Local SGD Converges Fast and Communicates Little (👻 Ghosted)
- On Lazy Training in Differentiable Programming (👻 Ghosted)
- A Review on Bilevel Optimization: From Classical to Evolutionary Approaches and Applications (👻 Ghosted)
- Learned Primal-dual Reconstruction (👻 Ghosted)
- On the Global Convergence of Gradient Descent for Over-parameterized Models using Optimal Transport (👻 Ghosted)
Died the same way · 👻 Ghosted
- Language Models are Few-Shot Learners
- PyTorch: An Imperative Style, High-Performance Deep Learning Library
- XGBoost: A Scalable Tree Boosting System