
From sparse recovery to plug-and-play priors, understanding trade-offs for stable recovery with generalized projected gradient descent

Dec 8, 2025 · 8:08
eess.IV · Neural and Evolutionary Computing · Optimization and Control

Abstract

We consider the problem of recovering an unknown low-dimensional vector from noisy, underdetermined observations. We focus on the Generalized Projected Gradient Descent (GPGD) framework, which unifies traditional sparse recovery methods and modern approaches based on learned deep projective priors. We extend previous convergence results to account for robustness to model and projection errors, and use these theoretical results to explore ways of better controlling the stability and robustness constants. To reduce recovery errors due to measurement noise, we consider generalized back-projection strategies that adapt GPGD to structured noise, such as sparse outliers. To improve the stability of GPGD, we propose a normalized idempotent regularization for the learning of deep projective priors. We provide numerical experiments in the context of sparse recovery and image inverse problems, highlighting the trade-offs between identifiability and stability that such methods can achieve.
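For intuition, here is a minimal sketch of the GPGD iteration in the classical sparse-recovery setting, where the generalized projection reduces to hard thresholding (i.e., iterative hard thresholding). The function names, step-size choice, and problem sizes are illustrative assumptions, not the paper's exact algorithm; in the settings the episode discusses, a learned deep projective prior would replace the `project` operator, and a generalized back-projection would replace the plain transpose in the gradient step.

```python
import numpy as np

def gpgd(y, A, project, step=None, iters=200):
    """Minimal sketch of Generalized Projected Gradient Descent (GPGD).

    Recovers x from noisy underdetermined observations y = A x + e by
    alternating a gradient step on the data-fidelity term 0.5*||A x - y||^2
    with a projection onto the (possibly learned) low-dimensional model.
    Illustrative only; not the paper's exact method.
    """
    m, n = A.shape
    if step is None:
        # Conservative step size from the spectral norm of A
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(n)
    for _ in range(iters):
        grad = A.T @ (A @ x - y)        # gradient of the data-fidelity term
        x = project(x - step * grad)    # generalized projection step
    return x

def hard_threshold(s):
    """Projection onto the set of s-sparse vectors (keep s largest entries)."""
    def project(x):
        out = np.zeros_like(x)
        idx = np.argsort(np.abs(x))[-s:]
        out[idx] = x[idx]
        return out
    return project

# Toy sparse-recovery example with a random Gaussian measurement matrix.
rng = np.random.default_rng(0)
n, m, s = 100, 40, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = A @ x_true + 0.01 * rng.standard_normal(m)  # noisy observations

x_hat = gpgd(y, A, hard_threshold(s))
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```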

Links & Resources

Authors

Ali Joundi, Yann Traonmilin, Jean-François Aujol

Cite This Paper

Year: 2025
Category: eess.IV
APA

Joundi, A., Traonmilin, Y., & Aujol, J.-F. (2025). From sparse recovery to plug-and-play priors, understanding trade-offs for stable recovery with generalized projected gradient descent. arXiv preprint arXiv:2512.07397.

MLA

Joundi, Ali, Yann Traonmilin, and Jean-François Aujol. "From Sparse Recovery to Plug-and-Play Priors, Understanding Trade-Offs for Stable Recovery with Generalized Projected Gradient Descent." arXiv preprint arXiv:2512.07397 (2025).