Sparsifying Transform Learning With Efficient Optimal Updates and Convergence Guarantees

Publication: 4580551

DOI: 10.1109/TSP.2015.2405503
zbMATH Open: 1394.94477
arXiv: 1501.02859
MaRDI QID: Q4580551
FDO: Q4580551


Authors: Saiprasad Ravishankar, Yoram Bresler


Publication date: 22 August 2018

Published in: IEEE Transactions on Signal Processing

Abstract: Many applications in signal processing benefit from the sparsity of signals in a certain transform domain or dictionary. Synthesis sparsifying dictionaries that are directly adapted to data have been popular in applications such as image denoising, inpainting, and medical image reconstruction. In this work, we focus instead on the sparsifying transform model, and study the learning of well-conditioned square sparsifying transforms. The proposed algorithms alternate between an ℓ0 "norm"-based sparse coding step and a non-convex transform update step. We derive the exact analytical solution for each of these steps. The proposed solution for the transform update step achieves the global minimum in that step, and also provides speedups over iterative solutions involving conjugate gradients. We establish that our alternating algorithms are globally convergent to the set of local minimizers of the non-convex transform learning problems. In practice, the algorithms are insensitive to initialization. We present results illustrating the promising performance and significant speedups of transform learning over synthesis K-SVD in image denoising.
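
To make the alternating scheme described in the abstract concrete, below is a minimal NumPy sketch of square sparsifying transform learning. It is an illustration only: the function and parameter names (sparse_code, transform_update, lam, xi, s, iters) and the exact regularizer weighting are assumptions, not taken from the authors' code. The sparse coding step keeps the s largest-magnitude entries of each column of W @ Y, and the transform update uses a Cholesky-plus-SVD closed form of the kind the abstract refers to as the exact analytical solution.

import numpy as np

def sparse_code(WY, s):
    # Exact l0-constrained projection: keep the s largest-magnitude
    # entries of each column of W @ Y and zero out the rest.
    X = np.zeros_like(WY)
    idx = np.argsort(-np.abs(WY), axis=0)[:s, :]   # top-s row indices per column
    cols = np.arange(WY.shape[1])
    X[idx, cols] = WY[idx, cols]
    return X

def transform_update(Y, X, lam, xi):
    # Closed-form transform update (sketch; lam and xi are assumed
    # regularizer weights): factor Y Y^T + lam*xi*I = L L^T, take the
    # full SVD of L^{-1} Y X^T, and rescale its singular values.
    n = Y.shape[0]
    L = np.linalg.cholesky(Y @ Y.T + lam * xi * np.eye(n))
    Linv = np.linalg.inv(L)
    Q, sig, Rt = np.linalg.svd(Linv @ Y @ X.T)     # square factors
    scaled = 0.5 * (sig + np.sqrt(sig**2 + 2.0 * lam))
    return Rt.T @ np.diag(scaled) @ Q.T @ Linv

def learn_transform(Y, s, lam=1e-2, xi=1.0, iters=50):
    # Alternating minimization: l0 sparse coding, then the exact
    # transform update, starting from the identity transform.
    n = Y.shape[0]
    W = np.eye(n)
    for _ in range(iters):
        X = sparse_code(W @ Y, s)
        W = transform_update(Y, X, lam, xi)
    return W, X

Because both steps are solved exactly in closed form, each outer iteration costs a Cholesky factorization and an SVD of n-by-n matrices plus matrix products with the data, which is the source of the speedups over conjugate-gradient-based transform updates mentioned in the abstract.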


Full work available at URL: https://arxiv.org/abs/1501.02859







Cited In (7)





