Minimizing L₁ over L₂ norms on the gradient


DOI: 10.1088/1361-6420/ac64fb
zbMATH Open: 1487.94031
arXiv: 2101.00809
OpenAlex: W3120838973
MaRDI QID: Q5076010

Yifei Lou, James G. Nagy, Chao Wang, Chen-Nee Chuah, Min Tao

Publication date: 12 May 2022

Published in: Inverse Problems

Abstract: In this paper, we study L1/L2 minimization on the gradient for imaging applications. Several recent works have demonstrated that L1/L2 is better than the L1 norm at approximating the L0 norm to promote sparsity. Consequently, we postulate that applying L1/L2 to the gradient is better than the classic total variation (the L1 norm on the gradient) for enforcing sparsity of the image gradient. To verify this hypothesis, we consider a constrained formulation and present empirical evidence of the superiority of L1/L2 over L1 when recovering piecewise constant signals from low-frequency measurements. Numerically, we design a specific splitting scheme under which we can prove subsequential and global convergence of the alternating direction method of multipliers (ADMM) under certain conditions. Experimentally, we demonstrate visible improvements of L1/L2 over L1 and other nonconvex regularizations for image recovery from low-frequency measurements and in two medical applications, MRI and CT reconstruction. All numerical results show the efficiency of the proposed approach.
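
Concretely, the constrained model described in the abstract can be written as follows. This is a rendering based on the abstract's wording, not an equation quoted from the paper; the symbols \(\nabla\), \(A\), and \(b\) are assumed notation for the discrete gradient, the measurement operator, and the data, respectively:

\[
\min_{x}\ \frac{\|\nabla x\|_{1}}{\|\nabla x\|_{2}}
\quad\text{subject to}\quad Ax = b.
\]

The ratio \(\|g\|_1/\|g\|_2\) is a scale-invariant surrogate for \(\|g\|_0\): it equals 1 when \(g\) is 1-sparse and reaches its maximum \(\sqrt{n}\) when all \(n\) entries of \(g\) have equal magnitude, which is why minimizing it over \(g = \nabla x\) favors piecewise constant reconstructions.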


Full work available at URL: https://arxiv.org/abs/2101.00809
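
The following toy sketch in Python (not code from the paper; the signals and the helper name l1_over_l2_of_gradient are made up for illustration) shows why the ratio prefers sparse gradients even when the classic total variation cannot tell two signals apart:

    import numpy as np

    def l1_over_l2_of_gradient(x, eps=1e-12):
        # Ratio ||Dx||_1 / ||Dx||_2, with D the 1D forward-difference operator.
        g = np.diff(x)
        return np.abs(g).sum() / (np.linalg.norm(g) + eps)

    # Piecewise-constant step: its gradient is 1-sparse.
    step = np.concatenate([np.zeros(50), np.ones(50)])

    # Smooth ramp with the same endpoints: its gradient is fully dense.
    ramp = np.linspace(0.0, 1.0, 100)

    # Total variation (||Dx||_1) equals exactly 1 for BOTH signals, so TV
    # alone cannot distinguish them; the L1/L2 ratio strongly prefers the step.
    print(l1_over_l2_of_gradient(step))  # 1.0, the minimum possible value
    print(l1_over_l2_of_gradient(ramp))  # ~9.95 = sqrt(99), the dense extreme

This mirrors the paper's hypothesis: on signals with identical total variation, the L1/L2 penalty on the gradient still singles out the piecewise-constant one.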








