Deep unfolding of a proximal interior point method for image restoration
From MaRDI portal
Publication:5220306
Abstract: Variational methods are widely applied to ill-posed inverse problems because they can embed prior knowledge about the solution. However, the performance of these methods depends strongly on a set of parameters, whose estimation can be computationally expensive and time-consuming. In contrast, deep learning offers very generic and efficient architectures, at the expense of explainability, since it is often used as a black box, with little fine control over its output. Deep unfolding provides a convenient way to combine variational and deep learning approaches. Starting from a variational formulation for image restoration, we develop iRestNet, a neural network architecture obtained by unfolding a proximal interior point algorithm. Hard constraints, encoding desirable properties of the restored image, are incorporated into the network through a logarithmic barrier, while the barrier parameter, the stepsize, and the penalization weight are learned by the network. We derive explicit expressions for the gradient of the proximity operator for various choices of constraints, which allows iRestNet to be trained with gradient descent and backpropagation. In addition, we provide theoretical results on the stability of the network for a common inverse problem example. Numerical experiments on image deblurring problems show that the proposed approach compares favorably with both state-of-the-art variational and machine learning methods in terms of image quality.
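The abstract describes unfolding a proximal interior point iteration in which a logarithmic barrier enforces hard constraints and the barrier parameter, stepsize, and penalization weight are learned. As a minimal sketch (not the paper's actual architecture), the following illustrates one unfolded layer for a positivity constraint, where the proximity operator of the barrier -mu * sum(log(x_i)) has the well-known componentwise closed form used below; `H`, `y`, and the smooth-penalty gradient are placeholder assumptions:

```python
import numpy as np

def prox_log_barrier(x, gamma, mu):
    # Proximity operator of the barrier -mu * sum(log(p_i)) with step gamma:
    # p minimizes 0.5*||p - x||^2 - gamma*mu*sum(log(p_i)), which yields the
    # componentwise closed form p = (x + sqrt(x^2 + 4*gamma*mu)) / 2.
    # The output is strictly positive, so the constraint x > 0 holds exactly.
    return 0.5 * (x + np.sqrt(x**2 + 4.0 * gamma * mu))

def unfolded_layer(x, H, y, gamma, mu, lam):
    # One "layer" = gradient step on a smooth data-fidelity + penalty surrogate
    # 0.5*||Hx - y||^2 + 0.5*lam*||x||^2 (placeholder penalty), followed by the
    # barrier prox. In a learned unfolded network, gamma, mu, and lam would be
    # trainable per-layer parameters.
    grad = H.T @ (H @ x - y) + lam * x
    return prox_log_barrier(x - gamma * grad, gamma, mu)
```

Because the prox has an explicit formula, its gradient with respect to `x`, `gamma`, and `mu` is also explicit, which is what makes end-to-end training by backpropagation tractable, as the abstract notes.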
Recommendations
- Learning maximally monotone operators for image recovery
- Variational networks: an optimal control approach to early stopping variational methods for image restoration
- A Generative Variational Model for Inverse Problems in Imaging
- Constrained and unconstrained deep image prior optimization models with automatic regularization
- Solving ill-posed inverse problems using iterative deep neural networks
Cites work
- untitled scientific article (zbMATH DE number 88933)
- untitled scientific article (zbMATH DE number 1086923)
- untitled scientific article (zbMATH DE number 1465030)
- A variational formulation for frame-based inverse problems
- An Interior Point Recurrent Neural Network for Convex Optimization Problems
- An Inverse Matrix Adjustment Arising in Discriminant Analysis
- Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising
- Color Image Denoising via Discriminatively Learned Iterative Shrinkage
- Computational Methods for Inverse Problems
- Convex analysis and monotone operator theory in Hilbert spaces
- Deep Convolutional Neural Network for Inverse Problems in Imaging
- Deep learning
- Distributed optimization and statistical learning via the alternating direction method of multipliers
- Epigraphical projection for solving least squares Anscombe transformed constrained optimization problems
- Implicit Functions and Solution Mappings
- New convergence results for the scaled gradient projection method
- Non-negatively constrained image deblurring with an inexact interior point method
- On early stopping in gradient descent learning
- On the convergence properties of the projected gradient method for convex optimization
- Proximal methods in view of interior-point strategies
- Proximal splitting methods in signal processing
- Signal Recovery by Proximal Forward-Backward Splitting
- Some first-order algorithms for total variation based image restoration
- Splitting forward-backward penalty scheme for constrained variational problems
- Stability of the minimizers of least squares with a non-convex regularization. I: Local behavior
- Stein Unbiased GrAdient estimator of the Risk (SUGAR) for Multiple Parameter Selection
- The use of Morozov's discrepancy principle for Tikhonov regularization for solving nonlinear ill-posed problems
- Topology preserving deformable image matching using constrained hierarchical parametric models
Cited in (18)
- Variational models for signal processing with graph neural networks
- Variational networks: an optimal control approach to early stopping variational methods for image restoration
- Constrained and unconstrained deep image prior optimization models with automatic regularization
- Learning Regularization Parameter-Maps for Variational Image Reconstruction Using Deep Neural Networks and Algorithm Unrolling
- Temporal deep unfolding for constrained nonlinear stochastic optimal control
- Explainable bilevel optimization: an application to the Helsinki Deblur Challenge
- Data-driven nonsmooth optimization
- Learning maximally monotone operators for image recovery
- Marginal likelihood estimation in semiblind image deconvolution: a stochastic approximation approach
- Bregman methods for large-scale optimization with applications in imaging
- Strengthened splitting methods for computing resolvents
- Uniformly convex neural networks and non-stationary iterated network Tikhonov (iNETT) method
- Deep unfolding as iterative regularization for imaging inverse problems
- Solution of mismatched monotone+Lipschitz inclusion problems
- Edge adaptive hybrid regularization model for image deblurring
- Heuristic computing with sequential quadratic programming for solving a nonlinear hepatitis B virus model
- Lipschitz Certificates for Layered Network Structures Driven by Averaged Activation Operators
- Applying smoothing technique and semi-proximal ADMM for image deblurring