Fast iterative regularization by reusing data
From MaRDI portal
Publication: Q6583087
DOI: 10.1515/jiip-2023-0009
MaRDI QID: Q6583087
Cristian Vega, Cesare Molinari, Silvia Villa, Lorenzo Rosasco
Publication date: 6 August 2024
Published in: Journal of Inverse and Ill-posed Problems
Keywords: iterative regularization; Landweber method; early stopping; stability and convergence analysis; primal-dual splitting algorithms
Mathematics Subject Classification: Numerical optimization and variational techniques (65K10); Convex programming (90C25); Numerical methods involving duality (49M29)
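The keywords above refer to iterative regularization by early stopping, where the number of iterations of a method such as Landweber's plays the role of the regularization parameter. As a purely illustrative sketch (not the algorithm of this paper), the classical Landweber iteration with a discrepancy-principle stopping rule can be written as follows; the matrix `A`, noise level `delta`, and stopping constant are assumptions chosen for demonstration:

```python
import numpy as np

# Illustrative sketch of Landweber iteration with early stopping:
#   x_{k+1} = x_k - tau * A^T (A x_k - y)
# stopped once the residual reaches the (assumed known) noise level,
# instead of being run to convergence. All problem data are synthetic.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = rng.standard_normal(20)
y = A @ x_true + 0.01 * rng.standard_normal(50)

tau = 1.0 / np.linalg.norm(A, 2) ** 2   # step size <= 1/||A||^2 for stability
delta = 0.01 * np.sqrt(50)              # rough noise-level estimate (assumption)

x = np.zeros(20)
for k in range(10_000):
    residual = A @ x - y
    # Morozov-style discrepancy principle: stop when the residual norm
    # falls to a fixed multiple of the noise level.
    if np.linalg.norm(residual) <= 1.5 * delta:
        break
    x = x - tau * A.T @ residual

print(k, np.linalg.norm(x - x_true))
```

Stopping early in this way acts as regularization: running the iteration far beyond the discrepancy level would begin fitting the noise in `y`.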
Cites Work
- NESTA: A Fast and Accurate First-Order Method for Sparse Recovery
- Nonlinear total variation based noise removal algorithms
- A randomized Kaczmarz algorithm with exponential convergence
- Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
- Generalized Cross-Validation as a Method for Choosing a Good Ridge Parameter
- Regularization and Variable Selection Via the Elastic Net
- A Singular Value Thresholding Algorithm for Matrix Completion
- Support Vector Machines
- Boosting with early stopping: convergence and consistency
- On early stopping in gradient descent learning
- Exact matrix completion via convex optimization
- Mirror descent and nonlinear projected subgradient methods for convex optimization.
- Feature-Oriented Image Enhancement Using Shock Filters
- Understanding Machine Learning
- On the adaptive elastic net with a diverging number of parameters
- Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information
- Compressed sensing
- Signal Recovery by Proximal Forward-Backward Splitting
- Image recovery via total variation minimization and related problems
- An algorithm for total variation minimization and applications
- A first-order primal-dual algorithm for convex problems with applications to imaging
- A mathematical introduction to compressive sensing
- Bregman Iterative Algorithms for $\ell_1$-Minimization with Applications to Compressed Sensing
- An Iterative Regularization Method for Total Variation-Based Image Restoration
- Iterative regularization methods for nonlinear ill-posed problems
- A modified Landweber iteration for solving parameter estimation problems
- A splitting algorithm for dual monotone inclusions involving cocoercive operators
- Dual averaging methods for regularized stochastic learning and online optimization
- A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms
- An Iteration Formula for Fredholm Integral Equations of the First Kind
- Linearized Bregman Iterations for Frame-Based Image Deblurring
- Efficient online and batch learning using forward backward splitting
- Forward-Douglas–Rachford splitting and forward-partial inverse method for solving monotone inclusions
- A Douglas-Rachford splitting method for solving equilibrium problems
- Convex analysis and monotone operator theory in Hilbert spaces
- Extensions of compressed sensing
- Iterative total variation schemes for nonlinear inverse problems
- On regularization algorithms in learning theory
- A generalized forward-backward splitting
- Elastic-net regularization in learning theory
- Analysis and Generalizations of the Linearized Bregman Method
- Fast linearized Bregman iteration for compressive sensing and sparse denoising
- Inexact first-order primal-dual algorithms
- A projected primal-dual method for solving constrained monotone inclusions
- Error estimation for Bregman iterations and inverse scale space methods in image restoration
- Convergence rates and source conditions for Tikhonov regularization with sparsity constraints
- Convergence of diagonally stationary sequences in convex optimization
- Elastic-net regularization: error estimates and active set methods
- Lagrangian penalization scheme with parallel forward-backward splitting
- Stochastic Primal-Dual Hybrid Gradient Algorithm with Arbitrary Sampling and Imaging Applications
- Convergence properties of a randomized primal-dual algorithm with applications to parallel MRI
- On Nesterov acceleration for Landweber iteration of linear ill-posed problems
- Iterative regularization via dual diagonal descent
- Iterative regularization with a general penalty term: theory and application to L1 and TV regularization
- Linear convergence of the randomized sparse Kaczmarz method
- Modern regularization methods for inverse problems
- Error estimates for general fidelities
- Random activations in primal-dual splittings for monotone inclusions with a priori information
- Alternating forward-backward splitting for linearly constrained optimization problems
- Convergence rates of forward-Douglas-Rachford splitting method
- On the Convergence of Stochastic Primal-Dual Hybrid Gradient
- Accelerated Iterative Regularization via Dual Diagonal Descent
- Generalized Conditional Gradient with Augmented Lagrangian for Composite Minimization
- Inexact and stochastic generalized conditional gradient with augmented Lagrangian and proximal step