Fast iterative regularization by reusing data
From MaRDI portal
Publication: Q6583087
Cites work
- Scientific article, zbMATH DE number 3790208 (no title available)
- Scientific article, zbMATH DE number 936298 (no title available)
- Scientific article, zbMATH DE number 3227378 (no title available)
- Scientific article, zbMATH DE number 3027894 (no title available)
- A Douglas-Rachford splitting method for solving equilibrium problems
- A Singular Value Thresholding Algorithm for Matrix Completion
- A first-order primal-dual algorithm for convex problems with applications to imaging
- A generalized forward-backward splitting
- A mathematical introduction to compressive sensing
- A modified Landweber iteration for solving parameter estimation problems
- A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms
- A projected primal-dual method for solving constrained monotone inclusions
- A randomized Kaczmarz algorithm with exponential convergence
- A splitting algorithm for dual monotone inclusions involving cocoercive operators
- A stochastic Bregman primal-dual splitting algorithm for composite optimization
- Accelerated iterative regularization via dual diagonal descent
- AdaBoost is consistent
- Alternating forward-backward splitting for linearly constrained optimization problems
- An Iteration Formula for Fredholm Integral Equations of the First Kind
- An Iterative Regularization Method for Total Variation-Based Image Restoration
- An algorithm for total variation minimization and applications
- Analysis and generalizations of the linearized Bregman method
- Boosting with early stopping: convergence and consistency
- Bregman Iterative Algorithms for $\ell_1$-Minimization with Applications to Compressed Sensing
- Compressed sensing
- Convergence of diagonally stationary sequences in convex optimization
- Convergence properties of a randomized primal-dual algorithm with applications to parallel MRI
- Convergence rates and source conditions for Tikhonov regularization with sparsity constraints
- Convergence rates of forward-Douglas-Rachford splitting method
- Convex analysis and monotone operator theory in Hilbert spaces
- Dual averaging methods for regularized stochastic learning and online optimization
- Early stopping and non-parametric regression: an optimal data-dependent stopping rule
- Efficient online and batch learning using forward backward splitting
- Elastic-net regularization in learning theory
- Elastic-net regularization: error estimates and active set methods
- Error estimates for general fidelities
- Error estimation for Bregman iterations and inverse scale space methods in image restoration
- Exact matrix completion via convex optimization
- Extensions of compressed sensing
- Fast linearized Bregman iteration for compressive sensing and sparse denoising
- Feature-Oriented Image Enhancement Using Shock Filters
- Forward-Douglas–Rachford splitting and forward-partial inverse method for solving monotone inclusions
- Generalized Cross-Validation as a Method for Choosing a Good Ridge Parameter
- Generalized conditional gradient with augmented Lagrangian for composite minimization
- Geometric approach to error-correcting codes and reconstruction of signals
- Image recovery via total variation minimization and related problems
- Inexact and stochastic generalized conditional gradient with augmented Lagrangian and proximal step
- Inexact first-order primal-dual algorithms
- Iterative regularization methods for nonlinear ill-posed problems
- Iterative regularization via dual diagonal descent
- Iterative regularization with a general penalty term: theory and application to \(L^{1}\) and \(TV\) regularization
- Iterative total variation schemes for nonlinear inverse problems
- Lagrangian penalization scheme with parallel forward-backward splitting
- Linear convergence of the randomized sparse Kaczmarz method
- Linearized Bregman Iterations for Frame-Based Image Deblurring
- Mirror descent and nonlinear projected subgradient methods for convex optimization
- Modern regularization methods for inverse problems
- NESTA: A fast and accurate first-order method for sparse recovery
- Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
- Nonlinear total variation based noise removal algorithms
- On Nesterov acceleration for Landweber iteration of linear ill-posed problems
- On early stopping in gradient descent learning
- On regularization algorithms in learning theory
- On the adaptive elastic net with a diverging number of parameters
- On the convergence of stochastic primal-dual hybrid gradient
- Random activations in primal-dual splittings for monotone inclusions with a priori information
- Regularization and Variable Selection Via the Elastic Net
- Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information
- Signal Recovery by Proximal Forward-Backward Splitting
- Stochastic Primal-Dual Hybrid Gradient Algorithm with Arbitrary Sampling and Imaging Applications
- Support Vector Machines
- Understanding machine learning. From theory to algorithms