Regularization techniques and suboptimal solutions to optimization problems in learning from data
From MaRDI portal
Publication:3556804
Recommendations
- Optimization problems in statistical learning: duality and optimality conditions
- Learning the kernel function via regularization
- Learning with generalization capability by kernel methods of bounded complexity
- Statistical learning theory: A primer
- Iterative regularization for learning with convex loss functions
Cites work
- scientific article; zbMATH DE number 3888314
- scientific article; zbMATH DE number 3551792
- scientific article; zbMATH DE number 1332320
- scientific article; zbMATH DE number 976323
- doi:10.1162/153244302760200704
- doi:10.1162/153244303321897690
- Adaptive greedy approximations
- Approximation Bounds for Some Sparse Kernel Regression Algorithms
- Best choices for regularization parameters in learning theory: on the bias-variance problem
- Bounds on rates of variable-basis and neural-network approximation
- Comparison of worst case errors in linear and neural network approximation
- Elastic-net regularization in learning theory
- Error Estimates for Approximate Optimization by the Extended Ritz Method
- Fixed-Point Continuation for $\ell_1$-Minimization: Methodology and Convergence
- Greed is Good: Algorithmic Results for Sparse Approximation
- Kernel matching pursuit
- Learning with generalization capability by kernel methods of bounded complexity
- Least Squares Methods for Ill-Posed Problems with a Prescribed Bound
- Least angle regression. (With discussion)
- Neural Network Learning as an Inverse Problem
- On the exponential convergence of matching pursuits in quasi-incoherent dictionaries
- On the mathematical foundations of learning
- Random approximants and neural networks
- Regularization and Variable Selection Via the Elastic Net
- Regularization methods for solving inverse problems
- Regularization networks and support vector machines
- Relationship of several variational methods for the approximate solution of ill-posed problems
- Sequential greedy approximation for certain convex optimization problems
- Structural risk minimization over data-dependent hierarchies
- Well-posed optimization problems
Cited in (19)
- Some approaches to the solution of optimization problems in supervised learning
- Functional optimization by variable-basis approximation schemes
- Can dictionary-based computational models outperform the best linear ones?
- Regularizing algorithms with optimal and extra-optimal quality
- Topological Regularization via Persistence-Sensitive Optimization
- A subdivision-regularization framework for preventing overfitting of data by a model
- Optimization problems in statistical learning: duality and optimality conditions
- Laplacian twin support vector machine for semi-supervised classification
- Sign stochastic gradient descents without bounded gradient assumption for the finite sum minimization
- The weight-decay technique in learning from data: an optimization point of view
- Levenberg-Marquardt multi-classification using hinge loss function
- From inexact optimization to learning via gradient concentration
- Alternating step size method for solving ill-posed linear operator equations in energetic space
- Distributed semi-supervised support vector machines
- Flexible constraints for regularization in learning from data
- On spectral windows in supervised learning from data
- Learning with boundary conditions
- Iterative regularization for learning with convex loss functions
- An algorithm for curve identification in the presence of curve intersections