Elastic-net regularization in learning theory
From MaRDI portal
Publication:1023403
DOI: 10.1016/j.jco.2009.01.002 · zbMath: 1319.62087 · arXiv: 0807.3423 · OpenAlex: W1997453445 · MaRDI QID: Q1023403
Lorenzo Rosasco, Christine De Mol, Ernesto De Vito
Publication date: 11 June 2009
Published in: Journal of Complexity
Full work available at URL: https://arxiv.org/abs/0807.3423
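For quick orientation, the elastic-net estimator studied in this paper combines a least-squares data fit with a mixed \(\ell^1\)/\(\ell^2\) penalty; a standard formulation (notation chosen here for illustration, not copied from this page) is

\[
\hat{\beta}^{\lambda} \;=\; \operatorname*{arg\,min}_{\beta}\; \|\Phi\beta - y\|_2^2 \;+\; \lambda\left(\|\beta\|_1 + \varepsilon\,\|\beta\|_2^2\right), \qquad \lambda > 0,\; \varepsilon \ge 0,
\]

where the \(\ell^1\) term promotes sparsity of the coefficients and the \(\ell^2\) term makes the penalty strictly convex, yielding uniqueness and stability of the minimizer.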
Related Items (48)
- Parallel block coordinate minimization with application to group regularized regression
- Features Selection as a Nash-Bargaining Solution: Applications in Online Advertising and Information Systems
- On grouping effect of elastic net
- Generalized System Identification with Stable Spline Kernels
- On an unsupervised method for parameter selection for the elastic net
- Accelerated Bregman method for linearly constrained \(\ell _1-\ell _2\) minimization
- Leading impulse response identification via the elastic net criterion
- Generalized conditional gradient method for elastic-net regularization
- The learning rate of \(l_2\)-coefficient regularized classification with strong loss
- Generalized Kalman smoothing: modeling and algorithms
- Learning rates for least square regressions with coefficient regularization
- Sparse identification of posynomial models
- Regularized learning schemes in feature Banach spaces
- Stability of the elastic net estimator
- Thresholding gradient methods in Hilbert spaces: support identification and linear convergence
- Sparse learning of the disease severity score for high-dimensional data
- On hybrid tree-based methods for short-term insurance claims
- Concentration estimates for learning with unbounded sampling
- A consistent algorithm to solve Lasso, elastic-net and Tikhonov regularization
- Elastic-net regularization versus \(\ell^1\)-regularization for linear inverse problems with quasi-sparse solutions
- Majorization-minimization algorithms for nonsmoothly penalized objective functions
- Adaptive kernel methods using the balancing principle
- Support vector machines regression with unbounded sampling
- Communication-efficient estimation of high-dimensional quantile regression
- Regression-based sparse polynomial chaos for uncertainty quantification of subsurface flow models
- Moving horizon estimation for ARMAX processes with additive output noise
- Scalable Algorithms for the Sparse Ridge Regression
- Regularization Techniques and Suboptimal Solutions to Optimization Problems in Learning from Data
- Sparsity-promoting elastic net method with rotations for high-dimensional nonlinear inverse problem
- Consistent learning by composite proximal thresholding
- Elastic-Net Regularization for Low-Rank Matrix Recovery
- Learning sets with separating kernels
- Proximity for sums of composite functions
- Convergence of stochastic proximal gradient algorithm
- Characterization of the equivalence of robustification and regularization in linear and matrix regression
- Statistical analysis of the moving least-squares method with unbounded sampling
- New regularization method and iteratively reweighted algorithm for sparse vector recovery
- Reconstruction of functions from prescribed proximal points
- Elastic-Net Regularization: Iterative Algorithms and Asymptotic Behavior of Solutions
- Solving composite fixed point problems with block updates
- Relaxing support vectors for classification
- Generalized support vector regression: Duality and tensor-kernel representation
- Consistency of the elastic net under a finite second moment assumption on the noise
- On extension theorems and their connection to universal consistency in machine learning
- A telescopic Bregmanian proximal gradient method without the global Lipschitz continuity assumption
- Optimal rates for coefficient-based regularized regression
- Boosting as a kernel-based method
- Half supervised coefficient regularization for regression learning with unbounded sampling
Uses Software
Cites Work
- Sparsity in penalized empirical risk minimization
- Best subset selection, persistence in high-dimensional statistical learning and optimization under \(l_1\) constraint
- On regularization algorithms in learning theory
- Asymptotics for Lasso-type estimators
- Least angle regression. (With discussion)
- Sums and Gaussian vectors
- Optimum bounds for the distributions of martingales in Banach spaces
- Weak convergence and empirical processes. With applications to statistics
- Feature selection for high-dimensional data
- Optimal rates for the regularized least-squares algorithm
- High-dimensional generalized linear models and the lasso
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- Approximation and learning by greedy algorithms
- Classifiers of support vector machine type with \(\ell_1\) complexity regularization
- Learning theory estimates via integral operators and their approximations
- Regularization without preliminary knowledge of smoothness and error behaviour
- Vector valued reproducing kernel Hilbert spaces of integrable functions and Mercer theorem
- Recovery Algorithms for Vector-Valued Data with Joint Sparsity Constraints
- Remarks on Inequalities for Large Deviation Probabilities
- Atomic Decomposition by Basis Pursuit
- Adaptive estimation with soft thresholding penalties
- An iterative thresholding algorithm for linear inverse problems with a sparsity constraint
- On a Problem of Adaptive Estimation in Gaussian White Noise
- Aggregation and Sparsity Via ℓ1 Penalized Least Squares
- Regularization and Variable Selection Via the Elastic Net
- A Sparsity-Enforcing Method for Learning Face Features
- Model Selection and Estimation in Regression with Grouped Variables
- On the Adaptive Selection of the Parameter in Regularization of Ill-Posed Problems
- On Learning Vector-Valued Functions
This page was built for publication: Elastic-net regularization in learning theory