Elastic-net regularization in learning theory
From MaRDI portal
Abstract: Within the framework of statistical learning theory, we analyze in detail the elastic-net regularization scheme proposed by Zou and Hastie for the selection of groups of correlated variables. To investigate the statistical properties of this scheme, and in particular its consistency properties, we set up a suitable mathematical framework. Our setting is random-design regression, where we allow the response variable to be vector-valued and consider prediction functions which are linear combinations of elements ("features") of an infinite-dimensional dictionary. Under the assumption that the regression function admits a sparse representation on the dictionary, we prove that there exists a particular "elastic-net representation" of the regression function such that, as the number of data increases, the elastic-net estimator is consistent not only for prediction but also for variable/feature selection. Our results include finite-sample bounds and an adaptive scheme to select the regularization parameter. Moreover, using convex analysis tools, we derive an iterative thresholding algorithm for computing the elastic-net solution, which differs from the optimization procedure originally proposed by Zou and Hastie.
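The iterative thresholding algorithm mentioned in the abstract can be illustrated with a minimal NumPy sketch: a proximal-gradient loop for the standard elastic-net objective (1/2n)‖Xw − y‖² + λ₁‖w‖₁ + λ₂‖w‖², where each gradient step on the smooth part is followed by component-wise soft-thresholding. All names and parameter values here are illustrative, not taken from the paper.

```python
import numpy as np

def soft_threshold(v, t):
    # Component-wise soft-thresholding: the proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def elastic_net_ist(X, y, lam1=0.01, lam2=0.01, n_iter=2000):
    """Iterative soft-thresholding sketch for the elastic-net objective
    (1/(2n)) * ||X w - y||^2 + lam1 * ||w||_1 + lam2 * ||w||^2."""
    n, p = X.shape
    # Lipschitz constant of the gradient of the smooth part
    # (least-squares term plus ridge penalty) fixes the step size.
    L = np.linalg.norm(X, 2) ** 2 / n + 2.0 * lam2
    tau = 1.0 / L
    w = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n + 2.0 * lam2 * w
        w = soft_threshold(w - tau * grad, tau * lam1)
    return w
```

For small λ₁ and λ₂ the iterates recover a sparse coefficient vector up to a mild shrinkage, which is the regime in which the paper's consistency results for prediction and feature selection apply.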
Recommendations
- Stability of the elastic net estimator
- Regularization and Variable Selection Via the Elastic Net
- Elastic-net regularization: error estimates and active set methods
- Consistency of the elastic net under a finite second moment assumption on the noise
- A consistent algorithm to solve Lasso, elastic-net and Tikhonov regularization
Cites work
- scientific article; zbMATH DE number 5968915
- scientific article; zbMATH DE number 5957408
- scientific article; zbMATH DE number 3901506
- scientific article; zbMATH DE number 45848
- scientific article; zbMATH DE number 845714
- scientific article; zbMATH DE number 5251637
- A Sparsity-Enforcing Method for Learning Face Features
- Adaptive estimation with soft thresholding penalties
- Aggregation and Sparsity Via ℓ1 Penalized Least Squares
- An iterative thresholding algorithm for linear inverse problems with a sparsity constraint
- Approximation and learning by greedy algorithms
- Asymptotics for Lasso-type estimators.
- Atomic Decomposition by Basis Pursuit
- Best subset selection, persistence in high-dimensional statistical learning and optimization under l₁ constraint
- Classifiers of support vector machine type with \(\ell_1\) complexity regularization
- Feature selection for high-dimensional data
- High-dimensional generalized linear models and the lasso
- Learning multiple tasks with kernel methods
- Learning theory estimates via integral operators and their approximations
- Least angle regression. (With discussion)
- Model Selection and Estimation in Regression with Grouped Variables
- On Learning Vector-Valued Functions
- On a Problem of Adaptive Estimation in Gaussian White Noise
- On regularization algorithms in learning theory
- On the Adaptive Selection of the Parameter in Regularization of Ill-Posed Problems
- Optimal rates for the regularized least-squares algorithm
- Optimum bounds for the distributions of martingales in Banach spaces
- Recovery Algorithms for Vector-Valued Data with Joint Sparsity Constraints
- Regularization and Variable Selection Via the Elastic Net
- Regularization without preliminary knowledge of smoothness and error behaviour
- Remarks on Inequalities for Large Deviation Probabilities
- Sparsity in penalized empirical risk minimization
- Sums and Gaussian vectors
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- Vector valued reproducing kernel Hilbert spaces of integrable functions and Mercer theorem
- Weak convergence and empirical processes. With applications to statistics
Cited in (66)
- Variable selection and regularization via arbitrary rectangle-range generalized elastic net
- Sparsity-promoting elastic net method with rotations for high-dimensional nonlinear inverse problem
- Sparse identification of posynomial models
- A telescopic Bregmanian proximal gradient method without the global Lipschitz continuity assumption
- Elastic-Net Regularization: Iterative Algorithms and Asymptotic Behavior of Solutions
- Sparse approximation of fitting surface by elastic net
- scientific article; zbMATH DE number 7338564
- Moving horizon estimation for ARMAX processes with additive output noise
- Majorization-minimization algorithms for nonsmoothly penalized objective functions
- Equilibrium of elastic nets
- Consistency of the group Lasso and multiple kernel learning
- Fast iterative regularization by reusing data
- Learning rates for least square regressions with coefficient regularization
- Kernelized elastic net regularization: generalization bounds, and sparse recovery
- Generalized Kalman smoothing: modeling and algorithms
- Scalable algorithms for the sparse ridge regression
- Communication-efficient estimation of high-dimensional quantile regression
- Statistical analysis of the moving least-squares method with unbounded sampling
- Revisiting graph neural networks from hybrid regularized graph signal reconstruction
- On hybrid tree-based methods for short-term insurance claims
- Support vector machines regression with unbounded sampling
- Reconstruction of functions from prescribed proximal points
- A consistent algorithm to solve Lasso, elastic-net and Tikhonov regularization
- Half supervised coefficient regularization for regression learning with unbounded sampling
- Regularization techniques and suboptimal solutions to optimization problems in learning from data
- The use of grossone in elastic net regularization and sparse support vector machines
- On extension theorems and their connection to universal consistency in machine learning
- Elastic-net regularization for low-rank matrix recovery
- New regularization method and iteratively reweighted algorithm for sparse vector recovery
- Parallel block coordinate minimization with application to group regularized regression
- Solving composite fixed point problems with block updates
- Properties and splitting method for the \(p\)-elastic net
- Stability of the elastic net estimator
- Oracle-net for nonlinear compressed sensing in electrical impedance tomography reconstruction problems
- On an unsupervised method for parameter selection for the elastic net
- Post-Pareto analysis and a new algorithm for the optimal parameter tuning of the elastic net
- Generalized conditional gradient method for elastic-net regularization
- Feature selection guided by structural information
- Convergence of stochastic proximal gradient algorithm
- Concentration estimates for learning with unbounded sampling
- Regularization and Variable Selection Via the Elastic Net
- Linear regression via elastic net: non-enumerative leave-one-out verification of feature selection
- Generalized support vector regression: duality and tensor-kernel representation
- Regression-based sparse polynomial chaos for uncertainty quantification of subsurface flow models
- Characterization of the equivalence of robustification and regularization in linear and matrix regression
- Elastic-net regularization: error estimates and active set methods
- Adaptive kernel methods using the balancing principle
- Consistency of the elastic net under a finite second moment assumption on the noise
- Regularized learning schemes in feature Banach spaces
- Optimal rates for coefficient-based regularized regression
- Learning sets with separating kernels
- Features Selection as a Nash-Bargaining Solution: Applications in Online Advertising and Information Systems
- Leading impulse response identification via the elastic net criterion
- Accelerated Bregman method for linearly constrained \(\ell _1-\ell _2\) minimization
- Sparse learning of the disease severity score for high-dimensional data
- scientific article; zbMATH DE number 1928734
- Proximity for sums of composite functions
- The learning rate of \(l_2\)-coefficient regularized classification with strong loss
- Elastic-net regularization versus ℓ1-regularization for linear inverse problems with quasi-sparse solutions
- Boosting as a kernel-based method
- Thresholding gradient methods in Hilbert spaces: support identification and linear convergence
- The structured elastic net for quantile regression and support vector classification
- On grouping effect of elastic net
- Relaxing support vectors for classification
- Consistent learning by composite proximal thresholding
- Generalized system identification with stable spline kernels
This page was built for publication: Elastic-net regularization in learning theory
MaRDI item Q1023403