Unified approach to coefficient-based regularized regression
From MaRDI portal
Publication: 651513
DOI: 10.1016/j.camwa.2011.05.034 · zbMath: 1228.62044 · OpenAlex: W2019022041 · MaRDI QID: Q651513
Publication date: 18 December 2011
Published in: Computers \& Mathematics with Applications
Full work available at URL: https://doi.org/10.1016/j.camwa.2011.05.034
Keywords: learning rates; data dependent hypothesis spaces; \(\ell^{2}\)-empirical covering number; \(l^{q}\)-regularizer
Related Items
- Error analysis for \(l^q\)-coefficient regularized moving least-square regression
- Statistical consistency of coefficient-based conditional quantile regression
- Error analysis of coefficient-based regularized algorithm for density-level detection
- Sharp learning rates of coefficient-based \(l^q\)-regularized regression with indefinite kernels
- On the convergence rate of kernel-based sequential greedy regression
- Error analysis for coefficient-based regularized regression in additive models
- A simpler approach to coefficient regularized support vector machines regression
- Constructive analysis for coefficient regularization regression algorithms
- Learning rates of \(l^q\) coefficient regularization learning with Gaussian kernel
- Coefficient-based \(l^q\)-regularized regression with indefinite kernels and unbounded sampling
- A pectoral muscle segmentation algorithm for digital mammograms using Otsu thresholding and multiple regression analysis
- Convergence analysis of coefficient-based regularization under moment incremental condition
Cites Work
- Learning by nonsymmetric kernels with data dependent spaces and \(\ell^1\)-regularizer
- Least square regression with indefinite kernels and coefficient regularization
- Multi-kernel regularized classifiers
- A note on different covering numbers in learning theory.
- The covering number in learning theory
- Compactly supported positive definite radial functions
- Concentration estimates for learning with unbounded sampling
- Learning with sample dependent hypothesis spaces
- Learning rates of least-square regularized regression
- Shannon sampling. II: Connections to learning theory
- Least-squares regularized regression with dependent samples and \(q\)-penalty
- Learning Theory
- Capacity of reproducing kernel spaces in learning theory
- The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network
- ESTIMATING THE APPROXIMATION ERROR IN LEARNING THEORY
- For most large underdetermined systems of linear equations the minimal \(\ell^1\)-norm solution is also the sparsest solution
- Theory of Reproducing Kernels