Classifiers of support vector machine type with \(\ell_1\) complexity regularization
From MaRDI portal
Publication: 2642804
DOI: 10.3150/bj/1165269150
zbMath: 1118.62067
OpenAlex: W2049883458
MaRDI QID: Q2642804
Sara van de Geer, Bernadetta Tarigan
Publication date: 5 September 2007
Published in: Bernoulli
Full work available at URL: https://projecteuclid.org/euclid.bj/1165269150
Classification codes (MSC):
- Classification and discrimination; cluster analysis (statistical aspects) (62H30)
- Inequalities; stochastic orderings (60E15)
- Bayesian problems; characterization of Bayes procedures (62C10)
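The title refers to SVM-type classifiers with an \(\ell_1\) complexity penalty. For orientation only — this is not the paper's estimator or its oracle-inequality analysis — here is a minimal sketch of an \(\ell_1\)-penalized hinge-loss linear classifier fit by plain subgradient descent; the toy data, step size, and penalty level are all illustrative choices:

```python
# Illustrative sketch (not the paper's method): subgradient descent for
#   min_w  (1/n) * sum_i max(0, 1 - y_i * <w, x_i>)  +  lam * ||w||_1

def fit_l1_svm(X, y, lam=0.1, lr=0.05, epochs=500):
    """X: list of feature lists, y: labels in {-1, +1}. Returns weights w."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(epochs):
        grad = [0.0] * d
        for xi, yi in zip(X, y):
            margin = yi * sum(wj * xj for wj, xj in zip(w, xi))
            if margin < 1:  # hinge loss is active for this example
                for j in range(d):
                    grad[j] -= yi * xi[j] / n
        for j in range(d):
            # subgradient of lam * |w_j|, taking sign(0) = 0
            grad[j] += lam * ((w[j] > 0) - (w[j] < 0))
            w[j] -= lr * grad[j]
    return w

# Toy data: the label depends only on the first coordinate, so the
# l1 penalty should keep the irrelevant second weight near zero.
X = [[2.0, 0.3], [1.5, -0.2], [-2.0, 0.1], [-1.7, -0.4]]
y = [1, 1, -1, -1]
w = fit_l1_svm(X, y)
```

The \(\ell_1\) term is what distinguishes this from a standard (ridge-penalized) SVM: it drives coefficients of uninformative features toward exactly zero, which is the sparsity mechanism the paper's complexity-regularization results concern.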
Related Items
- Overlapping group lasso for high-dimensional generalized linear models
- ℓ1-Norm support vector machine for ranking with exponentially strongly mixing sequence
- Optimal rates for plug-in estimators of density level sets
- The learning rate of \(l_2\)-coefficient regularized classification with strong loss
- Fast convergence rates of deep neural networks for classification
- Non-asymptotic oracle inequalities for the Lasso and Group Lasso in high dimensional logistic model
- Optimal discriminant analysis in high-dimensional latent factor models
- Statistical performance of support vector machines
- High-dimensional generalized linear models and the lasso
- Optimal convergence rates of deep neural networks in a classification setting
- Support vector machines with a reject option
- Simultaneous adaptation to the margin and to complexity in classification
- Oracle inequalities for sparse additive quantile regression in reproducing kernel Hilbert space
- Support vector machines regression with \(l^1\)-regularizer
- Fast learning rates for plug-in classifiers
- Least Square Regression with lp-Coefficient Regularization
- Quasi-likelihood and/or robust estimation in high dimensions
- The Group Lasso for Logistic Regression
- Elastic-net regularization in learning theory
- Regularized ranking with convex losses and \(\ell^1\)-penalty
- Spatially adaptive binary classifier using B-splines and total variation penalty
- High-dimensional generalized linear models incorporating graphical structure among predictors
- Learning rates for partially linear support vector machine in high dimensions
- KERNEL METHODS FOR INDEPENDENCE MEASUREMENT WITH COEFFICIENT CONSTRAINTS
Uses Software
Cites Work
- Wedgelets: Nearly minimax estimation of edges
- Smooth discrimination analysis
- Empirical margin distributions and bounding the generalization error of combined classifiers
- About the constants in Talagrand's concentration inequalities for empirical processes.
- Complexity regularization via localized random penalties
- Statistical behavior and consistency of classification methods based on convex risk minimization.
- Optimal aggregation of classifiers in statistical learning.
- Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
- Statistical performance of support vector machines
- Square root penalty: Adaption to the margin in classification and in edge estimation
- On Talagrand's deviation inequalities for product measures
- Minimax-optimal classification with dyadic decision trees
- New tight frames of curvelets and optimal representations of objects with piecewise C2 singularities
- Rademacher penalties and structural risk minimization
- DOI: 10.1162/1532443041424319
- De-noising by soft-thresholding
- For most large underdetermined systems of linear equations the minimal ℓ1-norm solution is also the sparsest solution
- Learning Theory
- Convexity, Classification, and Risk Bounds
- The elements of statistical learning. Data mining, inference, and prediction