Divide-and-conquer for debiased l₁-norm support vector machine in ultra-high dimensions
From MaRDI portal
Publication:4558508
Recommendations
- An error bound for \(L_1\)-norm support vector machine coefficients in ultra-high dimension
- Variable selection for support vector machines in moderately high dimensions
- On \(L_1\)-Norm Multiclass Support Vector Machines
- The doubly regularized support vector machine
- Support vector machine and its bias correction in high-dimension, low-sample-size settings
Cites work
- scientific article; zbMATH DE number 5957245 (no title available)
- scientific article; zbMATH DE number 1950576 (no title available)
- scientific article; zbMATH DE number 823069 (no title available)
- scientific article; zbMATH DE number 845714 (no title available)
- A Bahadur representation of the linear support vector machine
- A constrained \(\ell _{1}\) minimization approach to sparse precision matrix estimation
- A note on margin-based loss functions in classification
- A partially linear framework for massive heterogeneous data
- An error bound for \(L_1\)-norm support vector machine coefficients in ultra-high dimension
- An introduction to support vector machines and other kernel-based learning methods
- Asymptotic Equivalence of Regularization Methods in Thresholded Parameter Space
- Asymptotic normality of Powell's kernel estimator
- Communication-efficient algorithms for statistical optimization
- Communication-efficient sparse regression
- Consistency of Support Vector Machines and Other Regularized Kernel Classifiers
- Convexity, Classification, and Risk Bounds
- Divide and conquer kernel ridge regression: a distributed algorithm with minimax optimal rates
- Fast rates for support vector machines using Gaussian kernels
- High dimensional thresholded regression and shrinkage effect
- High-dimensional generalized linear models and the lasso
- Inverses of Band Matrices and Local Convergence of Spline Projections
- Nearly unbiased variable selection under minimax concave penalty
- On asymptotically optimal confidence regions and tests for high-dimensional models
- Oracle inequalities in empirical risk minimization and sparse recovery problems. École d'Été de Probabilités de Saint-Flour XXXVIII-2008.
- Oracle properties of SCAD-penalized support vector machine
- Statistical behavior and consistency of classification methods based on convex risk minimization
- Statistical performance of support vector machines
- Support-vector networks
- The Adaptive Lasso and Its Oracle Properties
- Variable Selection for Support Vector Machines in Moderately High Dimensions
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Weak convergence and empirical processes. With applications to statistics
- \(\ell_1\)-penalized quantile regression in high-dimensional sparse models
Cited in (22)
- A selective review on statistical methods for massive data computation: distributed computing, subsampling, and minibatch techniques
- Sparse high-dimensional fractional-norm support vector machine via DC programming
- A review of distributed statistical inference
- Distributed inference for linear support vector machine
- The backbone method for ultra-high dimensional sparse machine learning
- Debiased magnitude-preserving ranking: learning rate and bias characterization
- Distributed learning for sketched kernel regression
- Partitioned Approach for High-dimensional Confidence Intervals with Large Split Sizes
- Adaptive distributed support vector regression of massive data
- Statistical inference and distributed implementation for linear multicategory SVM
- Distributed estimation with empirical likelihood
- scientific article; zbMATH DE number 7306897 (no title available)
- Two-stage online debiased Lasso estimation and inference for high-dimensional quantile regression with streaming data
- The statistical rate for support matrix machines under low rankness and row (column) sparsity
- Support vector machine in big data: smoothing strategy and adaptive distributed inference
- A communication efficient distributed one-step estimation
- Sparse additive support vector machines in bounded variation space
- An error bound for \(L_1\)-norm support vector machine coefficients in ultra-high dimension
- A divide-and-combine method for large scale nonparallel support vector machines
- Online two-way estimation and inference via linear mixed-effects models
- Online updating Huber robust regression for big data streams
- Robust distributed multicategory angle-based classification for massive data