Divide-and-conquer for debiased l₁-norm support vector machine in ultra-high dimensions
Publication: 4558508
zbMATH Open: 1468.68158
MaRDI QID: Q4558508
Authors: Heng Lian, Zengyan Fan
Publication date: 22 November 2018
Full work available at URL: http://jmlr.csail.mit.edu/papers/v18/17-343.html
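As a rough illustration of the idea named in the title — fit an \(l_1\)-penalized SVM on each data block, apply a one-step debiasing correction, and average the corrected local estimators — the following minimal Python sketch may help. It is not the authors' algorithm: the dense pseudo-inverse and the kernel-smoothed curvature estimate below are low-dimensional stand-ins for the sparse inverse-Hessian estimators that the ultra-high-dimensional setting requires, and scikit-learn's \(l_1\)-penalized solver uses the squared hinge rather than the plain hinge of the paper. All variable names and numerical choices are hypothetical.

```python
# Toy sketch of divide-and-conquer debiasing for an l1-penalized linear SVM.
# Simplifications (NOT the paper's method): dense pinv() instead of a sparse
# precision-matrix estimate, squared-hinge local fits, Gaussian-kernel
# Hessian smoothing with an arbitrary bandwidth.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

def local_debiased_l1svm(X, y, C=1.0, h=0.5):
    """Fit an l1-SVM on one data block, then apply a one-step
    Newton-type debiasing correction to the penalized estimate."""
    n, p = X.shape
    clf = LinearSVC(penalty="l1", loss="squared_hinge", dual=False,
                    C=C, fit_intercept=False, max_iter=5000)
    clf.fit(X, y)
    beta = clf.coef_.ravel()
    margins = y * (X @ beta)                     # y_i * x_i^T beta
    # Hinge-loss subgradient at beta: average of -y_i x_i over margin violators.
    grad = -(X * (y * (margins < 1))[:, None]).mean(axis=0)
    # Kernel-smoothed Hessian: hinge-loss curvature concentrates on the
    # margin {y x^T beta = 1}, so weight observations near it.
    w = np.exp(-0.5 * ((1 - margins) / h) ** 2) / (h * np.sqrt(2 * np.pi))
    H = (X * w[:, None]).T @ X / n
    Theta = np.linalg.pinv(H)                    # toy stand-in for a sparse estimator
    return beta - Theta @ grad                   # one-step debiasing correction

# Simulate, split into k blocks, debias locally, then average.
n, p, k = 4000, 10, 8
beta_star = np.zeros(p)
beta_star[:3] = [1.5, -1.0, 0.5]
X = rng.standard_normal((n, p))
y = np.sign(X @ beta_star + 0.3 * rng.standard_normal(n))
blocks = zip(np.array_split(X, k), np.array_split(y, k))
beta_dc = np.mean([local_debiased_l1svm(Xb, yb) for Xb, yb in blocks], axis=0)
print(np.round(beta_dc, 2))
```

Averaging the debiased (rather than raw) local estimators is the key point: the debiasing step removes the first-order shrinkage bias of each block's \(l_1\)-penalized fit, so the averaged estimator can attain the accuracy of a full-sample fit.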
Recommendations
- An error bound for \(L_1\)-norm support vector machine coefficients in ultra-high dimension
- Variable selection for support vector machines in moderately high dimensions
- On \(L_1\)-Norm Multiclass Support Vector Machines
- The doubly regularized support vector machine
- Support vector machine and its bias correction in high-dimension, low-sample-size settings
Classification
- Classification and discrimination; cluster analysis (statistical aspects) (62H30)
- Learning and adaptive systems in artificial intelligence (68T05)
- Ridge regression; shrinkage estimators (Lasso) (62J07)
- Applications of mathematical programming (90C90)
Cites Work
- Nearly unbiased variable selection under minimax concave penalty
- The Adaptive Lasso and Its Oracle Properties
- Weak convergence and empirical processes. With applications to statistics
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Title not available
- Support-vector networks
- High-dimensional generalized linear models and the lasso
- Title not available
- On asymptotically optimal confidence regions and tests for high-dimensional models
- A note on margin-based loss functions in classification
- \(\ell_1\)-penalized quantile regression in high-dimensional sparse models
- An introduction to support vector machines and other kernel-based learning methods.
- Variable Selection for Support Vector Machines in Moderately High Dimensions
- Asymptotic Equivalence of Regularization Methods in Thresholded Parameter Space
- A constrained \(\ell _{1}\) minimization approach to sparse precision matrix estimation
- Oracle inequalities in empirical risk minimization and sparse recovery problems. École d'Été de Probabilités de Saint-Flour XXXVIII-2008.
- Statistical behavior and consistency of classification methods based on convex risk minimization.
- Title not available
- Title not available
- Convexity, Classification, and Risk Bounds
- A Bahadur representation of the linear support vector machine
- Consistency of Support Vector Machines and Other Regularized Kernel Classifiers
- Fast rates for support vector machines using Gaussian kernels
- Statistical performance of support vector machines
- Divide and conquer kernel ridge regression: a distributed algorithm with minimax optimal rates
- A partially linear framework for massive heterogeneous data
- Inverses of Band Matrices and Local Convergence of Spline Projections
- Asymptotic normality of Powell's kernel estimator
- Oracle properties of SCAD-penalized support vector machine
- High Dimensional Thresholded Regression and Shrinkage Effect
- Communication-efficient sparse regression
- Communication-efficient algorithms for statistical optimization
- An error bound for \(L_1\)-norm support vector machine coefficients in ultra-high dimension
Cited In (22)
- A communication efficient distributed one-step estimation
- Distributed estimation with empirical likelihood
- Title not available
- Two-stage online debiased Lasso estimation and inference for high-dimensional quantile regression with streaming data
- An error bound for \(L_1\)-norm support vector machine coefficients in ultra-high dimension
- A selective review on statistical methods for massive data computation: distributed computing, subsampling, and minibatch techniques
- The backbone method for ultra-high dimensional sparse machine learning
- Online two-way estimation and inference via linear mixed-effects models
- Online updating Huber robust regression for big data streams
- Distributed inference for linear support vector machine
- The statistical rate for support matrix machines under low rankness and row (column) sparsity
- Support vector machine in big data: smoothing strategy and adaptive distributed inference
- Debiased magnitude-preserving ranking: learning rate and bias characterization
- Sparse high-dimensional fractional-norm support vector machine via DC programming
- Distributed learning for sketched kernel regression
- A divide-and-combine method for large scale nonparallel support vector machines
- Partitioned Approach for High-dimensional Confidence Intervals with Large Split Sizes
- Sparse additive support vector machines in bounded variation space
- A review of distributed statistical inference
- Robust distributed multicategory angle-based classification for massive data
- Adaptive distributed support vector regression of massive data
- Statistical inference and distributed implementation for linear multicategory SVM