Nonparametric sparsity and regularization
Publication: 2933860
zbMATH Open: 1317.68183 · arXiv: 1208.2572 · MaRDI QID: Q2933860 · FDO: Q2933860
Lorenzo Rosasco, Sofia Mosci, Matteo Santoro, Alessandro Verri, Silvia Villa
Publication date: 8 December 2014
Abstract: In this work we are interested in the problems of supervised learning and variable selection when the input-output dependence is described by a nonlinear function of a few variables. Our goal is to consider a sparse nonparametric model, thus avoiding the restriction to linear or additive models. The key idea is to measure the importance of each variable in the model by means of its partial derivative. Based on this intuition, we propose a new notion of nonparametric sparsity and a corresponding least squares regularization scheme. Using concepts and results from the theory of reproducing kernel Hilbert spaces and proximal methods, we show that the proposed learning algorithm corresponds to a minimization problem that can be provably solved by an iterative procedure. The consistency properties of the obtained estimator are studied in terms of both prediction and selection performance. An extensive empirical analysis shows that the proposed method compares favorably with state-of-the-art methods.
Full work available at URL: https://arxiv.org/abs/1208.2572
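As a worked sketch of the scheme the abstract describes (notation assumed from the arXiv preprint rather than from this record): for a function f in a reproducing kernel Hilbert space \(\mathcal{H}\) on inputs \(x = (x^1, \dots, x^d)\), the importance of each variable is measured by a norm of the corresponding partial derivative (e.g., an \(L^2\) norm over the data distribution), giving a nonparametric sparsity penalty and a regularized least squares estimator of the form

\[ \Omega(f) = \sum_{a=1}^{d} \left\lVert \frac{\partial f}{\partial x^a} \right\rVert, \qquad \hat{f}_\tau \in \operatorname*{arg\,min}_{f \in \mathcal{H}} \; \frac{1}{n} \sum_{i=1}^{n} \bigl( y_i - f(x_i) \bigr)^2 + \tau\, \Omega(f), \]

with regularization parameter \(\tau > 0\). Since the penalty is convex but nonsmooth, the minimizer is computed by the iterative proximal (forward-backward) procedure mentioned in the abstract.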
MSC classification: Nonparametric regression and quantile regression (62G08) · General nonlinear regression (62J02) · Learning and adaptive systems in artificial intelligence (68T05)
Cited In (53)
- Nonparametric augmented probability weighting with sparsity
- Deep networks for system identification: a survey
- Efficient learning of nonparametric directed acyclic graph with statistical guarantee
- Sparse Regularization via Convex Analysis
- Isotropic non-Lipschitz regularization for sparse representations of random fields on the sphere
- Robust and discriminative dictionary learning for face recognition
- Performance analysis of the LapRSSLG algorithm in learning theory
- Low-Rank and Sparse Dictionary Learning
- Ranking the importance of variables in nonlinear system identification
- Variable selection of high-dimensional non-parametric nonlinear systems by derivative averaging to avoid the curse of dimensionality
- Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization
- Nonconvex regularization for sparse neural networks
- Proximal Gradient Methods for Machine Learning and Imaging
- Regularity properties for sparse regression
- Title not available
- Statistical sparsity
- Kernel variable selection for multicategory support vector machines
- Efficient kernel-based variable selection with sparsistency
- Kernel Meets Sieve: Post-Regularization Confidence Bands for Sparse Additive Model
- A General Framework of Nonparametric Feature Selection in High-Dimensional Data
- Statistical inference in compound functional models
- Structure learning via unstructured kernel-based M-estimation
- A unified penalized method for sparse additive quantile models: an RKHS approach
- Sparsity-enforcing regularisation and ISTA revisited
- Proximal methods for the latent group lasso penalty
- Kernel based approaches to local nonlinear non-parametric variable selection
- A random block-coordinate Douglas-Rachford splitting method with low computational complexity for binary logistic regression
- Norm sensitivity of sparsity regularization with respect to p
- Learning sparse conditional distribution: an efficient kernel-based approach
- Regularizers for structured sparsity
- Bayesian Approximate Kernel Regression With Variable Selection
- Title not available
- A classification-oriented dictionary learning model: explicitly learning the particularity and commonality across categories
- Sparse representation based Fisher discrimination dictionary learning for image classification
- Non-linear dictionary learning with partially labeled data
- Lipschitz Regularity of Graph Laplacians on Random Data Clouds
- A Fast Gradient Method for Nonnegative Sparse Regression With Self-Dictionary
- High-dimensional local linear regression under sparsity and convex losses
- Convergence of stochastic proximal gradient algorithm
- The performance of semi-supervised Laplacian regularized regression with the least square loss
- Sparse Signal Approximation via Nonseparable Regularization
- Improvement on LASSO-type estimator in nonparametric regression
- Sparsity information and regularization in the horseshoe and other shrinkage priors
- Statistical modeling of longitudinal data with non-ignorable non-monotone missingness with semiparametric Bayesian and machine learning components
- Sparse and nonnegative sparse D-MORPH regression
- ADMM Algorithmic Regularization Paths for Sparse Statistical Machine Learning
- The convergence rate of semi-supervised regression with quadratic loss
- Thresholding gradient methods in Hilbert spaces: support identification and linear convergence
- The Geometry of Sparse Analysis Regularization
- A Maximum Principle Argument for the Uniform Convergence of Graph Laplacian Regressors
- Variable Selection for Nonparametric Learning with Power Series Kernels
- Improved spectral convergence rates for graph Laplacians on \(\varepsilon \)-graphs and \(k\)-NN graphs
- Variable selection based on squared derivative averages