Fast global convergence of gradient methods for high-dimensional statistical recovery



DOI: 10.1214/12-AOS1032
zbMath: 1373.62244
arXiv: 1104.4824
MaRDI QID: Q741793

Alekh Agarwal, Martin J. Wainwright, Sahand N. Negahban

Publication date: 15 September 2014

Published in: The Annals of Statistics

Full work available at URL: https://arxiv.org/abs/1104.4824


62H12: Estimation in multivariate analysis

62J07: Ridge regression; shrinkage estimators (Lasso)

62F30: Parametric inference under constraints

90C25: Convex programming
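
The paper analyzes first-order (projected and composite) gradient methods for regularized M-estimation problems such as the Lasso, showing that they converge geometrically fast up to the statistical precision of the model. As a purely illustrative aid, and not the authors' exact algorithm, the following minimal Python sketch runs proximal gradient descent (ISTA) on an l1-regularized least-squares problem; every dimension, step size, and regularization level below is an assumption chosen only for this example.

# Minimal sketch of a composite gradient method (proximal gradient descent / ISTA)
# for the Lasso; illustrative only, with all problem sizes and parameters assumed.
import numpy as np

def soft_threshold(z, tau):
    """Proximal operator of tau * ||.||_1 (soft thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def ista_lasso(X, y, lam, n_iters=500):
    """Proximal gradient descent for (1/2n)||y - X b||_2^2 + lam * ||b||_1."""
    n, p = X.shape
    # Step size 1/L, where L = sigma_max(X)^2 / n is the Lipschitz constant
    # of the gradient of the smooth least-squares loss.
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n)
    beta = np.zeros(p)
    for _ in range(n_iters):
        grad = X.T @ (X @ beta - y) / n            # gradient of the smooth part
        beta = soft_threshold(beta - step * grad, step * lam)
    return beta

# Hypothetical sparse regression instance (dimensions chosen for illustration).
rng = np.random.default_rng(0)
n, p, s = 200, 500, 10
X = rng.standard_normal((n, p))
beta_star = np.zeros(p)
beta_star[:s] = 1.0
y = X @ beta_star + 0.5 * rng.standard_normal(n)
beta_hat = ista_lasso(X, y, lam=0.1)
print("estimation error:", np.linalg.norm(beta_hat - beta_star))

In this sketch the constant step size is the usual 1/Lipschitz choice for the smooth part of the objective; the paper's contribution concerns how fast such iterations approach the statistical precision of the estimator, not the specific implementation shown here.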


Related Items

A Tight Bound of Hard Thresholding
Poisson Regression With Error Corrupted High Dimensional Features
A robust high dimensional estimation of a finite mixture of the generalized linear model
An Equivalence between Critical Points for Rank Constraints Versus Low-Rank Factorizations
THE FACTOR-LASSO AND K-STEP BOOTSTRAP APPROACH FOR INFERENCE IN HIGH-DIMENSIONAL ECONOMIC APPLICATIONS
On the finite-sample analysis of \(\Theta\)-estimators
Local linear convergence analysis of Primal–Dual splitting methods
Sharp global convergence guarantees for iterative nonconvex optimization with random data
Functional Group Bridge for Simultaneous Regression and Support Estimation
Sparse estimation in high-dimensional linear errors-in-variables regression via a covariate relaxation method
Sparse Laplacian shrinkage for nonparametric transformation survival model
Model-Assisted Uniformly Honest Inference for Optimal Treatment Regimes in High Dimension
Decentralized learning over a network with Nyström approximation using SGD
Multi-Task Learning with High-Dimensional Noisy Images
Concentration of measure bounds for matrix-variate data with missing values
Low-rank matrix estimation via nonconvex optimization methods in multi-response errors-in-variables regression
Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
On the linear convergence of the approximate proximal splitting method for non-smooth convex optimization
Weighted \(\ell_1\)-penalized corrected quantile regression for high dimensional measurement error models
High-dimensional regression with noisy and missing data: provable guarantees with nonconvexity
Fast global convergence of gradient methods for high-dimensional statistical recovery
Sparse recovery via nonconvex regularized \(M\)-estimators over \(\ell_q\)-balls
Convex relaxation algorithm for a structured simultaneous low-rank and sparse recovery problem
I-LAMM for sparse learning: simultaneous control of algorithmic complexity and statistical error
Local and global convergence of a general inertial proximal splitting scheme for minimizing composite functions
Stochastic greedy algorithms for multiple measurement vectors
The cost of privacy: optimal rates of convergence for parameter estimation with differential privacy
Variational analysis perspective on linear convergence of some first order methods for nonsmooth convex optimization problems
Robust non-parametric regression via incoherent subspace projections
Analysis of generalized Bregman surrogate algorithms for nonsmooth nonconvex statistical learning
Gradient projection Newton algorithm for sparse collaborative learning using synthetic and real datasets of applications
Robust estimation and shrinkage in ultrahigh dimensional expectile regression with heavy tails and variance heterogeneity
A data-driven line search rule for support recovery in high-dimensional data analysis
Gradient projection Newton pursuit for sparsity constrained optimization
Statistical inference for model parameters in stochastic gradient descent
Lasso guarantees for \(\beta\)-mixing heavy-tailed time series
Sorted concave penalized regression
Computational and statistical analyses for robust non-convex sparse regularized regression problem
Sparse principal component analysis with missing observations
An adaptive accelerated proximal gradient method and its homotopy continuation for sparse optimization
A simple homotopy proximal mapping algorithm for compressive sensing
Structure estimation for discrete graphical models: generalized covariance matrices and their inverses
Cramér-Karhunen-Loève representation and harmonic principal component analysis of functional time series
Proximal Markov chain Monte Carlo algorithms
Sparse estimation via lower-order penalty optimization methods in high-dimensional linear regression
A greedy Newton-type method for multiple sparse constraint problem
Sparse Learning for Large-Scale and High-Dimensional Data: A Randomized Convex-Concave Optimization Approach
Activity Identification and Local Linear Convergence of Forward–Backward-type Methods
Score test variable screening


Uses Software


Cites Work