Fast global convergence of gradient methods for high-dimensional statistical recovery
DOI: 10.1214/12-AOS1032
zbMath: 1373.62244
arXiv: 1104.4824
OpenAlex: W2566240941
MaRDI QID: Q741793
Alekh Agarwal, Sahand N. Negahban, Martin J. Wainwright
Publication date: 15 September 2014
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/1104.4824
Mathematics Subject Classification:
- Estimation in multivariate analysis (62H12)
- Ridge regression; shrinkage estimators (Lasso) (62J07)
- Parametric inference under constraints (62F30)
- Convex programming (90C25)
Related Items
- Robust estimation and shrinkage in ultrahigh dimensional expectile regression with heavy tails and variance heterogeneity
- Poisson Regression With Error Corrupted High Dimensional Features
- Sparse recovery via nonconvex regularized \(M\)-estimators over \(\ell_q\)-balls
- Proximal Markov chain Monte Carlo algorithms
- Score test variable screening
- A data-driven line search rule for support recovery in high-dimensional data analysis
- Gradient projection Newton pursuit for sparsity constrained optimization
- Statistical inference for model parameters in stochastic gradient descent
- A robust high dimensional estimation of a finite mixture of the generalized linear model
- Convex relaxation algorithm for a structured simultaneous low-rank and sparse recovery problem
- Sharp global convergence guarantees for iterative nonconvex optimization with random data
- Functional Group Bridge for Simultaneous Regression and Support Estimation
- Sparse estimation in high-dimensional linear errors-in-variables regression via a covariate relaxation method
- Sparse Laplacian shrinkage for nonparametric transformation survival model
- Model-Assisted Uniformly Honest Inference for Optimal Treatment Regimes in High Dimension
- Decentralized learning over a network with Nyström approximation using SGD
- A simple homotopy proximal mapping algorithm for compressive sensing
- Lasso guarantees for \(\beta\)-mixing heavy-tailed time series
- Multi-Task Learning with High-Dimensional Noisy Images
- Sparse estimation via lower-order penalty optimization methods in high-dimensional linear regression
- Concentration of measure bounds for matrix-variate data with missing values
- Low-rank matrix estimation via nonconvex optimization methods in multi-response errors-in-variables regression
- Activity Identification and Local Linear Convergence of Forward–Backward-type Methods
- A greedy Newton-type method for multiple sparse constraint problem
- Structure estimation for discrete graphical models: generalized covariance matrices and their inverses
- On the finite-sample analysis of \(\Theta\)-estimators
- An Equivalence between Critical Points for Rank Constraints Versus Low-Rank Factorizations
- Cramér-Karhunen-Loève representation and harmonic principal component analysis of functional time series
- The Factor-Lasso and K-Step Bootstrap Approach for Inference in High-Dimensional Economic Applications
- Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
- On the linear convergence of the approximate proximal splitting method for non-smooth convex optimization
- Weighted \(\ell_1\)-penalized corrected quantile regression for high dimensional measurement error models
- I-LAMM for sparse learning: simultaneous control of algorithmic complexity and statistical error
- A Tight Bound of Hard Thresholding
- High-dimensional regression with noisy and missing data: provable guarantees with nonconvexity
- Local linear convergence analysis of Primal–Dual splitting methods
- Local and global convergence of a general inertial proximal splitting scheme for minimizing composite functions
- Stochastic greedy algorithms for multiple measurement vectors
- Sorted concave penalized regression
- Fast global convergence of gradient methods for high-dimensional statistical recovery
- The cost of privacy: optimal rates of convergence for parameter estimation with differential privacy
- Sparse Learning for Large-Scale and High-Dimensional Data: A Randomized Convex-Concave Optimization Approach
- Computational and statistical analyses for robust non-convex sparse regularized regression problem
- Variational analysis perspective on linear convergence of some first order methods for nonsmooth convex optimization problems
- Robust non-parametric regression via incoherent subspace projections
- Sparse principal component analysis with missing observations
- Analysis of generalized Bregman surrogate algorithms for nonsmooth nonconvex statistical learning
- Gradient projection Newton algorithm for sparse collaborative learning using synthetic and real datasets of applications
- An adaptive accelerated proximal gradient method and its homotopy continuation for sparse optimization
Cites Work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Noisy matrix decomposition via convex relaxation: optimal rates in high dimensions
- Estimation of high-dimensional low-rank matrices
- Estimation of (near) low-rank matrices with noise and high-dimensional scaling
- Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion
- High-dimensional regression with noisy and missing data: provable guarantees with nonconvexity
- Linear convergence of iterative soft-thresholding
- Fast global convergence of gradient methods for high-dimensional statistical recovery
- High-dimensional analysis of semidefinite relaxations for sparse principal components
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- The benefit of group sparsity
- The composite absolute penalties family for grouped and hierarchical variable selection
- Error bounds and convergence analysis of feasible descent methods: A general approach
- Introductory lectures on convex optimization. A basic course.
- On the conditions used to prove oracle results for the Lasso
- Simultaneous analysis of Lasso and Dantzig selector
- High-dimensional graphs and variable selection with the Lasso
- Exact matrix completion via convex optimization
- Reconstruction From Anisotropic Random Measurements
- Robust principal component analysis?
- NESTA: A Fast and Accurate First-Order Method for Sparse Recovery
- Rank-Sparsity Incoherence for Matrix Decomposition
- Fixed-Point Continuation for \(\ell_1\)-Minimization: Methodology and Convergence
- Signal Recovery From Random Measurements Via Orthogonal Matching Pursuit
- Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization
- Atomic Decomposition by Basis Pursuit
- Robust PCA via Outlier Pursuit
- Robust Matrix Decomposition With Sparse Corruptions
- Minimax Rates of Estimation for High-Dimensional Linear Regression Over \(\ell_q\)-Balls
- A Simpler Approach to Matrix Completion
- Restricted strong convexity and weighted matrix completion: Optimal bounds with noise
- Paraconvex functions and paraconvex sets
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers