Analysis of generalized Bregman surrogate algorithms for nonsmooth nonconvex statistical learning
From MaRDI portal
Publication:2073715
Abstract: Modern statistical applications often involve minimizing an objective function that may be nonsmooth and/or nonconvex. This paper focuses on a broad Bregman-surrogate algorithm framework that includes the local linear approximation, mirror descent, iterative thresholding, DC programming, and many others as particular instances. The recharacterization via generalized Bregman functions enables the construction of suitable error measures and the establishment of global convergence rates for nonconvex and nonsmooth objectives in possibly high dimensions. For sparse learning problems with a composite objective, under some regularity conditions, the obtained estimators, as the surrogate's fixed points, enjoy provable statistical guarantees even though they are not necessarily local minimizers, and the sequence of iterates can be shown to approach the statistical truth within the desired accuracy geometrically fast. The paper also studies how to design adaptive momentum-based accelerations without assuming convexity or smoothness, by carefully controlling the stepsize and relaxation parameters.
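The iterative thresholding instance mentioned in the abstract can be illustrated with a minimal sketch: each step minimizes a quadratic surrogate of the smooth loss around the current iterate and applies a proximal (soft-thresholding) map, a simple member of the surrogate family studied in the paper. The function names and the lasso-type objective below are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal map of t * ||.||_1 (componentwise shrinkage)
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(X, y, lam, step, n_iter=200):
    """Iterative soft-thresholding for 0.5*||y - X b||^2 + lam*||b||_1.

    Each update minimizes a quadratic surrogate of the least-squares loss
    built at the current iterate -- a basic instance of a surrogate
    (majorize-minimize) scheme of the kind the paper analyzes.
    """
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y)          # gradient of the smooth part
        b = soft_threshold(b - step * grad, step * lam)
    return b
```

With an orthogonal design (e.g. `X = np.eye(3)`) and `step = 1`, the iteration reduces to a single soft-thresholding of `y`, which makes the fixed point easy to check by hand.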
Recommendations
- A Bregman stochastic method for nonconvex nonsmooth problem beyond global Lipschitz gradient continuity
- Non-smooth non-convex Bregman minimization: unification and new algorithms
- Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
- Global convergence of ADMM in nonconvex nonsmooth optimization
- An outer-inner linearization method for non-convex and nondifferentiable composite regularization problems
Cites work
- scientific article; zbMATH DE number 3850830 (no title available)
- scientific article; zbMATH DE number 4041643 (no title available)
- scientific article; zbMATH DE number 4079168 (no title available)
- scientific article; zbMATH DE number 4082773 (no title available)
- scientific article; zbMATH DE number 3790208 (no title available)
- scientific article; zbMATH DE number 845714 (no title available)
- scientific article; zbMATH DE number 3296905 (no title available)
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- A Statistical View of Some Chemometrics Regression Tools
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- Adaptive subgradient methods for online learning and stochastic optimization
- An iterative algorithm for fitting nonconvex penalized generalized linear models with grouped predictors
- Analysis of multi-stage convex relaxation for sparse regularization
- Cluster analysis: unsupervised learning via supervised learning with a non-convex penalty
- Clustering with Bregman divergences
- Convergence Analysis of a Proximal-Like Minimization Algorithm Using Bregman Functions
- Convex Analysis
- Decoding by Linear Programming
- Fast global convergence of gradient methods for high-dimensional statistical recovery
- Graphical models, exponential families, and variational inference
- Group Iterative Spectrum Thresholding for Super-Resolution Sparse Spectral Selection
- Ideal spatial adaptation by wavelet shrinkage
- Introductory lectures on convex optimization. A basic course
- Iterative hard thresholding for compressed sensing
- Learning Topology and Dynamics of Large Recurrent Neural Networks
- Learning the parts of objects by non-negative matrix factorization
- MM algorithms for geometric and signomial programming
- Nearly unbiased variable selection under minimax concave penalty
- On the conditions used to prove oracle results for the Lasso
- On the finite-sample analysis of \(\Theta\)-estimators
- One-step sparse estimates in nonconcave penalized likelihood models
- Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
- Oracle inequalities and optimal inference under group sparsity
- Oracle inequalities in empirical risk minimization and sparse recovery problems. École d'Été de Probabilités de Saint-Flour XXXVIII-2008
- Penalized Bregman divergence for large-dimensional regression and classification
- Recovering Sparse Signals With a Certain Family of Nonconvex Penalties and DC Programming
- Regularized \(M\)-estimators with nonconvexity: statistical and algorithmic theory for local optima
- Robust Statistics
- Simultaneous analysis of Lasso and Dantzig selector
- Sparsity oracle inequalities for the Lasso
- Strong oracle optimality of folded concave penalized estimation
- The DC (Difference of convex functions) programming and DCA revisited with DC models of real world nonconvex optimization problems
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder)
- Thresholding-based iterative selection procedures for model selection and shrinkage
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Variable selection using MM algorithms
- Weak convergence and empirical processes. With applications to statistics
Cited in (2)