I-LAMM for sparse learning: simultaneous control of algorithmic complexity and statistical error (Q1750288)
From MaRDI portal
scientific article
| Language | Label | Description | Also known as |
|---|---|---|---|
| default for all languages | No label defined | | |
| English | I-LAMM for sparse learning: simultaneous control of algorithmic complexity and statistical error | scientific article | |
Statements
I-LAMM for sparse learning: simultaneous control of algorithmic complexity and statistical error (English)
18 May 2018
This paper proposes a general computational framework for solving nonconvex optimization problems such as the penalized M-estimator \(\mathrm{argmin}_{\beta\in{\mathbb R}^d}\{ {\mathcal L}(\beta) + {\mathcal R}_{\lambda}(\beta)\}\), where \({\mathcal L}(\beta)\) is a smooth loss function and \({\mathcal R}_{\lambda}(\beta)\) is a sparsity-inducing penalty with regularization parameter \(\lambda\). The proposed strategy enables simultaneous control of the algorithmic complexity and the statistical error when fitting high-dimensional models arising in various problems, including low-rank matrix completion, high-dimensional graphical models, and quantile regression.
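The penalized M-estimation template above can be illustrated with a local adaptive majorize-minimize (LAMM) iteration. The sketch below is a minimal illustration, not the authors' implementation: the function names (`scad_deriv`, `ilamm_lasso_step`, `ilamm`), the squared loss, and the choice of the SCAD penalty handled via a local linear approximation are my assumptions. Each step majorizes the smooth loss by an isotropic quadratic (increasing the curvature parameter until majorization holds) and solves the majorized problem in closed form by soft-thresholding.

```python
import numpy as np

def scad_deriv(t, lam, a=3.7):
    # Derivative of the SCAD penalty, used as weights in the
    # local linear approximation of the folded concave penalty.
    t = np.abs(t)
    return np.where(t <= lam, lam, np.maximum(a * lam - t, 0.0) / (a - 1.0))

def ilamm_lasso_step(X, y, beta, weights, phi0=1.0, gamma=2.0, max_inner=50):
    # One LAMM step for weighted-l1 penalized least squares: majorize the
    # quadratic loss at beta by an isotropic quadratic with curvature phi,
    # minimize the majorizer by soft-thresholding, and double phi until
    # the majorization condition holds.
    n = X.shape[0]
    resid = X @ beta - y
    grad = X.T @ resid / n
    loss = 0.5 * np.mean(resid ** 2)
    phi = phi0
    for _ in range(max_inner):
        z = beta - grad / phi
        beta_new = np.sign(z) * np.maximum(np.abs(z) - weights / phi, 0.0)
        loss_new = 0.5 * np.mean((X @ beta_new - y) ** 2)
        diff = beta_new - beta
        if loss_new <= loss + grad @ diff + 0.5 * phi * diff @ diff:
            return beta_new  # majorization holds: accept the step
        phi *= gamma
    return beta_new

def ilamm(X, y, lam, n_outer=5, n_lamm=100, tol=1e-7):
    # Outer loop: refresh the local linear approximation of the SCAD penalty.
    # Inner loop: run LAMM iterations until the update stalls.
    beta = np.zeros(X.shape[1])
    for _ in range(n_outer):
        weights = scad_deriv(beta, lam)
        for _ in range(n_lamm):
            beta_new = ilamm_lasso_step(X, y, beta, weights)
            converged = np.max(np.abs(beta_new - beta)) < tol
            beta = beta_new
            if converged:
                break
    return beta
```

In this sketch the first outer pass (all weights equal to \(\lambda\)) reduces to the lasso, and later passes remove the shrinkage bias on large coefficients, mirroring the two-stage control of optimization and statistical error discussed in the abstract.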
algorithmic statistics
iteration complexity
local adaptive MM
nonconvex statistical optimization
optimal rate of convergence