Deviation optimal learning using greedy Q-aggregation

From MaRDI portal
Publication: Q693750

DOI: 10.1214/12-AOS1025
zbMATH Open: 1257.62037
arXiv: 1203.2507
MaRDI QID: Q693750


Authors: Dong Dai, Philippe Rigollet, Tong Zhang


Publication date: 10 December 2012

Published in: The Annals of Statistics

Abstract: Given a finite family of functions, the goal of model selection aggregation is to construct a procedure that mimics the function from this family that is the closest to an unknown regression function. More precisely, we consider a general regression model with fixed design and measure the distance between functions by the mean squared error at the design points. While procedures based on exponential weights are known to solve the problem of model selection aggregation in expectation, they are, surprisingly, sub-optimal in deviation. We propose a new formulation called Q-aggregation that addresses this limitation; namely, its solution leads to sharp oracle inequalities that are optimal in a minimax sense. Moreover, based on the new formulation, we design greedy Q-aggregation procedures that produce sparse aggregation models achieving the optimal rate. The convergence and performance of these greedy procedures are illustrated and compared with other standard methods on simulated examples.
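To make the abstract's idea concrete, here is a minimal, hedged sketch of a greedy procedure minimizing a Q-aggregation-style criterion over the simplex of mixing weights. The criterion below (a convex combination, weighted by a parameter `nu`, of the fit of the aggregate and the average fit of the individual candidates) and the step-size schedule are illustrative assumptions in the spirit of the paper, not a reproduction of the authors' exact algorithm; all function and variable names are invented for this example.

```python
import numpy as np

def q_criterion(lam, Y, F, nu=0.5):
    """Q-style criterion: (1 - nu) * MSE of the aggregate
    + nu * lambda-weighted average of individual MSEs.
    F has shape (M, n): M candidate functions evaluated at n design points."""
    f_lam = lam @ F                                  # aggregated predictions
    fit = np.mean((Y - f_lam) ** 2)                  # fit of the mixture
    avg = lam @ np.mean((Y - F) ** 2, axis=1)        # average individual fit
    return (1 - nu) * fit + nu * avg

def greedy_q_aggregation(Y, F, nu=0.5, steps=20):
    """Greedy minimization over the simplex: start from the best single
    candidate, then repeatedly mix toward the vertex that most decreases
    the criterion, with a shrinking step size. Iterates stay sparse:
    after k steps at most k + 1 weights are nonzero."""
    M = F.shape[0]
    errs = np.mean((Y - F) ** 2, axis=1)
    lam = np.zeros(M)
    lam[np.argmin(errs)] = 1.0                       # best single candidate
    for k in range(2, steps + 2):
        alpha = 2.0 / (k + 1)                        # Frank-Wolfe-style step
        best_j, best_val = None, q_criterion(lam, Y, F, nu)
        for j in range(M):                           # try each vertex
            cand = (1 - alpha) * lam
            cand[j] += alpha
            val = q_criterion(cand, Y, F, nu)
            if val < best_val:
                best_j, best_val = j, val
        if best_j is None:                           # no improving move
            break
        lam = (1 - alpha) * lam
        lam[best_j] += alpha
    return lam
```

Because each update mixes the current weights with a single vertex of the simplex, the output is a sparse aggregation model, matching the motivation described in the abstract.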


Full work available at URL: https://arxiv.org/abs/1203.2507










Cited In (22)





