Logistic regression with total variation regularization
From MaRDI portal
Publication:5146339
Abstract: We study logistic regression with a total variation penalty on the canonical parameter and show that the resulting estimator satisfies a sharp oracle inequality: the excess risk of the estimator is adaptive to the number of jumps of the underlying signal, or of an approximation thereof. In particular, when there are finitely many jumps, and jumps up are sufficiently separated from jumps down, the estimator converges at a parametric rate up to a logarithmic term, provided the tuning parameter is chosen of the appropriate order. Our results extend earlier results for quadratic loss to logistic loss. We do not assume any a priori known bounds on the canonical parameter, but instead only make use of the local curvature of the theoretical risk.
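The estimator studied in the abstract can be sketched numerically. The following minimal illustration (not the paper's algorithm, and with no claim to its theoretical guarantees) fits a total-variation-penalized logistic regression to a piecewise-constant canonical parameter by plain subgradient descent; the signal, the tuning parameter `lam`, the step-size schedule, and the iteration count are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative piecewise-constant canonical parameter with two jumps.
theta_true = np.concatenate([np.full(70, -1.0), np.full(60, 1.5), np.full(70, -0.5)])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-theta_true))).astype(float)

def objective(theta, y, lam):
    # Average logistic loss in canonical form plus total variation penalty.
    loss = np.mean(np.log1p(np.exp(theta)) - y * theta)
    return loss + lam * np.sum(np.abs(np.diff(theta)))

def fit_tv_logistic(y, lam=0.02, steps=2000, lr=0.5):
    """Subgradient descent on the TV-penalized logistic risk (a sketch only;
    tuning parameter and step sizes are illustrative, not the paper's choices)."""
    n = len(y)
    theta = np.zeros(n)
    best, best_val = theta.copy(), objective(theta, y, lam)
    for t in range(steps):
        grad = (1.0 / (1.0 + np.exp(-theta)) - y) / n   # gradient of the logistic loss
        s = np.sign(np.diff(theta))                     # subgradient of sum |theta_{i+1} - theta_i|
        sub = np.zeros(n)
        sub[:-1] -= s
        sub[1:] += s
        theta = theta - lr / np.sqrt(t + 1.0) * (grad + lam * sub)
        val = objective(theta, y, lam)
        if val < best_val:                              # keep the best iterate seen so far
            best, best_val = theta.copy(), val
    return best

theta_hat = fit_tv_logistic(y)
```

The TV penalty shrinks successive differences of the fitted canonical parameter toward zero, so the estimate tends to be piecewise constant, adapting to the jumps of the true signal as described in the oracle inequality above.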
Cites work
- Adaptive piecewise polynomial estimation via trend filtering
- Adaptive risk bounds in univariate total variation denoising and trend filtering
- Additive models with trend filtering
- Estimation and testing under sparsity. École d'Été de Probabilités de Saint-Flour XLV -- 2015
- Nonlinear total variation based noise removal algorithms
- On the prediction performance of the Lasso
- On the total variation regularized estimator over a class of tree graphs
- Sparsity and Smoothness Via the Fused Lasso
- Splines in higher order TV regularization
- The DFS fused Lasso: linear-time denoising over general graphs
- Weak convergence and empirical processes. With applications to statistics
Cited in (5)