Variance-based regularization with convex objectives
From MaRDI portal
Publication:5381122
Abstract: We develop an approach to risk minimization and stochastic optimization that provides a convex surrogate for variance, allowing near-optimal and computationally efficient trading between approximation and estimation error. Our approach builds on techniques for distributionally robust optimization and Owen's empirical likelihood, and we provide a number of finite-sample and asymptotic results characterizing the theoretical performance of the estimator. In particular, we show that our procedure comes with certificates of optimality, achieving (in some scenarios) faster rates of convergence than empirical risk minimization by virtue of automatically balancing bias and variance. We give corroborating empirical evidence showing that in practice, the estimator indeed trades between variance and absolute performance on a training sample, improving out-of-sample (test) performance over standard empirical risk minimization for a number of classification problems.
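The convex surrogate described in the abstract replaces the (non-convex) penalty "empirical mean + variance term" with a distributionally robust objective: the supremum of the expected loss over distributions in a chi-square-divergence ball around the empirical distribution. The following minimal sketch (not the authors' code; the divergence-radius convention and the closed-form maximizer are assumptions that hold when the worst-case weights stay nonnegative) computes that supremum and compares it with the mean-plus-standard-deviation expansion:

```python
# Hedged sketch: distributionally robust surrogate for mean + variance.
# We maximize sum(p_i * loss_i) over weight vectors p with sum(p) = 1 and
# chi-square divergence to the uniform weights at most rho (convention:
# sum((n*p_i - 1)^2) <= 2*rho).  Ignoring the p >= 0 constraint (valid
# for small rho), the maximizer tilts uniform weights toward high losses.
import numpy as np

def robust_objective(losses, rho):
    """Return (sup_P E_P[loss], worst-case weights p) over the chi^2 ball."""
    n = losses.size
    mean = losses.mean()
    centered = losses - mean
    norm = np.linalg.norm(centered)
    if norm == 0.0:                      # all losses equal: no tilt possible
        return mean, np.full(n, 1.0 / n)
    # Closed-form maximizer on the divergence boundary (assumes p >= 0).
    p = 1.0 / n + np.sqrt(2.0 * rho) / (n * norm) * centered
    return float(p @ losses), p

rng = np.random.default_rng(0)
losses = rng.normal(loc=1.0, scale=0.5, size=1000)
rho = 0.1
val, p = robust_objective(losses, rho)
# Variance expansion: mean + sqrt(2 * rho * Var_n / n); coincides with the
# exact supremum whenever the closed-form weights are all nonnegative.
approx = losses.mean() + np.sqrt(2.0 * rho * losses.var() / losses.size)
```

The key point of the paper is that the left-hand quantity (a supremum of linear functions of the loss) is convex in the model parameters whenever the loss is, whereas the mean-plus-standard-deviation expression on the right generally is not.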
Recommendations
- Convergence rates of convex variational regularization
- Heuristic Parameter-Choice Rules for Convex Variational Regularization Based on Error Estimates
- Penalty-based smoothness conditions in convex variational regularization
- A range condition for polyconvex variational regularization
- Stochastic variance-reduced cubic regularization methods
- Convex regularization in statistical inverse learning problems
- Variational regularization in inverse problems and machine learning
- Regularization with non-convex separable constraints
- Continuous regularized variable-metric proximal minimization method
- Theory and examples of variational regularization with non-metric fitting functionals
Cites work
- scientific article; zbMATH DE number 439380 (no title available)
- scientific article; zbMATH DE number 5654889 (no title available)
- scientific article; zbMATH DE number 49190 (no title available)
- scientific article; zbMATH DE number 1220667 (no title available)
- scientific article; zbMATH DE number 1332320 (no title available)
- scientific article; zbMATH DE number 2034517 (no title available)
- scientific article; zbMATH DE number 3446442 (no title available)
- scientific article; zbMATH DE number 2107836 (no title available)
- 10.1162/153244303321897690
- 10.1162/1532443041424337
- A Bennett concentration inequality and its application to suprema of empirical processes
- A result of Vapnik with applications
- An introduction to support vector machines and other kernel-based learning methods.
- Capacity of reproducing kernel spaces in learning theory
- Concentration inequalities. A nonasymptotic theory of independence
- Convexity, Classification, and Risk Bounds
- Covering numbers of Gaussian reproducing kernel Hilbert spaces
- Empirical likelihood
- Empirical likelihood ratio confidence regions
- Introduction to nonparametric estimation
- Learning without concentration
- Lectures on Stochastic Programming
- Local Rademacher complexities
- Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
- On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities
- Optimal aggregation of classifiers in statistical learning.
- Oracle-based robust optimization via online learning
- Regularization and Variable Selection Via the Elastic Net
- Robust optimization
- Smooth discrimination analysis
- Smoothing spline ANOVA models
- Statistics for high-dimensional data. Methods, theory and applications.
- Statistics of robust optimization: a generalized empirical likelihood approach
- Theory of Classification: a Survey of Some Recent Advances
- Uniform Central Limit Theorems
- Weak convergence and empirical processes. With applications to statistics
Cited in (20)
- Learning models with uniform performance via distributionally robust optimization
- Distributionally robust bottleneck combinatorial problems: uncertainty quantification and robust decision making
- Heuristic Parameter-Choice Rules for Convex Variational Regularization Based on Error Estimates
- Weakly-convex-concave min-max optimization: provable algorithms and applications in machine learning
- Statistics of robust optimization: a generalized empirical likelihood approach
- Toward theoretical understandings of robust Markov decision processes: sample complexity and asymptotics
- General procedure to provide high-probability guarantees for stochastic saddle point problems
- Conditional variance penalties and domain shift robustness
- Sample average approximation with heavier tails II: localization in stochastic convex optimization and persistence results for the Lasso
- Regularization via mass transportation
- Robust Simulation with Likelihood-Ratio Constrained Input Uncertainty
- scientific article; zbMATH DE number 7370573 (no title available)
- Variance regularization in sequential Bayesian optimization
- Enhanced Balancing of Bias-Variance Tradeoff in Stochastic Estimation: A Minimax Perspective
- Coefficient-based regularization network with variance loss for error
- Learning with risks based on M-location
- Distributionally robust optimization. A review on theory and applications
- A survey of learning criteria going beyond the usual risk
- Convergence rates of convex variational regularization
- Robust and distributionally robust optimization models for linear support vector machine
This page was built for publication: Variance-based regularization with convex objectives