Composite Difference-Max Programs for Modern Statistical Estimation Problems

DOI: 10.1137/18M117337X
zbMath: 1407.62250
arXiv: 1803.00205
OpenAlex: W2963562721
Wikidata: Q128749629
Scholia: Q128749629
MaRDI QID: Q4562249

Authors: Ying Cui, Jong-Shi Pang, Bodhisattva Sen

Publication date: 19 December 2018

Published in: SIAM Journal on Optimization

Full work available at URL: https://arxiv.org/abs/1803.00205



Related Items

An efficient semismooth Newton method for adaptive sparse signal recovery problems
An augmented Lagrangian method with constraint generation for shape-constrained convex regression problems
Nonconvex and nonsmooth approaches for affine chance-constrained stochastic programs
Unnamed Item
Estimation of Knots in Linear Spline Models
A study of piecewise linear-quadratic programs
Markov chain stochastic DCA and applications in deep learning with PDEs regularization
Spectrahedral Regression
Consistent approximations in composite optimization
A global two-stage algorithm for non-convex penalized high-dimensional linear regression problems
Hybrid Algorithms for Finding a D-Stationary Point of a Class of Structured Nonsmooth DC Minimization
MultiComposite Nonconvex Optimization for Training Deep Neural Networks
Approximations of semicontinuous functions with applications to stochastic optimization and statistical estimation
On the superiority of PGMs to PDCAs in nonsmooth nonconvex sparse regression
On the pervasiveness of difference-convexity in optimization and statistics
Manifold Sampling for Optimizing Nonsmooth Nonconvex Compositions
Optimality conditions for locally Lipschitz optimization with \(l_0\)-regularization
Nonconvex robust programming via value-function optimization
Proximal Distance Algorithms: Theory and Examples
Strong Metric (Sub)regularity of Karush–Kuhn–Tucker Mappings for Piecewise Linear-Quadratic Convex-Composite Optimization and the Quadratic Convergence of Newton's Method
Estimation of Individualized Decision Rules Based on an Optimized Covariate-Dependent Equivalent of Random Outcomes
Unnamed Item
Stability and Error Analysis for Optimization and Generalized Equations
Asymptotic Properties of Stationary Solutions of Coupled Nonconvex Nonsmooth Empirical Risk Minimization
Penalty and Augmented Lagrangian Methods for Constrained DC Programming
Stochastic Difference-of-Convex-Functions Algorithms for Nonconvex Programming
High-Order Optimization Methods for Fully Composite Problems
Solving Nonsmooth and Nonconvex Compound Stochastic Programs with Applications to Risk Measure Minimization
A Study of Convex Convex-Composite Functions via Infimal Convolution with Applications
