Composite Difference-Max Programs for Modern Statistical Estimation Problems
From MaRDI portal
Publication: 4562249
DOI: 10.1137/18M117337X
zbMath: 1407.62250
arXiv: 1803.00205
OpenAlex: W2963562721
Wikidata: Q128749629
Scholia: Q128749629
MaRDI QID: Q4562249
Ying Cui, Jong-Shi Pang, Bodhisattva Sen
Publication date: 19 December 2018
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/1803.00205
Keywords: nonconvex optimization; semismooth Newton method; nondifferentiable objective; continuous piecewise affine regression
MSC classifications: General nonlinear regression (62J02); Nonconvex programming, global optimization (90C26); Nonsmooth analysis (49J52)
Related Items
- An efficient semismooth Newton method for adaptive sparse signal recovery problems
- An augmented Lagrangian method with constraint generation for shape-constrained convex regression problems
- Nonconvex and nonsmooth approaches for affine chance-constrained stochastic programs
- Estimation of Knots in Linear Spline Models
- A study of piecewise linear-quadratic programs
- Markov chain stochastic DCA and applications in deep learning with PDEs regularization
- Spectrahedral Regression
- Consistent approximations in composite optimization
- A global two-stage algorithm for non-convex penalized high-dimensional linear regression problems
- Hybrid Algorithms for Finding a D-Stationary Point of a Class of Structured Nonsmooth DC Minimization
- MultiComposite Nonconvex Optimization for Training Deep Neural Networks
- Approximations of semicontinuous functions with applications to stochastic optimization and statistical estimation
- On the superiority of PGMs to PDCAs in nonsmooth nonconvex sparse regression
- On the pervasiveness of difference-convexity in optimization and statistics
- Manifold Sampling for Optimizing Nonsmooth Nonconvex Compositions
- Optimality conditions for locally Lipschitz optimization with \(l_0\)-regularization
- Nonconvex robust programming via value-function optimization
- Proximal Distance Algorithms: Theory and Examples
- Strong Metric (Sub)regularity of Karush–Kuhn–Tucker Mappings for Piecewise Linear-Quadratic Convex-Composite Optimization and the Quadratic Convergence of Newton's Method
- Estimation of Individualized Decision Rules Based on an Optimized Covariate-Dependent Equivalent of Random Outcomes
- Stability and Error Analysis for Optimization and Generalized Equations
- Asymptotic Properties of Stationary Solutions of Coupled Nonconvex Nonsmooth Empirical Risk Minimization
- Penalty and Augmented Lagrangian Methods for Constrained DC Programming
- Stochastic Difference-of-Convex-Functions Algorithms for Nonconvex Programming
- High-Order Optimization Methods for Fully Composite Problems
- Solving Nonsmooth and Nonconvex Compound Stochastic Programs with Applications to Risk Measure Minimization
- A Study of Convex Convex-Composite Functions via Infimal Convolution with Applications
Cites Work
- Nearly unbiased variable selection under minimax concave penalty
- DC approximation approaches for sparse optimization
- Proximal alternating linearized minimization for nonconvex and nonsmooth problems
- SDPNAL+: a majorized semismooth Newton-CG augmented Lagrangian method for semidefinite programming with nonnegative constraints
- Enhancing sparsity by reweighted \(\ell _{1}\) minimization
- On functions representable as a difference of convex functions
- On the convergence of the proximal algorithm for nonsmooth functions involving analytic features
- On the convergence properties of the EM algorithm
- On the structure of convex piecewise quadratic functions
- Convergence analysis of difference-of-convex algorithm with subanalytic data
- A proximal difference-of-convex algorithm with extrapolation
- Robust multicategory support vector machines using difference convex algorithm
- The DC (Difference of convex functions) programming and DCA revisited with DC models of real world nonconvex optimization problems
- A globally convergent Newton method for convex \(SC^1\) minimization problems
- Support-vector networks
- Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods
- An algorithm for the estimation of a regression function by continuous piecewise linear functions
- A nonsmooth version of Newton's method
- Automatic speech recognition. A deep learning approach
- Structural properties of affine sparsity constraints
- Enhanced proximal DC algorithms with extrapolation for a class of structured nonsmooth DC minimization
- Majorization-Minimization Procedures and Convergence of SQP Methods for Semi-Algebraic and Tame Programs
- MM Optimization Algorithms
- Computing B-Stationary Points of Nonsmooth DC Programs
- A Newton-CG Augmented Lagrangian Method for Semidefinite Programming
- Newton's Method for B-Differentiable Equations
- A Quadratically Convergent Newton Method for Computing the Nearest Correlation Matrix
- Extension of Newton and quasi-Newton methods to systems of \(PC^1\) equations
- Regression Quantiles
- Semismooth and Semiconvex Functions in Constrained Optimization
- Variational Analysis
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- A Highly Efficient Semismooth Newton Augmented Lagrangian Method for Solving Lasso Problems
- Sparsity and Smoothness Via the Fused Lasso
- Finite-Dimensional Variational Inequalities and Complementarity Problems
- Iterative Solution of Nonlinear Equations in Several Variables
- A Semismooth Newton Method with Multidimensional Filter Globalization for $l_1$-Optimization
- A Computational Framework for Multivariate Convex Regression and Its Variants
- Minimization of $\ell_{1-2}$ for Compressed Sensing
- Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning
- Difference-of-Convex Learning: Directional Stationarity, Optimality, and Sparsity
- The Łojasiewicz Inequality for Nonsmooth Subanalytic Functions with Applications to Subgradient Dynamical Systems
- Convex analysis and monotone operator theory in Hilbert spaces