Self-adaptive algorithms for quasiconvex programming and applications to machine learning
Publication:6563122
Recommendations
- A simple adaptive step-size choice for iterative optimization methods
- Quasi-Newton methods: superlinear convergence without line searches for self-concordant functions
- A new gradient projection method with self-adaptive step size
- The global convergence of self-scaling BFGS algorithm with non-monotone line search for unconstrained nonconvex optimization problems
- A class of gradient unconstrained minimization algorithms with adaptive stepsize
Cites work
- scientific article; zbMATH DE number 3928227 (no title available)
- A neurodynamic optimization approach to supervised feature selection via fractional programming
- A one-layer recurrent neural network for nonsmooth pseudoconvex optimization with quasiconvex inequality and affine equality constraints
- Abstract convergence theorem for quasi-convex optimization problems with applications
- Convergence and efficiency of subgradient methods for quasiconvex minimization
- Convergence rates of subgradient methods for quasi-convex optimization problems
- Convex Analysis
- Convex analysis and monotone operator theory in Hilbert spaces
- First-order and stochastic optimization methods for machine learning
- Iterative Algorithms for Nonlinear Operators
- Neural network for nonsmooth pseudoconvex optimization with general convex constraints
- On the Frank-Wolfe algorithm for non-compact constrained optimization problems
- Pseudo-Convex Functions
- Simplified versions of the conditional gradient method