Training multi-layered neural network with a trust-region based algorithm
DOI: 10.1051/M2AN/1990240405231 · zbMATH Open: 0707.90097 · OpenAlex: W2583686046 · MaRDI QID: Q3489808
Authors:
Publication date: 1990
Published in: ESAIM: Mathematical Modelling and Numerical Analysis
Full work available at URL: https://eudml.org/doc/193605
MSC classifications:
- Applications of mathematical programming (90C90)
- Nonconvex programming, global optimization (90C26)
- Computational methods for problems pertaining to operations research and mathematical programming (90-08)
- Neural networks for/in biological studies, artificial life and related topics (92B20)
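The record itself gives no algorithmic details, but the title and the cited works ("Computing a Trust Region Step", "Computing Optimal Locally Constrained Steps") name the general trust-region technique. As a hedged illustration only, and not the paper's specific algorithm, a minimal Cauchy-point trust-region loop applied to a toy one-weight network might look like:

```python
import numpy as np

def trust_region_minimize(f, grad, hess, x0, delta0=1.0, delta_max=10.0,
                          eta=0.1, tol=1e-8, max_iter=200):
    """Basic trust-region method using the Cauchy-point step.

    At each iterate the quadratic model m(p) = f(x) + g.p + 0.5 p.B.p
    is minimized along -g subject to ||p|| <= delta (illustrative sketch,
    not the method of the catalogued paper).
    """
    x, delta = np.asarray(x0, dtype=float), delta0
    for _ in range(max_iter):
        g, B = grad(x), hess(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        # Cauchy point: minimize the model along the steepest-descent direction.
        gBg = g @ B @ g
        tau = 1.0 if gBg <= 0 else min(1.0, gnorm**3 / (delta * gBg))
        p = -tau * (delta / gnorm) * g
        # Ratio of actual to predicted reduction decides acceptance.
        pred = -(g @ p + 0.5 * p @ B @ p)
        actual = f(x) - f(x + p)
        rho = actual / pred if pred > 0 else 0.0
        # Shrink or grow the trust-region radius based on model quality.
        if rho < 0.25:
            delta *= 0.25
        elif rho > 0.75 and np.isclose(np.linalg.norm(p), delta):
            delta = min(2.0 * delta, delta_max)
        if rho > eta:
            x = x + p
    return x

# Toy example (hypothetical data): fit y = tanh(w*x) by least squares.
xs = np.array([-1.0, -0.5, 0.5, 1.0])
ys = np.tanh(2.0 * xs)  # generated with true weight w = 2

f = lambda w: 0.5 * np.sum((np.tanh(w[0] * xs) - ys) ** 2)

def grad(w):
    r = np.tanh(w[0] * xs) - ys
    return np.array([np.sum(r * (1.0 - np.tanh(w[0] * xs) ** 2) * xs)])

def hess(w):  # Gauss-Newton approximation of the Hessian
    J = (1.0 - np.tanh(w[0] * xs) ** 2) * xs
    return np.array([[np.sum(J * J)]])

w = trust_region_minimize(f, grad, hess, np.array([0.5]))
```

On this one-dimensional problem the unconstrained Cauchy step reduces to a (Gauss-)Newton step, so the recovered weight converges to the generating value w = 2.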
Cites Work
- Computing a Trust Region Step
- Title not available
- Title not available
- Convex Analysis
- Title not available
- Smoothing by spline functions.
- Smoothing by spline functions. II
- Title not available
- Title not available
- Title not available
- Testing a Class of Methods for Solving Minimization Problems with Simple Bounds on the Variables
- Title not available
- On the superlinear convergence of a trust region algorithm for nonsmooth optimization
- Computing Optimal Locally Constrained Steps
- Title not available
- Newton’s Method with a Model Trust Region Modification
- Title not available
- Title not available
- Title not available
- Title not available
- Approximate solution of the trust region problem by minimization over two-dimensional subspaces
- A Family of Trust-Region-Based Algorithms for Unconstrained Minimization with Strong Global Convergence Properties
- Title not available
- Title not available
- On the use of directions of negative curvature in a modified Newton method
- A second-order method for unconstrained optimization
- Quasi-Newton Methods for Unconstrained Optimization
- An Example of Only Linear Convergence of Trust Region Algorithms for Non-smooth Optimization
- The Use of Linear Programming for the Solution of Sparse Sets of Nonlinear Equations
- A Modified Newton’s Method for Unconstrained Minimization
Cited In (4)
- Multilevel Artificial Neural Network Training for Spatially Correlated Learning
- Stability of Lagrangian duality for nonconvex quadratic programming. Solution methods and applications in computer vision
- Comparative studies on mesh-free deep neural network approach versus finite element method for solving coupled nonlinear hyperbolic/wave equations
- Difference of convex functions optimization algorithms (DCA) for globally minimizing nonconvex quadratic forms on Euclidean balls and spheres