Practical mathematical optimization. An introduction to basic optimization theory and classical and new gradient-based algorithms. (Q1774325)

From MaRDI portal

scientific article (English)

Practical mathematical optimization. An introduction to basic optimization theory and classical and new gradient-based algorithms.

    Statements

    Practical mathematical optimization. An introduction to basic optimization theory and classical and new gradient-based algorithms. (English)
    9 May 2005
    The publisher's description reads as follows: ``This book presents basic optimization principles and gradient-based algorithms to a general audience in a brief and easy-to-read form, without neglecting rigor. The work should enable professionals to apply optimization theory and algorithms to their own particular practical fields of interest, be it engineering, physics, chemistry, or business economics. Most importantly, for the first time in a relatively brief and introductory work, due attention is paid to the difficulties -- such as noise, discontinuities, expense of function evaluations, and the existence of multiple minima -- that often unnecessarily inhibit the use of gradient-based methods. In a separate chapter on new gradient-based methods developed by the author and his coworkers, it is shown how these difficulties may be overcome without losing the desirable features of classical gradient-based methods.''

    Contents: Chapter 1 is an introduction. Chapter 2 treats line search descent methods for unconstrained minimization. Here the general structure of such methods is treated first, followed by one-dimensional line search, the golden section method, and Powell's quadratic interpolation algorithm, with exercises. First-order line search descent methods such as the steepest descent and conjugate gradient methods are then explained, as well as second-order methods such as the modified Newton and quasi-Newton methods (Davidon-Fletcher-Powell (DFP) and Broyden-Fletcher-Goldfarb-Shanno (BFGS)). The chapter ends with zero-order methods, computer optimization subroutines, and test functions. Chapter 3 is entitled ``Standard methods for constrained optimization''.
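The one-dimensional golden section line search treated in Chapter 2 can be sketched as follows (a minimal illustration with my own choice of tolerance and test function, not code taken from the book):

```python
import math

def golden_section_search(f, a, b, tol=1e-6):
    """Minimize a unimodal function f on [a, b] by golden section search."""
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0   # 1/phi, approximately 0.618
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:                # minimum lies in [a, d]
            b, d, fd = d, c, fc    # old c becomes the new d; its value is reused
            c = b - inv_phi * (b - a)
            fc = f(c)
        else:                      # minimum lies in [c, b]
            a, c, fc = c, d, fd    # old d becomes the new c; its value is reused
            d = a + inv_phi * (b - a)
            fd = f(d)
    return 0.5 * (a + b)

# e.g. minimize (x - 2)^2 on [0, 5]
x_star = golden_section_search(lambda x: (x - 2.0) ** 2, 0.0, 5.0)
```

The interval shrinks by the factor 0.618 per iteration, and only one new function evaluation is needed per step because one interior point carries over.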
Here the author introduces penalty function methods; classical methods (equality constraints, the Lagrangian function, and the Karush-Kuhn-Tucker (KKT) conditions); saddle point theory and duality; quadratic programming (active sets of constraints, the method of Theil and Van de Panne); and modern methods such as gradient projection, the augmented Lagrangian multiplier method, and successive or sequential quadratic programming (SQP). Chapter 4 presents some new gradient-based trajectory and approximation methods: the dynamic trajectory optimization (leap-frog) method, the spherical quadratic steepest descent (SQSD) method, the dynamic Q-optimization algorithm, a gradient-only line search method for conjugate gradient methods, and global optimization using dynamic search trajectories. Chapter 5 devotes 54 pages to examples and problems, which is very useful for undergraduates and practitioners. Chapter 6 gives more formal statements and proofs of some important classical theorems and results than are presented in the first three chapters. Appendix A treats the simplex method for linear programming.

This book covers a rather broad range of methods, and especially the new gradient-based methods presented in Chapter 4 are very helpful in overcoming difficulties arising in practical problems.
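To make the penalty function idea of Chapter 3 concrete, here is a minimal sketch (my own toy problem and finite-difference inner solver, not the book's code): an equality-constrained problem is solved by minimizing f(x) + mu * h(x)^2 for an increasing sequence of penalty parameters mu:

```python
def num_grad(f, x, h=1e-6):
    """Central finite-difference gradient of f at x."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2.0 * h))
    return g

def descend(f, x0, steps=500):
    """Steepest descent with backtracking (Armijo) line search as inner solver."""
    x = list(x0)
    for _ in range(steps):
        g = num_grad(f, x)
        fx, t = f(x), 1.0
        gg = sum(gi * gi for gi in g)
        # halve the step until a sufficient decrease is achieved
        while f([xi - t * gi for xi, gi in zip(x, g)]) > fx - 0.5 * t * gg:
            t *= 0.5
            if t < 1e-12:
                break
        x = [xi - t * gi for xi, gi in zip(x, g)]
    return x

def quadratic_penalty(f, h, x0, mu=1.0, growth=10.0, outer=7):
    """Minimize f subject to h(x) = 0 via an increasing quadratic penalty."""
    x = list(x0)
    for _ in range(outer):
        x = descend(lambda z, mu=mu: f(z) + mu * h(z) ** 2, x)
        mu *= growth
    return x

# toy problem: minimize x^2 + y^2 subject to x + y = 1 (solution: x = y = 0.5)
x = quadratic_penalty(lambda z: z[0] ** 2 + z[1] ** 2,
                      lambda z: z[0] + z[1] - 1.0,
                      [0.0, 0.0])
```

Each outer iteration warm-starts the inner unconstrained solve from the previous iterate; as mu grows, the penalized minimizer approaches the constrained solution, at the cost of increasingly ill-conditioned subproblems, which is one motivation for the augmented Lagrangian and SQP methods mentioned above.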