DOI: 10.1007/b106451
zbMath: 1129.90002
OpenAlex: W2471988522
MaRDI QID: Q2500511
Wen-Yu Sun, Ya-Xiang Yuan
Publication date: 17 August 2006
Published in: Springer Optimization and Its Applications
Full work available at URL: https://doi.org/10.1007/b106451
Related Items:
Some new three-term Hestenes–Stiefel conjugate gradient methods with affine combination,
Non-convex regularization and accelerated gradient algorithm for sparse portfolio selection,
Nearest linearly structured polynomial matrix with some prescribed distinct eigenvalues,
New nonlinear conjugate gradient methods based on optimal Dai-Liao parameters,
Computing the Action Ground State for the Rotating Nonlinear Schrödinger Equation,
Homogenization for polynomial optimization with unbounded sets,
Convergence analysis of a subsampled Levenberg-Marquardt algorithm,
A nonlinear conjugate gradient method using inexact first-order information,
An Accelerated Three-Term Extension of a Descent Nonlinear Conjugate Gradient Method,
On the optimal control of some nonsmooth distributed parameter systems arising in mechanics,
A modified conjugate gradient parameter via hybridization approach for solving large-scale systems of nonlinear equations,
Two modified adaptive cubic regularization algorithms by using the nonmonotone Armijo-type line search,
A derivative-free scaling memoryless DFP method for solving large scale nonlinear monotone equations,
Two diagonal conjugate gradient like methods for unconstrained optimization,
Normalized Wolfe-Powell-type local minimax method for finding multiple unstable solutions of nonlinear elliptic PDEs,
Neural network for a class of sparse optimization with \(L_0\)-regularization,
Measuring variability and association for categorical data,
Newton’s method for uncertain multiobjective optimization problems under finite uncertainty sets,
Convergence properties of Levenberg-Marquardt methods with generalized regularization terms,
A projection-based derivative free DFP approach for solving system of nonlinear convex constrained monotone equations with image restoration applications,
Semiparametric regression analysis of doubly-censored data with applications to incubation period estimation,
An efficient spectral trust-region deflation method for multiple solutions,
Nonmonotone quasi-Newton-based conjugate gradient methods with application to signal processing,
A unified convergence analysis of the derivative-free projection-based method for constrained nonlinear monotone equations,
Adaptive trust-region method on Riemannian manifold,
An adaptive modified three-term conjugate gradient method with global convergence,
Newton-MR: inexact Newton method with minimum residual sub-problem solver,
Numerical approximation of the solution of an obstacle problem modelling the displacement of elliptic membrane shells via the penalty method,
Proximal gradient algorithm with trust region scheme on Riemannian manifold,
Huberization image restoration model from incomplete multiplicative noisy data,
A modified Levenberg-Marquardt method for solving system of nonlinear equations,
A hybrid BB-type method for solving large scale unconstrained optimization,
A family of the modified three-term Hestenes-Stiefel conjugate gradient method with sufficient descent and conjugacy conditions,
Generalized RMIL conjugate gradient method under the strong Wolfe line search with application in image processing,
Eigenvalue analyses on the memoryless Davidon-Fletcher-Powell method based on a spectral secant equation,
A NONMONOTONE ADMM-BASED DIAGONAL QUASI-NEWTON UPDATE WITH APPLICATION TO THE COMPRESSIVE SENSING PROBLEM,
Spherical Framelets from Spherical Designs,
On a scaled symmetric Dai-Liao-type scheme for constrained system of nonlinear equations with applications,
Inexact proximal DC Newton-type method for nonconvex composite functions,
A variable projection method for the general radial basis function neural network,
Memoryless quasi-Newton methods based on the spectral-scaling Broyden family for Riemannian optimization,
A restart scheme for the memoryless BFGS method,
A Five-Parameter Class of Derivative-Free Spectral Conjugate Gradient Methods for Systems of Large-Scale Nonlinear Monotone Equations,
Competitive secant (BFGS) methods based on modified secant relations for unconstrained optimization,
A hybrid HS-LS conjugate gradient algorithm for unconstrained optimization with applications in motion control and image recovery,
Greedy PSB methods with explicit superlinear convergence,
A descent family of the spectral Hestenes–Stiefel method by considering the quasi-Newton method,
Massively parallelizable proximal algorithms for large‐scale stochastic optimal control problems,
The regularization continuation method for optimization problems with nonlinear equality constraints,
A new nonmonotone line search method for nonsmooth nonconvex optimization,
A new modified scaled conjugate gradient method for large-scale unconstrained optimization with non-convex objective function,
A new accelerated diagonal quasi-Newton updating method with scaled forward finite differences directional derivative for unconstrained optimization,
Accelerating the modified Levenberg-Marquardt method for nonlinear equations,
Identification of Elastic Orthotropic Material Parameters by the Singular Boundary Method,
Hill-Climbing Algorithm with a Stick for Unconstrained Optimization Problems,
On the Local and Superlinear Convergence of a Parameterized DFP Method,
A Filter Active-Set Algorithm for Ball/Sphere Constrained Optimization Problem,
A descent family of Dai–Liao conjugate gradient methods,
Accelerated gradient descent methods with line search,
Calculating the normalising constant of the Bingham distribution on the sphere using the holonomic gradient method,
Extended least trimmed squares estimator in semiparametric regression models with correlated errors,
Structured Quasi-Newton Methods for Optimization with Orthogonality Constraints,
Inhomogeneous polynomial optimization over a convex set: An approximation approach,
Nonmonotone conic trust region method with line search technique for bound constrained optimization,
An adaptive nonmonotone trust region method based on a modified scalar approximation of the Hessian in the successive quadratic subproblems,
A dwindling filter line search method for unconstrained optimization,
Comments on “A hybrid conjugate gradient method based on a quadratic relaxation of the Dai-Yuan hybrid conjugate gradient parameter”,
Complexity and performance of an Augmented Lagrangian algorithm,
Gravity-magnetic cross-gradient joint inversion by the cyclic gradient method,
Accelerated multiple step-size methods for solving unconstrained optimization problems,
A novel augmented Lagrangian method of multipliers for optimization with general inequality constraints,
An efficient modification of the Hestenes-Stiefel nonlinear conjugate gradient method with restart property,
A hybrid optimization method for multiplicative noise and blur removal,
Robust ridge estimator in restricted semiparametric regression models,
Two new decomposition algorithms for training bound-constrained support vector machines,
A nonmonotone trust region method based on simple conic models for unconstrained optimization,
Trust region algorithm with two subproblems for bound constrained problems,
The research on the properties of Fourier matrix and bent function,
A modified scaling parameter for the memoryless BFGS updating formula,
A seminorm regularized alternating least squares algorithm for canonical tensor decomposition,
On the global convergence of a projective trust region algorithm for nonlinear equality constrained optimization,
A smoothing SQP framework for a class of composite \(L_q\) minimization over polyhedron,
On Newton's method for the Fermat-Weber location problem,
Error estimates for the simplified iteratively regularized Gauss-Newton method in Banach spaces under a Morozov-type stopping rule,
Numerical treatment of nonlinear MHD Jeffery-Hamel problems using stochastic algorithms,
A type of modified BFGS algorithm with any rank defects and the local \(Q\)-superlinear convergence properties,
Two accelerated nonmonotone adaptive trust region line search methods,
A Barzilai-Borwein conjugate gradient method,
Two extensions of the Dai-Liao method with sufficient descent property based on a penalization scheme,
A stochastic level-value estimation method for global optimization,
A Barzilai and Borwein scaling conjugate gradient method for unconstrained optimization problems,
A double parameter scaled BFGS method for unconstrained optimization,
An adaptive three-term conjugate gradient method based on self-scaling memoryless BFGS matrix,
A note on the global convergence theorem of accelerated adaptive Perry conjugate gradient methods,
A cone constrained convex program: structure and algorithms,
Two modified scaled nonlinear conjugate gradient methods,
On optimality of the parameters of self-scaling memoryless quasi-Newton updating formulae,
A regularized Newton method for computing ground states of Bose-Einstein condensates,
A nonmonotone globalization algorithm with preconditioned gradient path for unconstrained optimization,
Two new Dai-Liao-type conjugate gradient methods for unconstrained optimization problems,
Nonmonotone second-order Wolfe's line search method for unconstrained optimization problems,
An equivalency condition of nonsingularity in nonlinear semidefinite programming,
On the worst-case evaluation complexity of non-monotone line search algorithms,
Optimal radiation fractionation for low-grade gliomas: insights from a mathematical model,
An improved trust region algorithm for nonlinear equations,
New quasi-Newton methods via higher order tensor models,
An accelerated double step size model in unconstrained optimization,
A dwindling filter line search algorithm for nonlinear equality constrained optimization,
Cubic interpolation: a line search technique for fuzzy optimization problems,
Ensemble preconditioning for Markov chain Monte Carlo simulation,
POD/DEIM reduced-order modeling of time-fractional partial differential equations with applications in parameter identification,
Two modifications of the method of the multiplicative parameters in descent gradient methods,
An improved multi-step gradient-type method for large scale optimization,
Decentralized swarm coordination: a combined coverage/connectivity approach,
A filter algorithm for nonlinear systems of equalities and inequalities,
Two effective hybrid conjugate gradient algorithms based on modified BFGS updates,
A note on ``A new iteration method for the matrix equation \(AX=B\)'',
A note on the global convergence theorem of the scaled conjugate gradient algorithms proposed by Andrei,
A structured diagonal Hessian approximation method with evaluation complexity analysis for nonlinear least squares,
A modified scaled memoryless BFGS preconditioned conjugate gradient algorithm for nonsmooth convex optimization,
Improved optimization methods for image registration problems,
Accelerated double direction method for solving unconstrained optimization problems,
An alternating structured trust region algorithm for separable optimization problems with nonconvex constraints,
On Nesterov's nonsmooth Chebyshev-Rosenbrock functions,
A feasible direction method for the semidefinite program with box constraints,
A new nonlinear filter constructed from the Newton method and EPR in image restoration,
On convergence analysis of a derivative-free trust region algorithm for constrained optimization with separable structure,
Two modified three-term conjugate gradient methods with sufficient descent property,
A new approximation of the matrix rank function and its application to matrix rank minimization,
On the sufficient descent condition of the Hager-Zhang conjugate gradient methods,
A nonmonotone hybrid method of conjugate gradient and Lanczos-type for solving nonlinear systems,
Parameters estimation for a new anomalous thermal diffusion model in layered media,
Application of variable-fidelity models to aerodynamic optimization,
Cross-Hill: a heuristic method for global optimization,
On the convergence and worst-case complexity of trust-region and regularization methods for unconstrained optimization,
An adaptive conjugate gradient algorithm for large-scale unconstrained optimization,
A combined SQP-IPM algorithm for solving large-scale nonlinear optimization problems,
On solving L-SR1 trust-region subproblems,
An accelerated nonmonotone trust region method with adaptive trust region for unconstrained optimization,
A quadratic penalty method for hypergraph matching,
An adaptive scaled BFGS method for unconstrained optimization,
On optimality of two adaptive choices for the parameter of Dai-Liao method,
A class of adaptive Dai-Liao conjugate gradient methods based on the scaled memoryless BFGS update,
Reconstructing local volatility using total variation,
Optimization design of an explicitly defined rack for the generation of rotors for twin-screw compressors,
A modified three-term conjugate gradient method with sufficient descent property,
A new subspace correction method for nonlinear unconstrained convex optimization problems,
A short note on the Q-linear convergence of the steepest descent method,
A hybrid trust region algorithm for unconstrained optimization,
A dimension-reduced method of sensitivity analysis for stochastic user equilibrium assignment model,
A reduced Hessian algorithm with line search filter method for nonlinear programming,
Two new conjugate gradient methods based on modified secant equations,
A feasible method for optimization with orthogonality constraints,
A second-order pseudo-transient method for steady-state problems,
A modified Newton's method for best rank-one approximation to tensors,
A filter-line-search method for unconstrained optimization,
A modified nonmonotone trust region line search method,
A linear hybridization of the Hestenes-Stiefel method and the memoryless BFGS technique,
A nonmonotonic trust region algorithm for a class of semi-infinite minimax programming,
A new method for parameter estimation of edge-preserving regularization in image restoration,
A nonmonotone supermemory gradient algorithm for unconstrained optimization,
A subspace version of the Powell-Yuan trust-region algorithm for equality constrained optimization,
Visual MISER: an efficient user-friendly visual program for solving optimal control problems,
A framework of constraint preserving update schemes for optimization on Stiefel manifold,
A modified BFGS algorithm based on a hybrid secant equation,
Convergence of gradient method for Elman networks,
A filter trust region method for solving semi-infinite programming problems,
Nonlinear conjugate gradient methods with sufficient descent condition for large-scale unconstrained optimization,
A descent algorithm without line search for unconstrained optimization,
On filter-successive linearization methods for nonlinear semidefinite programming,
A trust-region method with improved adaptive radius for systems of nonlinear equations,
A binary search algorithm for univariate data approximation and estimation of extrema by piecewise monotonic constraints,
Derivative-free method based on DFP updating formula for solving convex constrained nonlinear monotone equations and application,
Microbial community decision making models in batch and chemostat cultures,
Two sufficient descent three-term conjugate gradient methods for unconstrained optimization problems with applications in compressive sensing,
A hybrid conjugate gradient based approach for solving unconstrained optimization and motion control problems,
Optimal scaling parameters for spectral conjugate gradient methods,
A unified derivative-free projection method model for large-scale nonlinear equations with convex constraints,
The regularization continuation method with an adaptive time step control for linearly constrained optimization problems,
Nonmonotone diagonally scaled limited-memory BFGS methods with application to compressive sensing based on a penalty model,
A hybrid quasi-Newton method with application in sparse recovery,
A family of iterative methods for computing Moore-Penrose inverse of a matrix,
Image restoration from noisy incomplete frequency data by alternative iteration scheme,
A Dai-Liao conjugate gradient method via modified secant equation for system of nonlinear equations,
A class of accelerated conjugate-gradient-like methods based on a modified secant equation,
A double parameter self-scaling memoryless BFGS method for unconstrained optimization,
A quadratic hybridization of Polak-Ribière-Polyak and Fletcher-Reeves conjugate gradient methods,
A modified scaled memoryless symmetric rank-one method,
Complex Golay pairs up to length 28: a search via computer algebra and programmatic SAT,
On the sufficient descent property of the Shanno's conjugate gradient method,
A perfect example for the BFGS method,
Descent Perry conjugate gradient methods for systems of monotone nonlinear equations,
Long range search for maximum likelihood in exponential families,
A globally convergent filter-type trust region method for semidefinite programming,
A brief introduction to manifold optimization,
Total variation image restoration method based on subspace optimization,
A nonmonotone weighting self-adaptive trust region algorithm for unconstrained nonconvex optimization,
A survey of gradient methods for solving nonlinear optimization,
A QSC method for fractional subdiffusion equations with fractional boundary conditions and its application in parameters identification,
Quasi-Newton methods for multiobjective optimization problems,
Asymptotic surrogate constraint method and its convergence for a class of semi-infinite programming,
A note on a multiplicative parameters gradient method,
A high-order modified Levenberg-Marquardt method for systems of nonlinear equations with fourth-order convergence,
A new derivative-free SCG-type projection method for nonlinear monotone equations with convex constraints,
A Dai-Liao conjugate gradient algorithm with clustering of eigenvalues,
Simplified iteratively regularized Gauss-Newton method in Banach spaces under a general source condition,
Two optimal Hager-Zhang conjugate gradient methods for solving monotone nonlinear equations,
A mixture of nuclear norm and matrix factorization for tensor completion,
Analysis of the maximum magnification by the scaled memoryless DFP updating formula with application to compressive sensing,
Hybridization of accelerated gradient descent method,
A modified ODE-based algorithm for unconstrained optimization problems,
A biobjective approach to recoverable robustness based on location planning,
A nonmonotone trust region method based on simple quadratic models,
A new supermemory gradient method for unconstrained optimization problems,
Uniqueness and numerical scheme for the Robin coefficient identification of the time-fractional diffusion equation,
A new spectral method for \(l_1\)-regularized minimization,
Nonmonotone adaptive trust region method based on simple conic model for unconstrained optimization,
Global convergence of a modified two-parameter scaled BFGS method with Yuan-Wei-Lu line search for unconstrained optimization,
Modified inexact Levenberg-Marquardt methods for solving nonlinear least squares problems,
Essential issues on solving optimal power flow problems using soft-computing,
Two-parameter scaled memoryless BFGS methods with a nonmonotone choice for the initial step length,
A derivative-free trust-region algorithm for composite nonsmooth optimization,
A nonmonotone PRP conjugate gradient method for solving square and under-determined systems of equations,
A brief survey of methods for solving nonlinear least-squares problems,
Riemannian conjugate gradient methods with inverse retraction,
A generalized worst-case complexity analysis for non-monotone line searches,
A family of Hager-Zhang conjugate gradient methods for system of monotone nonlinear equations,
Inexact proximal memoryless quasi-Newton methods based on the Broyden family for minimizing composite functions,
Gauss-Newton-type methods for bilevel optimization,
Explicit pseudo-transient continuation and the trust-region updating strategy for unconstrained optimization,
A new family of conjugate gradient methods for unconstrained optimization,
Two adaptive Dai-Liao nonlinear conjugate gradient methods,
Quadratic interpolation technique to minimize univariable fuzzy functions,
An augmented memoryless BFGS method based on a modified secant equation with application to compressed sensing,
A class of derivative-free CG projection methods for nonsmooth equations with an application to the LASSO problem,
A comparison of general-purpose optimization algorithms for finding optimal approximate experimental designs,
A novel diffeomorphic model for image registration and its algorithm,
Robust Schatten-\(p\) norm based approach for tensor completion,
Convergence analyses on sparse feedforward neural networks via group lasso regularization,
A new CG algorithm based on a scaled memoryless BFGS update with adaptive search strategy, and its application to large-scale unconstrained optimization problems,
A heuristic approach to combat multicollinearity in least trimmed squares regression analysis,
Numerical inversion of the fractional derivative index and surface thermal flux for an anomalous heat conduction model in a multi-layer medium,
An efficient Dai-Liao type conjugate gradient method by reformulating the CG parameter in the search direction equation,
Efficient inverse solvers for thermal tomography,
A stochastic trust region method for unconstrained optimization problems,
Two families of scaled three-term conjugate gradient methods with sufficient descent property for nonconvex optimization,
Generalized continuation Newton methods and the trust-region updating strategy for the underdetermined system,
Numerical construction of spherical \(t\)-designs by Barzilai-Borwein method,
Profit-based churn prediction based on minimax probability machines,
Simultaneous recovery of surface heat flux and thickness of a solid structure by ultrasonic measurements,
A derivative-free algorithm for spherically constrained optimization,
Enhanced Dai-Liao conjugate gradient methods for systems of monotone nonlinear equations,
Two descent Dai-Yuan conjugate gradient methods for systems of monotone nonlinear equations,
An improved nonmonotone adaptive trust region method,
Solving unconstrained optimization problems via hybrid CD-DY conjugate gradient methods with applications,
On \(R\)-linear convergence analysis for a class of gradient methods,
Consensus-based iterative learning of heterogeneous agents with application to distributed optimization,
Multi-modality image registration models and efficient algorithms,
A globally convergent BFGS method for symmetric nonlinear equations,
Stable Lévy diffusion and related model fitting,
A new hybrid conjugate gradient method for large-scale unconstrained optimization problem with non-convex objective function,
Diagonally scaled memoryless quasi-Newton methods with application to compressed sensing,
Convergence rate of the modified Levenberg-Marquardt method under Hölderian local error bound,
Efficient regularized Newton-type algorithm for solving convex optimization problem,
A nonmonotone scaled Fletcher-Reeves conjugate gradient method with application in image reconstruction,
Distributed reconstruction of time-varying graph signals via a modified Newton's method,
A primal-dual interior-point relaxation method with global and rapidly local convergence for nonlinear programs,
An inexact ADMM with proximal-indefinite term and larger stepsize,
Adaptive three-term family of conjugate residual methods for system of monotone nonlinear equations,
Structured diagonal Gauss-Newton method for nonlinear least squares,
Dai-Liao extensions of a descent hybrid nonlinear conjugate gradient method with application in signal processing,
A hybrid scaling parameter for the scaled memoryless BFGS method based on the ℓ∞ matrix norm,
Accelerated adaptive Perry conjugate gradient algorithms based on the self-scaling memoryless BFGS update,
Computing minimum norm solution of linear systems of equations by the generalized Newton method,
A new simple model trust-region method with generalized Barzilai-Borwein parameter for large-scale optimization,
Global convergence of a new sufficient descent spectral three-term conjugate gradient class for large-scale optimization,
A three-dimensional discrete model for approximating the deformation of a viral capsid subjected to lying over a flat surface in the static and time-dependent case,
A class of descent four-term extension of the Dai-Liao conjugate gradient method based on the scaled memoryless BFGS update,
A memory gradient method based on the nonmonotone technique,
An efficient inexact Newton-CG algorithm for the smallest enclosing ball problem of large dimensions,
The d-Level Nested Logit Model: Assortment and Price Optimization Problems,
Two optimal Dai–Liao conjugate gradient methods,
A new robust line search technique based on Chebyshev polynomials,
A class of smooth exact penalty function methods for optimization problems with orthogonality constraints,
Improved high-dimensional regression models with matrix approximations applied to the comparative case studies with support vector machines,
A new trust region–sequential quadratic programming approach for nonlinear systems based on nonlinear model predictive control,
A modified Hager-Zhang conjugate gradient method with optimal choices for solving monotone nonlinear equations,
Numerical research on the sensitivity of nonmonotone trust region algorithms to their parameters,
Minimizing a Symmetric Quasiconvex Function on a Two-Dimensional Lattice,
On the iteratively regularized Gauss-Newton method in Banach spaces with applications to parameter identification problems,
Global convergence and the Powell singular function,
A preconditioned descent algorithm for variational inequalities of the second kind involving the \(p\)-Laplacian operator,
Trace-penalty minimization for large-scale eigenspace computation,
A descent extension of the Polak-Ribière-Polyak conjugate gradient method,
Mollifier smoothing of \(C^0\)-Finsler structures,
A subspace version of the Wang-Yuan augmented Lagrangian-trust region method for equality constrained optimization,
Robust restricted Liu estimator in censored semiparametric linear models,
A quasi fractional order gradient descent method with adaptive stepsize and its application in system identification,
Robust time-domain output error method for identifying continuous-time systems with time delay,
Two penalized mixed-integer nonlinear programming approaches to tackle multicollinearity and outliers effects in linear regression models,
A derivative-free multivariate spectral projection algorithm for constrained nonlinear monotone equations,
Some iterative methods for the largest positive definite solution to a class of nonlinear matrix equation,
Modelling virus contact mechanics under atomic force imaging conditions,
A Stochastic Trust-Region Framework for Policy Optimization,
A modified conjugate gradient method based on the self-scaling memoryless BFGS update,
Maximum penalized likelihood estimation of additive hazards models with partly interval censoring,
A Dai-Liao-type projection method for monotone nonlinear equations and signal processing,
Convergence of a stabilized SQP method for equality constrained optimization,
A trust region method for solving multicriteria optimization problems on Riemannian manifolds,
The Fiedler Vector of a Laplacian Tensor for Hypergraph Partitioning,
Least-trimmed squares: asymptotic normality of robust estimator in semiparametric regression models,
A New Adaptive Conjugate Gradient Algorithm for Large-Scale Unconstrained Optimization,
A modified descent Polak-Ribière-Polyak conjugate gradient method with global convergence property for nonconvex functions,
A modified trust region method with Beale's PCG technique for optimization,
An exact penalty approach for optimization with nonnegative orthogonality constraints,
A descent extension of a modified Polak-Ribière-Polyak method with application in image restoration problem,
A New First-Order Algorithmic Framework for Optimization Problems with Orthogonality Constraints,
A proximal quasi-Newton method based on memoryless modified symmetric rank-one formula,
Optimal Control, Dichotomy, and Closed Range,
Linearized proximal algorithms with adaptive stepsizes for convex composite optimization with applications,
Feasible robust estimator in restricted semiparametric regression models based on the LTS approach,
AN ADAPTIVE CONJUGACY CONDITION AND RELATED NONLINEAR CONJUGATE GRADIENT METHODS,
An augmented Lagrangian approach with general constraints to solve nonlinear models of the large-scale reliable inventory systems,
Linearly structured quadratic model updating using partial incomplete eigendata,
Memory gradient method for multiobjective optimization,
Analysis on a superlinearly convergent augmented Lagrangian method,
A modified scaled memoryless BFGS preconditioned conjugate gradient method for unconstrained optimization,
An adaptive nonmonotone trust region algorithm,
Orthogonal canonical correlation analysis and applications,
Local convergence of quasi-Newton methods under metric regularity,
An improved inversion-free method for solving the matrix equation \(X + A^\ast X^{-{\alpha}}A = Q\),
Gradient methods for computing the Drazin-inverse solution,
Quantum circuit design for accurate simulation of qudit channels,
A New Dai-Liao Conjugate Gradient Method with Optimal Parameter Choice,
Low rank updates in preconditioning the saddle point systems arising from data assimilation problems,
An unconstrained optimization method using nonmonotone second order Goldstein's line search,
An infeasible interior-point algorithm with full-Newton step for linear optimization,
Linearized augmented Lagrangian and alternating direction methods for nuclear norm minimization,
Numerical multilinear algebra and its applications,
A new fractional Chebyshev FDM: an application for solving the fractional differential equations generated by optimisation problem,
Convergence analysis of simplified iteratively regularized Gauss–Newton method in a Banach space setting,
Fast Finite Difference Approximation for Identifying Parameters in a Two-dimensional Space-fractional Nonlocal Model with Variable Diffusivity Coefficients,
Successive unconstrained dual optimization method for rank-one approximation to tensors,
An ODE-based nonmonotone method for unconstrained optimization problems,
The Dai-Liao nonlinear conjugate gradient method with optimal parameter choices,
A parallel line search subspace correction method for composite convex optimization,
The Action Gambler and Equal-Sized Wagering,
A new method of moving asymptotes for large-scale unconstrained optimization,
On Convergence Rates of Linearized Proximal Algorithms for Convex Composite Optimization with Applications,
MATRIX ANALYSES ON THE DAI–LIAO CONJUGATE GRADIENT METHOD,
Computer Algebra and Line Search,
A descent hybrid modification of the Polak–Ribière–Polyak conjugate gradient method,
On solving a class of linear semi-infinite programming by SDP method,
MIN-MAX SOLUTIONS FOR PARAMETRIC CONTINUOUS STATIC GAME UNDER ROUGHNESS (PARAMETERS IN THE COST FUNCTION AND FEASIBLE REGION IS A ROUGH SET),
The modified Levenberg-Marquardt method for nonlinear equations with cubic convergence,
Applying Gröbner basis method to multiparametric polynomial nonlinear programming,
A modified BFGS type quasi-Newton method with line search for symmetric nonlinear equations problems,
NEW ADAPTIVE BARZILAI–BORWEIN STEP SIZE AND ITS APPLICATION IN SOLVING LARGE-SCALE OPTIMIZATION PROBLEMS,
An optimal parameter choice for the Dai-Liao family of conjugate gradient methods by avoiding a direction of the maximum magnification by the search direction matrix,
A novel self-adaptive trust region algorithm for unconstrained optimization,
A new hybrid algorithm for convex nonlinear unconstrained optimization,
Recovery of a Time-Dependent Bottom Topography Function from the Shallow Water Equations via an Adjoint Approach,
Limited memory BFGS method based on a high-order tensor model,
A self-adjusting conjugate gradient method with sufficient descent condition and conjugacy condition,
Exact Penalty Function for $\ell_{2,1}$ Norm Minimization over the Stiefel Manifold,
A modified two steps Levenberg-Marquardt method for nonlinear equations,
A wedge trust region method with self-correcting geometry for derivative-free optimization,
A nonmonotone scaled conjugate gradient algorithm for large-scale unconstrained optimization,
Two modified spectral conjugate gradient methods and their global convergence for unconstrained optimization,
Structure learning of Bayesian networks using global optimization with applications in data classification,
Correction of trust region method with a new modified Newton method