Training recurrent neural networks by sequential least squares and the alternating direction method of multipliers
DOI: 10.1016/j.automatica.2023.111183
zbMATH Open: 1520.93212
arXiv: 2112.15348
OpenAlex: W4385128036
MaRDI QID: Q6136124
FDO: Q6136124
Publication date: 28 August 2023
Published in: Automatica
Full work available at URL: https://arxiv.org/abs/2112.15348
Keywords: alternating direction method of multipliers; nonlinear system identification; Levenberg-Marquardt algorithm; recurrent neural networks; nonlinear least-squares; generalized Gauss-Newton methods; non-smooth loss functions
MSC classification: Artificial neural networks and deep learning (68T07); Nonlinear systems in control theory (93C10); Identification in stochastic control theory (93E12); Least squares and related methods for stochastic control systems (93E24)
Cites Work
- CasADi: a software framework for nonlinear optimization and optimal control
- PSwarm: a hybrid solver for linearly constrained global derivative-free optimization
- Model Selection and Estimation in Regression with Grouped Variables
- Convergence Analysis of Alternating Direction Method of Multipliers for a Family of Nonconvex Problems
- An Algorithm for Least-Squares Estimation of Nonlinear Parameters
- A method for the solution of certain non-linear problems in least squares
- Identification of Hammerstein systems without explicit parameterisation of non-linearity
- Global convergence of ADMM in nonconvex nonsmooth optimization
- An augmented Lagrangian based algorithm for distributed nonconvex optimization
- A BFGS-SQP method for nonsmooth, nonconvex, constrained optimization and its evaluation using relative minimization profiles
- Douglas-Rachford Splitting and ADMM for Nonconvex Optimization: Tight Convergence Results
- A simple effective heuristic for embedded mixed-integer quadratic programming
- On the smoothness of nonlinear system identification
- Learning nonlinear state-space models using autoencoders
- Recurrent Neural Network Training With Convex Loss and Regularization Functions by Extended Kalman Filtering
- Survey of sequential convex programming and generalized Gauss-Newton methods
- Variable Elimination in Model Predictive Control Based on K-SVD and QR Factorization
Cited In (3)