Linearly constrained nonsmooth optimization for training autoencoders
From MaRDI portal
Publication:5097018
Abstract: A regularized minimization model with \(\ell_1\)-norm penalty (RP) is introduced for training autoencoders, a class of two-layer neural networks. We show that the RP acts as an exact penalty model which shares the same global minimizers, local minimizers, and d(irectional)-stationary points with the original regularized model under mild conditions. We construct a bounded box region that contains at least one global minimizer of the RP, and propose a linearly constrained regularized minimization model with \(\ell_1\)-norm penalty (LRP) for training autoencoders. A smoothing proximal gradient algorithm is designed to solve the LRP. Convergence of the algorithm to a generalized d-stationary point of the RP and LRP is established. Comprehensive numerical experiments demonstrate the efficiency and robustness of the proposed algorithm.
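The smoothing proximal gradient idea described in the abstract can be sketched in a few lines: smooth the nonsmooth activation with a parameter \(\mu\) that is driven to zero, take a gradient step on the smoothed reconstruction loss, and handle the \(\ell_1\) penalty plus the box constraint with a proximal/projection step. The concrete two-layer autoencoder, the softplus-type smoothing of ReLU, the step size, the box radius `B`, and the \(\mu\)-schedule below are illustrative assumptions for a minimal sketch, not the paper's exact LRP formulation or its convergence-guaranteeing parameter choices.

```python
import numpy as np

def prox_l1_box(W, t, B):
    """Prox of t*||W||_1 plus the indicator of the box [-B, B]:
    elementwise soft-thresholding followed by clipping (valid because
    both terms are separable)."""
    return np.clip(np.sign(W) * np.maximum(np.abs(W) - t, 0.0), -B, B)

def smoothed_relu(Z, mu):
    """Softplus-type smoothing of max(z, 0); tends to ReLU as mu -> 0."""
    return mu * np.logaddexp(0.0, Z / mu)

def smoothed_relu_grad(Z, mu):
    """Derivative of the smoothed ReLU (a sigmoid in z/mu)."""
    return 1.0 / (1.0 + np.exp(-Z / mu))

def spg_autoencoder(X, hidden, lam=1e-3, B=10.0, mu0=1.0,
                    step=1e-2, iters=500, seed=0):
    """Smoothing proximal gradient sketch for an l1-penalized,
    box-constrained two-layer autoencoder (hypothetical setup).
    X is d x n (one sample per column)."""
    rng = np.random.default_rng(seed)
    d, n = X.shape
    W1 = 0.1 * rng.standard_normal((hidden, d))   # encoder weights
    W2 = 0.1 * rng.standard_normal((d, hidden))   # decoder weights
    mu = mu0
    for k in range(iters):
        Z = W1 @ X
        H = smoothed_relu(Z, mu)        # smoothed hidden activations
        R = W2 @ H - X                  # reconstruction residual
        # Gradients of the smoothed loss 0.5*||R||_F^2 / n:
        G2 = R @ H.T / n
        G1 = ((W2.T @ R / n) * smoothed_relu_grad(Z, mu)) @ X.T
        # Gradient step on the smoothed loss, then prox step for
        # the l1 penalty restricted to the box:
        W1 = prox_l1_box(W1 - step * G1, step * lam, B)
        W2 = prox_l1_box(W2 - step * G2, step * lam, B)
        mu = max(mu0 / (k + 2), 1e-6)   # drive the smoothing parameter down
    return W1, W2
```

A usage note: on a small data matrix, `spg_autoencoder(X, hidden=3)` returns sparse, box-feasible weight matrices; the decreasing \(\mu\)-schedule is what distinguishes this from a plain proximal gradient method on a fixed smooth surrogate.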
Recommendations
- Convergence analyses on sparse feedforward neural networks via group lasso regularization
- On constrained optimization with nonconvex regularization
- Smoothing neural network for \(L_0\) regularized optimization problem with general convex constraints
- Neural network for constrained nonsmooth optimization using Tikhonov regularization
- Global convergence analysis of sparse regular nonconvex optimization problems
Cites work
- scientific article; zbMATH DE number 46303
- scientific article; zbMATH DE number 1943822
- scientific article; zbMATH DE number 3103824
- A logical calculus of the ideas immanent in nervous activity
- Adaptive subgradient methods for online learning and stochastic optimization
- Auto-association by multilayer perceptrons and singular value decomposition
- Deep learning
- Local linear convergence of the alternating direction method of multipliers on quadratic or linear programs
- Multicomposite nonconvex optimization for training deep neural networks
- Optimization for deep learning: an overview
- Penalty methods for a class of non-Lipschitz optimization problems
- Reducing the Dimensionality of Data with Neural Networks
- SGDLibrary: a MATLAB library for stochastic optimization algorithms
- Semismooth and Semiconvex Functions in Constrained Optimization
- Smoothing methods for nonsmooth, nonconvex minimization
- The subdifferential of measurable composite max integrands and smoothing approximation