Linearly constrained nonsmooth optimization for training autoencoders
From MaRDI portal
Publication: 5097018
DOI: 10.1137/21M1408713
zbMATH Open: 1497.90155
arXiv: 2103.16232
OpenAlex: W3150868911
MaRDI QID: Q5097018
FDO: Q5097018
Authors: Xin Liu, Xiaojun Chen, Wei Liu
Publication date: 19 August 2022
Published in: SIAM Journal on Optimization
Abstract: A regularized minimization model with \(\ell_1\)-norm penalty (RP) is introduced for training autoencoders that belong to a class of two-layer neural networks. We show that the RP can act as an exact penalty model which shares the same global minimizers, local minimizers, and d(irectional)-stationary points with the original regularized model under mild conditions. We construct a bounded box region that contains at least one global minimizer of the RP, and propose a linearly constrained regularized minimization model with \(\ell_1\)-norm penalty (LRP) for training autoencoders. A smoothing proximal gradient algorithm is designed to solve the LRP. Convergence of the algorithm to a generalized d-stationary point of the RP and LRP is established. Comprehensive numerical experiments illustrate the efficiency and robustness of the proposed algorithm.
Full work available at URL: https://arxiv.org/abs/2103.16232
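The abstract's smoothing proximal gradient algorithm builds on the standard proximal gradient iteration, in which the proximal operator of the \(\ell_1\)-norm reduces to componentwise soft-thresholding. The following is a minimal illustrative sketch of that generic building block on an \(\ell_1\)-regularized least-squares instance, not the paper's actual algorithm (which handles a nonsmooth autoencoder loss via smoothing and linear constraints); all function names and parameters here are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||.||_1: componentwise shrinkage toward zero.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_grad_l1(A, b, lam, eta=None, iters=500):
    """Proximal gradient for min_x 0.5*||A x - b||^2 + lam * ||x||_1.

    Illustrative sketch only: a fixed step size eta = 1/L, where
    L = ||A||_2^2 is the Lipschitz constant of the smooth part's gradient.
    """
    if eta is None:
        eta = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)          # gradient of the smooth term
        x = soft_threshold(x - eta * grad, eta * lam)  # prox step
    return x
```

For example, with `A` the identity the method converges to `soft_threshold(b, lam)`, the closed-form solution of the separable problem; in the paper's setting the smooth term is replaced by a smoothing approximation of the nonsmooth training loss, updated along with the iterates.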
Recommendations
- Convergence analyses on sparse feedforward neural networks via group lasso regularization
- On constrained optimization with nonconvex regularization
- Smoothing neural network for \(L_0\) regularized optimization problem with general convex constraints
- Neural network for constrained nonsmooth optimization using Tikhonov regularization
- Global convergence analysis of sparse regular nonconvex optimization problems
Cites Work
- SGDLibrary: a MATLAB library for stochastic optimization algorithms
- Adaptive subgradient methods for online learning and stochastic optimization
- Reducing the Dimensionality of Data with Neural Networks
- Deep learning
- Smoothing methods for nonsmooth, nonconvex minimization
- Semismooth and Semiconvex Functions in Constrained Optimization
- Local linear convergence of the alternating direction method of multipliers on quadratic or linear programs
- A logical calculus of the ideas immanent in nervous activity
- Penalty methods for a class of non-Lipschitz optimization problems
- Auto-association by multilayer perceptrons and singular value decomposition
- The subdifferential of measurable composite max integrands and smoothing approximation
- Multicomposite nonconvex optimization for training deep neural networks
- Optimization for deep learning: an overview
Cited In (1)