Linearly constrained nonsmooth optimization for training autoencoders

Publication: 5097018

DOI: 10.1137/21M1408713 · zbMATH Open: 1497.90155 · arXiv: 2103.16232 · OpenAlex: W3150868911 · MaRDI QID: Q5097018


Authors: Xin Liu, Xiaojun Chen, Wei Liu


Publication date: 19 August 2022

Published in: SIAM Journal on Optimization

Abstract: A regularized minimization model with ℓ1-norm penalty (RP) is introduced for training autoencoders, a class of two-layer neural networks. We show that, under mild conditions, the RP acts as an exact penalty model: it shares the same global minimizers, local minimizers, and d(irectional)-stationary points with the original regularized model. We construct a bounded box region that contains at least one global minimizer of the RP, and propose a linearly constrained regularized minimization model with ℓ1-norm penalty (LRP) for training autoencoders. A smoothing proximal gradient algorithm is designed to solve the LRP, and its convergence to a generalized d-stationary point of the RP and LRP is established. Comprehensive numerical experiments illustrate the efficiency and robustness of the proposed algorithm.
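For readers unfamiliar with the algorithmic family named in the abstract: the paper's smoothing proximal gradient method is built on the classic proximal gradient iteration, whose key ingredient for an ℓ1-norm penalty is the soft-thresholding (shrinkage) operator. The sketch below illustrates that building block on a generic least-squares toy problem, min f(x) + λ‖x‖₁ with f smooth; it is not the paper's LRP model, and the function names, step-size rule, and test data are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1: componentwise shrinkage toward zero.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(grad_f, x0, lam, step, iters=500):
    # Minimize f(x) + lam * ||x||_1: a gradient step on the smooth part f,
    # followed by the l1 prox (soft-thresholding). Illustrative sketch only.
    x = x0
    for _ in range(iters):
        x = soft_threshold(x - step * grad_f(x), step * lam)
    return x

# Toy smooth part: f(x) = 0.5 * ||A x - b||^2, so grad f(x) = A^T (A x - b).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
x_true = np.array([1.0, -2.0] + [0.0] * 8)   # sparse ground truth
b = A @ x_true
step = 1.0 / np.linalg.norm(A.T @ A, 2)       # 1/L with L the Lipschitz constant
x_hat = proximal_gradient(lambda x: A.T @ (A @ x - b),
                          np.zeros(10), lam=0.1, step=step)
```

The ℓ1 prox is what produces exactly-zero entries in `x_hat`; the paper additionally smooths the nonsmooth activation terms so that such prox steps can be combined with convergence guarantees to generalized d-stationary points.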


Full work available at URL: https://arxiv.org/abs/2103.16232










Cited In (1)





This page was built for publication: Linearly constrained nonsmooth optimization for training autoencoders
