Stop Memorizing: A Data-Dependent Regularization Framework for Intrinsic Pattern Learning

From MaRDI portal
Publication:5025786

DOI: 10.1137/19M1236886
zbMATH Open: 1490.68195
arXiv: 1805.07291
OpenAlex: W2970812740
Wikidata: Q127280269
Scholia: Q127280269
MaRDI QID: Q5025786
FDO: Q5025786


Authors: Wei Zhu, Qiang Qiu, Bao Wang, Guillermo Sapiro, Ingrid Daubechies, Jianfeng Lu


Publication date: 3 February 2022

Published in: SIAM Journal on Mathematics of Data Science

Abstract: Deep neural networks (DNNs) typically have enough capacity to fit random data by brute force, even when conventional data-dependent regularization schemes focusing on the geometry of the features are imposed. We find that the reason for this is the inconsistency between the enforced geometry and the standard softmax cross-entropy loss. To resolve this, we propose a new framework for data-dependent DNN regularization, the Geometrically-Regularized-Self-Validating neural Network (GRSVNet). During training, the geometry enforced on one batch of features is simultaneously validated on a separate batch using a validation loss consistent with that geometry. We study a particular case of GRSVNet, the Orthogonal-Low-rank Embedding (OLE)-GRSVNet, which produces highly discriminative features residing in orthogonal low-rank subspaces. Numerical experiments show that OLE-GRSVNet outperforms DNNs with conventional regularization when trained on real data. More importantly, unlike conventional DNNs, OLE-GRSVNet refuses to memorize random data or random labels, suggesting that it learns only intrinsic patterns by reducing the memorizing capacity of the baseline DNN.
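The OLE geometry mentioned in the abstract penalizes the sum of per-class nuclear norms relative to the nuclear norm of the whole feature batch, which is small exactly when the class features span low-rank, mutually orthogonal subspaces. A minimal NumPy sketch of such a loss (an illustrative reconstruction, not the authors' implementation; the function names are hypothetical):

```python
import numpy as np

def nuclear_norm(X):
    # Nuclear norm = sum of singular values of X.
    return np.linalg.svd(X, compute_uv=False).sum()

def ole_loss(features, labels):
    """OLE-style geometric loss on a batch of feature rows.

    Hypothetical sketch: per-class nuclear norms minus the nuclear
    norm of the full batch. The difference is zero when class
    subspaces are orthogonal, and positive when they overlap.
    """
    total = nuclear_norm(features)
    per_class = sum(nuclear_norm(features[labels == c])
                    for c in np.unique(labels))
    return max(per_class - total, 0.0)

# Two classes along orthogonal axes: loss is zero.
orth = np.array([[1.0, 0.0], [0.0, 1.0]])
print(ole_loss(orth, np.array([0, 1])))      # ~0.0

# Two classes along the same axis: loss is positive.
collinear = np.array([[1.0, 0.0], [1.0, 0.0]])
print(ole_loss(collinear, np.array([0, 1]))) # > 0
```

In the GRSVNet framework described above, a loss of this kind would be enforced on one batch and validated on a separate batch with a geometry-consistent validation loss, rather than combined directly with softmax cross-entropy.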


Full work available at URL: https://arxiv.org/abs/1805.07291





