A Note on the Regularity of Images Generated by Convolutional Neural Networks

From MaRDI portal

DOI: 10.1137/22M1525995 · zbMATH Open: 1521.65045 · arXiv: 2204.10588 · OpenAlex: W4385071157 · MaRDI QID: Q6136232

Andreas Habring, Martin Holler

Publication date: 29 August 2023

Published in: SIAM Journal on Mathematics of Data Science

Abstract: The regularity of images generated by convolutional neural networks, such as the U-net, generative networks, or the deep image prior, is analyzed. In a resolution-independent, infinite-dimensional setting, it is shown that such images, represented as functions, are always continuous and, in some circumstances, even continuously differentiable, contradicting the widely accepted modeling of sharp edges in images via jump discontinuities. While such statements require an infinite-dimensional setting, the connection to (discretized) neural networks used in practice is made by considering the limit as the resolution approaches infinity. As a practical consequence, the results of this paper in particular provide analytical evidence that basic L2 regularization of network weights may lead to over-smoothed outputs.


Full work available at URL: https://arxiv.org/abs/2204.10588







Cited In (3)







