Abstract: The aim of this paper is twofold. First, we show that a certain concatenation of a proximity operator with an affine operator is again a proximity operator on a suitable Hilbert space. Second, we use our findings to establish so-called proximal neural networks (PNNs) and stable tight frame proximal neural networks. Let \(\mathcal{H}\) and \(\mathcal{K}\) be real Hilbert spaces, and let \(T:\mathcal{H}\to\mathcal{K}\) be a bounded linear operator with closed range and Moore-Penrose inverse \(T^\dagger\). Based on the well-known characterization of proximity operators by Moreau, we prove that for any proximity operator \(\mathrm{Prox}:\mathcal{K}\to\mathcal{K}\) the operator \(T^\dagger\,\mathrm{Prox}\,T\) is a proximity operator on \(\mathcal{H}\) equipped with a suitable norm. In particular, it follows for the frequently applied soft shrinkage operator \(S_\lambda\) and any frame analysis operator \(T\) that the frame shrinkage operator \(T^\dagger S_\lambda T\) is a proximity operator on a suitable Hilbert space. The concatenation of proximity operators on \(\mathbb{R}^d\) equipped with different norms establishes a PNN. If the network arises from tight frame analysis or synthesis operators, then it forms an averaged operator. Hence, it has Lipschitz constant 1 and belongs to the class of so-called Lipschitz networks, which were recently applied to defend against adversarial attacks. Moreover, due to their averaging property, PNNs can be used within so-called Plug-and-Play algorithms with convergence guarantees. In the case of Parseval frames, we call the networks Parseval proximal neural networks (PPNNs). Then the involved linear operators lie in a Stiefel manifold, and corresponding minimization methods can be applied for training. Finally, some proof-of-concept examples demonstrate the performance of PPNNs.
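The frame shrinkage operator \(T^\dagger S_\lambda T\) from the abstract can be sketched numerically. The following is a minimal illustration (not the authors' code), assuming a Parseval frame analysis operator \(T\) (so \(T^\top T = I\) and \(T^\dagger = T^\top\)), built here from a QR factorization of a random matrix; it checks the nonexpansiveness (Lipschitz constant 1) that the tight-frame case guarantees.

```python
# Sketch of the frame shrinkage operator T† S_lambda T for a Parseval frame.
# Hypothetical illustration of the paper's construction, not its implementation.
import numpy as np

def soft_shrink(x, lam):
    """Soft shrinkage S_lambda, the proximity operator of lam * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

rng = np.random.default_rng(0)

# Parseval frame analysis operator T: R^4 -> R^8.  The reduced QR
# factorization yields orthonormal columns, hence T^T T = I_4 and T† = T^T.
A = rng.standard_normal((8, 4))
T, _ = np.linalg.qr(A)

def frame_shrink(x, lam):
    """Frame shrinkage T† S_lambda T; for a Parseval frame, T† = T^T."""
    return T.T @ soft_shrink(T @ x, lam)

# The paper shows this operator is a proximity operator on a suitable
# Hilbert space; in the tight-frame case it is averaged, hence 1-Lipschitz.
x, y = rng.standard_normal(4), rng.standard_normal(4)
lhs = np.linalg.norm(frame_shrink(x, 0.1) - frame_shrink(y, 0.1))
rhs = np.linalg.norm(x - y)
assert lhs <= rhs + 1e-12  # nonexpansiveness holds numerically
```

Stacking several such layers, each with its own Stiefel-manifold matrix \(T\) and shrinkage parameters, gives the PPNN architecture described above.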
Recommendations
- Deep neural network structures solving variational inequalities
- NETT: solving inverse problems with deep neural networks
- Nonlinear Power Method for Computing Eigenvectors of Proximal Operators and Neural Networks
- Proximal splitting methods in signal processing
- Proximity for sums of composite functions
Cites work
- scientific article; zbMATH DE number 6159604
- scientific article; zbMATH DE number 5223994
- A Convergent Image Fusion Algorithm Using Scene-Adapted Gaussian-Mixture-Based Denoising
- A multiscale wavelet-inspired scheme for nonlinear diffusion
- A feasible method for optimization with orthogonality constraints
- An introduction to frames and Riesz bases
- An iterative thresholding algorithm for linear inverse problems with a sparsity constraint
- Convex analysis and monotone operator theory in Hilbert spaces
- First order algorithms in variational image processing
- First-order methods in optimization
- Incremental proximal methods for large scale convex optimization
- Monotone operator theory in convex optimization
- On the Equivalence of Soft Wavelet Shrinkage, Total Variation Diffusion, Total Variation Regularization, and SIDEs
- On the rotational invariant \(L_1\)-norm PCA
- Operator splittings, Bregman methods and frame shrinkage in image processing
- Proximité et dualité dans un espace hilbertien
- Signal Recovery by Proximal Forward-Backward Splitting
- Variable metric forward-backward algorithm for minimizing the sum of a differentiable function and a convex function
- Weak convergence theorems for nonexpansive mappings in Banach spaces
Cited in (21)
- Variational models for signal processing with graph neural networks
- Designing stable neural networks using convex analysis and ODEs
- Inertial stochastic PALM and applications in machine learning
- Motion detection in diffraction tomography by common circle methods
- Approximation of Lipschitz Functions Using Deep Spline Neural Networks
- Deep solution operators for variational inequalities via proximal neural networks
- Convergence of deep convolutional neural networks
- NESTANets: stable, accurate and efficient neural networks for analysis-sparse inverse problems
- On \(\alpha\)-firmly nonexpansive operators in \(r\)-uniformly convex spaces
- Sparse additive function decompositions facing basis transforms
- Convolutional proximal neural networks and plug-and-play algorithms
- Proximal neural networks and stochastic normalizing flows for inverse problems
- A Bregman stochastic method for nonconvex nonsmooth problem beyond global Lipschitz gradient continuity
- Compressive sensing and neural networks from a statistical learning perspective
- Frame soft shrinkage operators are proximity operators
- Neural-network-based regularization methods for inverse problems in imaging
- Stabilizing Invertible Neural Networks Using Mixture Models
- Robustness and exploration of variational and machine learning approaches to inverse problems: an overview
- Designing rotationally invariant neural networks from PDEs and variational methods
- Stochastic Gauss-Seidel type inertial proximal alternating linearized minimization and its application to proximal neural networks
- Learning weakly convex regularizers for convergent image-reconstruction algorithms
This page was built for publication: Parseval proximal neural networks
MaRDI item Q785901