CIFAR
From MaRDI portal
Cited in
(only showing first 100 items)
- PAC-Bayesian framework based drop-path method for 2D discriminative convolutional network pruning
- Adversarial noise attacks of deep learning architectures: stability analysis via sparse-modeled signals
- LSALSA: accelerated source separation via learned sparse coding
- Sparse deep neural networks using \(L_{1,\infty}\)-weight normalization
- An iterative stacked weighted auto-encoder
- Hyperband: a novel bandit-based approach to hyperparameter optimization
- GADE: a generative adversarial approach to density estimation and its applications
- An inertial Newton algorithm for deep learning
- A spin glass model for the loss surfaces of generative adversarial networks
- Confident learning: estimating uncertainty in dataset labels
- Nonparametric guidance of autoencoder representations using label information
- Noise Robust Projection Rule for Klein Hopfield Neural Networks
- A mathematical motivation for complex-valued convolutional networks
- Global optimization based on active preference learning with radial basis functions
- An adaptive Polyak heavy-ball method
- Machine unlearning: linear filtration for logit-based classifiers
- Stabilize deep ResNet with a sharp scaling factor \(\tau\)
- Towards harnessing feature embedding for robust learning with noisy labels
- Recurrence of optimum for training weight and activation quantized networks
- Ada-boundary: accelerating DNN training via adaptive boundary batch selection
- Deep learning of CMB radiation temperature
- A comparative study of data-dependent approaches without learning in measuring similarities of data objects
- Sparse kernel deep stacking networks
- Drop-activation: implicit parameter reduction and harmonious regularization
- Understanding image representations by measuring their equivariance and equivalence
- Generative adversarial networks with joint distribution moment matching
- Qsun
- Distributed Bayesian learning with stochastic natural gradient expectation propagation and the posterior server
- Binary quantized network training with sharpness-aware minimization
- Analysis of convolutional neural network image classifiers in a hierarchical max-pooling model with additional local pooling
- Mean Field Analysis of Deep Neural Networks
- Moderate deviation and restricted equivalence functions for measuring similarity between data
- Train Like a (Var)Pro: Efficient Training of Neural Networks with Variable Projection
- Loss-sensitive generative adversarial networks on Lipschitz densities
- MgNet: a unified framework of multigrid and convolutional neural network
- Efficient and sparse neural networks by pruning weights in a multiobjective learning approach
- A tale of three probabilistic families: discriminative, descriptive, and generative models
- Stochastic nested variance reduction for nonconvex optimization
- scientific article; zbMATH DE number 7626736
- MINRES: from negative curvature detection to monotonicity properties
- Quaternion-valued recurrent projection neural networks on unit quaternions
- Discriminative clustering with representation learning with any ratio of labeled to unlabeled data
- Use of static surrogates in hyperparameter optimization
- Joint optimization of an autoencoder for clustering and embedding
- scientific article; zbMATH DE number 7306906
- On data preconditioning for regularized loss minimization
- Multiobjective Tree-Structured Parzen Estimator
- Quasi-Newton methods for machine learning: forget the past, just sample
- \((1 + \varepsilon)\)-class classification: an anomaly detection method for highly imbalanced or incomplete data sets
- On the antiderivatives of \(x^p/(1 - x)\) with an application to optimize loss functions for classification with neural networks
- Black-box adversarial attacks by manipulating image attributes
- Pruning deep convolutional neural networks architectures with evolution strategy
- Splicing learning: a novel few-shot learning approach
- Manifold-based synthetic oversampling with manifold conformance estimation
- scientific article; zbMATH DE number 7306873
- scientific article; zbMATH DE number 7306897
- ESAE: Evolutionary Strategy-Based Architecture Evolution
- Efficient Evolutionary Neural Architecture Search (NAS) by Modular Inheritable Crossover
- Effect of depth and width on local minima in deep learning
- Distributed Deep Learning on Heterogeneous Computing Resources Using Gossip Communication
- A Derivative-Free Method for Structured Optimization Problems
- Deep neural networks motivated by partial differential equations
- Residual networks as flows of diffeomorphisms
- The relative performance of ensemble methods with deep convolutional neural networks for image classification
- An exact penalty approach for optimization with nonnegative orthogonality constraints
- Regularisation of neural networks by enforcing Lipschitz continuity
- Fractional spectral graph wavelets and their applications
- Understanding generalization error of SGD in nonconvex optimization
- Theoretical investigation of generalization bounds for adversarial learning of deep neural networks
- scientific article; zbMATH DE number 7387621
- Linear feature transform and enhancement of classification on deep neural network
- Dictionary learning for fast classification based on soft-thresholding
- Supervised t-Distributed Stochastic Neighbor Embedding for Data Visualization and Classification
- Deep convolutional neural networks for image classification: a comprehensive review
- Probabilistic line searches for stochastic optimization
- Resolving learning rates adaptively by locating stochastic non-negative associated gradient projection points using line searches
- Unsupervised domain adaptation in the wild via disentangling representation learning
- Deep relaxation: partial differential equations for optimizing deep neural networks
- On Kernel Method–Based Connectionist Models and Supervised Deep Learning Without Backpropagation
- Mini-Batch Metropolis–Hastings With Reversible SGLD Proposal
- Analysis of classifiers' robustness to adversarial perturbations
- Search direction correction with normalized gradient makes first-order methods faster
- scientific article; zbMATH DE number 7370631
- scientific article; zbMATH DE number 7370598
- Extreme value theory for anomaly detection -- the GPD classifier
- A new initialization method based on normed statistical spaces in deep networks
- A game-based approximate verification of deep neural networks with provable guarantees
- Towards understanding sparse filtering: a theoretical perspective
- Deformable classifiers
- Improved ArtGAN for Conditional Synthesis of Natural Image and Artwork
- scientific article; zbMATH DE number 7625162
- Might a Single Neuron Solve Interesting Machine Learning Problems Through Successive Computations on Its Dendritic Tree?
- Graph interpolating activation improves both natural and robust accuracies in data-efficient deep learning
- Learning in the machine: random backpropagation and the deep learning channel
- Deep learning: an introduction for applied mathematicians
- Equilibrium and non-equilibrium regimes in the learning of restricted Boltzmann machines
- Relative stability toward diffeomorphisms indicates performance in deep nets
- ForestDSH: a universal hash design for discrete probability distributions
- Gaussian-spherical restricted Boltzmann machines
- CleanNet
This page was built for software: CIFAR