MNIST
From MaRDI portal
Software: 24783
swMATH: 12859 · MaRDI QID: Q24783 · FDO: Q24783
Author name not available
Cited In (only showing first 100 items)
- Generative adversarial networks with decoder-encoder output noises
- Training of deep neural networks for the generation of dynamic movement primitives
- Vulnerability of classifiers to evolutionary generated adversarial examples
- Verification of piecewise deep neural networks: a star set approach with zonotope pre-filter
- Adversarial noise attacks of deep learning architectures: stability analysis via sparse-modeled signals
- Title not available
- The loss surfaces of neural networks with general activation functions
- Ensemble Kalman inversion: a derivative-free technique for machine learning tasks
- Simplified energy landscape for modularity using total variation
- Development of an algorithm for reconstruction of droplet history based on deposition pattern using computational fluid dynamics and convolutional neural network
- On recovery guarantees for one-bit compressed sensing on manifolds
- Convergence of stochastic gradient descent in deep neural network
- IQN: an incremental quasi-Newton method with local superlinear convergence rate
- Local regularization of noisy point clouds: improved global geometric estimates and data analysis
- Title not available
- A dimension reduction technique for large-scale structured sparse optimization problems with application to convex clustering
- Kernel flows: from learning kernels from data into the abyss
- Graph interpolating activation improves both natural and robust accuracies in data-efficient deep learning
- Convex programming based spectral clustering
- Title not available
- Cautious active clustering
- Efficient set-valued prediction in multi-class classification
- Scaling description of generalization with number of parameters in deep learning
- Data-driven algorithm selection and tuning in optimization and signal processing
- A stochastic subgradient method for distributionally robust non-convex and non-smooth learning
- An implicit memory-based method for supervised pattern recognition
- Convex optimization learning of faithful Euclidean distance representations in nonlinear dimensionality reduction
- MODES: model-based optimization on distributed embedded systems
- Quick-means: accelerating inference for K-means by learning fast transforms
- SPEED: secure, private, and efficient deep learning
- Evaluation metrics for conditional image generation
- Consensus guided incomplete multi-view spectral clustering
- MimicGAN: robust projection onto image manifolds with corruption mimicking
- EGC: entropy-based gradient compression for distributed deep learning
- Interpretable machine learning: fundamental principles and 10 grand challenges
- Hybrid tensor decomposition in neural network compression
- Neural networks and deep learning. A textbook
- Mini-batch learning of exponential family finite mixture models
- Landscape and training regimes in deep learning
- Optimization of neural network training for image recognition based on trigonometric polynomial approximation
- A comprehensive survey and analysis of generative models in machine learning
- Learning context-dependent choice functions
- Bayesian distillation of deep learning models
- Risk bounds for the majority vote: from a PAC-Bayesian analysis to a learning algorithm
- Nonlinearly preconditioned optimization on Grassmann manifolds for computing approximate Tucker tensor decompositions
- Joint dimensionality reduction and metric learning for image set classification
- Search for the global extremum using the correlation indicator for neural networks supervised learning
- Exact imposition of boundary conditions with distance functions in physics-informed deep neural networks
- Double data piling leads to perfect classification
- Topological measurement of deep neural networks using persistent homology
- Bayesian Imaging with Data-Driven Priors Encoded by Neural Networks
- Scheduled restart momentum for accelerated stochastic gradient descent
- A weight initialization based on the linear product structure for neural networks
- Fully hyperbolic convolutional neural networks
- How does momentum benefit deep neural networks architecture design? A few case studies
- Dualize, split, randomize: toward fast nonsmooth optimization algorithms
- Solution of physics-based Bayesian inverse problems with deep generative priors
- Towards out of distribution generalization for problems in mechanics
- Echo state network activation function based on bistable stochastic resonance
- Disentangled Representation Learning and Generation With Manifold Optimization
- Approximating morphological operators with part-based representations learned by asymmetric auto-encoders
- Modified Cheeger and ratio cut methods using the Ginzburg–Landau functional for classification of high-dimensional data
- Multidimensional scaling of noisy high dimensional data
- Variance-based single-call proximal extragradient algorithms for stochastic mixed variational inequalities
- Tropical coordinates on the space of persistence barcodes
- Probably correct \(k\)-nearest neighbor search in high dimensions
- Reachable sets of classifiers and regression models: (non-)robustness analysis and robust training
- Drop-activation: implicit parameter reduction and harmonious regularization
- Deep learning model selection of suboptimal complexity
- Enhanced ensemble-based classifier with boosting for pattern recognition
- An improvement of the parameterized frequent directions algorithm
- A unifying framework of synaptic and intrinsic plasticity in neural populations
- Second-order stochastic optimization for machine learning in linear time
- Fast and Robust Learning by Reinforcement Signals: Explorations in the Insect Brain
- Uncertainty quantification in graph-based classification of high dimensional data
- High-dimensional dynamics of generalization error in neural networks
- Sequential changepoint detection in neural networks with checkpoints
- Hypocoercivity properties of adaptive Langevin dynamics
- Acceleration of hierarchical Bayesian network based cortical models on multicore architectures
- Title not available
- Embedding sample points uncertainty measures in learning algorithms
- Title not available
- Improved ArtGAN for Conditional Synthesis of Natural Image and Artwork
- Approximate kernel partial least squares
- A random matrix approach to neural networks
- On the working set selection in gradient projection-based decomposition techniques for support vector machines
- Accurately computing the log-sum-exp and softmax functions
- A convolutional recursive modified self organizing map for handwritten digits recognition
- Global binary optimization on graphs for classification of high-dimensional data
- Title not available
- A hybrid objective function for robustness of artificial neural networks -- estimation of parameters in a mechanical system
- Subset selection for visualization of relevant image fractions for deep learning based semantic image segmentation
- Stimulus space complexity determines the ratio of specialist and generalist neurons during pattern recognition
- Regularized greedy column subset selection
- Online multikernel learning based on a triple-norm regularizer for semantic image classification
- Spectral clustering with local projection distance measurement
- A stochastic semismooth Newton method for nonsmooth nonconvex optimization
- Analysis of the IJCNN 2007 agnostic learning vs. prior knowledge challenge
- Multilayer in-place learning networks for modeling functional layers in the laminar cortex
- Plug-and-play dual-tree algorithm runtime analysis