Reducing the Dimensionality of Data with Neural Networks

From MaRDI portal
Revision as of 21:50, 3 February 2024

Publication:3101410

DOI: 10.1126/science.1127647
zbMath: 1226.68083
OpenAlex: W2100495367
Wikidata: Q31050179
Scholia: Q31050179
MaRDI QID: Q3101410

Ruslan R. Salakhutdinov, Geoffrey E. Hinton

Publication date: 28 November 2011

Published in: Science

Full work available at URL: https://semanticscholar.org/paper/213d7af7107fa4921eb0adea82c9f711fd105232
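The publication this page records showed that deep autoencoder networks, pretrained with restricted Boltzmann machines, can learn low-dimensional codes that reconstruct high-dimensional data better than PCA. As a rough illustration of the underlying idea only, the sketch below trains a minimal single-layer linear autoencoder with plain gradient descent on synthetic data; it is not the paper's RBM-pretrained deep architecture, and the data, dimensions, and learning rate are illustrative assumptions.

```python
import numpy as np

# Minimal linear autoencoder trained by plain gradient descent.
# Illustrative only: the paper's model is a deep, RBM-pretrained autoencoder.
rng = np.random.default_rng(0)

# Synthetic data: 200 points in R^10 lying near a 2-D subspace (plus noise).
basis = rng.standard_normal((2, 10))
X = rng.standard_normal((200, 2)) @ basis + 0.01 * rng.standard_normal((200, 10))

n, d, k = X.shape[0], 10, 2                 # samples, input dim, bottleneck dim
W_enc = 0.1 * rng.standard_normal((d, k))   # encoder weights
W_dec = 0.1 * rng.standard_normal((k, d))   # decoder weights

def loss(W_enc, W_dec):
    residual = X @ W_enc @ W_dec - X        # reconstruction error
    return float((residual ** 2).mean())

initial = loss(W_enc, W_dec)
lr = 0.02
for _ in range(1000):
    H = X @ W_enc                           # low-dimensional codes
    R = H @ W_dec - X                       # reconstruction residual
    # Gradients of the mean squared reconstruction error.
    g_dec = 2.0 * H.T @ R / (n * d)
    g_enc = 2.0 * X.T @ (R @ W_dec.T) / (n * d)
    W_enc -= lr * g_enc
    W_dec -= lr * g_dec
final = loss(W_enc, W_dec)
print(initial, "->", final)                 # reconstruction error shrinks
```

Because the data truly lie near a 2-D subspace, a 2-unit bottleneck suffices here; the paper's contribution was making this kind of training work for much deeper, nonlinear networks via layer-wise pretraining.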




Related Items (only showing first 100 items)

Research on three-step accelerated gradient algorithm in deep learning
Parametric UMAP Embeddings for Representation and Semisupervised Learning
Effects of depth, width, and initialization: A convergence analysis of layer-wise training for deep linear neural networks
Symplectic Model Reduction of Hamiltonian Systems on Nonlinear Manifolds and Approximation with Weakly Symplectic Autoencoder
Approximate and discrete Euclidean vector bundles
A deep learning approach to Reduced Order Modelling of parameter dependent partial differential equations
Mean-field inference methods for neural networks
Interpolating between boolean and extremely high noisy patterns through minimal dense associative memories
Legendre equivalences of spherical Boltzmann machines
Learning quantum structures in compact localized eigenstates
A Mixed Wavelet-Learning Method of Predicting Macroscopic Effective Heat Transfer Conductivities of Braided Composite Materials
Forward Stepwise Deep Autoencoder-Based Monotone Nonlinear Dimensionality Reduction Methods
Features Reweighting and Selection in ligand-based Virtual Screening for Molecular Similarity Searching Based on Deep Belief Networks
On the combination of kernel principal component analysis and neural networks for process indirect control
A Nonlinear Matrix Decomposition for Mining the Zeros of Sparse Data
RANDOM NEURAL NETWORK METHODS AND DEEP LEARNING
Learning probabilistic neural representations with randomly connected circuits
Unnamed Item
Unnamed Item
Unnamed Item
Justifying and Generalizing Contrastive Divergence
Learning the Dynamics of Objects by Optimal Functional Interpolation
Enhanced Gradient for Training Restricted Boltzmann Machines
Linearly Constrained Nonsmooth Optimization for Training Autoencoders
A converged deep graph semi-NMF algorithm for learning data representation
Koopman analysis of nonlinear systems with a neural network representation
Thermodynamics of bidirectional associative memories
Representations of hypergraph states with neural networks*
LSTM-based approach for predicting periodic motions of an impacting system via transient dynamics
A distributed optimisation framework combining natural gradient with Hessian-free for discriminative sequence training
Transfer-RLS method and transfer-FORCE learning for simple and fast training of reservoir computing models
Epicasting: an ensemble wavelet neural network for forecasting epidemics
Nonlinear reduced-order modeling for three-dimensional turbulent flow by large-scale machine learning
Estimating propensity scores using neural networks and traditional methods: a comparative simulation study
Probabilistic partition of unity networks for high‐dimensional regression problems
Statistical Inference, Learning and Models in Big Data
Discriminative group-sparsity constrained broad learning system for visual recognition
Predicting turbulent dynamics with the convolutional autoencoder echo state network
Hybrid analysis and modeling, eclecticism, and multifidelity computing toward digital twin revolution
The emergence of a concept in shallow neural networks
Surrogate modeling for high dimensional uncertainty propagation via deep kernel polynomial chaos expansion
Successfully and efficiently training deep multi-layer perceptrons with logistic activation function simply requires initializing the weights with an appropriate negative mean
Multibody dynamics and control using machine learning
Fast convergence rates of deep neural networks for classification
Dynamics of a data-driven low-dimensional model of turbulent minimal Couette flow
Deep multimodal autoencoder for crack criticality assessment
Household financial health: a machine learning approach for data-driven diagnosis and prescription
Predicting circRNA-disease associations based on autoencoder and graph embedding
Non‐intrusive reduced‐order modeling using convolutional autoencoders
Physics-informed data-driven model for fluid flow in porous media
A taxonomy for similarity metrics between Markov decision processes
Airfoil-based convolutional autoencoder and long short-term memory neural network for predicting coherent structures evolution around an airfoil
Synthetic data generation: state of the art in health care domain
Unnamed Item
Unnamed Item
Recent Deep Learning Methods for Melanoma Detection: A Review
Applying neural network Poisson regression to predict cognitive score changes
Large Margin Low Rank Tensor Analysis
A Survey on Deep Learning for Multimodal Data Fusion
The Stochastic Delta Rule: Faster and More Accurate Deep Learning Through Adaptive Weight Noise
On Kernel Method–Based Connectionist Models and Supervised Deep Learning Without Backpropagation
Deep Learning with Dynamic Spiking Neurons and Fixed Feedback Weights
Deep Convolutional Neural Networks for Image Classification: A Comprehensive Review
Convergence of Markovian stochastic approximation for Markov random fields with hidden variables
The structure of reconstructed flows in latent spaces
Machine-learning-based spatio-temporal super resolution reconstruction of turbulent flows
Open quantum generalisation of Hopfield neural networks
Decreasing the Size of the Restricted Boltzmann Machine
Semisupervised Deep Stacking Network with Adaptive Learning Rate Strategy for Motor Imagery EEG Recognition
Sparse identification of nonlinear dynamics with low-dimensionalized flow representations
Dynamics of Learning in MLP: Natural Gradient and Singularity Revisited
Statistics of Visual Responses to Image Object Stimuli from Primate AIT Neurons to DNN Neurons
Unnamed Item
Unnamed Item
Learning Binary Hash Codes for Large-Scale Image Search
Hierarchical model of natural images and the origin of scale invariance
Visual Recognition and Inference Using Dynamic Overcomplete Sparse Learning
Wasserstein Dictionary Learning: Optimal Transport-Based Unsupervised Nonlinear Dictionary Learning
Ensemble Kalman inversion: a derivative-free technique for machine learning tasks
Expansion of the effective action around non-Gaussian theories
Convolutional autoencoder and conditional random fields hybrid for predicting spatial-temporal chaos
Deep learning the holographic black hole with charge
DNN-PPI: A LARGE-SCALE PREDICTION OF PROTEIN–PROTEIN INTERACTIONS BASED ON DEEP NEURAL NETWORKS
SwitchNet: A Neural Network Model for Forward and Inverse Scattering Problems
Multilevel Artificial Neural Network Training for Spatially Correlated Learning
Nonlinear mode decomposition with convolutional neural networks for fluid dynamics
Unnamed Item
On the Achievability of Blind Source Separation for High-Dimensional Nonlinear Source Mixtures
Bucket renormalization for approximate inference
Learning Individualized Treatment Rules for Multiple-Domain Latent Outcomes
Enhancing performance of the back-propagation algorithm based on a novel regularization method of preserving inter-object-distance of data
Unnamed Item
Solving parametric PDE problems with artificial neural networks
Dynamical Variational Autoencoders: A Comprehensive Review
Nonlinear Level Set Learning for Function Approximation on Sparse Data with Applications to Parametric Differential Equations
Adaptive bridge control strategy for opinion evolution on social networks
A Tale of Two Bases: Local-Nonlocal Regularization on Image Patches with Convolution Framelets
Gaussian-spherical restricted Boltzmann machines
DeepSym: Deep Symbol Generation and Rule Learning for Planning from Unsupervised Robot Interaction
Low Rank Tensor Manifold Learning


Uses Software





This page was built for publication: Reducing the Dimensionality of Data with Neural Networks