Machine learning with squared-loss mutual information
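For reference, the quantity this publication is built around, squared-loss mutual information (SMI), is the Pearson divergence between the joint density and the product of the marginals; a standard formulation (following the density-ratio literature cited below) is

\[
\mathrm{SMI}(X,Y) = \frac{1}{2} \iint p(x)\,p(y) \left( \frac{p(x,y)}{p(x)\,p(y)} - 1 \right)^{2} \mathrm{d}x\,\mathrm{d}y,
\]

which is nonnegative and equals zero if and only if $X$ and $Y$ are statistically independent.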
Recommendations
- Estimating Squared-Loss Mutual Information for Independent Component Analysis
- An estimate of mutual information that permits closed-form optimisation
- Information-maximization clustering based on squared-loss mutual information
- Sufficient dimension reduction via squared-loss mutual information estimation
- Mutual information equals copula entropy
Cites work
- Scientific article (no title available); zbMATH DE number 5957245
- Scientific article (no title available); zbMATH DE number 1220060
- Scientific article (no title available); zbMATH DE number 845714
- Scientific article (no title available); zbMATH DE number 6253916
- Scientific article (no title available); zbMATH DE number 3322635
- Scientific article (no title available); zbMATH DE number 3340881
- $f$-Divergence Estimation and Two-Sample Homogeneity Test Under Semiparametric Density-Ratio Models
- doi:10.1162/153244303322753616
- doi:10.1162/153244303768966085
- A least-squares approach to direct importance estimation
- Asymptotic Statistics
- Blind separation of sources. I: An adaptive algorithm based on neuromimetic architecture
- Canonical correlation analysis based on information theory
- Canonical dependency analysis based on squared-loss mutual information
- Density-ratio matching under the Bregman divergence: a unified framework of density-ratio estimation
- Dimensionality reduction for density ratio estimation in high-dimensional spaces
- Direct density-ratio estimation with dimensionality reduction via least-squares hetero-distributional subspace search
- Direct importance estimation for covariate shift adaptation
- Divergence Estimation of Continuous Distributions Based on Data-Dependent Partitions
- Edgeworth Approximation of Multivariate Differential Entropy
- Estimating Divergence Functionals and the Likelihood Ratio by Convex Risk Minimization
- Estimation of the information by an adaptive partitioning of the observation space
- Gaussian processes for machine learning
- Independent coordinates for strange attractors from mutual information
- Kernel dimension reduction in regression
- Least angle regression (with discussion)
- Least-squares independent component analysis
- Least-squares two-sample test
- Nonparametric and semiparametric models
- On Information and Sufficiency
- On the influence of the kernel on the consistency of support vector machines
- Relations between two sets of variates
- Robust and efficient estimation by minimising a density power divergence
- Save: a method for dimension reduction and graphics in regression
- Sequential Fixed-Point ICA Based on Mutual Information Minimization
- Statistical analysis of kernel-based least-squares density-ratio estimation
- The Geometry of Algorithms with Orthogonality Constraints
- The estimation of the gradient of a density function, with applications in pattern recognition
- Weak convergence and empirical processes: with applications to statistics
Cited in (13)
- Simple strategies for semi-supervised feature selection
- Direct estimation of the derivative of quadratic mutual information with application in supervised dimension reduction
- Dealing with under-reported variables: an information theoretic solution
- Improved neural networks based on mutual information via information geometry
- Interpretable fault detection using projections of mutual information matrix
- Smoothed noise contrastive mutual information neural estimation
- Generalized twin Gaussian processes using Sharma-Mittal divergence
- A unified definition of mutual information with applications in machine learning
- Semi-supervised information-maximization clustering
- Information-maximization clustering based on squared-loss mutual information
- Canonical dependency analysis based on squared-loss mutual information
- Functional sufficient dimension reduction through information maximization with application to classification
- Information-theoretic representation learning for positive-unlabeled classification