Improving predictive inference under covariate shift by weighting the log-likelihood function
From MaRDI portal
Publication:1591282
DOI: 10.1016/S0378-3758(00)00115-4
zbMath: 0958.62011
MaRDI QID: Q1591282
Publication date: 9 April 2001
Published in: Journal of Statistical Planning and Inference
Keywords: importance sampling; Akaike information criterion; Kullback-Leibler divergence; misspecification; weighted least squares
MSC classifications: Linear inference, regression (62J99); Sampling theory, sample surveys (62D05); Statistical aspects of information-theoretic topics (62B10)
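The keywords above point to the paper's central idea: when the covariate distribution differs between training and test data and the model is misspecified, each training point's log-likelihood is weighted by the density ratio of test to training covariate densities. A minimal sketch of that idea follows; the Gaussian densities, the cubic true curve, and all numbers are chosen here for illustration and are not taken from this record.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: training and test covariates follow different known
# Gaussians, and a linear model is deliberately misspecified for the cubic curve.
n = 2000
x_train = rng.normal(0.5, 0.5, size=n)                       # training density p(x)
y_train = -x_train + x_train**3 + rng.normal(0.0, 0.3, n)    # true curve is cubic

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

# Core idea: weight each training point by w(x) = q(x) / p(x),
# the ratio of the test covariate density q to the training density p
# (both assumed known in this toy example).
w = normal_pdf(x_train, 0.0, 0.3) / normal_pdf(x_train, 0.5, 0.5)

# For a Gaussian linear model, maximizing the weighted log-likelihood
# reduces to weighted least squares.
X = np.column_stack([np.ones(n), x_train])
beta_plain = np.linalg.solve(X.T @ X, X.T @ y_train)
beta_weighted = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y_train))
```

Near the test region (x around 0) the cubic curve slopes downward, so the weighted fit recovers a negative slope, while the unweighted fit, dominated by the training region around x = 0.5, picks a positive one: under misspecification the two estimators converge to different limits, which is what the weighting corrects for prediction at test points.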
Related Items (73)
- Learning models with uniform performance via distributionally robust optimization
- Reward-Weighted Regression with Sample Reuse for Direct Policy Search in Reinforcement Learning
- Statistical analysis of distance estimators with density differences and density ratios
- Adapting a classification rule to local and global shift when only unlabelled data are available
- Transfer learning for nonparametric classification: minimax rate and adaptive classifier
- Training classifiers under covariate shift by constructing the maximum consistent distribution subset
- Semi-supervised learning with density-ratio estimation
- Necessary and sufficient conditions of proper estimators based on self density ratio for unnormalized statistical models
- Density-ratio matching under the Bregman divergence: a unified framework of density-ratio estimation
- Effects of unlabeled data on classification error in normal discriminant analysis
- Domain adaptation and sample bias correction theory and algorithm for regression
- A batch ensemble approach to active learning with model selection
- Equal percent bias reduction and variance proportionate modifying properties with mean-covariance preserving matching
- Semiparametric Estimation of the Transformation Model by Leveraging External Aggregate Data in the Presence of Population Heterogeneity
- Estimating the Area under the ROC Curve When Transporting a Prediction Model to a Target Population
- Stationary Subspace Analysis
- Hierarchical optimal transport for unsupervised domain adaptation
- Active learning algorithm using the maximum weighted log-likelihood estimator
- Regularized Nyström Subsampling in Covariate Shift Domain Adaptation Problems
- MapFlow: latent transition via normalizing flow for unsupervised domain adaptation
- Multimodel ensemble analysis with neural network Gaussian processes
- Separation of stationary and non-stationary sources with a generalized eigenvalue problem
- Improving importance estimation in pool-based batch active learning for approximate linear regression
- Statistical analysis of kernel-based least-squares density-ratio estimation
- Optimally tackling covariate shift in RKHS-based nonparametric regression
- Geometry of the log-likelihood ratio statistic in misspecified models
- Learning kernels for unsupervised domain adaptation with applications to visual object recognition
- Domain adaptation for face recognition: targetize source domain bridged by common subspace
- Domain adaptation for structured regression
- Computational complexity of kernel-based density-ratio estimation: a condition number analysis
- Multi-parametric solution-path algorithm for instance-weighted support vector machines
- Optimal tuning parameter estimation in maximum penalized likelihood method
- Bayesian hierarchical stacking: some models are (somewhere) useful
- A Hilbert Space Embedding for Distributions
- A survey of Bayesian predictive methods for model assessment, selection and comparison
- Transfer estimation of evolving class priors in data stream classification
- Multi-task clustering via domain adaptation
- Relative deviation learning bounds and generalization with unbounded loss functions
- Pool-based active learning in approximate linear regression
- Semi-supervised local Fisher discriminant analysis for dimensionality reduction
- Efficient Sample Reuse in Policy Gradients with Parameter-Based Exploration
- Mismatched Training and Test Distributions Can Outperform Matched Ones
- Instance weighting through data imprecisiation
- An information criterion for model selection with missing data via complete-data divergence
- On Prior Selection and Covariate Shift of β-Bayesian Prediction Under α-Divergence Risk
- Direct density-ratio estimation with dimensionality reduction via least-squares hetero-distributional subspace search
- Bootstrap prediction and Bayesian prediction under misspecified models
- Semi-supervised speaker identification under covariate shift
- Adaptive importance sampling for value function approximation in off-policy reinforcement learning
- Dimensionality reduction for density ratio estimation in high-dimensional spaces
- Learning from imprecise and fuzzy observations: data disambiguation through generalized loss minimization
- Direct importance estimation for covariate shift adaptation
- Domain Adaptation Using the Grassmann Manifold
- Variable Selection for Nonparametric Learning with Power Series Kernels
- Theory and Algorithm for Learning with Dissimilarity Functions
- On handling negative transfer and imbalanced distributions in multiple source transfer learning
- Constructive setting for problems of density ratio estimation
- Relative Density-Ratio Estimation for Robust Distribution Comparison
- Variational learning from implicit bandit feedback
- Robust randomized optimization with k nearest neighbors
- On a regularization of unsupervised domain adaptation in RKHS
- Hierarchical resampling for bagging in multistudy prediction with applications to human neurochemical sensing
- Risk bound of transfer learning using parametric feature mapping and its application to sparse coding
- Learning using privileged information: SVM+ and weighted SVM
- Safe semi-supervised learning based on weighted likelihood
- Tnn: a transfer learning classifier based on weighted nearest neighbors
- Statistical learning from biased training samples
- Conformal prediction: a unified review of theory and new challenges
- Semi-supervised logistic discrimination via labeled data and unlabeled data from different sampling distributions
- Doubly robust policy evaluation and optimization
- Adaptive Mixtures of Regressions: Improving Predictive Inference when Population has Changed
Cites Work
- The geometry of exponential families
- An Akaike information criterion for model selection in the presence of incomplete data
- Efficiency versus robustness: The case for minimum Hellinger distance and related methods
- Minimum disparity estimation for continuous models: Efficiency, distributions and robustness
- Differential-geometrical methods in statistics
- Approximate predictive likelihood
- On asymptotic properties of predictive distributions
- Asymptotic prediction analysis
- Weighting for Unequal Selection Probabilities in Multilevel Models
- Generalised information criteria in model selection
- Robust Estimation: A Weighted Maximum Likelihood Approach
- Asymptotic Expansions Associated with Posterior Distributions
- Maximum Likelihood Estimation of Misspecified Models
- A new look at the statistical model identification