Unsupervised domain adaptation in the wild via disentangling representation learning
DOI: 10.1007/s11263-020-01364-5 · zbMATH Open: 1483.68437 · OpenAlex: W3048967152 · MaRDI QID: Q2056435 · FDO: Q2056435
Authors: Haoliang Li, Renjie Wan, Shiqi Wang, Alex C. Kot
Publication date: 2 December 2021
Published in: International Journal of Computer Vision
Full work available at URL: https://doi.org/10.1007/s11263-020-01364-5
Recommendations
- MapFlow: latent transition via normalizing flow for unsupervised domain adaptation
- Robust unsupervised domain adaptation for neural networks via moment alignment
- Domain-adversarial training of neural networks
- Learning kernels for unsupervised domain adaptation with applications to visual object recognition
- Learning smooth representations with generalized softmax for unsupervised domain adaptation
MSC Classification
- Learning and adaptive systems in artificial intelligence (68T05)
- Pattern recognition, speech recognition (68T10)
- Machine vision and scene understanding (68T45)
Cited In (9)
- On the Hardness of Domain Adaptation and the Utility of Unlabeled Target Samples
- An Embarrassingly Simple Approach to Visual Domain Adaptation
- Reducing bias to source samples for unsupervised domain adaptation
- Finding robust transfer features for unsupervised domain adaptation
- Learning joint latent representations based on information maximization
- Unsupervised domain adaptation with non-stochastic missing data
- Unsupervised discovery, control, and disentanglement of semantic attributes with applications to anomaly detection
- MapFlow: latent transition via normalizing flow for unsupervised domain adaptation
- NaCL: noise-robust cross-domain contrastive learning for unsupervised domain adaptation