Abstract
One of the characteristics of the domain shift problem is that the source and target data have been drawn from different distributions. A natural approach to addressing this problem therefore consists of learning an embedding of the source and target data such that they have similar distributions in the new space. In this chapter, we study several methods that follow this approach. At the core of these methods lies the notion of distance between two distributions. We first discuss domain adaptation (DA) techniques that rely on the Maximum Mean Discrepancy to measure such a distance. We then study the use of alternative distribution distance measures within one specific Domain Adaptation framework. In this context, we focus on f-divergences, and in particular on the KL divergence and the Hellinger distance. Throughout the chapter, we evaluate the different methods and distance measures on the task of visual object recognition and compare them against related baselines on a standard DA benchmark dataset.
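The chapter does not include code, but the core quantity behind the first family of methods it discusses, the Maximum Mean Discrepancy, can be estimated from samples alone. The following is a minimal sketch of the standard biased empirical MMD estimate with an RBF kernel; the function names, the bandwidth choice, and the synthetic Gaussian data are illustrative assumptions, not part of the chapter.

```python
import numpy as np

def rbf_kernel(a, b, sigma=1.0):
    # Pairwise squared Euclidean distances between rows of a and b,
    # mapped through the Gaussian kernel exp(-d^2 / (2 sigma^2)).
    sq = np.sum(a**2, axis=1)[:, None] + np.sum(b**2, axis=1)[None, :] - 2.0 * a @ b.T
    return np.exp(-sq / (2.0 * sigma**2))

def mmd2(source, target, sigma=1.0):
    """Biased empirical estimate of the squared MMD between two samples."""
    k_ss = rbf_kernel(source, source, sigma)
    k_tt = rbf_kernel(target, target, sigma)
    k_st = rbf_kernel(source, target, sigma)
    # MMD^2 = E[k(s,s')] + E[k(t,t')] - 2 E[k(s,t)]
    return k_ss.mean() + k_tt.mean() - 2.0 * k_st.mean()

rng = np.random.default_rng(0)
same = mmd2(rng.normal(0, 1, (200, 5)), rng.normal(0, 1, (200, 5)))
shifted = mmd2(rng.normal(0, 1, (200, 5)), rng.normal(2, 1, (200, 5)))
# Samples from a shifted distribution give a larger MMD than matched samples.
```

In the DA methods the chapter studies, an estimate of this form is minimized over an embedding of the source and target data, so that the two projected samples become indistinguishable under the kernel.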
Notes
- 1. Note that, even with known parameters, computing the Fisher-Rao metric may not be feasible in closed form.
Copyright information
© 2017 Springer International Publishing AG
Cite this chapter
Baktashmotlagh, M., Harandi, M., Salzmann, M. (2017). Learning Domain Invariant Embeddings by Matching Distributions. In: Csurka, G. (eds) Domain Adaptation in Computer Vision Applications. Advances in Computer Vision and Pattern Recognition. Springer, Cham. https://doi.org/10.1007/978-3-319-58347-1_5
DOI: https://doi.org/10.1007/978-3-319-58347-1_5
Published:
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-58346-4
Online ISBN: 978-3-319-58347-1