Drift Correction Using Maximum Independence Domain Adaptation

Chapter in: Breath Analysis for Medical Applications

Abstract

The drift correction algorithms in the previous three chapters require transfer samples. When transfer samples are unavailable, we can resort to unsupervised domain adaptation. This chapter proposes maximum independence domain adaptation (MIDA) for unsupervised drift correction. MIDA borrows the definition of domain features from the previous chapter and learns features that are maximally independent of them, thereby reducing the inter-domain discrepancy in distributions. A feature augmentation strategy makes the learned subspace background-specific. Semi-supervised MIDA (SMIDA) extends MIDA by exploiting label information. The proposed algorithms are flexible and fast. Their effectiveness is verified by experiments on synthetic datasets and three real-world sensing and measurement datasets.
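The abstract outlines the core computation: augment each sample with its domain features, then learn a subspace whose embedding is maximally independent of those domain features in the Hilbert-Schmidt independence criterion (HSIC) sense. The sketch below is a minimal linear illustration of that idea under stated assumptions, not the chapter's exact formulation; the function name mida, the trade-off parameter mu, and the linear kernel on the domain features are choices made for this example only.

```python
import numpy as np

def mida(X, D, h=2, mu=1.0):
    """Minimal linear MIDA-style sketch (hypothetical names and parameters).

    X  : (n, p) array of samples.
    D  : (n, q) array of domain features (e.g., one-hot device ID, time).
    h  : dimension of the learned subspace.
    mu : weight of the variance term relative to the independence term.
    """
    n = X.shape[0]
    Xa = np.hstack([X, D])                 # feature augmentation: [x; d]
    H = np.eye(n) - np.ones((n, n)) / n    # centering matrix
    Kd = D @ D.T                           # linear kernel on domain features
    # Maximize projected variance while penalizing HSIC-style dependence
    # between the embedding and the domain features.
    M = Xa.T @ (mu * H - H @ Kd @ H) @ Xa
    eigval, eigvec = np.linalg.eigh(M)     # symmetric eigendecomposition
    W = eigvec[:, np.argsort(eigval)[::-1][:h]]   # top-h eigenvectors
    return W, Xa @ W                       # projection and embedded data
```

As a usage sketch, one would stack samples from all batches or devices into X, encode the batch index or acquisition time in D, call W, Z = mida(X, D), and train the recognizer on the drift-corrected embedding Z. A semi-supervised variant in the spirit of SMIDA could add a label-kernel term to M for the labeled rows so the subspace also preserves class information.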



Author information

Corresponding author: David Zhang


Copyright information

© 2017 Springer Nature Singapore Pte Ltd.

About this chapter

Cite this chapter

Zhang, D., Guo, D., Yan, K. (2017). Drift Correction Using Maximum Independence Domain Adaptation. In: Breath Analysis for Medical Applications. Springer, Singapore. https://doi.org/10.1007/978-981-10-4322-2_9

  • DOI: https://doi.org/10.1007/978-981-10-4322-2_9

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-10-4321-5

  • Online ISBN: 978-981-10-4322-2

  • eBook Packages: Computer Science, Computer Science (R0)
