Label Noise-Tolerant Hidden Markov Models for Segmentation: Application to ECGs

  • Benoît Frénay
  • Gaël de Lannoy
  • Michel Verleysen
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6911)

Abstract

The performance of traditional classification models can be adversely affected by the presence of label noise in the training observations. The pioneering work of Lawrence and Schölkopf addressed this issue for datasets with independent observations by incorporating a statistical noise model into the inference algorithm. This paper instead considers the specific case of label noise in non-independent observations. For this purpose, a label noise-tolerant expectation-maximisation algorithm is proposed within the framework of hidden Markov models. Experiments are carried out on both healthy and pathological electrocardiogram signals with distinct types of additional artificial label noise. The results show that the proposed label noise-tolerant inference algorithm can improve segmentation performance in the presence of label noise.
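
The abstract only states that a statistical label noise model is incorporated into the expectation-maximisation algorithm; it does not give the exact formulation. As a rough, non-authoritative illustration of this idea, one common realisation is to treat the provided labels as noisy observations of the hidden states and to weight each emission term with a confusion matrix p(observed label | true state) before running the standard forward-backward E-step. The sketch below (Python/NumPy) uses hypothetical names throughout and is not the authors' method.

    # Minimal, hypothetical sketch: fold a label noise model into the HMM
    # E-step by weighting each emission with p(observed label | true state).
    # All names (log_C, noisy_labels, ...) are illustrative assumptions.
    import numpy as np

    def _logsumexp_matvec(log_v, log_M):
        # Numerically stable log of exp(log_v) @ exp(log_M),
        # one output value per column of log_M.
        s = log_v[:, None] + log_M
        m = s.max(axis=0)
        return m + np.log(np.exp(s - m).sum(axis=0))

    def noise_tolerant_posteriors(log_emission, noisy_labels, log_A, log_pi, log_C):
        """E-step state posteriors for an HMM trained from noisy labels.

        log_emission : (T, K) array, log p(x_t | s_t = k)
        noisy_labels : (T,) int array, observed (possibly mislabeled) states
        log_A        : (K, K) log transition matrix
        log_pi       : (K,) log initial state distribution
        log_C        : (K, K) noise model, log p(observed label l | true state k)
        Returns a (T, K) array of posterior state probabilities.
        """
        T, K = log_emission.shape
        # Fold the label noise likelihood into the emission term:
        # p(x_t, y~_t | s_t = k) = p(x_t | s_t = k) * p(y~_t | s_t = k)
        log_b = log_emission + log_C[:, noisy_labels].T

        log_alpha = np.zeros((T, K))
        log_beta = np.zeros((T, K))
        log_alpha[0] = log_pi + log_b[0]
        for t in range(1, T):                        # forward recursion
            log_alpha[t] = log_b[t] + _logsumexp_matvec(log_alpha[t - 1], log_A)
        for t in range(T - 2, -1, -1):               # backward recursion
            log_beta[t] = _logsumexp_matvec(log_b[t + 1] + log_beta[t + 1], log_A.T)

        log_gamma = log_alpha + log_beta
        log_gamma -= log_gamma.max(axis=1, keepdims=True)
        gamma = np.exp(log_gamma)
        return gamma / gamma.sum(axis=1, keepdims=True)

In a full EM loop, the M-step would re-estimate the transition matrix, the emission parameters and, if desired, the confusion matrix from the expected counts; the paper itself should be consulted for the exact noise model and update equations.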

Keywords

Label noise · Hidden Markov models · Expectation-maximisation algorithm · Segmentation · Electrocardiograms

References

  1. Lawrence, N.D., Schölkopf, B.: Estimating a kernel Fisher discriminant in the presence of label noise. In: Proceedings of the Eighteenth International Conference on Machine Learning (ICML 2001), pp. 306–313. Morgan Kaufmann Publishers Inc., San Francisco (2001)
  2. Li, Y., Wessels, L.F.A., de Ridder, D., Reinders, M.J.T.: Classification in the presence of class noise using a probabilistic kernel Fisher method. Pattern Recognition 40, 3349–3357 (2007)
  3. Bouveyron, C., Girard, S.: Robust supervised classification with mixture models: learning from data with uncertain labels. Pattern Recognition 42, 2649–2658 (2009)
  4. McSharry, P.E., Clifford, G.D., Tarassenko, L., Smith, L.A.: Dynamical model for generating synthetic electrocardiogram signals. IEEE Transactions on Biomedical Engineering 50(3), 289–294 (2003)
  5. Goldberger, A.L., Amaral, L.A.N., Glass, L., Hausdorff, J.M., Ivanov, P.C., Mark, R.G., Mietus, J.E., Moody, G.B., Peng, C.-K., Stanley, H.E.: PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals. Circulation 101(23), e215–e220 (2000)
  6. Hughes, N.P., Tarassenko, L., Roberts, S.J.: Markov models for automated ECG interval analysis. In: NIPS 2004: Proceedings of the 16th Conference on Advances in Neural Information Processing Systems, pp. 611–618 (2004)
  7. Brodley, C.E., Friedl, M.A.: Identifying mislabeled training data. Journal of Artificial Intelligence Research 11, 131–167 (1999)
  8. Barandela, R., Gasca, E.: Decontamination of training samples for supervised pattern recognition methods. In: Proceedings of the Joint IAPR International Workshops on Advances in Pattern Recognition, pp. 621–630. Springer, London (2000)
  9. Guyon, I., Matic, N., Vapnik, V.: Discovering informative patterns and data cleaning. In: Fayyad, U.M., Piatetsky-Shapiro, G., Smyth, P., Uthurusamy, R. (eds.) Advances in Knowledge Discovery and Data Mining, pp. 181–203 (1996)
  10. Bootkrajang, J., Kaban, A.: Multi-class classification in the presence of labelling errors. In: Proceedings of the 19th European Conference on Artificial Neural Networks, pp. 345–350 (2011)
  11. Côme, E., Oukhellou, L., Denoeux, T., Aknin, P.: Mixture model estimation with soft labels. In: Proceedings of the 4th International Conference on Soft Methods in Probability and Statistics, pp. 165–174 (2008)
  12. Rabiner, L.R.: A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE 77(2), 257–286 (1989)
  13. Dempster, A.P., Laird, N.M., Rubin, D.B.: Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B (Methodological) 39(1), 1–38 (1977)
  14. Bishop, C.M.: Pattern Recognition and Machine Learning. Information Science and Statistics. Springer, Heidelberg (2007)
  15. Clifford, G.D., Azuaje, F., McSharry, P.: Advanced Methods and Tools for ECG Data Analysis. Artech House, Norwood (2006)
  16. Hughes, N.P., Roberts, S.J., Tarassenko, L.: Semi-supervised learning of probabilistic models for ECG segmentation. In: IEMBS 2004: Proceedings of the 26th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, vol. 1, pp. 434–437 (2004)

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Benoît Frénay (1)
  • Gaël de Lannoy (1)
  • Michel Verleysen (1)

  1. Machine Learning Group, ICTEAM Institute, Université catholique de Louvain, Louvain-la-Neuve, Belgium
