Kernel-Based Naive Bayes Classifier for Medical Predictions

  • Dishant Khanna
  • Arunima Sharma
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 695)

Abstract

Researchers and clinical practitioners in medicine are turning to predictive data analysis at a rapid rate, and the development of classification methods based on different modeling methodologies is an active area of research. In this paper, a live clinical dataset is used to apply recent work on predictive data analysis: a kernel-based Naïve Bayes classifier is implemented in order to validate lessons learned in predicting the likely disease. The aim of medical diagnosis prediction is to enable the physician to report the disease that is most probably present. The training dataset for the classifier was obtained from a government hospital.
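
The abstract does not give the paper's exact kernel configuration, but the general idea can be sketched: instead of the parametric Gaussian likelihoods of classical Naïve Bayes, each class-conditional feature density is estimated with a kernel density estimator. The following minimal Python sketch makes assumed choices (scikit-learn's KernelDensity with a Gaussian kernel and a fixed bandwidth; the class name KernelNaiveBayes is hypothetical) and is not the authors' implementation.

    import numpy as np
    from sklearn.neighbors import KernelDensity

    class KernelNaiveBayes:
        """Naive Bayes whose class-conditional feature likelihoods are
        kernel density estimates rather than parametric Gaussians.
        Assumed configuration: Gaussian kernel, fixed bandwidth."""

        def __init__(self, bandwidth=0.5):
            self.bandwidth = bandwidth

        def fit(self, X, y):
            self.classes_ = np.unique(y)
            self.priors_ = {}   # class -> prior probability
            self.kdes_ = {}     # (class, feature index) -> fitted KDE
            for c in self.classes_:
                Xc = X[y == c]
                self.priors_[c] = len(Xc) / len(X)
                for j in range(X.shape[1]):
                    # One 1-D KDE per class and per feature.
                    kde = KernelDensity(kernel="gaussian",
                                        bandwidth=self.bandwidth)
                    kde.fit(Xc[:, j:j + 1])
                    self.kdes_[(c, j)] = kde
            return self

        def predict(self, X):
            # Score each class by log prior + sum of per-feature log
            # densities (the "naive" conditional-independence assumption).
            scores = np.empty((len(X), len(self.classes_)))
            for i, c in enumerate(self.classes_):
                log_post = np.full(len(X), np.log(self.priors_[c]))
                for j in range(X.shape[1]):
                    log_post += self.kdes_[(c, j)].score_samples(X[:, j:j + 1])
                scores[:, i] = log_post
            return self.classes_[np.argmax(scores, axis=1)]

    # Usage on synthetic data standing in for the hospital dataset,
    # which is not publicly available:
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 1.0, (50, 3)),
                   rng.normal(2.0, 1.0, (50, 3))])
    y = np.array([0] * 50 + [1] * 50)
    clf = KernelNaiveBayes(bandwidth=0.5).fit(X, y)
    print(clf.predict(X[:5]))

In practice the bandwidth would be chosen by cross-validation: too small a value overfits the training densities, while too large a value washes out the differences between classes.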

Keywords

Classifiers · Naïve Bayes · Kernel density · Modeling · Live dataset · Prediction · Precision · Kernel Naïve Bayes

Notes

Acknowledgements

This work was made possible thanks to Ms. Narina Thakur, Head of the Department of Computer Science Engineering, BVCOE, New Delhi.

Copyright information

© Springer Nature Singapore Pte Ltd. 2018

Authors and Affiliations

  1. Bharati Vidyapeeth’s College of Engineering, New Delhi, India
  2. Columbia University, New York, USA