Weighted Naïve Bayes Classifiers by Renyi Entropy

  • Tomomi Endo
  • Mineichi Kudo
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8258)

Abstract

A weighted naïve Bayes classifier using Rényi entropy is proposed. Weighted naïve Bayes classifiers have been studied previously, with the aim of improving prediction performance or reducing the number of features. Among those studies, weighting with Shannon entropy succeeded in improving performance; however, the reasons for this success were not well explained. In this paper, the original classifier is extended using Rényi entropy with parameter α. The resulting family includes the regular naïve Bayes classifier at one end (α = 0.0) and a naïve Bayes classifier weighted by the marginal Bayes errors at the other end (α = ∞). The optimal setting of α is discussed both analytically and experimentally.
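
For readers who want to experiment with the idea, the sketch below shows one way such a classifier might be realized. The Rényi entropy of order α of a distribution p is H_α(p) = (1/(1 − α)) log Σᵢ pᵢ^α; it tends to the Shannon entropy as α → 1 and to the min-entropy −log maxᵢ pᵢ as α → ∞. This sketch weights each feature's log-likelihood by 1 − H_α(C | Xⱼ)/log |C|, rescaled so the average weight is 1, using the simple value-averaged form of conditional Rényi entropy. The weighting formula, the normalization, and all class and function names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def renyi_entropy(p, alpha, eps=1e-12):
    """Rényi entropy H_alpha(p) = log(sum_i p_i^alpha) / (1 - alpha), natural log.
    alpha -> 1 recovers Shannon entropy; alpha = inf gives min-entropy -log max p."""
    p = np.asarray(p, dtype=float)
    p = p[p > eps]
    if np.isinf(alpha):
        return -np.log(p.max())
    if abs(alpha - 1.0) < 1e-9:
        return -np.sum(p * np.log(p))
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

class RenyiWeightedNB:
    """Naive Bayes over categorical features, with each feature's log-likelihood
    scaled by a weight derived from the conditional Rényi entropy H_alpha(C | X_j)."""

    def __init__(self, alpha=1.0, smoothing=1.0):
        self.alpha = alpha
        self.smoothing = smoothing

    def fit(self, X, y):
        X, y = np.asarray(X), np.asarray(y)
        n, d = X.shape
        self.classes_ = np.unique(y)
        k = len(self.classes_)
        self.class_log_prior_ = np.log(
            np.array([(y == c).mean() for c in self.classes_]))
        self.feature_tables_ = []           # per feature: (values, log P(x_j = v | c))
        self.weights_ = np.empty(d)
        for j in range(d):
            values = np.unique(X[:, j])
            counts = np.array([[np.sum((X[:, j] == v) & (y == c))
                                for c in self.classes_] for v in values])
            # Laplace-smoothed class-conditional probabilities P(x_j = v | c)
            cond = (counts + self.smoothing) / (
                counts.sum(axis=0) + self.smoothing * len(values))
            self.feature_tables_.append((values, np.log(cond)))
            # H_alpha(C | X_j): average the per-value Rényi entropy of P(C | x_j = v)
            p_v = counts.sum(axis=1) / n
            h = sum(p_v[i] * renyi_entropy(counts[i] / counts[i].sum(), self.alpha)
                    for i in range(len(values)))
            # A feature that pins down the class gets raw weight near 1,
            # an uninformative feature gets raw weight near 0.
            self.weights_[j] = 1.0 - h / np.log(k)
        # Rescale so weights average 1; uniform raw weights then reduce to the
        # regular naive Bayes rule (assumption: the paper may normalize differently).
        total = self.weights_.sum()
        self.weights_ = (self.weights_ * d / total) if total > 0 else np.ones(d)
        return self

    def predict(self, X):
        preds = []
        for x in np.asarray(X):
            score = self.class_log_prior_.copy()
            for j, (values, log_cond) in enumerate(self.feature_tables_):
                hit = np.flatnonzero(values == x[j])
                if hit.size:                # skip feature values unseen in training
                    score += self.weights_[j] * log_cond[hit[0]]
            preds.append(self.classes_[np.argmax(score)])
        return np.array(preds)
```

Usage would look like clf = RenyiWeightedNB(alpha=2.0).fit(X_train, y_train) on arrays of categorical features, followed by clf.predict(X_test). With the rescaling above, uniform raw weights recover the ordinary naïve Bayes decision rule, mirroring the α = 0.0 end of the family described in the abstract.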

Keywords

Support Vector Machine · Feature Selection · Recognition Rate · Shannon Entropy · Linear Support Vector Machine

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Tomomi Endo (1)
  • Mineichi Kudo (1)
  1. Division of Computer Science, Graduate School of Information Science and Technology, Hokkaido University, Sapporo, Japan
