Assigning Different Weights to Feature Values in Naive Bayes

  • Chang-Hwan Lee
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 652)


Assigning weights to features has been an important topic in classification learning. While existing weighting methods assign a single weight to each feature, in this paper we assign a different weight to each value of each feature. The performance of naive Bayes learning with this value-based weighting method is compared with that of several traditional methods on a number of datasets.
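The core idea, one weight per feature *value* rather than one per feature, can be sketched as follows. This is a minimal, hypothetical implementation, not the paper's exact method: the choice of weight (the Kullback-Leibler divergence between the class distribution conditioned on a value and the class prior, echoing the paper's KL keyword) and the Laplace smoothing are assumptions made for illustration.

```python
import math
from collections import Counter, defaultdict

def train_value_weighted_nb(X, y, alpha=1.0):
    """Fit a naive Bayes model in which each feature value gets its own weight.

    The weight of value v of feature i is taken here as
    KL( P(C | a_i = v) || P(C) ) -- an illustrative choice, not the paper's
    exact formula. Informative values get large weights; a value that leaves
    the class distribution unchanged gets weight 0.
    """
    n = len(y)
    class_count = Counter(y)
    classes = sorted(class_count)
    prior = {c: (class_count[c] + alpha) / (n + alpha * len(classes))
             for c in classes}

    # Raw counts for the conditionals, and the set of seen values per feature.
    joint = defaultdict(float)   # (i, v, c) -> count of a_i = v within class c
    vcount = Counter()           # (i, v)    -> count of a_i = v overall
    values = defaultdict(set)
    for xs, c in zip(X, y):
        for i, v in enumerate(xs):
            joint[(i, v, c)] += 1
            vcount[(i, v)] += 1
            values[i].add(v)

    def p_cond(i, v, c):
        """Laplace-smoothed P(a_i = v | c)."""
        return ((joint[(i, v, c)] + alpha)
                / (class_count[c] + alpha * len(values[i])))

    # Per-value weight: how much does observing a_i = v shift the class belief?
    weight = {}
    for i in values:
        for v in values[i]:
            post = {c: (joint[(i, v, c)] + alpha)
                       / (vcount[(i, v)] + alpha * len(classes))
                    for c in classes}
            weight[(i, v)] = sum(post[c] * math.log(post[c] / prior[c])
                                 for c in classes)
    return prior, p_cond, weight, classes

def predict(x, prior, p_cond, weight, classes):
    """argmax_c  log P(c) + sum_i w(i, x_i) * log P(a_i = x_i | c)."""
    def score(c):
        return math.log(prior[c]) + sum(
            weight.get((i, v), 1.0) * math.log(p_cond(i, v, c))
            for i, v in enumerate(x))
    return max(classes, key=score)
```

On a toy weather-style dataset, a value such as "mild" that occurs equally often in both classes receives weight 0 and so drops out of the decision, while class-indicative values such as "sunny" dominate the score; standard naive Bayes, by contrast, would let both contribute fully.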


Keywords: Feature weighting · Feature selection · Naive Bayes · Kullback-Leibler



This work was supported by the Korea Research Foundation (KRF) grant funded by the Korea government (MEST) (No. 2014-R1A2A1A11051011).



Copyright information

© Springer Nature Singapore Pte Ltd. 2016

Authors and Affiliations

  1. Department of Information and Communications, DongGuk University, Seoul, Korea
