
An Exact No Free Lunch Theorem for Community Detection

  • Arya D. McCarthy
  • Tongfei Chen
  • Seth Ebner
Conference paper
Part of the Studies in Computational Intelligence book series (SCI, volume 881)

Abstract

A precondition for a No Free Lunch theorem is evaluation with a loss function that does not assume a priori superiority of some outputs over others. A previous No Free Lunch result for community detection [12] relies on a mismatch between the loss function and the problem domain. The loss function computes an expectation over only a subset of the universe of possible outputs; thus, it is only asymptotically appropriate with respect to the problem size. By using the correct random model for the problem domain, we provide a stronger, exact No Free Lunch theorem for community detection. The claim generalizes to other set-partitioning tasks including core–periphery separation, \(k\)-clustering, and graph partitioning. Finally, we review the literature of proposed evaluation functions and identify functions which (perhaps with slight modifications) are compatible with an exact No Free Lunch theorem.
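The mismatch the abstract describes is a counting gap: the universe of outputs for a set-partitioning task on \(n\) items is the set of all partitions, counted by the Bell number \(B(n)\), while a random model that fixes the number of communities at \(k\) ranges over only the \(S(n,k)\) partitions counted by the Stirling numbers of the second kind. The following sketch (an illustration of that gap, not code from the paper) makes the strict-subset relationship concrete:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n: int, k: int) -> int:
    """Number of partitions of an n-set into exactly k non-empty blocks."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    # Recurrence: the n-th element either joins one of the k existing
    # blocks, or starts a new singleton block.
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def bell(n: int) -> int:
    """Total number of partitions of an n-set: the full universe of outputs."""
    return sum(stirling2(n, k) for k in range(n + 1))

if __name__ == "__main__":
    n = 10
    for k in (2, 3):
        frac = stirling2(n, k) / bell(n)
        print(f"n={n}, k={k}: {stirling2(n, k)} of {bell(n)} partitions "
              f"({frac:.1%} of the universe)")
```

For \(n = 10\), a fixed-\(k\) model with \(k = 3\) covers 9330 of the 115975 possible partitions, so an expectation taken under it omits most of the universe; the gap only closes asymptotically, which is the sense in which the earlier loss function is "only asymptotically appropriate."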

References

  1. Chen, Z., Li, L., Bruna, J.: Supervised community detection with line graph neural networks. In: International Conference on Learning Representations (2019)
  2. Gates, A.J., Ahn, Y.Y.: The impact of random models on clustering similarity. J. Mach. Learn. Res. 18(87), 1–28 (2017)
  3. Hauer, B., Kondrak, G.: Decoding anagrammed texts written in an unknown language and script. Trans. Assoc. Comput. Linguist. 4, 75–86 (2016)
  4. Hubert, L., Arabie, P.: Comparing partitions. J. Classif. 2(1), 193–218 (1985)
  5. Kvalseth, T.O.: Entropy and correlation: some comments. IEEE Trans. Syst. Man Cybern. 17(3), 517–519 (1987)
  6. Lai, D., Nardini, C.: A corrected normalized mutual information for performance evaluation of community detection. J. Stat. Mech: Theory Exp. 2016(9), 093403 (2016)
  7. Liu, X., Cheng, H.M., Zhang, Z.Y.: Evaluation of community structures using kappa index and F-score instead of normalized mutual information. arXiv e-prints, July 2018
  8. McCarthy, A.D., Matula, D.W.: Normalized mutual information exaggerates community detection performance. In: SIAM Workshop on Network Science, SIAM NS 2018, pp. 78–79. SIAM, Portland, July 2018
  9. McCarthy, A.D., Rudinger, R., Chen, T., Matula, D.W.: Metrics matter in community detection. In: Proceedings of the 8th International Conference on Complex Networks and Their Applications: Complex Networks, Lisbon, Portugal (2019)
  10. Newman, M.E.J., Girvan, M.: Finding and evaluating community structure in networks. Phys. Rev. E 69, 026113 (2004)
  11. Peel, L.: Estimating network parameters for selecting community detection algorithms. J. Adv. Inform. Fusion 6, 119–130 (2011)
  12. Peel, L., Larremore, D.B., Clauset, A.: The ground truth about metadata and community detection in networks. Sci. Adv. 3(5), e1602548 (2017)
  13. Radicchi, F., Castellano, C., Cecconi, F., Loreto, V., Parisi, D.: Defining and identifying communities in networks. Proc. Natl. Acad. Sci. 101(9), 2658–2663 (2004)
  14. Romano, S., Bailey, J., Nguyen, V., Verspoor, K.: Standardized mutual information for clustering comparisons: one step further in adjustment for chance. In: International Conference on Machine Learning, pp. 1143–1151 (2014)
  15. Romano, S., Vinh, N.X., Bailey, J., Verspoor, K.: Adjusting for chance clustering comparison measures. J. Mach. Learn. Res. 17(1), 4635–4666 (2016)
  16. Schumacher, C., Vose, M.D., Whitley, L.D.: The no free lunch and problem description length. In: Proceedings of the 3rd Annual Conference on Genetic and Evolutionary Computation, GECCO 2001, pp. 565–570. Morgan Kaufmann Publishers Inc., San Francisco (2001)
  17. Vinh, N.X., Epps, J., Bailey, J.: Information theoretic measures for clusterings comparison: is a correction for chance necessary? In: Proceedings of the 26th Annual International Conference on Machine Learning, ICML 2009, pp. 1073–1080. ACM, New York (2009)
  18. Wolpert, D.H.: The lack of a priori distinctions between learning algorithms. Neural Comput. 8(7), 1341–1390 (1996)
  19. Yang, Z., Algesheimer, R., Tessone, C.J.: A comparative analysis of community detection algorithms on artificial networks. Sci. Rep. 6, 30750 (2016)
  20. Zhang, J., Chen, T., Hu, J.: On the relationship between Gaussian stochastic blockmodels and label propagation algorithms. J. Stat. Mech: Theory Exp. 2015(3), P03009 (2015)
  21. Zhang, P.: Evaluating accuracy of community detection using the relative normalized mutual information. J. Stat. Mech: Theory Exp. 2015(11), P11006 (2015)

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Johns Hopkins University, Baltimore, USA
