
First Notes on Maximum Entropy Entailment for Quantified Implications

Conference paper

Formal Concept Analysis (ICFCA 2017)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 10308)

Abstract

Entropy is a measure of the uninformativeness or randomness of a data set: the higher the entropy, the lower the amount of information. In the field of propositional logic, it has proven to be a suitable measure to maximize when dealing with models of probabilistic propositional theories. More specifically, it was shown that the maximum-entropy model of a probabilistic propositional theory allows for the deduction of further formulae that humans would intuitively expect, i.e., it allows for a kind of common-sense reasoning.

In order to carry the technique of maximum entropy entailment over to the field of Formal Concept Analysis, we define the notion of the entropy of a formal context with respect to the frequencies of its object intents, and then define maximum entropy entailment for quantified implication sets, i.e., for sets of partial implications where each implication has an assigned degree of confidence. This entailment technique is then utilized to define so-called maximum entropy implicational bases (ME-bases), and a first general example of such an ME-base is provided.
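The abstract's first notion can be made concrete with a minimal Python sketch: compute the Shannon entropy of the empirical distribution of object intents of a formal context. The function name context_entropy and the dictionary encoding of the context are illustrative assumptions, not taken from the paper, whose formal definition may differ in detail.

```python
from collections import Counter
from math import log2

def context_entropy(context):
    """Shannon entropy (in bits) of the empirical distribution of object intents.

    `context` maps each object to its intent, i.e. the set of attributes it
    possesses.  Illustrative sketch only; the paper's formal definition of the
    entropy of a formal context may differ in detail.
    """
    intents = [frozenset(attrs) for attrs in context.values()]
    counts = Counter(intents)
    n = len(intents)
    # Empirical probability of each distinct intent, summed into Shannon entropy.
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Toy context with four objects; intent {a, b} occurs twice, {a} and {b, c} once.
ctx = {"g1": {"a", "b"}, "g2": {"a", "b"}, "g3": {"a"}, "g4": {"b", "c"}}
print(context_entropy(ctx))  # 1.5 bits
```

The second notion, maximum entropy entailment for a quantified implication set, can likewise be illustrated numerically: among all probability distributions over attribute subsets that satisfy each partial implication \(X \rightarrow Y\) with its assigned confidence as a conditional-probability constraint, take the one with maximal entropy, and read off the confidences it induces for further implications. The sketch below is a generic constrained-optimization illustration using scipy's SLSQP solver; the names max_entropy_model and confidence are hypothetical, and this is not the construction or decision procedure from the paper.

```python
import itertools
import numpy as np
from scipy.optimize import minimize

def max_entropy_model(attributes, quantified_implications):
    """Distribution over attribute subsets ("worlds") with maximal entropy,
    subject to one conditional-probability constraint per quantified
    implication (X, Y, c): P(Y holds | X holds) = c.
    Generic numerical sketch, not the paper's construction.
    """
    worlds = [frozenset(s)
              for r in range(len(attributes) + 1)
              for s in itertools.combinations(sorted(attributes), r)]
    n = len(worlds)

    def neg_entropy(p):
        q = np.clip(p, 1e-12, 1.0)
        return float(np.sum(q * np.log2(q)))  # minimizing this maximizes entropy

    constraints = [{"type": "eq", "fun": lambda p: np.sum(p) - 1.0}]
    for X, Y, c in quantified_implications:
        prem = np.array([1.0 if X <= w else 0.0 for w in worlds])
        both = np.array([1.0 if (X | Y) <= w else 0.0 for w in worlds])
        # P(X ∪ Y ⊆ w) = c · P(X ⊆ w)  encodes  conf(X → Y) = c
        constraints.append({"type": "eq",
                            "fun": lambda p, a=both, b=prem, c=c: a @ p - c * (b @ p)})

    result = minimize(neg_entropy, np.full(n, 1.0 / n),
                      bounds=[(0.0, 1.0)] * n,
                      constraints=constraints, method="SLSQP")
    return dict(zip(worlds, result.x))

def confidence(model, X, Y):
    """Confidence of X → Y induced by the (maximum-entropy) model."""
    p_x = sum(p for w, p in model.items() if X <= w)
    p_xy = sum(p for w, p in model.items() if (X | Y) <= w)
    return p_xy / p_x if p_x > 0 else None

# Example: from conf(a → b) = 0.8 and conf(b → c) = 0.5, read off conf(a → c).
model = max_entropy_model({"a", "b", "c"},
                          [(frozenset("a"), frozenset("b"), 0.8),
                           (frozenset("b"), frozenset("c"), 0.5)])
print(confidence(model, frozenset("a"), frozenset("c")))
```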


Notes

  1.

    In the field of machine learning, an implication is also called an association rule, a premise is called an antecedent, and a conclusion is called a consequent.

  2.

    Of course, this may be easily solved by regarding i as a partial function.

  3.

    We use the logarithm with base 2 here, since we are dealing with data sets, or information, encoded as bits. However, using another base would not cause any problems, since it would only rescale the entropy by a multiplicative constant; see the short derivation after these notes.

  4.

    We denote by the quantified implication set which assigns no probability to \(X\rightarrow Y\), but otherwise coincides with \(\mathcal {L}\).
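Regarding the base-change remark in Note 3, the rescaling claim follows from the standard change-of-base identity for logarithms (assuming the usual Shannon entropy formula, which the entropy of a formal context presumably instantiates):

```latex
% Changing the logarithm base only rescales the entropy by a constant factor,
% since \log_b(x) = \log_2(x) / \log_2(b):
H_b(p) = -\sum_{i} p_i \log_b p_i
       = -\frac{1}{\log_2 b} \sum_{i} p_i \log_2 p_i
       = \frac{H_2(p)}{\log_2 b}
```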


Acknowledgements

The author gratefully thanks the anonymous reviewers for their constructive hints and helpful remarks.

Author information

Corresponding author

Correspondence to Francesco Kriegel.


Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Kriegel, F. (2017). First Notes on Maximum Entropy Entailment for Quantified Implications. In: Bertet, K., Borchmann, D., Cellier, P., Ferré, S. (eds) Formal Concept Analysis. ICFCA 2017. Lecture Notes in Computer Science, vol 10308. Springer, Cham. https://doi.org/10.1007/978-3-319-59271-8_10

  • DOI: https://doi.org/10.1007/978-3-319-59271-8_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-59270-1

  • Online ISBN: 978-3-319-59271-8

  • eBook Packages: Computer Science, Computer Science (R0)
