
Perspectives on Bayesian Natural Language Semantics and Pragmatics

Chapter in the book Bayesian Natural Language Semantics and Pragmatics, part of the book series Language, Cognition, and Mind (LCAM, volume 2).

Abstract

Bayesian interpretation is a technique from signal processing, and its application to natural language semantics and pragmatics (BNLSP from here on, or BNLI when there is no particular emphasis on semantics and pragmatics) is basically an engineering decision. That humans emulate BNLSP is a cognitive science hypothesis. That hypothesis offers a new perspective on the logic of interpretation and on the recognition of other people’s intentions in inter-human communication. The hypothesis also has the potential to change linguistic theory, because the mapping from meaning to form becomes the central one to capture in accounts of phonology, morphology and syntax. Semantics is essentially read off from this mapping, and pragmatics is essentially reduced to probability maximisation within Grice’s intention recognition. Finally, the stochastic models used can be causal, thus incorporating new ideas on the analysis of causality using Bayesian nets. The paper explores and connects these different ways of being committed to BNLSP.
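
The core of Bayesian interpretation can be sketched as a noisy-channel computation: the interpreter chooses the meaning that maximises the prior probability of the meaning times the probability that production maps it to the observed form. The following Python sketch is illustrative only; the meanings, forms and probability values are invented for the example.

```python
# Illustrative sketch of Bayesian interpretation: pick the meaning m
# that maximises P(m) * P(form | m), combining a prior over meanings
# with a production (meaning-to-form) model. All numbers are invented.

prior = {"RAIN": 0.7, "REIGN": 0.3}            # P(m): prior over meanings
production = {                                 # P(form | m): production model
    "RAIN":  {"rain": 0.9, "reign": 0.1},
    "REIGN": {"rain": 0.2, "reign": 0.8},
}

def interpret(form):
    # Score each candidate meaning by prior * likelihood and take the best.
    scores = {m: prior[m] * production[m].get(form, 0.0) for m in prior}
    return max(scores, key=scores.get)

print(interpret("rain"))   # "RAIN": 0.7 * 0.9 = 0.63 beats 0.3 * 0.2 = 0.06
```

This is the sense in which semantics is read off from the meaning-to-form mapping: the interpreter only needs the production model and a prior, not a separate form-to-meaning grammar.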


Notes

  1.

    Symbolic parsing with visual grammars becomes intractable as soon as the grammar becomes interesting (Marriott and Meyer 1997). It follows that visual recognition must be stochastic. To profit fully from the mental camera, it must also be Bayesian.

  2.

    The choice of the right features is not a trivial matter.

  3.

    The principle yields, in BNLSP terms, the preference for global accommodation of Heim (1983) and Van der Sandt (1992): nothing else would do. It also yields what I defend in Zeevat (1992), a preference for multiple accommodation, something that does not make sense on the versions of the satisfaction theory in these two references.

  4.

    Garcia Odon (2012) is an admirable discussion of the empirical facts around projection in Karttunian filters. Her analysis, however, rests on conditional perfection. Causal inferences seem a much better way of deriving the crucial pattern: \(p \rightarrow q\), where p is the presupposition and q the non-entailed cause of p. If it is given that p and that q is the cause of p, then q must be the case as well. The non-projection in (15d) also shows that the causal explanation works outside Karttunian filters.
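
The causal inference behind this note can be checked numerically: when q is the dominant cause of p, conditioning on the observed p makes q highly probable, yielding \(p \rightarrow q\) without appeal to conditional perfection. The joint probabilities below are invented for illustration.

```python
# Invented joint distribution over a cause q and an effect p:
# q strongly raises the probability of p.
joint = {
    (True, True): 0.45,    # q and p
    (True, False): 0.05,   # q without p
    (False, True): 0.05,   # p arising without q
    (False, False): 0.45,  # neither
}

def cond(event, given):
    # P(event | given), both expressed as predicates over (q, p).
    num = sum(v for (q, p), v in joint.items() if event(q, p) and given(q, p))
    den = sum(v for (q, p), v in joint.items() if given(q, p))
    return num / den

# Observing the effect p makes the cause q very probable: p -> q.
print(cond(lambda q, p: q, lambda q, p: p))   # 0.45 / 0.50 = 0.9
```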

  5.

    There are other ways of formulating the inequality, e.g. for all rational \(P\), \(P(x) < P(y)\). Rational probability assignments can be formalised straightforwardly on n-bounded models (“worlds”) for a finite first-order language L. An n-bounded model has a domain with cardinality smaller than n. An n-bounded model can be seen (under isomorphism) as a row in a truth table for propositional logic, and it can interpret a given set of connections by probabilities read directly off the frequencies for those connections in the model. This assigns a probability \(p_w(e)\) to an event e, deriving from the presence of other events in w and their connections to e. Assume a probability distribution P over W, the set of worlds. P can be updated by Bayesian update on the basis of observations e and the distributions \(p_w\) over events. The probability of a formula of L can now be equated with the sum of the probabilities of the worlds on which it is true. A rational probability assignment is an assignment P over W which respects the beliefs of the subject (the beliefs receive probability 1) and the experience of the subject, in the double sense that the experiences themselves are treated as beliefs and receive probability 1, and that the probability assignments to the worlds are updated by Bayes’ rule on the basis of those experiences. In this way, a world acquires more probability to the degree that its own frequencies predict the frequencies in the experience. The information state of the subject can then be defined as the combination of the set of worlds it allows and the rational probability assignments over those worlds. Updates eliminate worlds and reassign probability by experience. It is possible to remove the restriction to n-bounded worlds by using probability density functions and Bayesian updates over those, but that is not as straightforward.
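
The construction in this note can be sketched in a few lines of Python: each world carries its own event frequencies p_w(e), and the distribution over worlds is updated by Bayes’ rule, so that worlds whose frequencies predict the observations gain probability. The two worlds and their numbers are invented for the example.

```python
# Minimal sketch (invented numbers) of the note's construction: worlds
# assign frequency-based probabilities p_w(e) to events, and a prior P
# over worlds is updated by Bayes' rule on observed events.

worlds = {
    "w1": {"e": 0.8},   # in w1 the event e is frequent
    "w2": {"e": 0.2},   # in w2 it is rare
}
P = {"w1": 0.5, "w2": 0.5}   # prior over worlds

def update(P, event):
    # Bayes' rule: P(w | e) is proportional to P(w) * p_w(e).
    post = {w: P[w] * worlds[w][event] for w in P}
    z = sum(post.values())
    return {w: v / z for w, v in post.items()}

# Observing e twice shifts probability towards w1, whose own
# frequencies better predict the experience.
for _ in range(2):
    P = update(P, "e")
print(round(P["w1"], 2))   # 0.8^2 / (0.8^2 + 0.2^2), about 0.94
```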

  6.

    Bayesian nets are popular technology and introductions to Bayesian nets, their theory and their applications abound, as well as software environments for defining these nets and computing with them. Any attempt to give an example in these pages could not compete with what the reader can find immediately on the net. It is however unclear to the author how important Bayesian nets are for modeling the dynamic causal dependencies that are needed for BNLSP, though they definitely are an important source of inspiration.

  7.

    The probability of the effect given the possible cause should outweigh its probability if the possible cause did not obtain.
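
The inequality in this note, P(effect | cause) > P(effect | no cause), can be checked directly on a joint distribution; the numbers below are invented for illustration.

```python
# Invented joint distribution over a possible cause c and an effect e.
joint = {
    (True, True): 0.30,   # c and e
    (True, False): 0.10,  # c without e
    (False, True): 0.12,  # e without c
    (False, False): 0.48, # neither
}

def p_e_given(c_value):
    # P(e | c = c_value), computed from the joint table.
    num = joint[(c_value, True)]
    den = joint[(c_value, True)] + joint[(c_value, False)]
    return num / den

# c qualifies as a possible cause of e: P(e | c) > P(e | not c).
print(p_e_given(True), p_e_given(False))   # 0.75 versus 0.2
```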

  8.

    Zeevat (2010) gives a more detailed discussion.

References

  • Benotti, L., & Blackburn, P. (2011). Causal implicatures and classical planning. Lecture Notes in Artificial Intelligence (LNAI) 6967 (pp. 26–39). Springer.

  • Bos, J. (2003). Implementing the binding and accommodation theory for anaphora resolution and presupposition projection. Computational Linguistics, 29(2), 179–210.

  • Bowers, J., & Davis, C. (2012). Bayesian just-so stories in psychology and neuroscience. Psychological Bulletin, 138(3), 389–414.

  • Brozzo, C. (2013). Motor intentions: Connecting intentions with actions. Ph.D. thesis, Università degli Studi di Milano.

  • Chomsky, N. (1957). Syntactic structures. The Hague: Mouton.

  • Clark, H. (1996). Using language. Cambridge: Cambridge University Press.

  • Dalrymple, M., Kanazawa, M., Kim, Y., Mchombo, S., & Peters, S. (1998). Reciprocal expressions and the concept of reciprocity. Linguistics and Philosophy, 21(2), 159–210.

  • Dowty, D. (1990). Thematic proto-roles and argument selection. Language, 67(3), 547–619.

  • Doya, K., Ishii, S., Pouget, A., & Rao, R. P. N. (Eds.). (2007). Bayesian brain: Probabilistic approaches to neural coding. Cambridge: MIT Press.

  • Garcia Odon, A. (2012). Presupposition projection and entailment relations. Ph.D. thesis, Universitat Pompeu Fabra.

  • Gazdar, G. (1979). Pragmatics: Implicature, presupposition and logical form. New York: Academic Press.

  • Heim, I. (1983). On the projection problem for presuppositions. In M. Barlow, D. Flickinger, & M. Westcoat (Eds.), Second Annual West Coast Conference on Formal Linguistics (pp. 114–126). Stanford University.

  • Hobbs, J., Stickel, M., Appelt, D., & Martin, P. (1990). Interpretation as abduction. Technical Report 499, SRI International, Menlo Park, California.

  • Hogeweg, L. (2009). Word in process: On the interpretation, acquisition and production of words. Ph.D. thesis, Radboud University Nijmegen.

  • Kilner, J. M., Friston, K. J., & Frith, C. D. (2007). Predictive coding: An account of the mirror neuron system. Cognitive Processing, 8(3), 159–166.

  • Kiparsky, P. (1973). “Elsewhere” in phonology. In S. R. Anderson & P. Kiparsky (Eds.), A Festschrift for Morris Halle (pp. 93–107). New York: Holt.

  • Langendoen, D. T., & Savin, H. (1971). The projection problem for presuppositions. In C. Fillmore & D. T. Langendoen (Eds.), Studies in linguistic semantics (pp. 373–388). New York: Holt.

  • Liberman, A., Cooper, F., Shankweiler, D., & Studdert-Kennedy, M. (1967). Perception of the speech code. Psychological Review, 74, 431–461.

  • Marriott, K., & Meyer, B. (1997). On the classification of visual languages by grammar hierarchies. Journal of Visual Languages and Computing, 8(4), 375–402.

  • Mellish, C. (1988). Implementing systemic classification by unification. Computational Linguistics, 14(1), 40–51.

  • Oaksford, M., & Chater, N. (2007). Bayesian rationality. Oxford: Oxford University Press.

  • Pearl, J. (2009). Causality: Models, reasoning and inference (2nd ed.). Cambridge: Cambridge University Press.

  • Pickering, M. J., & Garrod, S. (2007). Do people use language production to make predictions during comprehension? Trends in Cognitive Sciences, 11(3), 105–110.

  • Reiter, E., & Dale, R. (2000). Building natural language generation systems. Cambridge: Cambridge University Press.

  • Rizzolatti, G., & Craighero, L. (2004). The mirror-neuron system. Annual Review of Neuroscience, 27, 169–192.

  • Smolensky, P. (1991). Connectionism, constituency and the language of thought. In B. M. Loewer & G. Rey (Eds.), Meaning in mind: Fodor and his critics (pp. 286–306). Oxford: Blackwell.

  • Soames, S. (1982). How presuppositions are inherited: A solution to the projection problem. Linguistic Inquiry, 13, 483–545.

  • Stalnaker, R. (1979). Assertion. In P. Cole (Ed.), Syntax and semantics (Vol. 9, pp. 315–332). London: Academic Press.

  • Van der Sandt, R. (1992). Presupposition projection as anaphora resolution. Journal of Semantics, 9, 333–377.

  • Zeevat, H. (1992). Presupposition and accommodation in update semantics. Journal of Semantics, 9, 379–412.

  • Zeevat, H. (2010). Production and interpretation of anaphora and ellipsis. International Review of Pragmatics, 2(2), 169–190.

  • Zeevat, H. (2014a). Bayesian presupposition. MS, University of Amsterdam.

  • Zeevat, H. (2014b). Language production and interpretation: Linguistics meets cognition (CRiSPI). Leiden: Brill.


Author information

Correspondence to Henk Zeevat.


Copyright information

© 2015 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Zeevat, H. (2015). Perspectives on Bayesian Natural Language Semantics and Pragmatics. In: Zeevat, H., Schmitz, HC. (eds) Bayesian Natural Language Semantics and Pragmatics. Language, Cognition, and Mind, vol 2. Springer, Cham. https://doi.org/10.1007/978-3-319-17064-0_1
