Abstract
Bayesian interpretation is a technique from signal processing; its application to natural language semantics and pragmatics (BNLSP from here on, or BNLI if there is no particular emphasis on semantics and pragmatics) is at heart an engineering decision. It is a cognitive science hypothesis that humans emulate BNLSP. That hypothesis offers a new perspective on the logic of interpretation and on the recognition of other people's intentions in inter-human communication. It also has the potential to change linguistic theory, because the mapping from meaning to form becomes the central one to capture in accounts of phonology, morphology and syntax: semantics is essentially read off from this mapping, and pragmatics is essentially reduced to probability maximisation within Grice's intention recognition. Finally, the stochastic models used can be causal, thereby incorporating new ideas on the analysis of causality using Bayesian nets. The paper explores and connects these different ways of being committed to BNLSP.
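As a one-line point of reference (a standard gloss of Bayesian interpretation, not a formula quoted from the chapter): the hearer recovers the meaning \(m\) that best explains the utterance \(u\) by inverting the production model,

\[
\mathrm{interpretation}(u) \;=\; \arg\max_{m}\; P(m)\, P(u \mid m),
\]

where \(P(u \mid m)\) is the probability that a speaker who means \(m\) produces \(u\), and \(P(m)\) is the prior probability of the meaning.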
Notes
1. Symbolic parsing with visual grammars becomes intractable as soon as the grammar becomes interesting (Marriott and Meyer 1997). It follows that visual recognition must be stochastic. To profit fully from the mental camera, it must also be Bayesian.
2. The choice of the right features is not a trivial matter.
3. The principle reconstructs in BNLSP terms the preference for global accommodation of Heim (1983) and Van der Sandt (1992): nothing else would do. It gives, however, what I defend in Zeevat (1992), a preference for multiple accommodation, something which makes no sense on the versions of the satisfaction theory in those two references.
4. Garcia Odon (2012) is an admirable discussion of the empirical facts around projection in Karttunian filters. Her analysis, however, rests on conditional perfection. Causal inference seems a much better way of deriving the crucial pattern \(p \rightarrow q\), where \(p\) is the presupposition and \(q\) the non-entailed cause of \(p\): if it is given that \(p\) holds and that \(q\) is the cause of \(p\), then \(q\) must be the case as well. The non-projection in (15d) shows that the causal explanation also works outside Karttunian filters.
5. There are other ways of formulating the inequality, e.g. for all rational \(P\), \(P(x) < P(y)\). Rational probability assignments can be formalised straightforwardly on n-bounded models ("worlds") for a finite first-order language L. An n-bounded model has a domain of cardinality smaller than n. Up to isomorphism, an n-bounded model can be seen as a row in a truth table for propositional logic, and it can interpret a given set of connections by probabilities read off directly from the frequencies for those connections in the model. This assigns a probability \(p_w(e)\) to an event e, deriving from the presence of other events in w and their connections to e. Assume a probability distribution P over W, the set of worlds. P can be updated by Bayesian update on the basis of observations e and the distributions \(p_w\) over events. The probability of a formula of L can now be equated with the sum of the probabilities of the worlds on which it is true. A rational probability assignment is an assignment P over W which respects the beliefs of the subject (the beliefs receive probability 1) and the experience of the subject, in the double sense that the experiences themselves are treated as beliefs and receive probability 1, and that the probability assignments to the worlds are updated by Bayes' rule on the basis of those experiences. In this way, a world acquires more probability to the degree that its own frequencies predict the frequencies in the experience. The information state of the subject can then be defined as the combination of the set of worlds it allows and the rational probability assignments over those worlds. Updates eliminate worlds and reassign probability by experience. The restriction to n-bounded worlds can be removed by using probability density functions and Bayesian updates over those, but that is not as straightforward. (A minimal computational sketch of this construction follows these notes.)
6. Bayesian nets are a popular technology; introductions to them, their theory and their applications abound, as do software environments for defining such nets and computing with them. Any attempt to give an example in these pages could not compete with what the reader can find immediately on the net. It is, however, unclear to the author how important Bayesian nets are for modeling the dynamic causal dependencies that are needed for BNLSP, though they definitely are an important source of inspiration.
7. The probability of the effect given the possible cause should outweigh its probability given that the possible cause does not obtain: \(P(e \mid c) > P(e \mid \neg c)\).
8. Zeevat (2010) gives a more detailed discussion.
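As announced in note 5, the following Python sketch implements the construction in propositional miniature: worlds are rows of a truth table, update is Bayes' rule over worlds, and the probability of a formula is the summed probability of the worlds where it is true. The event names and the frequencies \(p_w(e)\) are stipulated for illustration, standing in for the frequencies an n-bounded first-order model would supply.

```python
from itertools import product

EVENTS = ["rain", "wet"]  # hypothetical observable events

# Each world pairs a truth assignment with the event frequencies p_w(e)
# it induces; here the frequencies are simply stipulated.
worlds = []
for bits in product([True, False], repeat=len(EVENTS)):
    assignment = dict(zip(EVENTS, bits))
    # p_w(e): a world where e holds makes observing e likely.
    freqs = {e: 0.9 if assignment[e] else 0.1 for e in EVENTS}
    worlds.append((assignment, freqs))

# Prior P over the worlds the subject's beliefs allow: uniform here.
P = {i: 1.0 / len(worlds) for i in range(len(worlds))}

def update(P, observation):
    """Bayes' rule over worlds: P(w | e) is proportional to P(w) * p_w(e)."""
    posterior = {i: P[i] * worlds[i][1][observation] for i in P}
    z = sum(posterior.values())
    return {i: p / z for i, p in posterior.items()}

def prob(P, formula):
    """Probability of a formula: summed weight of the worlds where it is true."""
    return sum(P[i] for i in P if formula(worlds[i][0]))

# A world acquires more probability to the degree that its own
# frequencies predict the frequencies in the experience.
P = update(P, "rain")
P = update(P, "rain")
print(prob(P, lambda w: w["rain"]))               # ~0.99
print(prob(P, lambda w: w["rain"] and w["wet"]))  # ~0.49
```

The eliminative side of updating in the note (beliefs and experiences receiving probability 1) would correspond to setting the probability of the violating worlds to 0 before renormalising.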
References
Benotti, L., & Blackburn, P. (2011). Causal implicatures and classical planning. Lecture Notes in Artificial Intelligence (LNAI) 6967 (pp. 26–39). Springer.
Bos, J. (2003). Implementing the binding and accommodation theory for anaphora resolution and presupposition projection. Computational Linguistics, 29(2), 179–210.
Bowers, J., & Davis, C. (2012). Bayesian just-so stories in psychology and neuroscience. Psychological Bulletin, 138(3), 389–414.
Brozzo, C. (2013). Motor intentions: Connecting intentions with actions. Ph.D. thesis, Università degli Studi di Milano.
Chomsky, N. (1957). Syntactic structures. The Hague: Mouton.
Clark, H. (1996). Using language. Cambridge: Cambridge University Press.
Dalrymple, M., Kanazawa, M., Kim, Y., Mchombo, S., & Peters, S. (1998). Reciprocal expressions and the concept of reciprocity. Linguistics and Philosophy, 21(2), 159–210.
Dowty, D. (1991). Thematic proto-roles and argument selection. Language, 67(3), 547–619.
Doya, K., Ishii, S., Pouget, A., & Rao, R. P. N. (Eds.). (2007). Bayesian Brain: Probabilistic approaches to neural coding. Cambridge: MIT Press.
Garcia Odon, A. (2012). Presupposition projection and entailment relations. Ph.D. thesis, Universitat Pompeu Fabra.
Gazdar, G. (1979). Pragmatics: Implicature, presupposition and logical form. New York: Academic Press.
Heim, I. (1983). On the projection problem for presuppositions. In M. Barlow, D. Flickinger, & M. Wescoat (Eds.), Second Annual West Coast Conference on Formal Linguistics (pp. 114–126). Stanford University.
Hobbs, J., Stickel, M., Appelt, D., & Martin, P. (1990). Interpretation as abduction. Technical Note 499, SRI International, Menlo Park, California.
Hogeweg, L. (2009). Word in process: On the interpretation, acquisition and production of words. Ph.D. thesis, Radboud University Nijmegen.
Kilner, J. M., Friston, K. J., & Frith, C. D. (2007). Predictive coding: An account of the mirror neuron system. Cognitive Processing, 8(3), 159–166.
Kiparsky, P. (1973). “Elsewhere” in phonology. In S. R. Anderson & P. Kiparsky (Eds.), A Festschrift for Morris Halle (pp. 93–107). New York: Holt.
Langendoen, D. T., & Savin, H. (1971). The projection problem for presuppositions. In C. Fillmore & D. T. Langendoen (Eds.), Studies in Linguistic Semantics (pp. 373–388). New York: Holt.
Liberman, A., Cooper, F., Shankweiler, D., & Studdert-Kennedy, M. (1967). Perception of the speech code. Psychological Review, 74, 431–461.
Marriott, K., & Meyer, B. (1997). On the classification of visual languages by grammar hierarchies. Journal of Visual Languages and Computing, 8(4), 375–402.
Mellish, C. (1988). Implementing systemic classification by unification. Computational Linguistics, 14(1), 40–51.
Oaksford, M., & Chater, N. (2007). Bayesian rationality. Oxford: Oxford University Press.
Pearl, J. (2009). Causality: Models, reasoning and inference (2nd ed.). Cambridge: Cambridge University Press.
Pickering, M. J., & Garrod, S. (2007). Do people use language production to make predictions during comprehension? Trends in Cognitive Sciences, 11(3), 105–110.
Reiter, E., & Dale, R. (2000). Building natural-language generation systems. Cambridge: Cambridge University Press.
Rizzolatti, G., & Craighero, L. (2004). The mirror-neuron system. Annual Review of Neuroscience, 27, 169–192.
Smolensky, P. (1991). Connectionism, constituency and the language of thought. In B. M. Loewer & G. Rey (Eds.), Meaning in mind: Fodor and his critics (pp. 286–306). Oxford: Blackwell.
Soames, S. (1982). How presuppositions are inherited: A solution to the projection problem. Linguistic Inquiry, 13, 483–545.
Stalnaker, R. (1978). Assertion. In P. Cole (Ed.), Syntax and semantics (Vol. 9, pp. 315–332). New York: Academic Press.
Van der Sandt, R. (1992). Presupposition projection as anaphora resolution. Journal of Semantics, 9, 333–377.
Zeevat, H. (1992). Presupposition and accommodation in update semantics. Journal of Semantics, 9, 379–412.
Zeevat, H. (2010). Production and interpretation of anaphora and ellipsis. International Review of Pragmatics, 2(2), 169–190.
Zeevat, H. (2014a). Bayesian presupposition. Manuscript, University of Amsterdam.
Zeevat, H. (2014b). Language production and interpretation: Linguistics meets cognition (CRiSPI). Leiden: Brill.