Logic and linguistics have engaged in a many-faceted dialogue since the very beginnings of both disciplines in Antiquity. While participants have held diverse views over the ages, the dialogue has arguably always revolved around the relationship between human thought and natural language. While there are those who see these two domains as one and the same, or as a case of one-directional influence (language as an expression of thought, or thought as an expression of language), we beg to differ. To us, the long historical tradition of authors such as Arnauld, Boole, Turing, or Jespersen demonstrates that much richer perspectives on language and reasoning are needed, including connections with intelligence and computability.

Another major historical theme in this dialogue concerns similarities and convergences between natural and artificial languages. It is clear that natural languages are not only semantically broader, but also much closer to human communication than artificial languages, and so linguistic grammars have long dealt with a larger set of problems than logical formalisms. To mention just one key instance, the notion of time has always been present in grammars (whatever their normative or descriptive stance or theoretical affiliation), whereas logical frameworks either lack explicit reference to time or treat only selected features, falling far short of the full range of temporal and aspectual distinctions that natural languages can express. Even so, the goal of translating utterances into logical forms has been a never-ending source of attraction for linguists aiming for methodological rigor and metalinguistic precision. Naturally, simple conversion of utterances to some logical form does not exhaust all their inferential potential, or their ontological content, but the logical form is at least a precise model for these important features.

Against this background, recent work at the logic-language interface has employed state-of-the-art logical and linguistic structures and theories, as well as cognitive and computational models. This broad approach is promoting a revolution in the way researchers conceive of interpreting and generating natural language at all levels of analysis (phonological, morphological, syntactic, and textual). The Handbook of Logic and Language, edited by Johan van Benthem and Alice ter Meulen, draws attention to the central role of the notion of information in this new intellectual paradigm, binding together linguistics and studies of communication modeled by computation. This combination of themes extends naturally into Computational Linguistics, Artificial Intelligence, and Cognitive Science.

While theoretical perspectives and approaches sometimes differ in these areas, consensus may arise from the undeniable fact that natural and artificial languages, structured by grammar and logic, are important tools of human thinking, cognition of the world, and knowledge acquisition, phenomena that provide the foundations of our very existence.

This special issue of JoLLI consists of six selected papers presented at the “Logic and Linguistics” Workshop held during the 4th World Congress on Universal Logic (UNI-LOG 2013) in Rio de Janeiro, plus two invited papers, one by Johan van Benthem and one by Marcus Kracht and Udo Klein. It contains a number of samples of what is, in our view, intriguing and promising work at the interface of logic and language. We briefly describe the papers below; together they show a lively mix of empirical studies and theoretical reflection. The presentation follows alphabetical order of authors.

In “Aspecto-Temporal Meanings Analyzed by Means of Combinatory Logics”, Jean-Pierre Desclés, Anca Pascu and Hee-Jin Ro present a concrete computational model of the meaning of tenses and aspects in natural languages. The model is built within the framework of Desclés’ system of Applicative and Cognitive Grammar and Enunciative Operations (GRACE), which combines sophisticated logical techniques: applicative formalisms, functional types in Categorial Grammars, and Curry’s Combinatory Logic. Moreover, while oriented toward natural language, the treatment proposed in this paper translates readily into a functional programming language, enabling automated discourse analysis.
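To give a flavor of this applicative style, here is a minimal sketch in Haskell, under our own assumptions rather than the paper's actual system: Curry's basic combinators are encoded directly as functions, and a purely illustrative, hypothetical aspect operator is composed with an event predicate via the B combinator.

    -- A minimal sketch, not the authors' GRACE system; the aspect
    -- operator "prog" and the type names below are our illustrative
    -- assumptions.

    -- Curry's basic combinators as Haskell functions:
    i :: a -> a
    i x = x

    k :: a -> b -> a
    k x _ = x

    s :: (a -> b -> c) -> (a -> b) -> a -> c
    s f g x = f x (g x)

    b :: (q -> r) -> (p -> q) -> p -> r
    b f g x = f (g x)          -- function composition

    -- Time as a linear order, event predicates over instants:
    type Interval  = (Double, Double)
    type EventPred = Double -> Bool

    -- A toy progressive-like operator: the predicate holds at sampled
    -- instants strictly inside the utterance interval.
    prog :: Interval -> EventPred -> Bool
    prog (t1, t2) p = any p [t1 + eps, t1 + 2 * eps .. t2 - eps]
      where eps = (t2 - t1) / 10

    -- B composes the aspect operator with a parametrized predicate:
    main :: IO ()
    main = print (b (prog (0, 1)) (\t u -> u > t) 0.5)   -- prints True

The point of the sketch is merely that, once tense and aspect are treated as operators in an applicative system, their combination is ordinary function composition, which is what makes a functional-language implementation straightforward.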

In “Complement Polyvalence and Permutation in English”, Brendan Gillon investigates a longstanding problem of generative linguistics: “dative shift”, as exemplified by verbs like give that admit two equivalent complement patterns: give Fido to Alice and give Alice Fido. However, the author starts with a less studied issue: the fact that a single verb may admit complements of different categories. For example, to be takes noun phrase, adjective phrase, and prepositional phrase complements. The author proposes a treatment of the latter problem that extends to the former. Moreover, his solution generalizes to English verbs that are doubly complemented, remain synonymous under permutation of their complements, and yet are not instances of dative shift.

Direct quotation is an intriguing feature of natural languages, with some features that seem bizarre at first sight. In “Quotation via Dialogical Interaction”, Jonathan Ginzburg and Robin Cooper show how direct quotation can diagonalize (in the Cantorian sense) out of any grammar, since quoting even ungrammatical expressions is grammatical. The authors show how this mystery dissolves in a dialogical perspective on language, such as the framework KoS with its rich type theory TTR. Quotations then denote entities that are independently motivated for dialogue processing, such as utterance types or Austinian propositions about speech events. On the resulting view of language, there is no overarching notion of grammar, but rather an array of linguistic resources, depending on purpose and setting.

Continuing with broader views of language, in “The Grammar of Code Switching”, Marcus Kracht and Udo Klein abandon the view of linguistic competence as mirrored by a single grammar, replacing it with a bundle of grammars of varying complexity, with the possibility (or even necessity) of producing utterances with any number of them in parallel. The authors address some major issues that arise in this setting, including how to identify a code, how to fuse two codes, and how to embed one code into another. Finally, they propose an interesting new semantics for variables, viewed as schemata for potential constant expressions that can be interpreted differently in different uses of the language.

Knowledge processing is the main area of research addressed in “Functional and Structural Integration without Competence Overstepping” by Marek Krótkiewicz and Krystian Wojtkiewicz. The paper demonstrates that a proper design of knowledge bases can benefit both the linguistic and the logical aspects of knowledge processing. The main concepts put forward in this work apply not only to the particular systems developed and discussed by the authors; their approach also provides a guide for combining sometimes antagonistic approaches into a coherent system for the representation of knowledge and information.

In “Deverbal Semantics and the Montagovian Generative Lexicon”, Livy Real and Christian Retoré propose a lexical account of action nominals whose meanings are related to the event described by their base verbs. Contrary to received wisdom, they claim that the information in the verb does not completely determine the semantics of action nominals. This claim is supported by evidence from several languages, indicating that some verbal aspects of nominals result automatically neither from the internal structure of the verb nor from its interaction with morphological suffixes. A fully computable lexicalist approach to such nominalizations is advanced as an extension of Montague semantics with a richer type system and a new organization of the lexicon that incorporates the semantics of action nominals and deverbals, including their polysemy and (in-)felicitous co-predications.

In a more purely methodological extension of the agenda for the logic-language interface, Johan van Benthem’s paper “Natural Language and Logic of Agency” explores the semantics and pragmatics of natural language from the standpoint of agency. His main vehicles for this study are dynamic-epistemic logics of information-driven agency, putting together the agendas of two related yet distinct fields. In particular, this line of analysis of communication and other key linguistic tasks suggests that language users might be modeled explicitly, including their abilities not just for perfect formulation, inference and information exchange, but also for detecting and correcting mistakes, for pursuing a broad range of intentions and goals, and for engaging in longer-term strategic social and game-theoretic interactions.
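As one concrete sample of that machinery (our illustration, not a formula taken from the paper): in public announcement logic, a basic dynamic-epistemic system, what an agent knows after an announcement reduces to conditional knowledge before it, via the standard reduction axiom

\[ [!\varphi]\,K_a\psi \;\leftrightarrow\; \bigl(\varphi \rightarrow K_a(\varphi \rightarrow [!\varphi]\,\psi)\bigr), \]

so that the informational effect of uttering \(\varphi\) can be computed compositionally from the static language.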

Inquisitive Semantics is a new framework for studying natural language in terms of the direction of inquiry that gives sense to sentences and texts. This approach has wide-ranging consequences, for instance, for understanding linguistic meaning and for new, linguistically relevant notions of entailment. In “Support and Sets of Situations”, Andrzej Wiśniewski proposes an alternative conceptual setting for the basic propositional system of Inquisitive Semantics. Inquisitive entailment is retained in his analysis, but the underlying concept of a model is more general. One motivation is that, in contrast with the canonical account of language as describing “the world”, a language can in fact have many models. Accordingly, the new formalism allows for distinct models that are indistinguishable in their valuations of propositional variables, and a “situational” interpretation is sketched for the resulting logic.
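For orientation, here are the standard support clauses of the basic propositional system (the canonical background that Wiśniewski's setting generalizes), where an information state \(s\) is a set of possible worlds:

\[
\begin{aligned}
s \models p &\;\text{ iff }\; w(p) = 1 \text{ for every } w \in s,\\
s \models \bot &\;\text{ iff }\; s = \emptyset,\\
s \models \varphi \wedge \psi &\;\text{ iff }\; s \models \varphi \text{ and } s \models \psi,\\
s \models \varphi \vee \psi &\;\text{ iff }\; s \models \varphi \text{ or } s \models \psi,\\
s \models \varphi \rightarrow \psi &\;\text{ iff }\; t \models \psi \text{ for every } t \subseteq s \text{ with } t \models \varphi,
\end{aligned}
\]

with \(\varphi\) entailing \(\psi\) iff every state supporting \(\varphi\) also supports \(\psi\). Wiśniewski's proposal keeps this entailment relation while varying the class of models over which support is defined.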