7.1 Chapter Overview

Just as we compared the original DTM with similar theories that could be seen as its natural competition, it is important to evaluate the nDTM against the backdrop of some of the contemporary approaches to semantics that contain analogous ideas. I believe that this comparison should enable us to concisely outline some of the most important points of the nDTM and eliminate some of the misunderstandings that could arise among readers accustomed to similar modern approaches. Additionally, since the theory I propose can be understood as natural competition for conceptual and inferential role semantics, I will also use this opportunity to point out some of the advantages of the nDTM.

There are two important groups of theories that can be compared to the nDTM: conceptual role semantics, represented by the theories of Ned Block, Gilbert Harman and Paul Boghossian, and inferential role semantics, represented by Robert Brandom and Jaroslav Peregrin. For the reasons I explain in the section below, the nDTM is much more similar to the latter approaches, hence I devote more space to them. As for the former group of theories, I believe that it is especially important to compare the nDTM to Ned Block’s proposal because the theory he presents differs from the nDTM with respect to one key assumption: it is a theory that is both environmentally and socially narrow. It might thus be interesting to see the advantages and the disadvantages of this approach in relation to the nDTM.

7.2 Ned Block’s Conceptual Role Semantics

Ned Block’s conceptual role semantics, henceforth CRS for short, was presented in a fairly voluminous paper entitled Advertisement for a Semantics for Psychology (Block 1987). You could say that in a way it would be hard to choose a more apt and honest title, as the main aim of the paper is to present the advantages of CRS and encourage philosophers to develop the idea further. The theory has also been presented, in more general terms, in some of Block’s later publications (Block 1998), but it is safe to say that the first paper contains the most developed and detailed version of the idea. It seems that the encouragement did not work as planned, since even Block himself did not develop the proposal further. It is hard to say exactly why the theory did not gain traction, as it was definitely a very intriguing idea. The most probable answer is that some of the details it lacked proved very hard to supply.

Block is very clear on what exactly his theory was supposed to achieve. He starts his considerations with a list of eight desiderata for a semantic theory and ends the paper by showing how the account he champions fulfils these desiderata. This strategy makes our comparison very easy because, as you saw in Chap. 2, I chose the same approach. However, before we proceed with our checklist, let us describe Block’s framework briefly by listing some of the similarities and differences between it and the nDTM.

The most important reason the two theories can be fruitfully compared is that Block’s understanding of the “conceptual role”, the central element of his theory, is very similar to how sentences enclosed in directives can be said to function in language according to the nDTM. Here’s the description Block proposes:

The internal factor, conceptual role, is a matter of the causal role of the expression in reasoning and deliberation and, in general, in the way the expression combines and interacts with other expressions so as to mediate between sensory inputs and behavioural outputs. A crucial component of a sentence’s conceptual role is the matter of how it participates in inductive and deductive inferences. A word’s conceptual role is a matter of its contribution to the role of sentences. (Block 1987, p. 628)

Note that if we focused only on the sentences that the directives expect users to accept, they could be characterized in a very similar way: they function as a connection between sensory inputs and language (empirical directives), between parts of inferences (inferential directives), and as a link between language and its behavioural output (promotive directives). We could just as well say that the meaning of expressions is the role they play in the network of these sentences represented by a language matrix. It is also important to remind the reader that meaning in the nDTM should be understood as environmentally narrow meaning; this is also the case in the account of Block, who points out that his “conceptual roles stop at the skin” (Block 1987, p. 623). It is worth adding that although Block talks about “sensory input” in the fragment quoted above, the way he understands the extralinguistic component in the framework is actually very similar to the functional descriptions I proposed in Chap. 5. As we remember, in contrast to how they functioned in the original theory, in the nDTM the extralinguistic parts do not have any special status that makes them more important than any other part of the language matrix. Their role is the same as the role of just about every other item in the matrix: they are there to determine the structure together with all the other parts. Block shares the same sentiment when he writes:

(…) procedural semanticists sometimes sound as if they want to take phenomenal terms as primitives whose meaning is given by their “sensory content”, while taking other terms as getting their meanings via their computational relations to one another and to the phenomenal terms as well [perhaps they see the phenomenal terms as “grounding” the functional structures]. It should be clear that this is a “mixed” conceptual role/phenomenalist theory and not a pure conceptual role theory. (Block 1987, p. 629)

One last similarity between the two theories is that, just as in the case of the nDTM, Block differentiates between two layers of language: shallow and deep. Yet, in what should be considered the first important difference between the two frameworks, Block understands both of these levels differently from how they are seen in the nDTM. As we saw in Chap. 1, both levels that the original DTM talks about are levels of language: one of them is the specification of meaning, i.e. the mapping of concrete vocabulary to meanings, and the other is the meanings themselves, i.e. the places in the semantic structure of the language. For Block this opposition works quite differently: it is the opposition between an ethnic language and the language of thought. This is in stark contrast to the DTM, since neither the original formulation nor the modified version I proposed in Chaps. 5 and 6 can be understood as a theory of human or any other thought. Although the DTM sometimes does give you the ability to predict someone’s behaviour on the basis of her expressions, this is not because it gives us some kind of insight into people’s heads, but only into the structure of the language used in a given community and into patterns of acceptance of sentences. Similarly, the fact that we are able to see the structure of the language does not mean that we are able to see the structure of the thoughts of the user of this language, as these can be much more complex processes that differ from one user to another. As the reader can probably infer by now, the price Block pays for this choice is that his framework is best understood as a theory of private language. As he at one point reveals, some of the confusions related to CRS could be eliminated once “one focuses on the use of language, not in communication, but in thinking out loud or in internal soliloquies” (Block 1987, p. 634). It could be said, therefore, that CRS leans towards semantic solipsism. To appeal to the dichotomy we mentioned earlier, Block’s account presents meanings as narrow in both the environmental and the social sense, whereas in the case of the DTM, only the former is true.

Block’s proposal is also probably more holistic than molecularist, which, as we have seen, constitutes an important difference. On the one hand, Block admits that “there are many differences in reasoning that we do not want to count as relevant to meaning” (Block 1987, p. 628); on the other hand, he does not give us a criterion for choosing these relevant reasonings or, more generally, conceptual roles. As we have seen, the fact that the nDTM is a molecularist theory and that it is able to clarify which sentences count as meaning constitutive is one of its advantages, as it allows us to escape the Fodor-Lepore dilemma. Interestingly, Block anticipated this problem and was aware that CRS is not able to avoid it without some theoretical provisions.

This leads us to the biggest problem of Block’s framework: the lack of identity criteria for conceptual roles. Block recognizes that this is a crucial aspect of the further development of the theory but admits that at present he does not have any solution to it. This is, of course, understandable in a paper that, as the title admits, “advertises” a theory. Still, the lack of any idea as to how we could find such criteria remains a huge detriment. One of the suggestions that Block gives is that looking for identity criteria might be a fool’s errand and it might be better to settle for a gradient of similarities. It is nonetheless not obvious how such a gradient could be found and measured. Note that, in contrast to this, the nDTM gives us both identity criteria and a notion of similarity. It delivers precise criteria for the identity of meanings within a given language, as well as for identity across languages. Additionally, thanks to the elimination of the closed languages requirement, we were able to define similarity of meaning.

Two additional differences between the two theories originate from the close connection between Block’s framework and cognitive science. The first is the fact that the conceptual roles he speaks of should be understood in causal terms. As we pointed out in Chap. 1, although the DTM uses a behavioural notion of disposition, it is not treated as a causal relation, but rather as a correlation that the community expects to hold. For this reason, occasional failure in conforming to a directive is not especially surprising, and the way the community judges the violator depends on her reaction to correction in a semantic trial. In contrast to this, Block presents a theory that is much better suited to an algorithmic view according to which language rules determine the reactions of the user. Additionally, although Ajdukiewicz initially alluded to mental causation via the psychological notion of “motivation”, especially in the 1931/1978 paper, the nDTM consciously avoids this connection because its details do not depend on the type of user or her specific causal powers.

The second difference comes from the fact that Block defines conceptual roles as operations on representations – he admits that the theory will not work if representationalism is false. It seems to me that the nDTM is completely neutral in this respect because all the matrix items are described functionally, so the theory fully retains its non-referential nature. Specifically, the linguistic parts of the matrix do not have to be understood as symbols in Fodor’s sense, the sense Block relies on; all that is required of them is that they can be used for syntactic operations. Additionally, the non-linguistic parts are not said to refer to or represent anything in any sense apart from a deflationary understanding in which variable types represent data types. Having discussed how the two theories relate to each other, let us now see if the nDTM fulfils the desiderata Block lists for his own account.

The first desideratum proposed in the “Advertisement…” is quite surprising when you consider that CRS is non-referential, because Block expects the theory of meaning to explain the relation between meaning on the one hand and reference and truth on the other. The way he understands this is that since CRS is just one part of a two-factor theory, it is not supposed to fix the reference or truth of sentences for us, but only to show how the two factors – semantic and referential – relate to each other. He believes his theory achieves this because it shows that reference does not fix the meaning. But what about the relation in the other direction? In a slightly confusing passage, Block admits that even though his account does not fix the reference, it does fix “the nature of the referential factor” (Block 1987, p. 643). There are two ways of understanding this remark. First of all, we could say that what Block has in mind is that meaning co-determines reference just as Kaplan’s character co-determines the denotation of terms. Note that nothing precludes using the nDTM like this, but the ball is in the court of the referential theory. In other words, a given theory of reference that we might add to the nDTM could make use of narrow nDTM meanings as parts of the referential mechanism, but the specifics of this application depend on the details of the reference theory in question. The other possible interpretation of Block’s remark is that CRS explains the meaning of reference talk: it shows us how the notion of “reference” functions in a given linguistic community, it gives us the meaning of the notion of “reference”. The nDTM performs this function quite well, as it gives us an insight into the meaning of any expression, “reference” and “truth” included. The theory shows us how reference and truth are understood in a given community: for example, as we saw in Chap. 3, it can show us that the users hold all sentences figuring in the directives to be true, but it cannot tell us what their words “really”Footnote 1 refer to or whether their sentences “really” are true. What makes the first interpretation plausible is how Block shows the role CRS could play in Twin-Earth-type scenarios. What makes the second interpretation plausible is that he points out that the theory could reveal the disquotational nature of the predicate “is true”.

The second desideratum from Block’s list is the ability to explain what makes meaningful expressions meaningful. He gives the explanation that all meaningful expressions simply have some conceptual role. The explanation delivered by the DTM is very similar: expressions have meaning when they are mapped to the places in the meaning structure, which boils down to the fact that they have directives associated with them.

The third desideratum expects the theory to explain the relativity of meaning to a representational system. Apart from the notion of a “representational system”, which is not needed in the nDTM, the theory fulfils this requirement quite naturally: it ties all meaning to a given semantic structure, and there is no “place in structure” without a structure.

Desideratum 4 is the expectation that the theory of meaning explain the compositionality of language. As we have seen, the original DTM had some problems with this requirement, but the nDTM solves it by introducing the SD set and defining the meaning of all compound expressions as their place in its structure. It is worth adding that Block’s solution to this problem is far from ideal. He points out that the problem does not really affect CRS, as the theory starts with sentences and reconstructs the meaning of their constituents from their conceptual roles. If this were a satisfactory solution to the problem of compositionality, we might just as well say that the original DTM contained one too. This line of reasoning still fails to address the problem that worried Davidson: we still have not learned how it is possible for a user to understand the conceptual role of any sentence, especially sentences she has not encountered before. The nDTM answers this question by stating that the users know the meaning of compound expressions that are not present in the D set because they are capable of recognizing them as substitutions of forms that are in D and of recognizing the meaning of their parts.
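Here is a minimal toy sketch of this recognition mechanism. It is my own illustration, not part of the nDTM formalism: the string-based form notation, the toy lexicon and the helper names (D_FORMS, LEXICON, substitution_instances, recognized) are all assumptions introduced purely for the example.

```python
from itertools import product

# Toy "D set" of sentence forms; placeholders are single uppercase tokens
# annotated with the syntactic category their fillers must have.
D_FORMS = {
    "X is a figure": {"X": "noun"},
    "if P then Q": {"P": "sentence", "Q": "sentence"},
}

# Toy lexicon: expression -> syntactic category (hyphens keep the fillers as
# single tokens, purely for simplicity).
LEXICON = {
    "triangle": "noun",
    "square": "noun",
    "it-rains": "sentence",
    "the-street-is-wet": "sentence",
}

def substitution_instances(form, slots):
    """All sentences obtainable by filling the form's placeholders with
    lexicon items of the required category."""
    tokens = form.split()
    fillers = {
        placeholder: [w for w, cat in LEXICON.items() if cat == required]
        for placeholder, required in slots.items()
    }
    results = set()
    for combo in product(*(fillers[p] for p in slots)):
        mapping = dict(zip(slots, combo))
        results.add(" ".join(mapping.get(t, t) for t in tokens))
    return results

def recognized(sentence):
    """A sentence absent from D still counts as understood if it is a
    substitution instance of some form that is in D."""
    return any(sentence in substitution_instances(f, s) for f, s in D_FORMS.items())

print(recognized("square is a figure"))                  # True
print(recognized("if it-rains then the-street-is-wet"))  # True
print(recognized("figure is a square is a"))             # False
```

The sketch only matches raw strings; it is meant to convey the shape of the explanation, namely that novel compounds are understood via forms already present in D, not to model the actual syntactic machinery.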

The fifth desideratum is closely related to the idea that CRS should be a theory that is applicable to psychology. According to Block, the theory of meaning should fit with an account of the relation between meaning and mind or brain. This is a desideratum that neither the original DTM nor the nDTM tries to fulfil (see the comments above about causal powers). The nDTM is a model of linguistic meaning and nobody expects our brain structures to be similar to language matrices.

The sixth desideratum claims that the theory of meaning should illuminate the relation between autonomous and inherited meaning. The opposition invoked in this desideratum is related to the fact that CRS is a theory of representation, but there are some aspects of it that are relevant to the nDTM. The main idea behind the autonomous/inherited distinction is that some representations, for example the symbols in this book, are often understood as parasitic on the internal representations of the system. For this reason, once you explain how it is possible that the system uses the internal representations, you can explain the external ones. This line of reasoning often leads to an infinite regress (Searle 1980): the internal manipulation of symbols cannot be explained by invoking the notions of understanding or interpreting, as both of these relations are often understood as relations between a sign and its meaning. If understanding a symbol requires me to grasp its meaning, then grasping that meaning cannot itself be reduced to understanding or interpreting something further, as this would require a new, deeper meaning to grasp. When it comes to this requirement, both theories are equally successful, as neither Block’s account nor the DTM requires the user to understand or interpret the sentences figuring in meaning directives (see Assumption 9 in Chap. 6 for a summary of this aspect of the DTM).

The seventh desideratum Block lists is that a theory of meaning should explain what it means to “know the meaning” and how the user achieves this state, i.e. how meanings are learned. CRS explains all of these quite nicely, provided it solves the problem of identity criteria for conceptual roles: in order for an expression to be meaningful to the user, the expression has to start to play a conceptual role in the user’s cognitive system. In order to know the meaning of an expression, the user has to have this conceptual role incorporated in her system. The results that the nDTM gives us are very similar. First of all, knowing a meaning boils down to having a set of dispositions to accept sentences figuring in meaning directives. As we have seen, it was not clear whether the original DTM stated that this is what “knowing the meaning” boils down to, or whether it is only a necessary condition of it. The nDTM chooses the first option: it is a prohibitory semantics and states that meaning is about avoiding misuse, not about the use of language. Thus, if the user does not cross the semantic boundaries of a language, she is good to go. Moreover, in contrast to the original DTM, which was not suited to explaining changes in a language, the nDTM does not assume that the theory has to be built for closed languages only. For this reason, it is possible to explain how language learning functions: the user enhances her idiolect with new meanings – the process consists of adding new meanings to it, adding or subtracting directives, or remapping the vocabulary she uses.

The last desideratum proposed by Block is that the theory should “[e]xplain why different aspects of meaning are relevant in different ways to the determination of reference and to psychological explanation”. On the face of it, CRS is a better theory in this respect as, in principle, it could explain even the private language of a single user – of course, if it were possible to describe the conceptual roles in her system, but that is another story. Still, as we saw in Chap. 5, the DTM explains many mismatches between meaning and reference, but on the level of the community. For example, it shows us why empty expressions may still be meaningful for a given group of users, as well as why the actions of a person might not match the expectations of the community even if the reference is preserved: for example, when a person successfully refers to an object but still violates a meaning directive connected to its name.

To sum up, although there is a significant overlap between the nDTM and Block’s CRS, the former has several important advantages. The main advantage of the nDTM is that it is able to fulfil most of the desiderata posited by Block without asking us to pay the same price for it (representationalism, semantic solipsism, full-blown holism). Additionally, and even more importantly, the nDTM does not have the main flaw of Block’s account, as it provides us with identity criteria for its equivalent of conceptual roles: places in the language matrix. Still, provided we accept the fact that meaning in the DTM is narrow only in an environmental sense, the theory could be understood as one of the possible developments of Block’s idea of a CRS.Footnote 2

7.3 Robert Brandom’s Inferential Role Semantics

As we saw in Chap. 3, the original DTM can be seen as the first functional role semantics that, at least in some important respects, displays similarities to Wilfrid Sellars’ theory of language. As I have argued, the nDTM embraces the functional aspect of Ajdukiewicz’s theory and makes it more prominent: first, by describing the extralinguistic components functionally, and second, by identifying the meanings of expressions with their places in the language matrix, rather than treating these merely as a necessary condition of meaning. It is thus natural to expect that we will see how the nDTM relates to the currently most prominent functional role semantics, which is at the same time a descendant of Sellars’ project: Robert Brandom’s inferential role semantics.

There is no doubt that the general approach of both theories is very similar: for example, they are both pragmatic in the sense I described in Chap. 3 – they start by looking at language users’ behaviour and try to build semantics on the basis of this analysis. In the nDTM, pragmatics is the starting point, the method of choosing the right set of sentences that the meaning is based on. Everything that happens later comes from syntactic decomposition. In Brandom’s theory, semantics is also a result of an operation on the data delivered by pragmatics. After all, the title of his magnum opus and the main slogan of his semantics, “making it explicit” (Brandom 1994), refers to the procedure of explicating community members’ behaviour. As Brandom puts it, “semantics has to answer to pragmatics” (Brandom 2000, p. 125).

How both theories perceive language is also interestingly similar. The nDTM sees language as a structure and the requirement for belonging to a language comes down to being located in this structure. For Brandom, language is an inferential structure that the users internalize and learn to use properly. Anything that belongs to a language does so because it can be located somewhere in this structure – it can be an antecedent of some inferences and a consequence of some other inferences. On the surface, this might look like a difference between the two theories because, as we have seen, inferences according to the nDTM are only a part of what holds the linguistic structure together, as there are also three other types of directives to account for. However, interpreting Brandom this way would be a mistake. Brandom understands “inferences” in a wide sense according to which they encompass not only language-to-language transitions, which the DTM classifies as inferential, but also what the nDTM classifies as empirical directives and promotive directives, that is, moves from experience to language and from language to action.

Using this wide notion of “inference”, Brandom defines two versions of inferentialism that nicely correspond to the DTM. Weak inferentialism is a position according to which the property of being a part of an inferential network is a necessary condition of being meaningful. As we saw in Chap. 3, this fits one of the interpretations of the original DTM. Strong inferentialism is a claim according to which being embedded in the network of inferences is both a necessary and sufficient condition of being meaningful. This is a position Brandom endorses and, as we saw in Chap. 5, it is also the position I propose for the nDTM.Footnote 3

It is also worth mentioning that both theories are non-referential. Brandom emphasizes this by presenting his theory as a conscious reaction to the dominant tradition of truth-conditional semantics. He even introduces the key notion of “inference” as opposed to “reference” (Brandom 2000, p. 1). As we have seen, in the case of the original DTM this choice was more of a necessary evil than anything else, but the nDTM proudly wears non-referentiality on its sleeve.

One of the most interesting common points is that both theories incline towards molecularism, even though Brandom officially describes his semantics as holist. As he writes:

(…) inferentialist semantics is resolutely holist. On an inferentialist account of conceptual content, one cannot have any concepts unless one has many concepts. (Brandom 2000, p. 15)

Concepts, then, must come in packages (though it does not yet follow that they must come in just one great big one). (Brandom 2000, p. 16)

Needless to say, this provokes a question: how do we choose the relevant packages? Which parts of the language should we treat as constitutive of meaning? Brandom elaborates on this idea of “packages” in his reply to Wanderer (Brandom 2010c), in which he invokes the notion of molecularism directly and explains that inferentialism might start with a body of sentences used by a particular person, i.e. the set of sentences actually used by the person up to a certain point. The obvious problem associated with this idea is that we have to secure some kind of uniformity of packages between the users if we want to ever move from a theory of users’ idiolects towards a theory of language. But even if we limit ourselves to idiolects, it is still far from obvious whether we should count all of an individual’s inferential customs as relevant to meaning or only some selected, privileged inferences. As we saw in Chap. 3, this problem has been diagnosed by Fodor and Lepore. As they pointed out, if we do not supply molecularist accounts with the analytic/synthetic distinction (and, according to many philosophers, Brandom included, we should not), molecularism has a tendency to slide into holism, as it is unable to explain why certain sentences should be treated as privileged in a language.

One particular part of Brandom’s theory in which this problem is clearly visible is the category of “material inferences” that Brandom invokes to explain some of the inferences that, in his opinion, language users crucially have to make. As he argues, the rules of inference cannot be only formal, and the theory should not treat material inferences as enthymemes. This is an important aspect of the theory, as Brandom believes that this is how meanings manifest themselves in linguistic practice. The obvious problem with this idea is that we have to be able either to list these material inferences for a given expression or to provide a criterion for detecting them. This question is Brandom’s version of our problem of the criterion of choice for meaning directives that we discussed in Chap. 5. The problem with Brandom’s account is that he does not provide such a criterion; instead, he offers two examples of material inferences taken from Sellars. The first example concerns the terms “west” and “east” and can be presented as the following rule: the user should be able to move from the sentence “A is west of B” to the sentence “B is east of A”. The second example concerns a move from the sentence “Lightning is seen” to the sentence “Thunder will be heard soon” (Brandom 2000, p. 52). As pointed out by Fodor and Lepore, these examples are actually rather confusing. The first looks like a typical enthymeme. The second looks like a typical piece of contingent reasoning – something that we would not expect on the list of meaning-constitutive rules. How are we to extend the category of material inferences further if the examples are so confusing?

At this point the nDTM has a clear advantage over Brandom’s account. The theory does not preclude the existence of material inferences; we could easily include them as additional inferential directives. But it gives us a criterion to decide whether they are to be counted as directives or not. The set of material inferences, if there are any, depends on a given linguistic community and, as I pointed out in Chap. 5, might even sometimes be surprising to the semantician. Still, according to the nDTM, as long as they are treated in the community as directives, that is, as long as they are used in semantic trials, she should treat them as constitutive of meaning. From the nDTM’s point of view, the disagreement over the examples provided by Brandom could actually be resolved if we tested how both directives (the one about geographical directions and the one about the storm) fare in an English-speaking community. Of course, readers can test them on themselves and see whether these inferences feel like directives to them; this is more or less how typical armchair philosophers would evaluate them. However, as we pointed out in Chap. 5, this type of testing of linguistic intuition has obvious limitations.

Emphasizing all of these similarities could result in us losing sight of one fundamental difference between the two accounts. It is important to remember that Brandom’s theory is a much more encompassing project than the nDTM. As pointed out by Rodl (2010, p. 63), it is an inquiry “into the nature of assertion, belief, perception and action, that is, into the nature of mental life, the nature of the mind”. The theory functions more like a general theory of human cognition and rationality than a semantics. Once you put it this way, it is a far cry from the minimal, prohibitory attitude of the nDTM. Even if we focused only on the linguistic aspect of Brandom’s account, it would still be a theory of the totality of language use, with all its complexities, and not only a theory of meaning. This difference in focus is clearly visible in the fact that Brandom prefers to talk about “concepts” rather than “meanings”. Although he understands concepts to be a purely linguistic affair, his theory clearly tells a much broader story than traditional theories of meaning. As he himself points out:

Indeed, although the enterprise I am engaged in here is not happily identified with analysis of meanings in a traditional sense, it is properly thought of as pursuing a recognizable successor project. (Brandom 2000, p. 31)

We could argue that both theories found the traditional scope of semantics to be ill-fitting, but they chose opposite solutions to this problem. Brandom’s account extends the scope of semantics, the nDTM shrinks it. The biggest bone of contention between Brandom’s account and the nDTM that results from this difference in scope is that they differ in terms of the requirements they impose on agents in order for them to be treated as genuine language users. As I pointed out in Chap. 3, neither the original DTM nor the nDTM requires the user to understand or interpret the sentences she accepts in any other sense besides the ability to accept them in the right circumstances. The users conform to the rules, they do not interpret them. This is how the nDTM chooses to solve the bootstrapping problem made famous by the Achilles and the tortoise paradox suggested by Lewis Carroll (1895) that I mentioned in Chap. 1 (Brandom invokes it as well). As we saw in Chap. 4, this solution wouldn’t be satisfactory for a Sellarsian; it seems that it might be equally unsatisfactory for a Brandomian.

To see this, it would be good to look at how Ajdukiewicz and Brandom answer the question as to when we could say that someone speaks a language. As we saw in Chap. 1, Ajdukiewicz wondered what it is for a person to speak English and decided that it can be neither the ability to use sounds belonging to a language nor the internal mental state associated with a given name. Brandom starts with an identical quest: he wants to know how to differentiate genuine language use from mere appearance of use, but his conclusion is different. He invokes the same example of a parrot and adds that if we trained it to react according to a given stimulus, for example to the sight of red things, then even if the parrot learned to reliably issue the right sound in the right circumstances, that is, even if it acquired the relevant dispositions, it would still not be counted as a user of the term “red”. What does the parrot lack? As Brandom points out:

The distinction can also be made out in terms of the employment of concepts. To be a perceiver rather than just an irritable organism is to be disposed to respond reliably and differentially to the perceptible environment by the application of appropriate concepts. (Brandom 1994, p. 8)

In another place, he elaborates on what being a concept user could mean:

Merely reliably responding differentially to red things is not yet being aware of them as red. Discrimination by producing repeatable responses (as a machine or a pigeon might do) sorts the eliciting stimuli, and in that sense classifies them. But it is not yet conceptual classification, and so involves no awareness of the sort under investigation here. (If instead of teaching a pigeon to peck one button rather than another under appropriate sensory stimulation, we teach a parrot to utter one noise rather than another, we get only to the vocal, not yet to the verbal.) As a next stage, we might imagine a normative practice, according to which red things are appropriately responded to by making a certain noise. That would still not be a conceptual matter. What is implicit in that sort of practical doing becomes explicit in the application of the concept red when that responsive capacity or skill is put into a larger context that includes treating the responses as inferentially significant: as providing reasons for making other moves in the language game, and as themselves potentially standing in need of reasons that could be provided by making still other moves. (Brandom 2000, p. 17)

On the face of it, this seems to be an insurmountable difference between the two theories. It may seem that Brandom downgrades the role of empirical stimuli – empirical directives in the nDTM’s case – to the point of making them irrelevant (he is criticized for this by McDowell (2010)). Additionally, he introduces the category of “concept usage”, which is absent in the nDTM. As you saw in Chap. 3, Ajdukiewicz mentioned concepts, but the nDTM went in the opposite direction. And yet, once you look at what Brandom wishes to add to this ability of stimulus discrimination in order to turn the speaker from a mere “reporter” into a concept user, you will see that it is actually not that different from the DTM.

(…) understanding of non-inferential reports was that parrots and photocells and so on might reliably discriminate the circumstances in which the concept red should be applied, without thereby grasping that concept, precisely in the case where they have no mastery of the consequences of such application – when they cannot tell that it follows from something being red that it is coloured, that it is not a prime number, and so on. (Brandom 2000, p. 65)

In yet another place Brandom adds a very similar remark:

The child must first acquire sufficient mastery of the responsive dispositions that in her elders are exercised in the making of language-entry moves resulting in perceptual judgments and observation reports: for instance, responding to visibly red things by making what is still just the noise “red”. And she must begin to make the sorts of connections that in her elders constitute the making of language–language moves, paradigmatically inferential ones. When she is good enough at doing so (how good is not a matter for semantic, but for social, prudential and existential decision-making), she can begin to be held responsible for doing what only then begins to have the significance of making moves in the public game. (Brandom 2010a, p. 334)

Note that this is something that an advocate of the nDTM could agree on without any hesitation. The meaning of a term is its place in the whole structure of directives, and not only in one of the directives or in one type of them, specifically empirical directives; what the parrot lacks is a number of additional directives. Of course, you could not master these additional directives if you weren’t a user of some additional terms, but this is just a consequence of the molecularism of the theory. The verdict of the nDTM is thus, in fact, more or less the same. The parrot is not a fully fledged user of the term “red” because she has not acquired the appropriate set of dispositions: she accepts the sentence “red” in the proper empirical circumstances, but that’s it. The meaning of “red” cannot be limited to one empirical directive. The answer Brandom could give to this is that the nDTM still leaves open the possibility of the parrot blindly learning the whole set of directives associated with the term “red”. This surely isn’t enough to be a proper “red” user, but it seems that the nDTM is not able to catch this type of “fake” language user.

Initially this may seem like a serious challenge to the DTM. It seems that we set a trap for ourselves. On the one hand, we said that knowing the meaning does not require one to understand the sentences – the user simply has to follow directives. On the other hand, we do not want to grant the parrot knowledge of meaning if the only thing she does is repeat sentences mindlessly, or almost mindlessly, as she is able to recognize the right circumstances. The good news is that the nDTM gives us the ability to explain why the meaning of “red” in the parrot’s “language” differs from ours. In order to see this, we have to realize that although the sentences the parrot uses look very similar to our sentences, they are actually built from elements whose functional profile is very different from that of our own expressions. Imagine an expressionless matrix of the “language” of the parrot: every term that accompanies the term “red” in directives has an incredibly poor functional profile, and these terms are therefore not similar to our terms at all. For this specific reason, the parrot’s term “red” cannot be as similar to our term as we may initially think. After all, it is surrounded by expressions with completely different meanings.
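A small toy model can illustrate the point. The sketch below is my own illustration: the directive types follow the labels used in the text, but the two matrices, their contents and the helper names are invented for the example, and the “functional profile” is crudely reduced to a tally of directive types plus a set of neighbouring terms.

```python
from collections import Counter

# Each directive is a pair: (directive type, terms figuring in it).
# Both matrices are invented toy fragments, not taken from the book.
PARROT_MATRIX = [
    ("empirical", ["red"]),
]

RICHER_FRAGMENT = [
    ("empirical", ["red"]),
    ("inferential", ["red", "coloured"]),
    ("inferential", ["red", "green"]),   # e.g. nothing is red and green all over
    ("promotive", ["red", "stop"]),      # e.g. reacting to a red light
]

def profile(term, matrix):
    """Crude functional profile: how many directives of each type the term figures in."""
    return Counter(kind for kind, terms in matrix if term in terms)

def neighbours(term, matrix):
    """Terms that co-occur with the given term in some directive."""
    return {t for _, terms in matrix if term in terms for t in terms} - {term}

print(profile("red", PARROT_MATRIX))      # Counter({'empirical': 1})
print(profile("red", RICHER_FRAGMENT))    # Counter({'inferential': 2, 'empirical': 1, 'promotive': 1})
print(neighbours("red", PARROT_MATRIX))   # set()
print(neighbours("red", RICHER_FRAGMENT)) # {'coloured', 'green', 'stop'} (order may vary)
```

On this crude measure, the parrot’s “red” has a one-entry profile and no neighbours, which is the sense in which its place in the matrix differs from the place our term occupies.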

However, the Brandomian could press further and add that the sheer increase in the number of directives a given creature learns won’t help us because they could still be accepted blindly. Couldn’t a given cognitive system master all of the directives blindly, turning the parrot into some kind of super-parrot as a result? At some point Brandom talks about chicken sexers and their inability to explain their skill. We do not have to imagine how being a chicken sexer feels because we have many classification skills that work similarly. Take the ability to differentiate between male and female faces as an example. Do we know what triggers this classification in our cognition? If you are like me, you probably have no idea. You just know that someone looks more like a man than a woman and that it comes down to some properties of the face. But what properties they are is a mystery. What if our knowledge of the meanings of words was like that? Can we imagine we are “super-parrots”? To be honest, I do not think that we have to imagine much because I think that our meaning skills are exactly of this type. How do I know that “triangles are figures” or that “this patch right here is red”, or that if I accepted a sentence “p” I should not accept the sentence “not p”? The best answer I can give is that I just know these things – they are obvious to me. Of course, I could try to probe myself and pinpoint the boundaries of my meanings, but then I would simply be building a DTM for my language. Should we decide that only theory makers, even amateur ones, have genuine linguistic capabilities? Theory-making practices certainly make our linguistic skills explicit, but it is definitely too much to ask of every language user.

I believe that the key reason for confusion is the ambiguity of the notion of “knowing the meaning”. In the first sense, the notion comes down to the ability to use a given expression properly. In the second sense, it comes down to the ability to explain the meaning of a given expression – to the ability to provide the meaning to the one who asks.Footnote 4 The definition of “knowing the meaning” I proposed in Chap. 6 should be understood as an explication of the first sense. One restriction that the nDTM adds to this definition is that it does not require the users to master the whole usage of the expression but only a very rudimentary part of it, the one tested in semantic trials. According to the theory I advocate, “knowing the meaning” in the second sense relates to the ability of a semantician who, equipped with a language matrix of a given language, might provide us with a detailed explanation of the meaning of the expression in question – i.e. can show us the set of places in the language matrix in which the expression figures. It is important to realize that both abilities are independent of each other. Properly trained language users can follow all of the directives to the letter, but otherwise have no ability to explain the meaning to others. In this sense, they function as super-parrots. Note that acting this way is not equal to following the directives accidentally. The users recognize semantic trials and know that there is a social expectation they have to fulfil,Footnote 5 but they do not have to know that this is what expression meanings boil down to, or have a synthetic, overall view of the set of directives associated with a given expression.
Conversely, a semantician can provide us with a complete distribution of an expression in the matrix but, if she has not been trained properly, will lack the training needed to act according to the directives. According to the nDTM, you cannot be a fake or accidental language user if you follow the directives as a result of linguistic training. There is nothing more that needs to be added to the behaviour in order to transform it into “real language use”. As we have already seen in one of the examples in Sect. 6.2 of Chap. 6, you can accidentally use a given expression according to the meaning it has in a different community, but this is just a result of the fact that you have been accidentally trained this way.

There is one more aspect of a Brandomian “concept user” that differentiates her from an nDTM language user. Following Kant, Brandom points out that a concept user has to recognize the normative character of concepts and should expect other members of the community to conform to these norms. Applying concepts is applying norms, but how can we tell that someone is applying or recognizing norms as opposed to merely acting in accordance with them?

What exactly this ability consists of is not easy to say, as Brandom does not want to deflate normativity with a reductive explanation. The problem is that although he devotes a lot of thought to this aspect, the theory struggles with explaining the normativity of social practices: it either looks mysterious or collapses into a naturalistic reduction. A good example of the latter is one of the passages from his response to Rodl, who criticizes his account of normativity:

Suppose some hominids reliably negatively reinforce entering the central hut if one does not display a leaf from a particular tree. (They beat offenders with sticks.) An interpreter could then legitimately treat as a reward for some other behaviour, say, building a fire, giving the fire builder such a leaf. For that is altering their normative status, giving them a licence or entitlement they did not have before. And that can be taken to be rewarding the fire building, even if being given a leaf does not positively reinforce the fire building (intuitively, because the fire builder happens not to care about entering the central hut). This is normative sanctioning. (Brandom 2010b, p. 310)

The problem is that if we took this passage at face value, it would mean that true normativity boils down to instrumental conditioning, which is definitely not what the reader expected. After all, this does not differ from the typical, reductionist approaches of naturalists (see Quine’s (1979) approach to ethics as a good example of this).

At another point, Brandom appeals to the type of sanctions the violator may face and says:

One way of doing that is to look to sanctions – treating a performance as correct by responding in practice with a reward (or withholding a punishment) and treating it as incorrect by responding in practice with a punishment (or withholding a reward). What counts as a reward or punishment might be construed naturalistically, for instance as any response that positively or negatively reinforces the behaviour responded to. Or it might be construed normatively, for instance in terms of the granting of special rights or the assignment of special obligations. (Brandom 1994, p. 63)

However, this is perfectly compatible with the nDTM, as the user has to recognize that she is in a semantic trial in order to react properly. If she fails to recognize this context, she may not react as she is supposed to, because she may, for example, decide that the question was too banal. It is also worth noting that the nDTM embraces the third-person perspective and proposes testing the linguistic intuitions of a community member who observes and reacts to the semantic trials of other members (Brandom calls this ability to react to violations a “normative attitude”).

The big difference between Brandom’s account and the nDTM is that the latter leads to a much simpler picture of the normativity of language. The theory does not place normativity outside the boundaries of the natural world, but it explains two things Brandom struggles with. First, it shows in what sense the user has to “recognize” the norms. According to the nDTM, the users conform to directives because they know how to act in the context of a semantic trial. As mentioned above, the users’ training makes them sensitive to this context because they know they will not be treated seriously if they fail the trial. Second, it identifies the sanction involved: being treated this way is a form of social punishment, so it can safely be said that the nDTM does claim that language is normative. It is, nonetheless, a very specific form of normativity – one that is related to a very specific form of punishment, i.e. not being treated seriously – and based only on standard conditioning.

To sum up, I believe that despite the differences in how the nDTM and Brandom perceive language users, the theories are actually very much compatible with each other: the idea of being a concept user (as opposed to a mere reporter) and the idea of being aware of norms can be explained within the nDTM’s framework, sometimes with fewer caveats. I believe that the crucial difference between the two accounts boils down to the amount or scale of inferences that are demanded of the user if she is to be treated as a genuine speaker of a given language. This difference comes from the difference in scale of the two theories that I have already mentioned. Brandom’s theory aims to explain the totality of language use and the methods users employ to understand each other. The nDTM limits semantics to its bare bones. It puts meaning in a very modest place: as a mechanism that charts the boundaries of language and helps people detect verbal misunderstandings.

To see this difference better, it might be useful to turn to the game analogy Brandom uses: as he often points out, language can be understood as a practice of “scorekeeping” in the “game of giving and asking for reasons”. In typical games, you can be a better or a worse player, and this is reflected in your score; this is what the score is for, to represent your skill in the game.Footnote 6 You can learn to be a better player while you play. Language is understood as an inferential art, and the ability to recognize your commitments and the commitments you impose on your interlocutors is no exception in this. In this sense, Brandom’s analogy works very well and his theory seems to fit the phenomenon of language use; however, this applies to players who know how to play the game. Scoring presupposes that the game is played by the rules, as violating them leads to disqualification or a lack of any score – not just a lower score. Accordingly, there is a difference between the rules (or even the theories) players apply to a game to optimize their performance and the constitutive rules that have to be conformed to if one is to be counted as a player at all. The nDTM is only about this second kind of rule: it tells you what you have to do to stay in the game, not what you have to do to be better at it. To put the analogy in football terms, the nDTM does not tell a story about scorekeepers, but rather about line referees. In other words, the disagreement between Brandom and the nDTM comes down to how much we should demand from meanings – and, by extension, from a semantic theory. According to the latter, a prohibitory take on meaning is sufficient to explain many of the mysteries related to communication, language use or understanding. Needless to say, we will not be able to address all of the mysteries, but this is, in my opinion, a price worth paying. Explaining the full scope of language use, including the way language helps us in solitary acts of cognition, is not a task any single theory could ever manage. To explain the phenomenon of language we need much more than semantics. Apart from the obvious complementary theories such as syntax, the theory of reference and pragmatics, we will probably also have to reach for psychology, sociology or maybe even neurobiology. This might have been the very reason older theories of meaning failed us – they were simply trying to bite off much more than they could ever chew.
In this sense, the prohibitory notion of meaning may be understood as a less ambitious but more successful descendant of the traditional notion of meaning.Footnote 7

There is also another, less metaphorical way of saying the same thing. As we saw in Chap. 6, the nDTM defines its notions using the set of directives D. Later we introduced an additional set, CD, which is the result of generating the consequences of D. We could say that just as the nDTM is a story about the D set, Brandom’s theory is a story about the CD set. The nDTM informs us what the players have to do in order to be treated as players. Every player has to accept certain sentences, but only some of the players realize how far their commitments go when it comes to the logical and material consequences of their acceptances. These players (or speakers) are the game experts; they are clearly better at the language game, but being good is not a requirement everyone has to meet. If you know all the moves in chess, you can be said to know how to play chess, but that is a far cry from being a good chess player. According to the view preferred by the nDTM, knowing the meaning of words is just like knowing the rules of the game. This interpretation of the relation between Brandom’s theory and the nDTM is compatible with his insistence that the theory of language has to explain not only the commitments of the users, but also their entitlements – the moves they are not required but allowed to make. By contrast, the nDTM is not interested in the moves the players can make: everything is permitted as long as it stays within the boundaries of language.
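The relation between the two sets can be pictured with a short sketch. It is only an illustration under assumptions of my own (the toy directives, the sentences and the helper names are invented; the actual definitions of D and CD are those given in Chap. 6): CD is rendered here as the closure of the explicitly required acceptances under the inferential directives.

```python
# Toy inferential directives: if all premises are accepted, accept the conclusion.
INFERENTIAL_DIRECTIVES = [
    ({"this is a triangle"}, "this is a figure"),
    ({"this is a figure"}, "this is extended"),
]

# Toy D set: the acceptances the directives explicitly require of the user.
D = {"this is a triangle"}

def consequences(accepted, rules):
    """Close a set of accepted sentences under the inferential directives."""
    result = set(accepted)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= result and conclusion not in result:
                result.add(conclusion)
                changed = True
    return result

CD = consequences(D, INFERENTIAL_DIRECTIVES)
print(CD - D)  # {'this is a figure', 'this is extended'}: commitments only the "experts" track
```

On this toy picture, staying within D is what the line referee checks, while tracking how far CD extends is the scorekeeper’s business.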

7.4 Jaroslav Peregrin’s Rule Theory of Language

A theory that places much more focus on the difference between the constitutive rules of the language game and the actual moves speakers make when they play it is Jaroslav Peregrin’s inferential semantics, presented in Peregrin (2014). Using the very convincing example of chess, Peregrin shows how this difference helps us dispel the air of paradox surrounding the idea of meaning being bestowed on expressions by their usage rules. Here’s an example:

Consider the following “objection” aimed at chess: chess is played with chess pieces and not with mere bits of wood, hence the piece’s role in chess must be explained by its value and its value cannot be explained by its role in chess. Or, put differently, chess moves are not made with bits of wood, but rather with chess pieces, hence we must have the pieces prior to the moves and independent of them. The obvious reply is that it is the rules of chess that confer the values on the bits of wood, i.e. make them into the chess pieces. Hence as soon as we have the distinctions between rules and moves, we may let the former constitute the pieces and the latter then “operate” on the pieces. In other words, “the piece’s role in chess” is ambiguous, in between the role conferred by the rules of chess and the role we confer on it by the ways we use it in games. (Peregrin 2014, p. 12)

As we saw in the preceding section, I believe this difference to be very important for understanding the idea of “prohibitory semantics”. The nDTM gives us only the constitutive rules of the game, which determine the limitations of our moves but do not determine them fully; in this sense, games less restrictive than chess may provide even better examples. According to the picture presented in Chap. 6, language users start by learning that acceptance of certain sentences is expected from them in certain situations; they then learn to make new moves in the language game – they learn what other constraints society puts on language use.

There are passages in which Peregrin comes very close to this idea, especially in his choice of metaphors. He speaks of constitutive rules as “walls” that limit the space of our language and stresses that semantics is “restrictive” rather than “prescriptive” (Peregrin 2014, p. 88). Still, it is not obvious how this change of focus influences the theory or how it makes his understanding of an inferential network different from Brandom’s. We could also say that from the point of view of the nDTM, Peregrin does not take the decisive step in the right direction, as he does not declare that meaning comes only from a specific subset of the inferential network of our language. In a thought experiment similar to Brandom’s parrot example that we analysed above, Peregrin invokes a cat that hisses in a specific way every time it sees a dog. Why is the sentence “This is a dog” a meaningful part of a rule-based system and the hiss only a reaction? Peregrin’s answer is identical to Brandom’s: the difference between the human and the cat is that the sentence is a part of a bigger inferential system in which the human has learned to orient herself. But from the point of view of the nDTM, it is not obvious why a minimal inferential structure imposed on the cat couldn’t serve as a basic form of linguistic response. What if you trained the cat to respond with hissing whenever two conditions were met: (1) there was a dog present, and (2) we uttered the word “dog” with a specific intonation indicating a question? How much inferential structure do we need for language to emerge? The big advantage of the nDTM is that it allows us to say that a creature displaying this type of complex behaviour uses some kind of language, although, as we saw above, the similarity of this language to ours may be less obvious than we think.

The big problem with the Brandom/Peregrin approach is that it does not give us any clear answer to this question. Language seems to emerge from the inferential structure once it achieves some kind of critical mass, some kind of threshold, but we are left with a puzzle as to what exactly this threshold is supposed to be. The answer given by the nDTM may be somewhat controversial, but it is, at least, clear-cut. If the cat reacts this way, then it is a language user. It is very easy to misunderstand this result and treat it as an argument against the nDTM. When I say that the ability to selectively react to a command associated with an empirical stimulus transformed the cat into a language user, I am not trying to suggest that it became a user of English or even that it mastered the English word “dog”. What the cat mastered is an extremely simple, borderline trans-speciesFootnote 8 language consisting of a single vocabulary item associated with one directive. It really does not get simpler than that, but it is some kind of achievement, and I claim that the nature of this achievement is, in fact, linguistic. What may be confusing is that the primitive language in question uses a word that is taken from English. But we have to remember that the only thing these words have in common is a single empirical directive. By definition, this makes it a partial translation, but the similarity in meaning is minimal – it is, in fact, the smallest possible similarity of meaning. I believe that given two possibilities, i.e. (1) treating the cat as a user of a primitive language containing one word, which is a minimal partial translation of the word “dog” from English, and (2) declaring that this ability has nothing to do with language learning, even in a minimal, borderline sense, the first is a much better choice. It helps us understand how language could develop organically by adding more and more directives and dictionary items without the need to reach any specific threshold number of directives or connections between vocabulary items.
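To make the idea of a minimal partial translation concrete, here is a small sketch. It is my own illustration only: the directive lists for “dog” are invented, and the overlap score is a crude stand-in chosen for the example rather than the book’s definition of similarity of meaning.

```python
# Toy directive sets for the term "dog" in two "languages"; the contents are
# invented purely for illustration.
ENGLISH_DOG = {
    ("empirical", "dog in sight -> accept 'dog'"),
    ("inferential", "'this is a dog' -> 'this is an animal'"),
    ("inferential", "'this is a dog' -> 'this is not a cat'"),
    ("promotive", "'beware of the dog' -> act with caution"),
}

CAT_DOG = {
    ("empirical", "dog in sight -> accept 'dog'"),
}

def overlap(a, b):
    """Crude similarity: share of directives the two terms have in common."""
    return len(a & b) / len(a | b)

# The two terms share exactly one empirical directive: a partial translation,
# but the smallest possible one.
print(overlap(ENGLISH_DOG, CAT_DOG))  # 0.25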

Let us leave aside the examples in which Peregrin agrees with Brandom. As the author of Rules Matter himself acknowledges, his theory is in most respects consistent with Brandom's account (Peregrin 2014, p. 14). For this reason, I am not going to focus on the aspects we mentioned in the previous section. Without pretending to offer a fully fledged comparison of the two theories, I am going to focus only on those aspects of Peregrin's account that differ from Brandom's and are relevant for the discussion of the nDTM.

The heart of Peregrin's theory is the notion of a "rule". His starting point is the general Wittgensteinian intuition of "meaning as use". As he rightly points out, once we employ this perspective, we will quickly realize that many of the things we use do not have semantic values. Thus, being used isn't enough to elevate something to the position of a linguistic token. What else is needed? According to Peregrin, we have to use it in a specific way: we have to use it according to rules. Meaning is usage governed by rules. For Peregrin, this reliance on rules helps to define two importantly different strands of inferentialism: theories based on the notion of rules and theories based on the notion of causal connections. The first can be related to Sellars, Brandom and Peregrin himself, the second to Paul Boghossian and, as we may add, Ned Block, whose proposal we analysed at the beginning of this chapter. As I have already pointed out, the DTM belongs to the first category, although I have my reservations as to the categorization itself, which I discuss below.

Not surprisingly, this leads us to a fundamental question: what is a rule and how can we spot that something is "governed" by it? Note that coming up with an answer to this question is especially hard when you try to build a semantic theory on this notion. In the case of games like chess, the rules are explicit because they can be presented to us linguistically.Footnote 9 We do not have this luxury in the case of a theory that tries to explain how language functions in a community. Although the rules can be described in the metalanguage of the theory, we cannot claim that the mechanism of following the rules comes down to knowing these formulations. This is in stark contrast to how we explain rule following in games. There is nothing problematic in saying that someone follows the rules of chess because she read them in a book. On the other hand, we do not want to slide into "regularism", i.e. to say that rules are simply explications of prominent correlations.

For this reason, Peregrin does not want to use the notion of a “disposition” in his account. As he argues, the link between the circumstances, such as the appearance of a dog in our example, and the utterance of a sentence shouldn’t be explained by a disposition of the user to utter the sentence in the vicinity of a dog, but by the fact that it is a proper reaction. He explains his reluctance towards dispositions by saying:

The basic difference is that while the term “disposition” refers to a covert mental state (and supposedly underlain by a physical state of the organism in question, which, however, nobody is able to specify), the term “propriety” refers to an overt social mechanism. We all know what proprieties, in simple cases, are. (Peregrin 2014, p. 49)

This part of Peregrin's account could be treated as a direct critique of the approach I chose for the nDTM, so it is important to address it. First, I do not think that appealing to the fact that "we all know what something is" will be of much help here. We could just as well say that we all know what simple examples of dispositions, such as being soluble in water, are. The problem is that, to use Brandom's expression, we are unable to make it explicit. Second, although I think that Peregrin is partially right about why people appeal to dispositions, there is more to it. Sometimes the reason we refer to dispositions comes not from the fact that we are unable to determine the internal property that manifests itself in a given way, but from the fact that we do not care what the property in question is. I believe that this is how dispositions can be understood in the nDTM: they are "whatever makes the users conform to the rules". This mechanism is not mysterious, merely semantically irrelevant – it can be explained by psycholinguistics. I also cannot agree that, unlike the hidden internal mechanism of dispositions, the propriety of actions is "overt". As we saw in the section devoted to Brandom, it is not easy to determine whether a given action has been approved or scorned if you do not know the social context and cannot, for example, guess what counts as punishment and what as a reward.

More importantly, Peregrin is not able to eliminate the notion of a disposition from his account completely anyway. In the end, he admits that rules are dependent on patterns of behaviour and that these patterns of behaviour are based on dispositions:

We must realize that, as we already indicated, a rule (or, for that matter, propriety) is a very different kind of entity. Unlike a disposition, it exists in the intersubjective, public, space. What we call a rule is nothing psychological or physiological; it is a matter of a social setting. Nevertheless, this intersubjective, social pattern is carried by certain psychological and physiological substratum; ultimately anything anybody does is a matter of some motives that may be called dispositions. (Peregrin 2014, p. 76)

On the face of it, the above quotation seems to be self-defeating, but a charitable interpretation is that Peregrin simply wants to distance himself from accounts that explain language by appealing to local causation – the dispositions of a single user to react to her environment. Rules are patterns of dispositions in society – combinations of actions of speakers and reactions of listeners. For this reason, I believe that dividing inferential theories into those that are causation-based and those that are rule-based is not very useful, as all of them contain networks of causal relations that underlie observable behaviour. It seems to me that differentiating between individual-based and community-based theories is much more useful. From this perspective, the nDTM is compatible with Peregrin's approach as it describes conforming to rules as a combination of a person's action and the reaction of the community.

However, even if we agreed on the nature of rules and decided that we are able to differentiate the act of conforming to a rule from other actions, we would still have to differentiate between linguistic rules and many other rules that may affect the actions of community members. Here we cannot appeal to propriety, as it also comes in many forms. Utterances may be proper in many senses besides semantic correctness: they can be pious, agreeable, polite, etc., but how can we distinguish between them? There are two questions that we have to differentiate here. First, we may ask how we are supposed to distinguish between linguistic and non-linguistic rules; as we have already seen, the nDTM has an answer to this problem. Second, we may ask which of the linguistic rules are constitutive. Some utterances are determined by the rules, some are merely allowed by them – they are moves in the game. How can we say which are which? Peregrin does not address the first problem, but he is fully aware of the second. Interestingly, while discussing the Fodor-Lepore dilemma, he comes very close to the solution proposed by the nDTM. When looking for the criterion of meaning-constitutive rules, he writes:

There is an (inconclusive) test, based on the fact that rules are a matter of our normative attitudes: some kinds of inference are such that if somebody turns out to be ignorant of them, we tend to see this as a shortcoming in their knowledge of the language, while others are such that this is not the case. Thus, if they were to doubt that dogs are animals, we would probably become suspicious about their mastery of the words dog and animal. On the other hand, were they ignorant of the fact that dogs chase cats, this would probably not compromise their mastery of English in our eyes (though we might question their general knowledge of the world). (Peregrin 2014, p. 59)

And yet, at the end of the day, Peregrin backs away from this idea and declares that there is no sharp boundary between meaning-constitutive rules and mere moves in the inferential game, which, according to him, is the result we should have expected all along, considering Quine's critique of the analytic/synthetic dichotomy (Peregrin 2014, p. 60).

It seems to me that our problem with the notion of "rules" and especially with "constitutive rules" is that we act as if we want to have our cake and eat it too. On the one hand, we do not want these rules to be just another case of physical laws – they cannot be simply causation. On the other hand, we want there to be some kind of "air of necessity" to them – we want them to guide our actions, to motivate us. It doesn't help that we have already seen a promising solution to this problem in the past – analytic sentences – and that we lost it due to Quine's criticism. One way of dealing with this defeat is to accept the consequence, just as Peregrin does. However, the nDTM proposes another way out: even though the circumstances described in the directives do not cause users to accept sentences – and so users may very well cease to react according to the rules – the community classifies their violations with unshakeable conviction. No matter how hard the violator tries, she will not be taken as saying what she seems to be saying according to the meaning of the words she is using. She will not be taken seriously. She will not be taken at face value. She will either conform to the rules or be treated as if she were doing something completely different, for example dodging a semantic trial. This is how the rules get their "air of necessity"; unlike in the case of religious or ethical rules, resistance to meaning rules is futile. You can't argue over semantics.

One last aspect of Peregrin's theory that seems very interesting from the point of view of the nDTM is the way he deals with the equivalent of empirical directives in Brandom's account, i.e. with "language entry rules". As I have already mentioned in the section devoted to Brandom's framework, he seems to downplay their relevance too much. We could say that, in a way, Brandom is the opposite of Quine, who, as we saw in Chap. 3, put too much weight on them. Peregrin approaches the role of the extralinguistic components of language by using a very interesting analogy with games. Of course, as we have seen, this analogy is nothing new in the philosophical study of language, but it has not, to my knowledge, been used this way before. Peregrin points out that some games can be described in such a way that the set of rules that defines them abstracts away from any materiality of the game. Think of chess as a good example of this. But then, there are games in which rules cannot be completely abstracted away from materiality.Footnote 10 Think of football as a good example of such a game. In the case of football, the objects we use to play the game – e.g. a ball or a goalpost – are embedded in the rule set. Now, returning to language, Peregrin uses this analogy to convey two aspects of language that we should not forget about. First, we should remember that the inferential rules that deal with the extralinguistic environment of language have to be connected with the rest of its inferential apparatus. I believe that in terms of the DTM this is a weaker version of the same intuition Ajdukiewicz had when he insisted on language being connected. It is weaker because Peregrin's remark does not preclude the isolation of some non-empirical parts of language. The second idea that this illustration helps to explain is that natural language may be embedded and embodied in the sense that all of its extralinguistic components (experiential and motor states) are actually necessary for it to work as it does. As Peregrin puts it, "(…) it is important to realize that in the case of our language games, the whole world is our (potential) shared equipment" (Peregrin 2014, p. 112). This idea seems interesting from the point of view of the DTM because it can be expressed by saying that Peregrin's account is similar to the original DTM and not to the nDTM, in which we decided to eliminate the links with particular extralinguistic items in favour of their functional descriptions. As we saw in Chap. 5, a classic philosophical way of presenting the same idea is to say that the nDTM works no matter whether the language users are real embodied agents situated in an environment, brains in a vat or disembodied virtual agents that deal only with virtual representations of objects. As long as their internal states are synchronized with ours, we will be able to communicate easily.Footnote 11 The nDTM is, as we saw there, an environmentally narrow theory but not a socially narrow one. To express the same thing using the game analogy, the language game does not make any equipment a necessary part of the game, but it cannot be played alone. The language game is a team sport.

7.5 Back to Desiderata

In this chapter I tried to show how the nDTM avoids some of the problematic assumptions and consequences of competing accounts. The chapter functions as a follow-up to Chap. 3, in which the original DTM was compared to its contemporaries. One possible objection to this strategy is that it may appear to be somewhat dishonest. Even if we agree that the nDTM does not have some of the problems other theories struggle with, isn't it simply the consequence of its bare-bones prohibitive nature? If a theory avoids issues because it is simply less ambitious, then it can hardly be presented as an accomplishment. To answer this concern, it will be best to remind the reader that, as suggested in Chap. 2, the best way to evaluate a theory of meaning is to check how many of the questions that led us to search for such a theory in the first place it answers. If a theory provides an answer to them, then its simplicity counts in its favour and suggests that the competing approaches might have simply been a tad overambitious. The most fitting way of answering these questions will be to revisit the desiderata I presented at the beginning of this book. As the reader may remember, in Chap. 2 I collected a list of 18 desiderata for a non-referential theory of meaning. Now, having introduced modifications to the theory and effectively turned the DTM into the nDTM, it is time to revisit the list and see how many of them the new theory fulfils. Revisiting the desiderata will also be a good opportunity to compare the new theory with the original DTM and see how much progress has been made.

  • Desideratum 1. A theory should tell us if a given expression has any meaning at all.

As I argued in Chap. 2, the original DTM provides only a partial answer to this question because it gives us the criterion of sense only for non-compound expressions. The advantage of the nDTM is that it is able to provide such a criterion for both non-compound and compound expressions. A non-compound expression has meaning if it appears in the set D, the set of all directives. A compound expression has meaning if it appears in the set SD, the set of all substitutions of the directives. As we saw in Chap. 6, the result of this criterion is that all syntactically proper expressions are meaningful in the nDTM. We were, nevertheless, still able to show that some of these expressions may have an aura of nonsense to them if they are incompatible with the meaning directives, that is, if the directives make them practically useless, in the sense that they will never be accepted if the user is to follow the directives. What is important from the users' perspective is that the theory gives them an actual tool capable of testing the meaningfulness of expressions – we simply have to see if they can be found in D (or SD, respectively). One hypothesis that needs further empirical testing is that it might be possible to grade the "meaningfulness" of expressions, depending on the number of directives in which they appear in an essential manner. It is, for example, possible that expressions such as "nick", as used in the longer expression "the nick of time", will be evaluated as barely meaningful, because of how small their distribution in the set D is.
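To make the mechanical character of this test vivid, here is a minimal illustrative sketch in Python, entirely of my own devising and not part of the theory's formal apparatus. It assumes a deliberately crude representation in which each directive is modelled simply as the set of expressions that figure in it; an expression then counts as meaningful when it occurs in at least one element of D (or, for compound expressions, of SD).

    # Illustrative toy model only: a "directive" is here just the set of
    # expressions that figure in it in an essential way.
    D = [
        {"dog", "is", "an", "animal"},                # toy axiomatic directive
        {"bachelor", "is", "an", "unmarried", "man"}, # another toy directive
    ]
    SD = [
        {"the brown dog", "is", "an", "animal"},      # toy substitution of a directive
    ]

    def is_meaningful(expression, directives):
        # An expression has meaning iff it figures in at least one directive.
        return any(expression in directive for directive in directives)

    is_meaningful("dog", D)              # True
    is_meaningful("blork", D)            # False
    is_meaningful("the brown dog", SD)   # True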

  • Desideratum 2. A theory should be able to evaluate if two expressions are synonymous.

As with Desideratum 1, the original DTM detected synonyms only in the case of non-compound expressions. The nDTM is more versatile in this respect. In the case of non-compound expressions it provides results that are very much along the lines of the original theory. If we want to test whether two simple expressions are synonymous, we have to check whether they can be mutually exchanged in the sentences of the set D without changing the set. Once you have the set of directives, the task of testing synonymy is simple and automatic. Extending the theory to compound expressions allows us to provide a criterion of synonymy for them as well. Two compound expressions are synonymous if they are built the same way using synonymous elements. Here too, the theory provides users with a tool capable of finding synonymous expressions – they can be easily found using the set SD. One additional interesting result of the nDTM is that, as we saw in Chap. 5, it is capable of discovering tacit synonymy in language.
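Continuing the toy representation introduced under Desideratum 1 (again my own illustrative simplification, not the formal definition given earlier in the book), the exchange test for simple expressions can be sketched as follows: two words come out as synonymous when swapping them everywhere in D leaves the set of directives unchanged.

    # Toy sketch: each directive is modelled as the tuple of words of its sentence.
    D = {
        ("a", "couch", "is", "furniture"),
        ("a", "sofa", "is", "furniture"),
        ("a", "couch", "has", "legs"),
        ("a", "sofa", "has", "legs"),
    }

    def swap(directives, a, b):
        # Exchange every occurrence of a and b in every directive.
        def sw(word):
            return b if word == a else a if word == b else word
        return {tuple(sw(word) for word in d) for d in directives}

    def synonymous(directives, a, b):
        # Two simple expressions are synonymous iff the exchange leaves D unchanged.
        return swap(directives, a, b) == directives

    synonymous(D, "couch", "sofa")   # True
    synonymous(D, "couch", "legs")   # False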

  • Desideratum 3. A theory should be able to evaluate if two expressions coming from two different languages are translations of each other.

As we saw in Chap. 3, the original theory provided users with translations but did so at a rather significant cost – the original DTM was capable of comparing only languages with the same semantic structure. What this meant in practice was that translators were allowed to compare only two closed languages or two open languages that were semantically predetermined in exactly the same way – their respective ultimate, closed forms had to have the same semantic structure. The nDTM lifts this unrealistic constraint via the notion of partial translation that I introduced in Chap. 6. The practical side of testing translations is rather trivial. Once you build matrices for both languages, you can compare their structures easily and check whether the terms in question have a similar distribution in their respective languages.
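Staying with the toy representation used above (my own simplification, not the actual matrix machinery of Chap. 6), we may picture a language matrix as assigning to every expression the set of abstract positions it occupies in the directives of its language; the labels "slot-1", "slot-2", etc. are hypothetical stand-ins for such positions. Two terms then count as full translations when their positions coincide, and as partial translations when their positions merely overlap.

    # Toy sketch: a language matrix maps each expression to the set of abstract
    # positions (hypothetical labels) it occupies in the directives of its language.
    matrix_l1 = {"dog":  {"slot-1", "slot-2", "slot-3"}}
    matrix_l2 = {"pies": {"slot-1", "slot-2"}}

    def translation_type(positions_a, positions_b):
        # Compare the distributions of a term from L1 and a term from L2.
        if not positions_a & positions_b:
            return "not a translation"
        if positions_a == positions_b:
            return "full translation"
        return "partial translation"

    translation_type(matrix_l1["dog"], matrix_l2["pies"])   # "partial translation"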

  • Desideratum 4. A theory should be able to explain the difference between mere verbal and substantial disagreements.

On the face of it, both theories are equally successful with regard to this desideratum. After all, the fact that users stumble into purely semantic disputes is the starting point of the DTM, and it remained one of the initial assumptions of the nDTM. Having said that, it might be good to point out that the original DTM did not explain the nature of semantic trials in any meaningful way. What is especially important is that it did not give any hints as to how they differ from situations in which other social norms are violated. In contrast, the nDTM explores the pragmatic part of the theory in detail. As we learned in Chap. 5, semantic disputes can be detected via a combination of different factors. First of all, the trials are signalled by semantic markers, expressions such as "what do you mean", "what is the sense of this word", etc. Second, the sentences enclosed in the directives are trivial to the point that they are practically never used outside of the trials – this was probably the reason why Ajdukiewicz thought them to be a priori and analytic. Third, the violation of semantic norms differs from the violation of other norms in that the community never treats the violator seriously – they never think that this is what the person is "really saying". Fourth, the violation of semantic norms is not used by the community members as a predictor of the future actions of the violator.

  • Desideratum 5. The theory should explain what meanings are.

As we saw in Chap. 3, the original DTM does not tell us clearly whether directives determine meaning fully. Ajdukiewicz seemed hesitant about this idea, as in some places he still mentioned traditional objects associated with meanings, such as "concepts", as determinants of directives. In contrast to this, the nDTM makes a bold statement. There is nothing more to meanings than the distribution of expressions in the set of directives or, in the case of compound expressions, the set of substitutions of directives. In other words, the nDTM does not reify meanings; they are not treated as objects but as sets in the abstract semantic structure of language. There are no additional theoretical posits such as concepts that this distribution models, detects or depends on. This is what meanings are.

  • Desideratum 6. A theory should explain what synonymy is.

As previously mentioned, the original DTM defines synonymy only in the case of non-compound expressions. Synonymy of two expressions comes down to the fact that they both play the same role in meaning directives (which manifests itself in the fact that they can be mutually replaced in them without altering the set of directives). The nDTM extends this answer to compound expressions – two compound expressions are synonymous when they have the same syntactic structure and when they are composed of synonymous elements, which comes down to the fact that they are distributed in the set SD in the same way.

  • Desideratum 7. A theory should explain what translation is.

The original DTM explains translations in a very traditional fashion. Provided that two languages have the same semantic structure, that is, that the deep structure of their meanings, represented by language matrices, is the same, translation can be reduced to the fact that the same space in the structure may be filled with different labels. Different labels used by different languages to fill the same spaces are translations of each other. The nDTM makes this answer more nuanced. Being a translation requires the semantic structures of both languages to overlap, but they do not have to be identical in every part. Depending on whether the translated term figures only in the overlapping part or extends beyond it, we can talk about full or partial translations. This enables us to explain the general idea of translatability – it can be understood as the existence of an overlap in the semantic structures of two languages. As we saw in Chap. 5, it is possible for a given term to be untranslatable if it requires empirical or promotive directives but the users who wish to create the translation cannot synchronize their internal states with those of the foreign speakers. This might happen, for example, when the structure of their internal states is too coarse-grained.

  • Desideratum 8. A theory should explain how the first translations could have been created.

As mentioned in one of the assumptions listed at the beginning of Chap. 6, neither the original DTM nor the nDTM explains the phenomenon of the first translation. Nevertheless, it is important to remember that the nDTM gives us some insight into the phenomenon of indeterminacy of translation. As we saw in Chap. 5, the theory can help us detect three forms of indeterminacy: tacit synonymy, the possibility of alternative syntactic descriptions, which carry ontological assumptions with them, and the duality of parts of the semantic structure.

  • Desideratum 9. The theory should help us explain how it is possible to learn the first language.

When it comes to the ontogeny of language, there is very little that we can learn from the original DTM. The main reason for this is that the Ajdukiewiczian account was focused only on fully developed languages in their last, idealized state, which Ajdukiewicz described as "closed languages". The ontogeny of language is not the focus of the nDTM either, but there are some consequences of the theory that may be relevant to this subject. First of all, since the nDTM embraces a fully behavioural interpretation of the original theory, we can safely say that the way users master directives originates from standard behavioural training. This is the reason why the users accept the sentences enclosed in the directives in a non-reflexive, automatic manner. Additionally, since the nDTM explains changes in language in a much more robust way, it is possible to explain some of the changes meanings undergo during the lives of users. It is, for example, possible that at the beginning users master only a small number of directives associated with a given expression. What this means in practice is that their particular idiolects contain many synonyms that are not present in the more developed version of the language mastered by the community. Language learning can thus be explained as a process of adding directives that differentiate initially synonymous expressions and of eliminating wrongly adopted directives, so that the overlap between the idiolect of the learning speaker and that of her community increases over time.Footnote 12

  • Desideratum 10. The theory should be able to account for the evolution of language, and explain how new meanings can be introduced to language and how existing meanings can change.

As explained in Chap. 3, the fact that the theory of meaning was only an auxiliary tool for explaining his views in the philosophy of science made Ajdukiewicz assume that languages change only incrementally (they accumulate meanings as they evolve) and that the path of their evolution is predetermined. The nDTM allows for much more robust changes in language: for example, it allows language to lose some of its meanings. Most importantly, the epistemological assumptions that Ajdukiewicz imposed on his theory of meaning are no longer present in the nDTM. There are practically no constraints on the addition of new meanings to a language in the nDTM. As we saw in Chap. 6, it is even possible to add quasi-nonsensical meanings, that is, expressions that cannot practically be used in language, as all of their uses will clash with the existing directives.

  • Desideratum 11. The theory should explain the role meanings play in language.

The very nature of functional role semantics makes it the best choice for explaining the role meanings play in language. Although the original DTM was a pioneering effort in this respect, Ajdukiewicz could not utilize this aspect of the theory fully. One reason for this is that he was not aware of some of the subtle distinctions that were drawn later, specifically the distinction between environmentally and socially narrow theories. For this reason it is not obvious whether the original DTM should be treated as an explanation of the role expressions play in the speaker's mind or rather as an explanation of the role expressions play in the whole community. It is also important to remember that the non-referentiality of the original theory was the result of scepticism towards referential notions rather than a conscious choice of a type of semantics. The nDTM embraces this aspect of the original theory fully and positions itself as a non-referential functional theory of environmentally narrow but socially wide meaning. This enables us to answer the question asked in Desideratum 11 directly – the role meanings play in language is that they synchronize the internal states of the members of the linguistic community in order to guarantee communication. Meanings create a space for communication but they do not determine it because they only delineate its boundaries. Meanings facilitate communication by guaranteeing that the users do not misuse language.

  • Desideratum 12. The theory should explain the role of semantic discourse.

The original theory does not give semantic discourse any privileged position. Since the nDTM focuses on a better explanation of semantic trials, it has a story to tell here. According to the nDTM, the main role of semantic discourse is to mark semantic trials. Whenever language users start to suspect that their argument is merely verbal, they start to talk about "meanings", "senses", etc. What is really important is not the discourse itself – users may have no idea what meanings and senses are – but the directives users invoke after they signal the semantic trial.

  • Desideratum 13. The theory should explain how non-referential discourses work.

Ajdukiewicz did not address this problem directly, but it seems that the original theory was quite well suited to explaining how non-referential discourses work. The reason for this is that, unlike Quine, Ajdukiewicz did not privilege empirical directives. On the contrary, it is entirely possible for some terms to appear only in axiomatic and deductive directives. The reason Ajdukiewicz preferred this solution is that he wanted the theory to be able to describe formal languages, which do not contain any empirical directives, but it is possible to utilize this feature of the theory in order to explain any non-referential discourse. The nDTM retains this feature of the original theory – the meaning of many words belonging to these discourses can simply be exhausted by the axiomatic, inferential and promotive directives they appear in. This enables us to characterize non-referential discourses as discourses in which the only thing expected from users is that they accept certain sentences (axiomatic directives), perform certain inferences (inferential directives) and act in certain ways (promotive directives). The biggest difference between the DTM and the nDTM that is relevant here is the addition of promotive directives. This addition is important in the context of this desideratum because it may help us explain an important aspect of non-referential discourses: users are expected to act as if they believed that the objects of the discourse existed.

  • Desideratum 14. The theory should explain if meaning is normative and what type of normativity it is.

Although in some places Ajdukiewicz uses normative vocabulary, he never addresses the question of the normativity of meaning in the DTM directly. As we saw in Chap. 4, in the section devoted to Sellars' theory of language, the holy grail of the normativity of language is to find a middle ground between following rules, which demands understanding and interpretation, both of which presuppose meaning, and being subject to physical laws, which are too strong because they do not allow for exceptions. The nDTM achieves this through two key assumptions. First of all, it assumes that users follow the directives automatically via behavioural training. For this reason they do not have to understand the prescriptions or interpret them. They only have to recognize that they are in a semantic trial and that they have to accept a certain sentence. This act of recognition does not demand interpretation, just as a command given to a dog does not assume that the dog is able to interpret it. The other key assumption of the nDTM is that directives function as constitutive rules of language. This makes them function in a very law-like way, without sliding into being physical laws. As is the case with all behavioural training, the trainee may sometimes fail to react as expected. What is special in the case of meaning directives is that even when this happens, even when a directive is violated, the violation immediately becomes categorized by the community as a different act. This is what I meant by saying that the semantic violator is never taken seriously. If a given violation persists – when the violator does not change her behaviour – the community changes the status of the act. The violator who rejects a sentence prescribed by a directive is now taken to be joking or, more generally, not playing along, that is, not really violating the rule, or she is taken to be using a different vocabulary, which again means that she is not violating the rule; she is simply saying something else, possibly in a different language. This enables linguistic rules to achieve a truly impressive feat – even though, on the face of it, they are still "mere conventions", they function as if they were necessary, unbreakable laws.

  • Desideratum 15. The theory should instruct us how to add meanings to an artificial language.

Even the original DTM could easily be used as an aid for creating artificial languages. After all, it shows that once you create a list of directives for a language, you equip the expressions that figure in the directives (in an essential way) with meaning. The problem is that, as we saw in Chap. 3, the theory was not clear enough when it came to the relation between meanings and directives. There are places where Ajdukiewicz suggests that the directives are determined by meanings, and not the other way around, i.e. that the meanings, whatever they are, are prior to directives. If we take these remarks seriously, then the process of creating directives for an artificial language is far from obvious, as it has to be governed by our prior knowledge of meanings, which is not something the theory explains. The nDTM chooses to embrace the prohibitory potential of the Ajdukiewiczian account. As I pointed out in the comment to Desideratum 5, the nDTM assumes that there is nothing more to the linguistic meaning of an expression than its distribution in the meaning directives. What this means in practice is that whenever we decide to create an artificial language, the only thing we have to worry about is that every item in the vocabulary of our language figures in an essential way in at least one of the directives we attach to the language. One additional practical advantage of the approach chosen for the nDTM is that, since the theory uses fully functional descriptions of internal and motor states, the artificial language can in principle be designed for any kind of user: a community of humans, machines or software agents that have no connection to the real world.
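The requirement just stated can itself be pictured as a simple mechanical check. The sketch below is, once more, only an illustrative toy of my own, reusing the crude representation of directives as sets of expressions: it verifies that every item of a proposed vocabulary figures in at least one directive attached to the artificial language. The vocabulary items "blip", "blop" and "glek" are made-up placeholders.

    # Toy validation sketch: every vocabulary item must figure in at least one directive.
    vocabulary = {"blip", "blop", "glek"}
    directives = [
        {"blip", "is", "a", "blop"},    # toy axiomatic directive
        {"blop", "precedes", "glek"},   # another toy directive
    ]

    def missing_items(vocabulary, directives):
        # Return the vocabulary items that do not appear in any directive.
        covered = set().union(*directives) if directives else set()
        return vocabulary - covered

    missing_items(vocabulary, directives)   # set() – every item is covered, so every
                                            # expression of the toy language has meaning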

  • Desideratum 16. The theory should provide a criterion of “knowing the meaning”.

The original DTM does not provide a definition of "knowing the meaning"Footnote 13 but it is possible to reconstruct it from the informal considerations that precede the more formal part of Ajdukiewicz's paper. To know the meaning of an expression is to conform to the directives in which the expression figures in an essential way. Since Chap. 6 added compositionality to the theory, we are now also able to present the definition for compound expressions: to know the meaning of a compound expression is to know how it is built and to know the meanings of the expressions it was built from. These claims are a direct result of the prohibitory nature of the nDTM. The theory states that "knowing the meaning" is a much simpler skill than competing theories suggest. The user does not have to know all, or even most, of the possible uses of a word to know its meaning because meanings are not instruction manuals for language use. She only has to know the limits of the use of the word.

  • Desideratum 17. The theory should explain the role of meaning in the internal ecology of thought of the speaker.

It is doubtful that Ajdukiewicz believed his theory to be an explanation of the real internal, psychological mechanism of language. Despite the usage of mentalistic terminology, the resulting definitions are free from any mention of actual cognitive processes. It is nonetheless possible to develop the theory in this direction – it would then be very similar to Sellars’ and Block’s accounts.

The direction I chose in this book is different. The nDTM explains language from a third-person perspective using functionalist descriptions of users' states. For this reason it does not really explain what happens inside the cognitive system when it makes an utterance or understands an utterance made by someone else. The downside of this solution is that the nDTM is not immune to Putnam's "super-actor" challenge. If a user refuses to follow directives by hiding all of the dispositions she has, then, from the point of view of the nDTM, she will not be using the words according to their meanings. On the other hand, the user cannot "fake" linguistic competence. If she builds syntactically proper sentences and follows the directives, she is a genuine user of this language.Footnote 14

Having said that, it is obvious that the patterns of linguistic behaviour codified in language matrices have to have some sort of internal correlate in the user – they have to be realized somehow. The nDTM does not preclude the possibility of finding these correlates in a given type of user, or even in given individuals, if a more fine-grained analysis is needed. Obtaining this knowledge could solve the "super-actor" problem in the future.

  • Desideratum 18. The theory should explain the relations between linguistic and non-linguistic actions.

The Ajdukiewiczian theory does not contain any mention of users' actions, apart from the acts of acceptance and rejection of the sentences contained in the directives. Since the nDTM adds a fourth type of directive (promotive directives), it ties the acceptance of sentences to extralinguistic actions. However, it is important to remember that the actions described by the theory are rather rudimentary, so the theory does not aspire to be a full-blown theory of action.