1 Introduction: Defining the Key Notions

1.1 The Semiotic Triad

In 1938, Charles Morris introduced a triadic divide of semiotics into three intertwined areas: the syntax, semantics and pragmatics of a language. Linguists and philosophers of language still use this divide to discriminate between different theoretical areas of interest in their fields. Nevertheless, they also constantly debate its accuracy. According to Morris, syntax is the study of the ‘formal relation of signs to one another’, semantics investigates ‘the relation of signs to the objects to which the signs are applicable’ and pragmatics is the study of ‘the relation of signs to interpreters’ [1, 2]. This divide remains roughly valid to this day. For instance, the standard textbook definition of semantics is:

[a study that] has to do with certain relations between the (or at least some among the) expressions in a language, on the one hand, and typically extra linguistic objects, on the other. The standard example of a semantically interesting relationship is that between a name and its referent: a linguistic item such as ‘Felix’ has apparently something to do with Felix, a cat. This relatively unproblematic example of a semantic feature and of the extra linguistic items it targets is, however, typically accompanied by a list of other less straightforward instances: predicates are semantically related to classes of individuals, sentences to truth-values, and, more generally, expressions of all sorts get paired with non-linguistic entities of a peculiar type, their meanings [3].

In his classic textbook ‘Pragmatics’, S. Levinson distinguishes, within syntactic, semantic and pragmatic inquiry, between a pure (or ‘meta’) study and a descriptive account. The first aims at abstracting general patterns that apply to the entire field of linguistic inquiry. The second aims at the analysis of quantitative data provided by language users. The present considerations will follow the pure method in the first four sections and the descriptive method in the fifth section of the paper [2].

The order in which the parts of the semiotic triad are mentioned is arbitrary: it does not tell us anything about which aspect of the divide a concrete speaker processes first, and it persists for purely systematic purposes. Moreover, the divide has nothing to do with the reality of language processing. It could be that the human brain processes all the aspects at once, or in the reverse order of mention. Thus, the usefulness of the divide is theoretical rather than practical. Nevertheless, there remain disputes affecting even the scope of this theoretical usefulness. Since the times of Paul Grice, one of the most heated debates has concerned drawing a clear border between semantics and pragmatics. The range of views is broad. Some attribute to semantics a leading role; others reduce it to a very minimal or non-existent layer of the human linguistic enterprise. This also results in various reformulations of the definitions of semantics and pragmatics respectively. To avoid misunderstandings as to what belongs to which field, researchers formulate definitions enumerating the principal elements of interest in the semantic or pragmatic inquiry. An instance of such a definition could be: ‘Pragmatics is the study of deixis (at least in part), implicature, presupposition, speech acts, and aspects of discourse structure’ [2]. Before taking a closer look at such definitional controversies, we need to define another notion crucial for the present considerations, that is, context.

1.2 Context Versus Co-text

It is extremely difficult, if not impossible, to formulate a sharp or exhaustive definition of context. This is because it is supposed to be a general idea covering everything that is not semantic or syntactic in linguistic studies. However, there have been many attempts to flesh out an idea of what the principal elements of context could be. Usually, the general definitions claim that it is the knowledge of the world shared by the participants in a particular situation where a sentence is uttered at some point in time. Following Lyons, S. Levinson enumerates as elements of context: ‘(i) Knowledge of role and status (where role covers both role in the speech event, as speaker or addressee, and social role, and status covers notions of relative social standing), (ii) knowledge of spatial and temporal location, (iii) knowledge of formality level, (iv) knowledge of the medium (roughly the code or style appropriate to a channel, like the distinction between spoken and written varieties of a language), (v) knowledge of appropriate subject matter, (vi) knowledge of appropriate province (or domain determining the register of a language)’ [2]. For the present considerations, a fairly broad definition of context should be sufficient. Consequently, let context be information that is neither a syntactic structure nor a dictionary, lexical definition of the meaning of a word or expression. Let me emphasize that I want to exclude from context everything that constitutes the ‘co-text’. In other words, I exclude all knowledge or information acquired from other pieces of text. I do so because co-textual information comes too directly from syntactic or semantic sources to be properly called pragmatic. The notion of co-text will be discussed in detail in section four of the paper.

As the second part of the paper is almost exclusively devoted to legal language and the legal realm, I need to adopt a more specific definition of the legal context. Consequently, I need to modify the above definition by introducing the concept of common legal knowledge. D. Walton and S. Azuelos-Atias characterize it as: ‘all the knowledge based on “ordinary ways of doing things familiar to lawyers in their everyday professional life”—namely, all the knowledge of the legal system and legal procedure familiar to lawyers’ [4]. I accept it, but with a slight modification: the exclusion of the above-mentioned co-text.

1.3 Understanding Versus Interpretation

Before proceeding further, I need to adopt definitions of understanding and interpretation, as well as take a stance on the relation between them.Footnote 1 Moreover, I have to relate specifically legal interpretation to this broader picture. If I understand a claim, it is because the claim is intelligible to my limited mind as well as to other human minds. But what does it mean to interpret something? Is understanding a part of interpretation or is it a distinct basis for it?

According to the literature, understanding and interpretation seem to be two distinct concepts. Wittgenstein claimed that understanding a rule and following it need not require interpretation [5]. T. Endicott and D. Patterson have formulated a similar approach, modified to fit the specificity of legal reasoning [6, 7]. Endicott defines interpretation in general as: ‘a creative reasoning process for finding grounds for answering a question as to the meaning of some object’ [6]. The direct consequence of this formulation is that if there is no question as to the meaning of the object, then there is no need to interpret. Consequently, according to Endicott, interpretation has three features. It is (i) of an object, because it depends on true propositions referring to the object. It is (ii) creative, as it ascribes to the object a meaning that someone else might dispute. Finally, it is (iii) rational, because there can be reasons for arriving at an interpretation [6].

Analogously, legal reasoning means finding rational support for legal conclusions. It is not just about identifying the content of the law, but also about reasoning as to ‘what is to be done according to the law’ [6].Footnote 2 Consequently, legal interpretation will be of an object, creative and rational. However, it will also give a rule for the application of the law, and it will be articulate or propositional [6]. Although both understanding the law and legal interpretation are parts of legal reasoning, understanding the law need not involve interpretation:

Sometimes, gaining an understanding requires a creative intellectual process of finding reasons for an answer to a question (which might have been answered differently) as to the meaning of the object. Some understanding does not require that process. The distinction is well signaled by using the term ‘interpretation’ for that process. [6]

To sum up, the idea that understanding does not always involve interpretation will be adopted in the present considerations. One could now wonder where the doctrines of constitutional or statutory interpretation fit into this picture. D. Patterson puts it neatly: they are interpretive arguments that build upon our understanding of the law [7]. So the claim will be that understanding is sufficient to grasp literal meaning. Moreover, such understanding is a common denominator of theories of statutory interpretation. Nevertheless, formulating a textualist, purposivist or intentionalist argument means interpreting.

1.4 The Semantic–Pragmatic Interface

In his ground-breaking article “Logic and Conversation”, Paul Grice noticed that we often try to convey far more than just the amalgam of the meanings of the words we use. As a consequence, we can usually distinguish not only what is said, but also what is meant or implicated through the utterance [9].

The Gricean legacy has triggered a heated debate over the boundaries between semantics and pragmatics. It remains undisputed that strong forms of pragmatic occurrences, such as some conversational implicatures, lack a strictly semantic character. By contrast, virtually every other component of propositional content (understood as the truth-apt content of a linguistic occurrence) is being put under scrutiny as far as the semantic–pragmatic relations are concerned. The present debate is in fact about how much pragmatics we need to decode the propositional content or to state the truth-value of a sentence. Three main stances can be identified. First, there are the strictly semantic theories, claiming that propositional content can be determined without using pragmatics. This view is labelled by E. Borg “formal theories” and defined as

fundamentally syntax-driven theories, which claim that it is possible to deliver an account of the propositional or truth-conditional content of a sentence in natural language simply via formal operations over the syntactic features of that sentence, that is, over the lexical items it contains and their mode of composition. They have their roots in the work of Frege, Russell, and the early Wittgenstein, and find contemporary expression in, among other accounts, the model-theoretic semantics of Montague and the truth-conditional approach pioneered by Davidson. [10]

Second, the pragmatic approach treats propositional content as an entity that cannot be grasped without resorting to pragmatics. E. Borg calls such theories “use-based approaches to linguistic meaning” and gives the following definition:

At a very general level of description, these kinds of account state that the right point at which to offer a semantic analysis is the level of the utterance, and not some more abstract notion of a sentence-type. Meaning will depend in some quite fundamental way upon the use to which linguistic expressions are put. In this way, to get to the level of semantic content we need to appeal to the context in which an expression is used, so determining the meaning of a complex linguistic item will no longer simply be a matter of the formal properties of that item. [10]

Third, the dual-pragmatic theories, which state that to determine propositional content, we need an amalgam of formal and pragmatic processing. According to E. Borg:

dual pragmatic theories differ from paradigm varieties of use-based theory since they take features from both the formal and the use-based side of the divide. So they claim that formal theories have a role to play en route to the determination of truth-evaluable semantic content but also that formal approaches are (very often) incapable of determining such content, since not all natural language sentences are capable of expressing complete propositions in isolation from serious consideration of the use to which that sentence is being put. [10]

The core reason for the third stance lies in the refutation of the traditional semantic or formal approach, which limited the role of pragmatics to dealing with lexical disambiguation and indexicals. Scholars such as François Recanati claim that this approach is insufficient because of the phenomenon of ‘unarticulated constituents’ (UCs).Footnote 3 UCs differ from implicatures because they are part of the ‘explicature’ of a sentence. An explicature is the content that a speaker conveys and does not just implicate. For example:

  (I)

    A: Are you hungry?

    B: I have had a very large breakfast [today] [11].

What B literally conveys is ‘I have had a very large breakfast’ tout court. However, the explicature is ‘I have had a very large breakfast today’, and the UC is simply ‘today’. The implicature is that B is not hungry. What, then, is exactly the difference between explicature and implicature? E. Borg defines explicature as “(…) what a speaker directly communicates and it contrasts with indirectly communicated implicatures (which are arrived at via further pragmatic processing based on applications of the principle of relevance)” [10]. However, this definition does not fully capture the core idea. The term ‘explicature’ was introduced by Sperber and Wilson (a similar view is held by Kent Bach, who calls the phenomenon ‘impliciture’ rather than ‘implicature’) [12]. They view explicature as a development of the logical form of the sentence. Consequently, the pragmatic bits of the explicature do influence the truth conditions of the proposition expressed through the sentence. By contrast, the content of an implicature does not influence the truth conditions of the proposition expressed. As a result, through an utterance we may express a proposition (the explicature) with truth conditions distinct from the ones attributed to some other proposition grasped as an implicature. Thus, the utterance encodes two distinct propositions.Footnote 4
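
The contrast can be rendered schematically. On one possible regimentation of the breakfast example (my own gloss, using a simple quantificational notation, not a formalism drawn from Sperber and Wilson or Borg), with t ranging over times:

  what is literally conveyed: ∃t [t < now ∧ HaveVeryLargeBreakfast(B, t)]

  explicature: ∃t [t ⊆ today ∧ HaveVeryLargeBreakfast(B, t)]

  implicature: ¬Hungry(B, now)

The first two formulas differ in their truth conditions: the bare sentence is true if B has ever had a very large breakfast, whereas the explicature is true only if B has had one today. The implicature, by contrast, is a separate proposition whose truth or falsity leaves the truth conditions of the explicature untouched.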

The entire dispute is a struggle to find a convincing explanation of the much greater role of pragmatics in linguistic reasoning. Three main explanations have been formulated so far. First, there are scholars such as Jason Stanley, who claim that there exist far more hidden indexical expressions in natural language than initially assumed. In fact, every linguistic expression may conceal an indexical component. Second, there are the minimalists, like E. Borg, H. Cappelen and E. Lepore, who distinguish between a minimal, sentence-level propositional content (E. Borg calls it liberal truth conditions, which are discussed in the next section of the paper) and a communicated propositional content. The former does not involve pragmatic reasoning and is insufficient to fully grasp what the speaker intended to convey through his utterance on a particular occasion. Only the latter, the communicated propositional content, leads to a proper linguistic understanding. Third, there are the pragmatists, for example F. Recanati or Ch. Travis. Recanati claims that minimal propositions are useless in explaining communication and are therefore not worth distinguishing. Travis is more radical, claiming that minimal propositions do not exist and, as a consequence, cannot be ascribed any truth-apt content [14, 15].

Having established this broader picture, let us restrict the considerations to an assessment of E. Borg’s minimalist approach. Nevertheless, before proceeding to that approach, we need to provide a brief account of the notion of “liberal truth conditions”.

2 Liberal Truth Conditions

Context can change meaning. We often mean something other than what we literally say. But this does not mean that literal meaning is to be neglected. Speakers are often aware of literal meaning and treat it as propositional content. They use it to produce a whole range of phenomena such as irony, sarcasm or jokes. Consider the following example:

A, B and C are standing in front of a lake. A points toward the water and says:

  • A. Look! A hippo’s head!

  • B. Oh! It is.

  • C. It’s probably a whole hippo; it’s just that the rest of him is under the water.

The mechanism that C adopts to produce the joke is a refusal to enrich A’s utterance with contextual features. Thus he treats A’s literal utterance as a truth-evaluable proposition and reacts to it. What this shows is that, at least in some cases, it makes sense to distinguish a separate level of literal, lexical meaning. Moreover, some scholars, such as E. Borg, go one step further. They claim that every single utterance (even one containing indexicals or ambiguities) can, in the absence of context, be treated as a (truth-evaluable) proposition.

According to Borg: “Delivering appropriate truth-conditions for some communicative exchange is not a proper task for a theory of literal linguistic meaning” [10]. As a consequence, we must distinguish between intuitive truth conditions and liberal truth conditions. Liberal truth conditions are:

conditions which are liberal since they clearly admit of satisfaction by a range of more specific states of affairs. A liberal truth-condition posits ‘extra’ syntactic material (i.e. material in the sub-syntactic basement) only when it is intuitively compelling to do so, or when there is good empirical evidence to support the move. Furthermore, what these truth-conditions take to be delivered by sub-syntactic information is merely the presence of an additional argument place, marked by an existentially quantified argument place in the lexical entry, and not the contextually (intentionally) supplied value of this variable. [10]

Consequently, when there is no syntactic evidence for a possible argument, it does not even occur at this level. It may be decoded only in the next step, when the context of utterance and the intentions of the speaker are taken into consideration, which leads to intuitive truth conditions. E. Borg provides a few examples:

(a) If u is an utterance of ‘Jane can’t continue’ in a context c then u is true iff Jane can’t continue something in c.

(b) If u is an utterance of ‘Steel isn’t strong enough’ in a context c then u is true iff steel isn’t strong enough for something in c.

(c) If u is an utterance of ‘Fido is bigger than John’s dog’ in a context c then u is true iff Fido is bigger than the dog bearing some relation to John in c.

(d) If u is an utterance of ‘The apple is red’ in a context c then u is true iff the apple is red in c [10].
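
One way to picture the difference, on a schematic rendering of my own rather than Borg’s notation, is to treat the hidden argument place as an existentially bound variable at the liberal level and as a contextually valued constant at the intuitive level. For the unnegated counterpart of (b), ‘Steel is strong enough’:

  liberal truth condition: u is true iff ∃x StrongEnough(steel, x, c)

  intuitive truth condition: u is true iff StrongEnough(steel, a, c), where the value of a (say, the bridge being planned) is supplied by the context and the speaker’s intentions.

The liberal condition records only that the lexical entry for ‘strong enough’ opens an extra argument place; which object fills it is not a semantic matter.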

These conditions become more precise only at a higher pragmatic level [10]. The liberal truth conditions are supposed to be objective, observer-independent and syntactically guided devices, devoid of the speaker’s intention. As a result, it is sufficient that the:

competent interlocutor can grasp the truth-conditions of the sentence, she knows how the world would have to be for the sentence to be true. To think that, in addition to this, the agent must be in a position to ascertain whether or not that condition is satisfied in order to count as understanding the literal meaning of the sentence is to run together notions of meaning and verification which (the history of Verificationist approaches to meaning tells us) are best kept apart. [10]

Such an approach raises numerous objections. The most common are that liberal truth conditions are underdetermined and not intuitive. In fact, there may be propositions that are true or false in every ‘liberal’ context. Take, for instance, the examples discussed by Borg:

  (II) Jane is ready.

  (III) Jane can make it.

  (IV) Jane can continue.

It is perfectly possible that in every context there is something that Jane actually is ready for or is able to make. Moreover, the requirements set by Borg upon the truth conditions are quite minimal. It is not even necessary for a native speaker to be able to tell whether the truth conditions are satisfied in every possible situation. In other words, we do not have to decide at this level on unclear or borderline cases such as polysemy (take the expression ‘to cut something’). Is cutting the icing of a cake with a spoon an instance of cutting a cake? Analogously, is cutting the lawn into squares with a kitchen knife an instance of cutting the grass? [10] As a result, the liberal truth conditions are deeply underdetermined.

Second, Borg herself admits that it is quite rare for an interlocutor to fully realize or use ‘liberal truth conditions’ while processing linguistic data in the brain:

We need to hold apart knowing the truth-conditions of a sentence (a semantic matter) and knowing whether or not those truth-conditions are satisfied on some particular occasion of utterance (a non-semantic matter). What is obviously the case, given our limited cognitive resources and the speed of communicative exchanges, is that we simply don’t have the time or ability to check all possible situations satisfying the conditions on any given occasion; but we should also note that very often we don’t have to. [10]

Agents probably rarely use such entities while communicating, as they are too demanding cognitively. However, this does not mean that such cases are theoretically implausible. Consider a case where an agent has no access to the context of some utterance: all he can get is the bare literal meaning of the words used. Such a case is interesting for the legal theorist, as I will try to prove later in the paper. In fact, it could be more interesting to a legal theorist than to a contemporary philosopher of language or a philosopher of mind. This is because Borg’s claim that agents can access liberal truth conditions, in the absence of rich context, relies on the Fodorian version of the modularity of mind idea. Jerry Fodor introduced this idea in 1983 in his book ‘The Modularity of Mind’. Since then, it has been put under constant criticism, as the theory has many internal inconsistencies [16–18]. Nevertheless, even if this kind of modularity is a false empirical assumption, it does not directly entail that liberal truth conditions (LTC) are never really acknowledged by the agent (and are only abstract, theoretical constructions). In fact, recent psychological experiments measuring reaction times, confidence of responses, etc. seem to confirm that the level of ‘bare linguistic meaning’ (a concept strikingly similar to LTC) is accessible to interlocutors. It may be that the first interpretation accessed is the contextual one. Nevertheless, this does not rule out the possibility of ‘being aware of’ or of thinking in terms of bare linguistic meaning [19].Footnote 5 Moreover, even if LTC proved to be just an abstract concept in the processing of natural language, they can still be interesting to a legal theorist, which I shall demonstrate. In other words, even if LTC do not explain how we intuitively grasp or process utterances, they may explain what the basis of interpretation is, that is, understanding.

A third counterargument is that LTC generate problems in explaining possessive constructions. Consider the phrase ‘John’s dog’. It can be described as a dog standing in whatever possible relation to John. This is simply too broad. It is theoretically possible to state that possessives are a matter of pragmatics and that a minimal semantic theory does not have to account for them. Nevertheless, the consequences of such a move are appalling: we get a semantic theory that cannot explain one of the most basic and fundamental elements of a natural language, namely possessives. So the idea is this: when I hear an utterance devoid of context, such as ‘John’s dog.’ or ‘Maria’s coffee.’, the default interpretation I get is the possessive one. I may overturn this default assumption with the use of context. However, as long as I have no context at my disposal, the semantics of the phrase seems to point sufficiently to the possessive relation. The possessive relation is a basic one for any human language.Footnote 6
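
The worry can be stated compactly. On a minimal rendering of the kind sketched above (again my own gloss, not Borg’s notation), the liberal reading of ‘John’s dog’ picks out a dog x such that ∃R R(x, John), where R ranges over any relation whatsoever, while the default reading speakers actually report is the far narrower ιx [Dog(x) ∧ Possess(John, x)]. A theory on which the move from the former to the latter is wholly pragmatic must then explain why the possessive reading is the one we land on even in the absence of any context.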

The question therefore arises why we should accept a theory in which pragmatics is required to account for such a primary element of linguistic capacity as possessives. A good reason could be the possibility of applying the idea to a practical field such as legal interpretation, despite the aforementioned objections. Is the legal ‘literal meaning’ an example of liberal truth conditions? If the answer is positive, then even if ‘liberal truth conditions’ fail to give an account of human communication, they are useful for explaining another phenomenon: the basis for the interpretation of law.

3 Theories of Statutory Interpretation

Within the common law tradition, it is possible to distinguish three main approaches to statutory interpretation: intentionalism, purposivism and textualism. All three rely on pragmatic reasoning: intentionalism postulates the decoding of the subjective intention of the legislature, purposivism concentrates on the pragmatic aim of the text, and textualism favours an objective reading of what a hearer could have taken the speaker to mean. In other words, a textualist tries to grasp how an objective hearer, knowing the relevant context and background conditions, would have understood the utterance. Let us take a closer look at these three doctrines.

The intentionalist line of thinking goes as follows:

when judges face an interpretative question about statutory law, they should, first and foremost, strive to ascertain the actual intention of the legislature that bears on the issue at hand, and, if they manage to find out what that intention was, they must defer to it and decide the case accordingly. (…) intentionalism urges judges to take the legislative history very seriously and try to figure out the actual intentions and purposes that guided the relevant piece of legislation, striving to extrapolate an answer to the question they face from those intentions and purposes. [20]

According to the purposivist middle-ground:

the task of statutory interpretation should be seen as continuous with the legislative task of making the law in the first place or, at least, coherent with it. Roughly, the idea is this: When faced with an interpretative question about a statute, judges should ask themselves what the relevant purpose of the law is and how that general purpose can best be achieved by resolving the particular interpretative question one way or the other. And how do we know what the relevant purpose of the law is? Not by trying to figure out the actual intentions of the legislators, but by asking what a reasonable legislature would have reasonably wanted to achieve by enacting the piece of legislation that it did. [20]

Finally, textualism claims:

legislation is a speech act, an act of communication, whereby the legislature, by voting on a bill, communicates a certain legal content, and that legal content is the content of the statutory law. (…) Voting procedures are meant to generate an institutional decision. Participants in such procedures often have many reservations about the resolution they vote for; it often does not reflect their subjective preferences. But when they vote for approving a certain resolution, they express the intention to communicate the content of the resolution as the official decision of the institution in question. [20]

All three approaches rely on a notion of literal content transmitted in writing from the legislature to the judiciary. All of them supplement literal meaning with various contextual factors that help build their respective argumentation. Intentionalists do so through legislative history or records of discussions within the enacting institution. Purposivists seek a purpose that the law should serve. It can even be a ‘counterfactual intention’, that is, an intention opposite to the one expressed by the legislature. It is usually justified by the following pattern of reasoning:

  1. The legislator wanted to achieve purpose X with some law.

  2. Achieving purpose X in a particular situation leads to absurd results.

  3. If the legislator had foreseen that there could be such a situation, he would have stated that in this situation purpose Y should be achieved.

  4. Therefore, let’s favour the interpretation that achieves purpose Y.

Finally, textualists, at least at a normative level, try to reconstruct an objective understanding of words uttered in a set of definite circumstances. Consequently, textualism seems to be the most minimal of the three stances as far as the involvement of pragmatic factors is concerned. Precisely because of this pragmatically limited character, textualism proves to be of little help when vagueness or polysemy must be resolved. Moreover, it does not provide us with conceptual tools to deal with a conflict between two clear but contradictory rules [20].

4 Literal Meaning in Legal Interpretation: A Common Denominator

Intentionalism, purposivism and textualism rely on a common denominator—literal meaning. This meaning is the written content of the provisions enacted. It consists of syntactic structures combined with semantic meanings of the words employed. The common denominator is of crucial importance:

In jurisprudence, the notion of literal meaning is often used to draw a line between application and interpretation of law (…) or, especially in civil law countries, between mere interpretation and creation (in particular, judicial creation) of law; in some legal cultures it is also employed as basis for a doctrine of Constitutional interpretation. [21]

To apply the law, understanding of the provision is sufficient. However, when pragmatic enrichment is at stake, interpretation becomes necessary. Moreover, the notion of literal meaning seems to have much in common with E. Borg’s ‘liberal truth conditions’. First, it is just as underdetermined as liberal truth conditions. Take a random example—article 2(3) of the Treaty on the Functioning of the European Union:

  • (V) The Member States shall coordinate their economic and employment policies within arrangements as determined by this Treaty, which the Union shall have competence to provide.

Taken roughly, that is, through the bare syntactic structure and conventional meaning, it is not clear what ‘the Member States’, ‘the economic and employment policies’ or ‘this Treaty’ are. Moreover, we do not know what ‘arrangements’ are supposed to determine which ‘coordination’. Second, ‘literal meaning’, just like ‘liberal truth conditions’, fails to provide us with ‘intuitive truth conditions’. Without pragmatic data, there is no possibility of obtaining a determined propositional content devoid of ambiguity or polysemy.Footnote 7 Coming back to the example: while reading the sentence we usually already have an intuition about what arrangements will determine which coordination, or what ‘the economic and employment policies’ could be. Nevertheless, to create a full proposition out of (V) we must resort either to a co-textual definition somewhere else in another legal text or, if there is no such definition, to context. This is the factor that will help us deal with indexicals and ambiguities. This way we will know, for instance, what ‘the Member States’ of the European Union are at the moment of reading the sentence. Take another example:

  • (VI) All men are equal.

The liberal truth conditions permit us only to state that all men, whatever falls within the scope of that name, are equal somewhere, in some time period and in some respect. It is only with the addition of contextual factors that we can arrive at either an intentionalist or a textualist interpretation. Therefore, the two ideas of liberal truth conditions and literal meaning (as a common denominator for theories of interpretation) are almost identical at a conceptual level.Footnote 8
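
To make the contrast vivid, (VI) can be given the following rough regimentations (an illustration of my own, not drawn from the cited authors):

  liberal: ∀x ∀y [Man(x) ∧ Man(y) → ∃r ∃t ∃l Equal(x, y, r, t, l)], i.e. any two men are equal in some respect r, at some time t, in some place l;

  one contextually enriched reading: ∀x ∀y [Person(x) ∧ Person(y) → EqualBeforeTheLaw(x, y)].

The liberal condition is satisfiable by almost any state of affairs; only contextual factors, whether the drafters’ intentions or the objective circumstances of enactment, select an enriched reading of the kind an intentionalist or a textualist would defend.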

If literal meaning is a useful and fruitful tool for explaining what it means to understand a provision and what the basis for legal interpretation is, then E. Borg’s liberal truth conditions cease to be only an abstract intellectual experiment. By contrast, if the very notion of literal meaning is conceptually dubious, or is a normative, postulated ideal rather than an adequate description of the interpretive process, then Borg’s minimal semantics remains challenged.

The existence of interpretive doctrines in the common law tradition is proof of the former possibility. This is because literal meaning fits the understanding versus interpretation distinction perfectly. A bare literal meaning is sufficient for an abstract understanding of a provision: ‘minimal propositions are (and are known to be) the content any competent language user is guaranteed to be able to recover merely through exposure to the sentence uttered’ [22]. However, when a concrete decision must be taken, it can be underdetermined and unintuitive. Thus, lawyers quite often resort either to intentions (subjective or objective) or to pragmatic factors such as the purpose of the law in order to formulate reasons for adopting an interpretation. This results, respectively, in intentionalist, textualist and purposivist lines of thinking. However, in the civil law system, the notion of literal meaning is frequently evoked as a self-sufficient entity at the theoretical level. It is seen as a guardian of the law’s certainty and predictability [23].Footnote 9 By contrast, in common law systems, the concept of certainty and predictability of the law is treated as wishful thinking rather than an operable, practical idea. In such systems, literal meaning can be viewed as a manifestation of taking seriously the ethical challenge of consistency and coherence in adjudication. Thus, literal meaning guarantees a higher probability of consistent and coherent outcomes of judicial reasoning.Footnote 10 It also reflects ‘the role of minimal propositions as the content deferred to when information about the context of utterance is insufficient, unreliable or in some way unstable’ [22].

In strictly civil law systems, any divergence from literal meaning is seen as a dangerous switch from the application of the law toward its creation. It is viewed as a threat to the separation of powers principle. What, then, exactly is this mysterious notion of ‘literal meaning’ used by lawyers? According to F. Poggi, in continental law systems literal meaning is a concept that includes the notion of ‘co-text’ [21]. In fact, as I shall argue later, the Borgian LTC can include co-text as well. Thus, they can explain the mechanisms of the legal enterprise well. Let us first take a detailed look at Poggi’s arguments for introducing co-text into literal meaning:

The speaker’s intention does not coincide with what the speaker has in mind, but with what she demonstrates: in other words, what counts is the intentional action to demonstrate, the expressed or recognizable intention to make salient an object [25, 26]. In the written legal language the demonstration is always expressed by the co-text (by former provisions), which is a contextual element: so, even in legal text, when a sentence contains a demonstrative (or a pronoun), we must recognize that the semantic-syntactic rules do not fully determine the meaning of that sentence. However, it seems that the meaning resulting by the semantic-syntactic rules plus the co-textual elements (necessary to fix the reference of a demonstrative or a pronoun) is still quite determined: of course, this meaning, that we might label “legal textual meaning” (LTM) is no more a pure semantic-syntactic concept, but lawyers are more interested in the existence of a unique and determined meaning than in solving theoretical questions, concerning the relationship between semantics and pragmatics.

The ‘co-textual elements’ are substitutes for pragmatic factors. This is because, coming back to example (V), even if the context does not allow us to determine what the ‘economic and employment policies’ are, they may well be defined in some other provision of the Treaty. Consequently, the co-textual elements are formed by the collection of all valid provisions in a system. Their principal asset is that they lead to a much more determined meaning than the one generated by context. The reason for this is that they come from strictly semantic-syntactic sources, namely written text.

Lawyers are not afraid to abandon the idea of a (pure semantic-syntactic) literal meaning, to embrace the notion of a LTM [co-textual meaning – author’s clarification] – viz. of a meaning which, although not totally context-independent, is, however, unique and well determined. [21]

Crucially, the notion of legal textual meaning (LTM) conforms perfectly to literal meaning understood in Borgian terms. Even if we add co-text to the picture, the liberal truth conditions idea is still preserved and the minimalist is satisfied (at least as far as legal language is concerned). This is due to a difference in quality between context and co-text. The latter is an enlarged amalgam of syntactic and semantic data devoid of, for instance, disambiguation, reference assignment or argument filling.Footnote 11 Nevertheless, as Poggi herself further argues, there are a number of cases where the identification of the relevant co-text is quite problematic. Moreover, even a successful identification may prove insufficient for applying the law to the case at hand. In these cases we will still need to resort to intentions or to moral or political arguments. In other words, we will need to create reasons for adopting an interpretation by enriching the LTM with contextual factors. However, there will be a number of cases where the LTM will be sufficient. These will be the majority of ‘easy cases’, where the liberal propositional content will be easily satisfied by the facts of the case. Nevertheless, even if context is necessary to take some decisions, all the interpretations will rely on an understanding of the provisions that conforms to literal meaning understood in Borgian terms.

In the Italian tradition (the Polish tradition is rather similar), Poggi distinguishes four levels of co-text:

we may distinguish four levels of (legal) co-text: a first level formed by (norms expressed by) other paragraphs of the same section; a second level composed by (norms expressed by) other sections of the same text (of a statute or a code); a third level consisting of (norms expressed by) provisions of other legal texts; and, finally, a fourth level formed by legal principles (possibly constitutional principles). [21]

The decision to resort to the next level of co-text can often be a matter of judicial discretion. However, all the co-textual levels are acceptable to the minimalist. They all preserve the idea of liberal truth conditions, as they are an amalgam of syntactic and semantic factors without pragmatic elements.

If there exists a common denominator of interpretive doctrines constituted by liberal truth conditions (sometimes supplemented by co-text), then how are we to account for the issue of pervasive disagreement in law? How can we explain Ronald Dworkin’s observation that theoretical disagreement about the law is omnipresent? As Brian Leiter puts it: ‘The “theoretical disagreements” that interest Dworkin presuppose that statutes and judicial decisions are, indeed, “grounds of law,” but deny that this settles the question of what the criteria of legal validity really are: the key theoretical disagreements for Dworkin concern the meaning of the acknowledged sources of law such as statutes and constitutional texts’ [27]. Thus, where does the discussion about ‘criteria of legal validity’ fit into the idea of liberal truth conditions? It can fit well, for two reasons.

First, if statutes and judicial decisions are grounds of law, then their text has liberal truth conditions. This is the very minimal level of meaning common to all interpretive possibilities. Nevertheless, in hard cases, a minimal proposition (with liberal truth conditions) derived from the text of the law does not tilt the balance toward either possible interpretation. Thus the idea of liberal truth conditions does not interfere with existing pervasive disagreements. It only emphasises that there are common grounds of the law in the interpretive disputes over criteria of legal validity that arise in hard cases.

Second, pervasive disagreements are rare. They must be rare if a legal system is to function effectively. Judges must agree most of the time over what the validity criteria are; otherwise no system could be stable [27]. Thus, any convincing theory of the grounds of the legal interpretive enterprise must focus rather on the less controversial cases. B. Leiter puts it neatly:

‘If theoretical disagreement were something other than a marginal phenomenon—if it were not primarily the provenance of the pyramid of the universe of legal phenomena—then the claims of a theory, like Dworkin’s, that give it pride of place might be theoretically significant. But when the most striking feature about legal systems is the existence of massive agreement about what the law is, then any satisfactory theory has to do a good job making sense of that to be credible’ [27].

If Leiter is right and there is massive agreement in most cases, then minimal propositions with liberal truth conditions, supplemented by co-text, should suffice in most cases for giving a clear account of what is going on. Moreover, in hard cases, where pervasive disagreement occurs, propositions with liberal truth conditions are an account of what the ‘common grounds of the law’ are.Footnote 12, Footnote 13

5 Where Else Do We Need Literal Meaning?

The above considerations were largely based on the Levinsonian pure (or meta) method. They aimed at finding some abstract mechanism employed by lawyers in their work. This section aims at a more descriptive analysis of some instances of literal meaning found in the literature. The use of empirical data can provide a few interesting examples where the notions of liberal truth conditions or literal meaning are exploited [29].

Firstly, there are the facetious uses of language that “rely on equivocation on literal vs conveyed meanings, and on a surface lack of cooperation, for their effect” [29]. Recall the joke about the hippo from section two of the paper. Another example is the following conversation:

[B has a teenage daughter, but is no longer living with her mother. He is telling A how he and his ex-wife normally communicate in writing about matters concerning their daughter. Lately, they have had occasion to discuss their daughter with more than usual frequency.]

A. Why don’t you just talk to her? I thought you lived in the same town.

B. We do. We’ve just become used to writing. I did talk to her once, though.

A. [laughing] Well, you must have!

B. Oh God! I meant “once RECENTLY”! [29]

In the above example, A clearly avoids pragmatically enriching B’s ‘once’ with the relevant time frame (‘recently’). Nevertheless, such jokes usually rely on a partial use of literal meaning. They rely on only one of the expressions used in a sentence. They alter the proposition decoded, but they do not create a fully ‘liberal’ proposition, as indexicals such as ‘we’ or ‘her’ are obviously disambiguated by A. Therefore, the mechanism is comparable to making a step backwards: from a full proposition to one in which an element is not disambiguated or pragmatically enriched. The aim of such a move is the creation of a joke. It is perfectly possible that, from a cognitive point of view, the full proposition is decoded first and only afterwards is some element of it treated as liberal; but this does not create a Borgian proposition in which all truth conditions are liberal.

Similarly, other examples, such as manipulative uses of language in advertising, also rely on a literal use of an expression. They do not involve the literal use of an entire sentence [29].

The most interesting examples of literal uses provided by Mosegaard-Hansen are:

openly adversarial uses of language, such as courtroom interaction, where literal meanings take on a greater importance than in most other settings, due to the stakes involved and to the essentially conflictual nature of the dialogue. [29]

This becomes particularly blatant in witness cross-examination during criminal trials in common law systems. The courtroom environment should demand absolute precision. Therefore, the best strategy to adopt is (or at least should be) the meticulous disambiguation of each and every liberal truth condition of a sentence in order to form the most precise possible context of utterance. This is supposed to be achieved by questioning the witness about what he means when uttering an expression that could potentially be ambiguous or pragmatically enriched. Take the example:

Q. At that time at the restaurant on April 26, did Mr. Wahl report to you at least in general terms a sighting he had made at Geary State Lake?

A. Yes, sir.

Q. Did you attempt to conduct a full interview of him at that time?

A. No. We had been instructed when we obtain a new piece of information like that-

Mr. Tigar: Excuse me, you [sic]. Not responsive.

The court: Yes. Not what you were instructed, what you did.

The witness: Okay. No, I did not. (Oklahoma City Bombing trial, 12/11/97) [29]

The above communicative exchange is an attempt to de-pragmatize meaning. It is not characteristic of natural language, as the courtroom situation is rather an artificial one. The curious observation is that precision here is not achieved through the addition of more fine-grained contextual factors. Precision is gained through resorting to literal meaning. This is even stronger than a ‘step backwards’ and it demonstrates the importance of literal meaning to the legal realm.

6 Conclusion

The Borgian idea of minimal semantics and liberal truth conditions is an attempt to defend semantics from an overflow of pragmatic theories, an overflow that is to some extent necessary because of the need to cope with ‘unarticulated constituents’. Nevertheless, a fully-fledged theory of language must be one that explains the evolutionary riddle that we call human communication. The idea of liberal truth conditions is undoubtedly an interesting intellectual experiment. Moreover, it does find fields of practical application in legal theory. When applied to the realm of law, the idea of liberal truth conditions has a role to play. It is a concept that is supposed to create a path toward a coherent and consistent legal system, ruled by the ‘letter’ of the law: a system with a stable separation of state powers. At the descriptive level, literal meaning sometimes needs to be supplemented with data that cope with indeterminacy and ambiguity. This is blatant when hard cases are at stake. The supplement can either be pragmatic data, just as intentionalists, textualists and purposivists claim, or a substitute for pragmatic elements: the idea of co-text, developed in civil law systems. However, even if we supplement the syntactic and semantic features of a sentence with a level of co-text, what we receive is still a minimal proposition with liberal truth conditions in the Borgian sense. Finally, courtroom interaction such as witness cross-examination in common law systems is not an example of using a liberal proposition. This is because the minimal element usually has a selective character: it concerns only one of the expressions contained in an uttered sentence.

To sum up, in this paper I have proposed a novel approach to the notion of literal meaning used by lawyers: to treat it as Borgian liberal truth conditions. I have also argued that the notion of co-text can be accommodated by LTC as far as legal language is concerned. I believe this framework fits the jurisprudential distinction between understanding and interpretation elegantly. Moreover, it explains why in some cases understanding is insufficient and interpretation is necessary. Finally, the idea is profitable not only to jurisprudence but also to the philosophy of language, because it suggests an area where Borgian liberal truth conditions are instantiated.