Chapter 9 of The Boundary Stones of Thought (henceforward Boundary Stones) is a fascinating, characteristically rigorous and resourceful exploration of how standard set-theory might best be reconfigured if one is persuaded that the universal validity of the principle of bivalence for set-theoretic statements is compromised by a crucial indeterminacy in the notion of set. Ian Rumfitt is of that persuasion but, in keeping with his project throughout this beautifully conceived and elegantly written book, proposes to retain classical logic all the same as the appropriate medium for set-theoretic proof. At the end of this note, I shall comment—in the space available to me only, alas, very cursorily—on what I see as two difficulties for the way Rumfitt pursues this goal, one specifically for the set-theoretic case, the other more general. My principal concern, however, will be with the argument he sustains for the indeterminacy of set in the first place, and with its relative force in comparison with an illustrious—some may say, notorious—precursor. I’ll begin with the precursor.

In the course of his career, Michael Dummett ran a number of distinguishable kinds of argument against the unrestricted validity of classical logic. The Dummettian train of thought that provides the background to Chapter 9 of Boundary Stones culminates in the claim that the principle of bivalence, and hence—so Dummett concludes—the law of excluded middle, is not generally justified where quantification over indefinitely extensible collections is concerned.Footnote 1

Strikingly, Dummett countenanced a very generous range of application for the notion of indefinite extensibility. Each of set, cardinal number, ordinal number, real number, arithmetical proof, arithmetical truth, and even natural number is in various places in his writings suggested to be indefinitely extensible.Footnote 2 Briefly characterised, an indefinitely extensible concept, according to Dummett, is one that is essentially associated with a “principle of extension”—a function—that takes as argument any definite totality, t, of objects each of which falls under the concept and produces as value an object that also falls under the concept, but is not in t. In Dummett’s view, when we are dealing with a domain comprising the instances of such a concept, the general validity of the principle of bivalence for quantifications over it must be forfeit.

Rumfitt is, of course, fully au courant with this aspect of Dummett’s thought. But he thinks there is a better way of making something close to Dummett’s point. Evincing some dissatisfaction with what he describes as the “rather dark”Footnote 3 notion of indefinite extensibility, he quickly moves to consider instead a line of argument, due to William Tait,Footnote 4 that targets bivalence for set-theoretic statements on grounds that he takes to be similar in spirit to Dummett’s but which Rumfitt finds more satisfactory because engaging “more directly with the specifics of axiomatic set-theory” (p. 264). I shall be suggesting that, properly understood, Dummett’s argument is actually the deeper and more forceful of the two.

To begin with, I don’t think that the notion of indefinite extensibility itself is all that “dark”. The thrust of the usual illustrations of it seems reasonably clear. The most salient problem with Dummett’s various characterisations of it is not opacity of intent but what one hopes is an avoidable circularity. For let P be any concept falling within the scope of Dummett’s intent and F the relevant principle of extension. Evidently not any old totality of items falling under P is meant to be an admissible argument for F. In particular, indefinitely extensible sub-totalities, including of course P itself, are excluded. Rather we are supposed to restrict F’s domain of arguments to definite sub-totalities of P. But ‘definite’ here is just the complement of ‘indefinitely extensible’. So someone who has not yet grasped that notion won’t understand the implicit restriction either, and so will not be able to acquire any specific understanding from Dummett’s characterisations.

There is at least one extant solution to this problem.Footnote 5 Here I must give only the briskest statement of it. We can begin with an explicitly relativised notion. Let P be any concept and Π a higher-order property of concepts of the type of P. Then we say that P is indefinitely extensible with respect to Π if and only if there is a function F from items of the same type as P to items of the type of the instances of P such that if Q is any sub-concept of P such that ΠQ, then

  1. FQ falls under the concept P,

  2. It is not the case that FQ falls under the concept Q, and

  3. ΠQ′, where Q′ is the concept instantiated just by FQ and by every item which instantiates Q (in set-theoretic terms, Q′ is Q ∪ {FQ}).

The idea is that the sub-concepts of P of which Π holds have no maximal member. For any sub-concept Q of P such that ΠQ, there is a proper extension Q′ of Q such that Q′ is likewise a sub-concept of P and ΠQ′.

This relativised notion is straightforward to illustrate. By its lights, natural number is indefinitely extensible with respect to finite and the operation of selecting the successor of the greatest member of a finite set of natural numbers, real number is indefinitely extensible with respect to countable and Cantor’s diagonal construction, and arithmetical truth is indefinitely extensible with respect to recursively enumerable and Gödel’s key construction in the proof of incompleteness of arithmetic.
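To fix ideas, the first of these examples can be sketched in a few lines of Python. This is an informal model of the relativised definition, not part of the text's own apparatus: P is natural number, Π is "is finite" (trivially satisfied by any Python set), and the names F and extend are mine, chosen for illustration.

```python
def F(Q):
    """Principle of extension: the successor of the greatest member of Q
    (0 for the empty collection)."""
    return max(Q) + 1 if Q else 0

def extend(Q):
    """Q' = the concept instantiated by FQ together with everything in Q,
    i.e. set-theoretically Q union {FQ}."""
    return Q | {F(Q)}

# Clauses (1)-(3) of the relativised definition, checked for a sample Q:
Q = {0, 1, 2, 5}
new = F(Q)
assert isinstance(new, int) and new >= 0   # (1) FQ falls under natural number
assert new not in Q                        # (2) FQ does not fall under Q
Q_prime = extend(Q)
assert Q_prime == Q | {new}                # (3) Q' is Q union {FQ}, still finite

# Iterating the extension never exhausts the concept: each pass yields
# a fresh natural number beyond the collection so far.
S = set()
for _ in range(10):
    S = extend(S)
print(sorted(S))   # → [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

Of course no such finite simulation captures the transfinite continuation discussed next; it merely exhibits the three defining clauses in miniature.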

Now reflect that in some cases, the process of extension is not actually indefinitely possible but stabilises at some ordinal point. These are cases where—helping ourselves to the classical ordinals—we can say that some ordinal λ places a lowest limit on the length of the series of Π-preserving applications of F to any Q such that ΠQ. In such instances, any series of extensions whose length is less than λ results in a collection of P’s that is still Π, but once the series of iterations extends as far as λ, the resulting collection of P’s is no longer Π, and so the “process” stabilises. Thus ω provides such a limit for the case of natural number, finite and successor of the greatest member, and ω1 provides such a limit for real number, countable and the diagonal construction.Footnote 6

Say in such a case that P is up-to-λ-extensible with respect to Π. And now let us say that P is properly indefinitely extensible with respect to Π just if P, Π, and some F meet the conditions for the relativised notion as originally defined but there is no λ such that P is merely up-to-λ-extensible with respect to Π. Finally, say that P is absolutely indefinitely extensible just in case there is some Π such that P is properly indefinitely extensible with respect to Π.

So that finesses the circularity. It is salient, however, that our revamped characterisation embeds an unrestricted quantification over the ordinals. This is, plausibly, no artefact of our characterisation. Something of the sort must be a feature of any satisfactory account of the intent of “indefinite extensibility” which by the very phrase adverts to the idea of serial but limitless iteration of some or other process of expansion. Assuming so, I think the point may shed some light on Dummett’s generosity with the notion, remarked on earlier. We observed above that natural number, for instance, is merely up-to-ω-extensible with respect to finite and successor of the greatest member.

But that claim requires that we countenance ω, the first infinite ordinal. Suppose a finitist about the ordinals. From the point of view of such a theorist, natural number will qualify as absolutely indefinitely extensible. And while there are not many contemporary sceptics about the simple infinite, sceptics about the Cantorian uncountable are not so scarce. These theorists will acknowledge only countable ordinals, so for them real number will, by our characterisation, likewise rank as absolutely indefinitely extensible. In short, which are the absolutely indefinitely extensible concepts will, on our characterisation, depend upon what one takes to be the extent of the ordinals, the measures of all possible series. We may conjecture that Dummett’s ‘generosity’ reflected a sense of this, coupled with the reflection that for any but the most theological of Platonists, the extent of the ordinals is a matter that is—metaphysically—open.

It is, however, the latter thought that is key to the argument against the assumption of bivalence. Why should a concept’s possession of an indefinitely extensible domain put an obstacle in the way of bivalent semantics for statements concerning its instances—in particular, for statements involving quantification over the entire domain? The crucial point would appear to be that Dummett is thinking of indefinite extensibility as a distinctive genre of vagueness—a kind of essential haziness of extension.Footnote 7 And this, he is taking it, impacts distinctively on the meaning we may legitimately attach to the quantifiers. For where we generalise over the instances of such an extensionally hazy concept, we may not legitimately suppose, as classical bivalent semantics does, that truth-values will invariably be conferred on a generalisation ‘upwards’ as it were: that whenever ‘(∀x)Ax’ is true, it will be made so by the truth of each of a determinate range of instances, ‘Aa’.Footnote 8

How, though, are we to understand this putative genre of vagueness? Not straightforwardly on the model of borderline-case vagueness of the domestic or garden variety exhibited by “red”, “bald”, etc. Ordinal number is itself the paradigm (both intuitively and by the above characterisation) of an absolutely indefinitely extensible concept, but if there is vagueness in it, it cannot be cashed out by reference to the idea that there are or could be items which were borderline-cases for it—neither clearly ordinals nor clearly not.

Still, we can approach a partial understanding of the Dummettian thought and its impact on the meaning of quantification by focusing on a garden-variety example. Consider a double sorites. Suppose we learn that a linear spatial array of coloured 2d figures has been constructed that is simultaneously a sorites series both for red and for round. We understand the instructions to have been that a bright red, circular figure was to be placed at one edge and a pale orange, elliptical figure at the other, and that the series of figures had to be arranged in such a way that each element in the array was, by eye, indistinguishable in both hue and shape from its immediate neighbours. Knowing only this, what should we think about this generalisation over the series:

All the red figures are round?

Assuming for the sake of the example that garden-variety vagueness already poses obstacles to bivalence, there is, in our present state of information, no evident reason why the generalisation has to be either determinately true or determinately false. If, as there may be, there are figures in the series of which it is indeterminate both whether they are red or not and whether they are round or not, then—since there is no internal relation between the two characteristics—it will be indeterminate of any such figure whether it is red and not round and so indeterminate whether it is a counterexample to the generalisation. And now we are free to suppose that the array has been so constructed that the worst cases for the generalisation are all cases like that. If so, it will then be indeterminate whether it has any counterexamples, so indeterminate whether it is true or false.

But of course the situation may change if we learn that there were additional instructions—for example, an instruction to ensure that changes in hue took place more rapidly than changes in shape, so that the first figures to be placed that are borderline in hue were still to be definitely round. Or conversely.

For our purposes, the crucial points to take from the example are two. First, the reds and the rounds are of indeterminate extent in the imagined array. Second, generalisations over them are accordingly at risk of indeterminacy unless it is determined otherwise, by appropriate additional rules of construction for the series. These are exactly the essential elements in the way that Dummett is thinking of generalisations over the instances of any indefinitely extensible concept: they are concepts whose instances, under some canonical order, are likewise indeterminate in extent, so that generalisations over them can be presumed to be determinate in truth-value only in so far as determined as true or false by essential aspects of the very concept concerned or by the ‘rules of construction’ for the relevant ordering. A flow of truth-value fixation upwards, from the truth-value of the instances, cannot be relied upon.

So, the relevant species of vagueness for Dummett’s purpose is essential indeterminacy of extent. Still, the analogy lets us down at a crucial point. The reds and the rounds peter out in the array we imagined. There is a smooth slide towards oranges and ellipses. But the ordinals, e.g., do not peter out—we do not, if we run the series of ordinals on and on, gradually slide into a region where we no longer deal with ordinals but something else. The analogy uses the reds and the rounds to illustrate indeterminacy in extent—that is how Dummett is thinking of the instances of an indefinitely extensible concept. But it does not, so far, convey any understanding of how a concept that is not borderline-case vague can nevertheless be indeterminate in extent in something relevantly similar to the way in which, in the imagined array, red and round are. The reason for the indeterminacy of the extent of the reds is the borderline-case vagueness of red. The reason for indeterminacy in extent for the case of ordinal cannot be that. So what is it?

According to the proposed characterisation of absolute indefinite extensibility, the extension of any absolutely indefinitely extensible concept spreads out in tandem with that of ordinal. But how far do the ordinals go? Let an unrestrictionist be anyone who allows, as for example a finitist does not, that every well-ordered series has an ordinal number, and one moreover greater than that of any of its proper segments.Footnote 9 Then the crucial point is that the extent of the ordinals is something that we—unrestrictionists—have not merely not fully determined but cannot determine. The ordinals, unrestrictedly conceived, are extended by any possible number of iterations of successor and of limit. But how many such iterations are possible? The rub is that, for the unrestrictionist, the extent of the ordinals themselves is a parameter in the relevant notion of possibility. If we enquire what determines how far the possibilities run, there is no answer for the unrestrictionist to give other than: well, for any ordinal number λ, a series of iterated operations of successor and limit of length λ is always possible. It follows that there is no explaining the extent of the ordinals. Sure, they are to run on “without bound”. But we have no grip on what a bound is here except: some specific ordinal limit. We would thus need an antecedent grasp of the extent of the ordinals to determine the extent of the possibilities. To say that the series of ordinals goes on as far as is possible is to say nothing that we can non-circularly explain. Nor, therefore—since anything we can understand must somehow have been explained to us without presupposition that we understood it already—is it anything of which we possess a clear concept.

This is of no consequence for a theological Platonist. All that follows for that theorist is that in ordinal arithmetic we investigate a domain of whose overall structure we have no clear concept. But if we hold that the determinacy of a mathematical domain depends on our achieving a determinate concept of it—something which, as Rumfitt rightly emphasises, is an essential part of Dummett’s outlookFootnote 10—then we must acknowledge that the extent of the ordinals is indeed indeterminate. With that acknowledgement, we surrender the right to think of the truth-values of generalisations over the ordinals as settled, after the fashion of classical bivalent semantics, by those of their instances. And crucially the same will then go, in tandem, for absolutely indefinitely extensible concepts as a class. Rather, as illustrated with the reds and the rounds, the only possible ground for the truth of such generalisations must reside in the ‘rules of construction’.

That, I take it, is the essential thrust of Dummett’s argument.

The contrasted argument of William Tait which provides the springboard for Rumfitt’s chapter focuses on the notion of set (though in so far as their instances are subjected to set-theoretic treatment, it also bears on cardinal and ordinal). It draws on assumptions about categoricity of axioms and determinacy of content that are, it is my impression, quite prevalent in contemporary philosophy of mathematics. I think these assumptions are in general very questionable and that the Tait argument, as a stand-alone challenge, is weakened in consequence.

The categoricity of a theory ensures that any pair of models of its axioms have a domain of the same structure. Tait’s argument begins with the consideration that, unlike the axioms for second-order arithmetic, e.g., the standard axioms for second-order set theory (specifically ZF(C)2 with full classical impredicative second-order logic) are not categorical, but only quasi-categorical. That is, while all models of ZF(C)2 will share an initial structure—the structure of the sets in the standard iterative hierarchy up to but excluding the first inaccessible—they may diverge thereafter. In short, the axioms fail to settle the ‘height’ of the universe.

It follows that a very large range of claims, and their contradictories, about how potentially inaccessibly high the universe of sets extends are independent of the axioms of ZF(C)2. A perfectly insightful being who has no information about sets other than that they obey those axioms would have no basis for an opinion about even whether there are sets of inaccessible cardinality at all, let alone what marvellous extent and variety they might display if there are. However, in order to move from this consideration to anything that threatens the unrestricted validity of bivalence for statements expressible in the language of ZF(C)2, we need to suppose more: that such a being is in a position to know all there is to know about sets and hence, reminiscently of the esteemed Professor Jowett, that “What he don’t know isn’t knowledge”.

Tait’s argument for indeterminacy thus presupposes first, contrary to Platonism, that the facts about sets are one and all determined by our concept of the sets and, second, that that concept should be regarded as exhausted by the axioms of ZF(C)2. It is the second claim that underlies Rumfitt’s laudatory remark that the Tait argument for indeterminacy “engages more directly with the specifics of axiomatic set-theory”. But is this a strength of the argument? The implicitly claimed connection between categoricity and determinacy of subject matter may provoke misgivings in both directions.

While the claim that categoricity suffices for determinacy is strictly orthogonal to the present purpose, I’ll take the opportunity briskly to pour some cold water. There are significant reasons for doubt that the categoricity of an axiom set suffices tout court to fix a determinate conception of a subject matter for it. To be sure, a distinguished company of philosophers of mathematics has united in reposing confidence in at least some instances of the transition from categoricity to a conclusion about determinacy of concept and thence to an acceptance of local bivalence.Footnote 11 But there is a gap in this train of thought when so generally characterised. Categoricity requires only that, of any two non-isomorphic candidate interpretations of the relevant theory, at least one must be defective. A proof of that need not go so far as to delineate any determinate conception of a unique structure. It is not guaranteed even that there will be any determinate structure characterised by a categorical set of axioms if such a structure has to be understood as the shape of a possible set of objects. That this doesn’t follow is obscured by the conventional assumption in model theory that “true of” is bivalent with respect to any particular candidate model; and that assumption in turn rests on the thought that the domain of a model is always a set. Of course we can make that true by so defining “model”. But in the present context, where the domain of an interpretation—to fall back on a neutral term—may be indefinitely extensible, it is exactly the assumption of the bivalence of “true of” that is sub judice. The point applies even at the level of arithmetic. If the natural numbers were somehow indeterminate in extent—as the strict finitist takes them to be—all that the categoricity of PA2 would ensure is that any pair of acceptable interpretations of its axioms will be of matching indeterminacy.Footnote 12

It is, however, the reverse connection between categoricity and determinacy that the Tait argument requires. But this too seems very challengeable. That a mathematical subject matter admits of a categorical axiomatisation is surely not a necessary condition for the availability of a determinate conception of it. Most of us believe that we have a determinate conception of the structure of the natural numbers, but it would be very far-fetched to claim that the source of that—the way we arrive at that conception—is via the categorical axioms of second order Peano arithmetic. Rather there is, or so we think, a determinate intended interpretation of arithmetic which is already available at first order and precisely contrasts with what we recognise as the unintended interpretations which first-order arithmetic allows. Suppose a set theorist claims, correspondingly, an intended conception of the universe of sets that similarly transcends what is characterised by ZF(C)2. Then a philosopher who is running the Tait argument needs to be able to argue either that there is no such concept to be had or, perhaps more plausibly, that any such concept must anyway be in turn indeterminate in certain respects. Given, however, that whatever the set theorist may have to say by way of articulation of her claimed conception will, as she will acknowledge, not fare any better than the ZF(C)2 axioms in point of categoricity, a dialectically effective argument that the claimed conception does not eliminate indeterminacy will have to resort to other considerations. I would suggest a good direction in which to look for such considerations would be precisely towards the indefinite extensibility of the intuitive pre-formal notion of set.Footnote 13 But that of course is exactly to look away from “the specifics of axiomatic set theory” and instead towards considerations concerning its intuitively intended subject matter.

Finally, to the two concerns I advertised at the beginning. Each pertains to the idea that we may retain excluded middle while acknowledging the lack of any guarantee of bivalence.

If excluded middle is to be valid while bivalence fails, then—for anyone but a conventionalist about validity—something else had better be sustaining the validity of the former. Rumfitt’s proposal (in section 5 of chapter 9) is that set-theoretic instances of excluded middle can be sustained by a Kripkean semanticsFootnote 14 applied to the sentences of first order set theory when the latter are reinterpreted in accordance with the so-called Gödel-Gentzen translation scheme described at pp. 288–289. The details are sophisticated and ingenious, but alarm bells are sounded, for this reader at least, by the point that the package secures the validity of excluded middle only by, in effect, interpreting a disjunction as a compound negation: the negation of the conjunction of the negations of its disjuncts. Classically, of course, there is indeed exactly that equivalence. But the equivalence is vouchsafed by bivalence. Without that background, a potential gap opens between, as we may say, a genuine disjunction—something whose endorsement entrains commitment to the one or the other disjunct’s being determinately true—and the weaker claim, consistent with their both being indeterminate, that we may at least exclude the joint determinate truth of their respective negations. There is therefore a case to answer that, wherever failures of determinacy are acknowledged, the Rumfittian treatment of “or” effectively exiles any genuine notion of disjunction from the conceptual resources of the discourse concerned.
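For concreteness, here is a toy rendering of the translation scheme just mentioned, applied to a miniature formula syntax. This is my own illustrative sketch, following one standard presentation of the Gödel-Gentzen negative translation, not a reconstruction of Rumfitt's own apparatus; formulas are represented as nested tuples. The clause for disjunction exhibits the point at issue: "A or B" goes over to the negation of the conjunction of the negations of the translated disjuncts.

```python
def gg(f):
    """Godel-Gentzen negative translation on a tiny tuple-based syntax."""
    op = f[0]
    if op == 'atom':                      # atomic formula: double negation
        return ('not', ('not', f))
    if op == 'not':
        return ('not', gg(f[1]))
    if op == 'and':
        return ('and', gg(f[1]), gg(f[2]))
    if op == 'or':                        # disjunction becomes a compound
        return ('not', ('and', ('not', gg(f[1])),    # negation: not(not A*
                               ('not', gg(f[2]))))   #           and not B*)
    if op == 'implies':
        return ('implies', gg(f[1]), gg(f[2]))
    if op == 'forall':
        return ('forall', f[1], gg(f[2]))
    if op == 'exists':                    # existential via negated universal
        return ('not', ('forall', f[1], ('not', gg(f[2]))))
    raise ValueError(f'unknown connective: {op}')

# An instance of excluded middle, A or not-A, goes over to the negation
# of a conjunction rather than to a genuine disjunction:
A = ('atom', 'A')
lem = ('or', A, ('not', A))
translated = gg(lem)
assert translated[0] == 'not' and translated[1][0] == 'and'
```

The assertion at the end makes the worry vivid: the translated "disjunction" is, syntactically, a negated conjunction, so its Kripkean evaluation never requires that either disjunct be determinately true.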

It would, it seems to me, be a serious blow to Rumfitt’s project if that were the effect for his proposed set theory.Footnote 15 For in that case the Gödel-Gentzen translation schema cum Kripkean semantics would merely save something that looks like classical logic, at the cost of jettisoning the intended understanding of any concept that incorporates a pukka, properly distributive disjunction. There would arguably be, therefore, an implicit jettisoning of the ordinary concept of set-theoretic union—a cornerstone of our understanding of the iterative hierarchy itself. Could the result, accordingly, even be a set theory at all?

In general, I must confess to some uncertainty about the point of retaining classical logic without bivalent semantics if the cost of doing so is so significant a compromise of its expressive resources.

The second concern is one that Rumfitt himself addresses in detail in his concluding chapter. His response requires a much more detailed appraisal than I can offer here, but I can gesture at a cause for dissatisfaction with it. Bivalence is not merely a potential support for classical logic, the surrender of which raises a concern about our right to regard classical logic as valid. There is, more directly, an utterly simple-seeming inference (the “Simple Argument”) from excluded middle to bivalence, contraposition across which must call into question the coherence of any view that retains excluded middle once bivalence is dropped. In briefest outline, assume that whenever the proposition that P is expressed by S, then we may infer from P as premise that S is true. Then reasoning by cases from any instance of excluded middle will yield (what we may naturally take to beFootnote 16) a corresponding instance of bivalence: S is true or not-S is true.
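Schematically, and in my own rendering, the Simple Argument runs as follows, where S expresses the proposition that P:

```latex
% The Simple Argument, rendered schematically.
\begin{align*}
&\text{1. } P \lor \neg P
    && \text{(excluded middle)}\\
&\text{2. } P \vdash \mathrm{True}(S)
    && \text{(truth-introduction)}\\
&\text{3. } \neg P \vdash \mathrm{True}(\neg S)
    && \text{(truth-introduction)}\\
&\text{4. } \mathrm{True}(S) \lor \mathrm{True}(\neg S)
    && \text{(from 1--3, by reasoning by cases)}
\end{align*}
```

It is the truth-introduction steps at lines 2 and 3 that, as noted below, Rumfitt's response dismisses as invalid in the presence of indeterminacy.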

Though disarmingly simple,Footnote 17 this schematic argument is of course, as Rumfitt well appreciates, a fundamental objection to the whole project of the book. Chapter 10 is devoted to its rebuttal. There seems very little room for manoeuvre. The essence of Rumfitt’s response is to dismiss as invalid the truth-introduction steps involved. They are, in his view, invalid exactly where and because we are in the presence of indeterminacy. For if it is indeterminate whether P, the conditional: if P, then S is true, will have an indeterminate antecedent but a false [sic, p. 309] consequent, and will accordingly be “unacceptable”.

One response the reader may have to this move is to wonder why Rumfitt is concerned to block the reasoning at all. Why not just let it run and deny that the conclusion is, in the setting provided by the non-distributive “or” that features in the premise, a proper expression of bivalence?

Let there be a good answer to that. My worry concerns the question: what exactly mandates the needed treatment of the conditional? To treat a conditional as failing because it has an indeterminate antecedent proposition and its consequent is a predication of truth on an expression of that proposition, is to regard claims of the truth of (expressions of) indeterminate propositions as untrue; to treat indeterminacy as a failure of truth, a kind of wide falsity. I cannot argue the point hereFootnote 18 but I think that, at least for soritical vagueness, this notion is a great mistake. Indeterminacy in those cases, properly viewed, is just that—a situation where neither truth nor falsity is settled; where the matter of truth-value is open, a situation consistent with each of the poles.Footnote 19 If the truth-value of a statement is unsettled, that is not a way of failing to be true. To think otherwise is to take indeterminacy to be a kind of settlement after all.

To stress, I do not assert that every possible variety of indeterminacy must be conceived like this. In particular, we may want to distinguish the borderline-case vagueness of red, bald and the other ‘usual soritical suspects’ from the predicament of, say, the Generalised Continuum Hypothesis in exactly this respect. But however that may be, Rumfitt’s own treatment of the former (chapter 8) places massive weight on their assimilation. Absent persuasive argument for that—for the legitimacy of a ‘gappy’ conception of indeterminacy everywhere—it is unclear that Rumfitt has offered any generally satisfying diagnosis of error in the Simple Argument. A satisfactory way of conserving the “green fields” of classical logic while demolishing the “gasometers” of bivalent semantics had better not require a misconception of the very phenomenon that calls bivalence into question in the first place.Footnote 20