This is an annotated reading list on the beginning elements of the theory of computable functions. It is now structured so as to complement the first eight lectures of Thomas Forster’s Part III course in Lent 2011 (see the first four chapters of his evolving handouts).
Here is Hilbert in his famous address of 1900: The supply of problems in mathematics is inexhaustible, and as soon as one problem is solved numerous others come forth in its place. Permit me in the following, tentatively as it were, to mention particular definite problems, drawn from various branches of mathematics, from the discussion of which an advancement of science may be expected.
We are going to prove a key theorem that tells us just a bit more about the structure of the non-standard countable models of first-order Peano Arithmetic; and then we will very briefly consider whether any broadly philosophical morals can be drawn from the technical result.
Preface
1 The First Theorem revisited
1.1 Notational preliminaries
1.2 Definitional preliminaries
1.3 A general version of Gödel’s First Theorem
1.4 Giving the First Theorem bite
1.5 Generic Gödel sentences and arithmetic truth
1.6 Canonical and standard Gödel sentences
2 The Second Theorem revisited
2.1 Definitional preliminaries
2.2 Towards Gödel’s Second Theorem
2.3 A general version of Gödel’s Second Theorem
2.4 Giving the Second Theorem bite
2.5 Comparisons
2.6 Further results about provability predicates
2.7 Back to the First Theorem
2.8 Introducing Rosserized provability predicates
In the Wednesday Logic Reading Group, where we are working through Sara Negri and Jan von Plato’s Structural Proof Theory – henceforth ‘NvP’ – I today introduced Chapter 6, ‘Structural Proof Analysis of Axiomatic Theories’. In their commendable efforts to be brief, the authors are sometimes a bit brisk about motivation. So I thought it was worth trying to stand back a bit from the details of this action-packed chapter, as far as I understood it in the few hours I had to prepare, and to try to give an overall sense of the project. These are the notes I wrote for myself. As often with such middle-of-term efforts dashed off in a couple of hours, I would have liked to do better and do more justice to what we are reading, but I just don’t have time to do more now than make a few corrections to the first version. So the usual warning applies: caveat lector.
In approaching Ch. 4 of Saving Truth from Paradox, it might be helpful first to revisit Curry’s original paper, and to revisit Lukasiewicz too, to provide more of the scene-setting that Field doesn’t himself fill in. So in §1 I’ll say something about Curry, in §2 we’ll look at what Lukasiewicz was up to in his original three-valued logic, and in §3 we’ll look at the move from a three-valued to a many-valued Lukasiewicz logic. In §4, I move on to announce a theorem by H´…
In the section ‘Further reading’, I listed a book that arrived on my desk just as I was sending IGT off to the press, namely Church’s Thesis after 70 Years edited by Adam Olszewski et al. On the basis of a quick glance, I warned that the twenty-two essays in the book did seem to be of ‘variable quality’. But actually, things turn out to be a bit worse than that: the collection really isn’t very good at all! After I sent my book to press, I gave a paper-by-paper review on my blog, at http://logicmatters.blogspot.com. It is probably more fun to chase up the reviews ‘in the wild’, so to speak, starting from the entry for May 14, 2007. But here they are wrapped up into a single document, only marginally tidied. Some of the points made here should help further explain and support the general line on the Thesis taken in…
An Introduction to Gödel’s Theorems now exists in two versions – the original 2007 printing, and a corrected reprint in 2008. The quick way of telling these apart is to glance at the imprints page (the verso of the title page): the later version notes, halfway down that page, “Reprinted with corrections 2008”. Corrections marked with marginal side-bars as here are those that have been noted after the second printing went to press, and hence still need correction/improvement. If you have the corrected reprint, then these marked corrections are the only errors you need care about! This complete list of corrections is divided, for convenience, into two sections.
Like the other major journals, ANALYSIS can accept fewer than 10% of submissions. So standards are fierce. Many submissions are ruled out of court for being badly argued or for re-inventing the wheel or for being plain boring. But a fair proportion end up on the rejection pile simply because they are badly written. I saw far too much bad prose (to be sure, some of the prose that gets published is not exactly wonderful: I assure you that a lot that doesn't get published is very much worse).
The last Episode wasn’t about logic or formal theories at all: it was about common-or-garden arithmetic and the informal notion of computability. We noted that addition can be defined in terms of repeated applications of the successor function. Multiplication can be defined in terms of repeated applications of addition. The exponential and factorial functions can be defined, in different ways, in terms of repeated applications of multiplication. There’s already a pattern emerging here! The main task in the last Episode was to get clear about this pattern. So first we said more about the idea of defining one function in terms of repeated applications of another function. Tidied up, that becomes the idea of defining a function by primitive recursion (Defn. 27).
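The pattern described here can be sketched in a few lines of Python – a minimal illustration of my own (the function names are not from the Episode): each function is defined purely by repeated application of the previous one, which is exactly the shape that the official definition by primitive recursion tidies up.

```python
# Each function below is defined by repeated application of the one
# before it: successor -> addition -> multiplication -> exponentiation.
# This mirrors the informal pattern behind definition by primitive recursion.

def succ(n):
    return n + 1

def add(m, n):
    # n repeated applications of succ, starting from m
    result = m
    for _ in range(n):
        result = succ(result)
    return result

def mul(m, n):
    # n repeated applications of add, starting from 0
    result = 0
    for _ in range(n):
        result = add(result, m)
    return result

def exp(m, n):
    # n repeated applications of mul, starting from 1
    result = 1
    for _ in range(n):
        result = mul(result, m)
    return result
```

Note the uniform shape: a base value, plus a loop that applies the previously defined function a fixed number of times.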
Last week, we talked a bit about the Anderson-Belnap logic of entailment, as discussed in Priest’s Introduction to Non-Classical Logic. For a quite different approach to entailment, we’ll look next week at Neil Tennant’s account. Doing things rather out of order, this week I’d like to say something more basic about the problems to which both Anderson and Belnap, on the one hand, and Tennant on the other, are responding. This will give me the chance for a bit of nostalgic philosophical time-travelling, revisiting work by Casimir Lewy and Timothy Smiley from the Cambridge of fifty years ago.
... and a reading knowledge of formal logical symbolism is essential too. (Philosophers often use bits of logical symbolism to clarify their arguments.) Because the artificial and simply formal languages of logic give us highly illuminating objects of comparison when we come to think about how natural languages work. (Relevant to topics in ‘philosophical logic’ and the philosophy of language.) But mainly because it is the point of entry into the study of one of the major intellectual achievements by philosophers of the 20th century – i.e. the development of mathematical logic. (Not least in Cambridge: Russell and Whitehead, Ramsey, Turing.)
In a reading group, we’ve been working through the first three parts of Field’s Saving Truth from Paradox, by the end of which he has presented his core proposals. At this point, we’ve now rather lost the will to continue – for this is an astonishingly badly written book, which makes ridiculous demands on the patience of even a sympathetic reader. It so happened that it fell to me to introduce the last two chapters in Part III, Ch. 17 in which Field rounds out his key technical construction, and Ch. 18, ‘What Has Been Done’. I talked mainly about the latter. Here, for what they are worth, are some very quickly written reflections on what has and hasn’t been achieved.
Theorem 1. If T is a sound formalized theory whose language contains the language of basic arithmetic, then there will be a true sentence GT of basic arithmetic such that T ⊬ GT and T ⊬ ¬GT, so T must be negation incomplete.
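Set out in display form, the theorem might be rendered like this (a supplementary typesetting, with ⊬ abbreviating ‘does not prove’):

```latex
\textbf{Theorem 1.}\ \textit{If $T$ is a sound formalized theory whose
language contains the language of basic arithmetic, then there is a true
sentence $G_T$ of basic arithmetic such that $T \nvdash G_T$ and
$T \nvdash \neg G_T$; hence $T$ is negation incomplete.}
```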
Publish or perish? Well, like it or not (and I for one don't! – for I fear it encourages narrowness and scholasticism), having a track record of pieces accepted for publication is now more or less a sine qua non for getting a foot on the first rung of the profession, as a junior research fellow or temporary lecturer. And when it comes to applying for a permanent lectureship, a good track record of publication and clear evidence that you are going to continue publishing is even more essential: UK departments attach a huge importance to their ratings in the Research Assessment Exercises, and good overseas departments place equal if not more weight on research promise.
Here’s one version of Gödel’s 1931 First Incompleteness Theorem: If T is a nice, sound theory of arithmetic, then it is incomplete, i.e. there are arithmetical sentences ϕ such that T proves neither ϕ nor ¬ϕ. There are three things here to explain straight away.
I am interested in the philosophical prospects of what is called ‘predicativism given the natural numbers’. And today, in particular, I want to critically discuss one argument that has been offered to suggest that this kind of predicativism can’t have a stable philosophical motivation. Actually you don’t really need to know about predicativism to find some stand-alone interest in the theme I will be discussing. But still, it’s worth putting things into context. So I’m going to start by spending a bit of time introducing you to the background.
In Episode 1, we introduced the very idea of a negation-incomplete formalized theory T. We noted that if we aim to construct a theory of basic arithmetic, we’ll ideally like the theory to be able to prove all the truths expressible in the language of basic arithmetic, and hence to be negation complete. But Gödel’s First Incompleteness Theorem says, very roughly, that a nice theory T containing enough arithmetic will always be negation incomplete. Now, the Theorem comes in two flavours, depending on whether we cash out the idea of being ‘nice enough’ in terms of (i) the semantic idea of T’s being a sound theory, or (ii) Gödel’s own idea of T’s being a consistent theory which proves enough arithmetic. And we noted that Gödel…
The first main topic of this paper is a weak second-order theory that sits between first-order Peano Arithmetic PA1 and axiomatized second-order Peano Arithmetic PA2 – namely, that much-investigated theory known in the trade as ACA0. What I’m going to argue is that ACA0, in its standard form, lacks a cogent conceptual motivation. Now, that claim – when the wraps are off – will turn out to be rather less exciting than it sounds. It isn’t that all the work that has been done on ACA0 has been hopelessly misplaced: that would be a quite absurd suggestion. The mistake, if that’s what it is, has been a relatively small one. Still, we really ought to try to put things into good conceptual order here. That’s part of what philosophers are for. Here’s the structure of my main claim. On the one hand, interesting work on ACA0 actually only uses part of the strength of the theory: or as we might put it, the interesting work is actually carried on in a cut-down theory I’ll call ACA!. This theory, I’ll be claiming, does have a good conceptual motivation – it is in fact the theory that the putative conceptual grounding for ACA0 actually underpins. On the other hand, I’ll be arguing that original-strength ACA0 inductively inflates. I mean, to put it more carefully, that anyone who accepts ACA0 as a cogent theory can have no reason not to accept a certain significantly stronger theory, with a stronger induction principle. This stronger theory is standardly known as plain ACA. So, my claim comes to this: you can either go for the cut-down theory ACA!; or you can go for the much richer theory ACA. What you can’t do – I mean, what you can’t have a stable conceptual motivation for doing – is rest content with the intermediate-strength ACA0 in its standard presentation. Yet in much of the literature, in particular in Simpson’s encyclopedic book Subsystems of Second-Order Arithmetic (1991), neither ACA! nor full ACA gets so much as a mention, and the conceptually unstable theory ACA0 gets all the glory. Why is my claim at all interesting? For at least two reasons…
Why these notes? After all, I’ve written An Introduction to Gödel’s Theorems (CUP, heavily corrected fourth printing 2009: henceforth IGT). Surely that’s more than enough to be going on with? Ah, but there’s the snag. It is more than enough. In the writing, as is the way with these things, the book grew far beyond the scope of the lecture notes from which it started. And while I hope the result is still pretty accessible to someone prepared to put in the time and effort, there’s a lot more in the book than is really needed by philosophers meeting the incompleteness theorems for the first time. After all, you might want to get your heads around only those technical basics which are actually needed for understanding philosophical discussions about incompleteness. So you need a cut-down version of the book – an introduction to the Introduction! Well, isn’t that what lectures are for? Indeed. But there’s another snag. I haven’t got many lectures to play with. So either (A) I crack on at quite a fast pace (hard-core mathmo style), cover those basics, but perhaps leave too many people puzzled and alarmed. Or (B) I do relaxed talk’n’chalk, highlighting the really Big Ideas, making sure everyone is grasping them as we go along, but inevitably omit important stuff and leave quite a gap between what happens in the lectures and what happens in the book. What to do? I’m going for plan (B). But then I still need to do something to fill that gap between lectures and book. Hence these notes. The idea, then, is to give relaxed lectures, highlighting Big Ideas, not worrying too much about depth or fine detail (or even about getting through all of the day’s intended menu of topics). Then after the lecture, I’ll write up notes that expand things just enough, and then give pointers to relevant chunks of IGT. The idea, however, is for the notes to be more or less stand-alone, and to tell a brief but coherent story read by themselves. So occasionally I’ll copy a paragraph or two from the book, rather than just refer to them. Warning: just occasionally in these notes, I’ll no doubt apply that good maxim ‘Where it doesn’t itch, don’t scratch’.
In the opening chapter of ‘the Shorter Hodges’, we get a lot of fixing of terminology and notation, and some fairly natural definitions of ideas like that of isomorphism between structures. There are no really tricky ideas which need further exploration, nor any nasty proofs that could do with more elaboration. So I don’t pretend to have anything very thrilling by way of introductory comments. But let me make some more general philosophical comments.
There is a familiar derivation of Gödel’s Theorem from the proof by diagonalization of the unsolvability of the Halting Problem. That proof, though, still involves a kind of self-referential trick, as we in effect construct a sentence that says ‘the algorithm searching for a proof of me doesn’t halt’. It is worth showing, then, that some core results in the theory of partial recursive functions directly entail Gödel’s First Incompleteness Theorem without any further self-referential trick.
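The self-referential trick in the standard Halting Problem proof can be sketched in Python – this is the textbook diagonal argument, not anything drawn from the paper itself, and the names `halts` and `diagonal` are mine: supposing a total halting decider existed, the diagonal program would halt exactly when it doesn’t.

```python
# Sketch of the diagonal argument: assume, for contradiction, that
# halts(f, x) is a total decider returning True iff f(x) would halt.

def make_diagonal(halts):
    def diagonal(f):
        # Loop forever exactly when f, run on its own code, would halt.
        if halts(f, f):
            while True:
                pass
        else:
            return "halted"
    return diagonal

# Feeding diagonal to itself yields the contradiction:
#   diagonal(diagonal) halts  iff  halts(diagonal, diagonal) is False,
#   i.e. iff diagonal(diagonal) does NOT halt.
# So no such total decider halts can exist.
```

Note that `diagonal` is, in effect, the program that ‘says of itself’ that it doesn’t halt; the point of the paper’s alternative route is to get incompleteness without relying on any such trick.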
In each of Parts IA, IB and II of the Philosophy Tripos, there is an Essay paper in which you are asked to write for three hours on a single topic. In these notes I offer some suggestions about how to tackle this paper, and try to answer some Frequently Asked Questions. The notes are based (in the second half, very closely indeed) on notes written by Jane Heal -- I'm very grateful to her for allowing me to snaffle some of her best suggestions, though she is not responsible for the final result. Do note, though, that the result is still just one view ... Consult other supervisors/directors of studies for additional advice.
Why these notes? After all, I’ve written An Introduction to Gödel’s Theorems (CUP, heavily corrected fourth printing 2009: henceforth IGT). Surely that’s more than enough to be going on with? Ah, but there’s the snag. It is more than enough. In the writing, as is the way with these things, the book grew far beyond the scope of the lecture notes from which it started. And while I hope the result is still pretty accessible to someone prepared to put in the time and effort, there is – to be frank – a lot more in the book than is really needed by philosophers meeting the incompleteness theorems for the first time, or indeed by mathematicians wanting a brisk introduction. You might reasonably want to get your heads around only those technical basics which are actually necessary for understanding how the theorems are proved and for appreciating philosophical discussions about incompleteness. So you need a cut-down version of the book – an introduction to the Introduction! Well, isn’t that what lectures are for? Indeed. But there’s another snag. I haven’t got many lectures to play with. So either (A) I crack on at a very fast pace (hard-core mathmo style), cover those basics, but perhaps leave too many people puzzled and alarmed. Or (B) I do relaxed talk’n’chalk, highlighting the really Big Ideas, making sure everyone is grasping them as we go along, but inevitably omit important stuff and leave quite a gap between what happens in the lectures and what happens in the book. What to do? I’m going for plan (B). But then I ought to do something to fill that gap between lectures and book. Hence these notes. The idea, then, is to give relaxed lectures, highlighting Big Ideas, not worrying too much about depth or fine detail (nor even about getting through all of the day’s intended menu of topics). These notes then expand things just enough, and give pointers to relevant chunks of IGT. Though I hope these notes will to a fair extent be stand-alone, and tell a brief but coherent story read by themselves: so occasionally I’ll copy a paragraph or two from the book, rather than just refer to them.
In our preamble, it might be helpful this time to give a story about where we are going, rather than (as in previous episodes) review again where we’ve been. So, at the risk of spoiling the excitement, here’s what’s going to happen in this and the following three Episodes.
LO: John L. Bell, David DeVidi and Graham Solomon, Logical Options, Broadview Press, 2001.
ILF: Peter Smith, Introduction to Formal Logic, CUP 2003.
LFP: Ted Sider, Logic for Philosophy, OUP forthcoming: draft available at http://tedsider.org/books/lfp/lfp.pdf.
In the very last chapter of my Introduction to Gödel’s Theorems, I rashly claimed that there is a sense in which we can informally prove Church’s Thesis. This sort of claim isn’t novel to me: but it certainly is still very much the minority line. So maybe it is worth rehearsing some of the arguments again. Even if I don’t substantially add to the arguments in the book, it might help to approach things in a different order, with some different emphases, to make the issue as clear as possible.
At the last meeting, Tim Crane gave a talk in which he made play with a distinction between ‘believing in’ and ‘believing that’. And he claimed that this distinction could be put to serious philosophical work of interest to serious metaphysicians. My hunch at the time was that this distinction in fact can’t bear any real weight. But I can’t now reconstruct Tim’s own arguments sufficiently to give a fair evaluation of them. However, Tim did say that the distinction he wanted to draw, and at least some of the work to which he wanted to put the distinction, was grounded in a paper on ‘Believing in Things’ by Zoltán Szabó. So in this talk, I’ll see what we can get out of that paper. And, as far as I can recall Tim’s paper, I think it is fair to say the following. If Szabó…
Why should we care about topoi so defined? Indeed, why should it be said that the idea of a topos is a sort of categorial generalization of the idea of a universe of sets? (By ‘categorial’ I mean a treatment where we give characteristics of the relevant objects and the mappings between them in terms of their relations to other objects and mappings belonging to the same category.)
We’ve now proved our key version of the First Theorem, Theorem 42. If T is the right kind of ω-consistent theory including enough arithmetic, then there will be an arithmetic sentence GT such that T ⊬ GT and T ⊬ ¬GT. Moreover, GT is constructed so that it is true if and only if unprovable-in-T (so it is true). Now recall that, for a p.r. axiomatized theory T, Prf T(m, n) is the relation which holds just if m is the super g.n. of a sequence of wffs that is a T proof of a sentence with g.n. n. This relation is p.r. decidable (see §25.4). Assuming T extends Q, it can capture any p.r. decidable relation, including Prf T (§22). So we can define…
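The definition this extract breaks off before is presumably the standard next step: existentially quantifying out the proof number in Prf T to get a provability predicate. In the usual notation (supplied here as a sketch, not a quotation):

```latex
\mathrm{Prov}_T(n) \;=_{\mathrm{def}}\; \exists m\, \mathrm{Prf}_T(m, n)
```

So Prov T(n) holds just when some number is the super g.n. of a T proof of the sentence with g.n. n.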
• How to construct a ‘canonical’ Gödel sentence
• If PA is sound, it is negation incomplete
• Generalizing that result to sound p.r. axiomatized theories whose language extends LA
• ω-incompleteness, ω-inconsistency
• If PA is ω-consistent, it is negation incomplete
• Generalizing that result to ω-consistent p.r. axiomatized theories which extend Q
Our last big theorem – Theorem 6 – tells us that if a theory meets certain conditions, then it must be negation incomplete. And we made some initial arm-waving remarks to the effect that it seems plausible that we should want theories which meet those conditions. Later, we announced that there actually is a consistent weak arithmetic with a first-order logic which meets the conditions (in which case, stronger arithmetics will also meet the conditions); but we didn’t say anything about what such a weak theory really looks like. In fact, we haven’t looked at any detailed theory of arithmetic yet! It is high time, then, that we stop operating at the extreme level of abstraction of Episodes 1 and 2, and start getting our hands dirty. This Episode introduces a couple of weak arithmetics, before we tackle the canonical first-order arithmetic PA in the following instalment. By all means skip fairly lightly over some of the more boring proof details! But it is certainly worth getting just a flavour of how these two simple formal theories work.
Where to begin? I’ll take three books from my shelves. First, now nearly forty years old, a little book of television lectures by the great physicist Richard Feynman, The Character of Physical Law. He talks about the laws of motion, the inverse square law of gravitation, conservation laws, symmetry principles and the various ways these all hang together. Feynman obviously takes it that it is a prime aim of science to discover such laws. But what are laws? He writes – and this is about his one and only shot at a characterization at the level of abstraction that we might think of as philosophical –
Unlike his other major typescripts, the Big Typescript is divided into titled chapters, themselves divided into titled sections. But within a section we still get a collection of remarks typically without connecting tissue and lacking any transparently significant ordering or helpful signposting. So we still encounter the usual difficulties in trying to think our way through into what Wittgenstein might be wanting to say. Some enthusiasts like to try to persuade us that the aphoristic style is really of the essence. But I’ve always wondered how true that is. So I here propose an experiment. Let’s take §108 of the ‘Big Typescript’ (the first of the sections in the part of the book titled ‘The Foundations of Mathematics’). There are four and a bit pages of remarks. I’m going to try to render them into rather more conventional prose. In the section that follows, then, I have embedded almost every sentence Wittgenstein wrote in his §108 – or rather, of course, I’ve embedded almost every sentence from the well-regarded translation. I have, however, often changed the order in which remarks appear, omitted some ‘and’s and ‘but’s and the like, changed some punctuation, demoted some passages to footnotes, but not – I hope – in any way that radically changes the significance of any remark, or distorts what is going on. And I’ve added a lot of signposting. I’ll comment, necessarily very briefly, on the results of the exercise starting in §3 below. So here we go…
An inertial frame is one in which a freely falling particle obeys Newton’s first law (i.e., continues in a state of uniform motion). Classically, we have the following: Galilean Principle of Relativity: The laws of dynamics are invariant between all inertial frames. In other words, all inertial observers (at rest in an inertial frame) will experimentally verify the same dynamical laws.
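For the record, the coordinate change connecting two inertial frames (the second moving with constant velocity v along the x-axis of the first) is the familiar Galilean transformation – a standard supplement, not spelled out in the passage:

```latex
x' = x - vt, \qquad y' = y, \qquad z' = z, \qquad t' = t
```

The Galilean Principle of Relativity then says that the laws of dynamics take the same form in the primed and unprimed coordinates.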
Some bilateralists have suggested that some of our negative answers to yes-or-no questions are cases of rejection. Mark Textor (2011. Is ‘no’ a force-indicator? No! Analysis 71: 448–56) has recently argued that this suggestion falls prey to a version of the Frege-Geach problem. This note reviews Textor's objection and shows why it fails. We conclude with some brief remarks concerning where we think that future attacks on bilateralism should be directed.
Many of our concepts are introduced to us via, and seem only to be constrained by, rough-and-ready explanations and some sample paradigm positive and negative applications. This happens even in informal logic and mathematics. Yet in some cases, the concepts in question – although only informally and vaguely characterized – in fact have, or appear to have, entirely determinate extensions. Here’s one familiar example. When we start learning computability theory, we are introduced to the idea of an algorithmically computable function (from numbers to numbers) – i.e. one whose value for any given input can be determined by a step-by-step calculating procedure, where each step is fully determined by some antecedently given finite set of calculating rules. We are told that we are to abstract from practical considerations of how many steps will be needed and how much ink will be spilt in the process, so long as everything remains finite. We are also told that each step is to be ‘small’ and the rules governing it must be ‘simple’, available to a cognitively limited calculating agent: for we want an algorithmic procedure, step-by-minimal-step, to be idiot-proof. For a classic elucidation of this kind, see e.g. Rogers (1967, pp. 1–5). Church’s Thesis, in one form, then claims this informally explicated concept in fact has a perfectly precise extension, the set of recursive functions. Church’s Thesis can be supported in a quasi-empirical way, by the failure of our searches for counterexamples. It can be supported too in a more principled way, by the observation that different appealing ways of sharpening up the informal characterization of algorithmic computability end up specifying the same set of recursive functions. But such considerations fall short of a demonstration of the Thesis. So is there a different argumentative strategy we could use, one that could lead to a proof? Sometimes it is claimed that there just can’t be, because you can never really prove results involving an informal concept like algorithmic computability.
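The ‘confluence’ evidence mentioned above can be given a toy illustration in Python – my own example, not anything from the paper: two quite different sharpenings of ‘step-by-step calculation’, a definition by recursion equations and a crude register-machine-style loop, turn out to compute the very same function.

```python
# Two different "sharpenings" of step-by-step computability compute
# the same factorial function, echoing (in miniature) the observation
# that rival precise analyses of algorithmic computability converge.

def fact_recursive(n):
    # Definition by recursion equations:
    #   fact(0) = 1,  fact(n+1) = (n+1) * fact(n)
    return 1 if n == 0 else n * fact_recursive(n - 1)

def fact_machine(n):
    # Register-machine style: fully determined, minimal steps on registers.
    r0, r1 = 1, n          # r0: accumulator, r1: countdown register
    while r1 > 0:
        r0 = r0 * r1
        r1 = r1 - 1
    return r0
```

Of course, agreement on one function proves nothing; the real evidence is that the full classes of functions picked out by the rival precise analyses (recursive, Turing-computable, register-machine-computable, ...) provably coincide.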
Timothy Smiley's wonderful paper 'Rejection' (Analysis 1996) is still perhaps not as well known or well understood as it should be. This note first gives a quick presentation of themes from that paper, though done in our own way, and then considers a putative line of objection - recently advanced by Julien Murzi and Ole Hjortland (Analysis 2009) - to one of Smiley's key claims. Along the way, we consider the prospects for an intuitionistic approach to some of the issues discussed in Smiley's paper.
Needless to say, Charles Parsons’s long-awaited book is a must-read for anyone with an interest in the philosophy of mathematics. But as Parsons himself says, this has been a very long time in the writing. Its chapters extensively “draw on”, “incorporate material from”, “overlap considerably with”, or “are expanded versions of” papers published over the last twenty-five or so years. What we are reading is thus a multi-layered text with different passages added at different times. And this makes for a rather bumpy read. There is another route Parsons could have taken: he could have reprinted the relevant papers with postscripts, and then top-and-tailed the collection with a preface and added concluding reflections. It must sound very ungrateful, but I rather suspect that that might have worked better. Much of the book is about arithmetic. But Parsons has woven into the discussion claims about mathematics more generally and about set theory in particular. We might well have a basic worry about this structure: for do defensible claims about the ontology and epistemology of arithmetic have to be generalizable to apply to more infinitary mathematics? For example, suppose you are attracted to a Hellman-like modal structuralist account of arithmetic: then should you think it a problem if you suspect that such an account can’t readily be extended to cope e.g. with set theory (since you boggle at the idea of possible worlds free of abstracta but with enough structure to somehow model ZFC)? Parsons himself seems to waver over such questions. So for present purposes, I’ll focus just on arithmetic, and in what follows I’ll revisit two of the most familiar Parsonian themes, his views on structuralism as an account of the ontology of arithmetic, and his exploration of the role of intuition in grounding arithmetical knowledge.