1 Introduction

Kleene’s recursion theorem [8] states that every computable operation on codes of partial computable functions has a fixed point. That is, for every computable function f there exists a number e such that \(\varphi _{f(e)} = \varphi _e\). Here \(\varphi _e\) denotes the e-th partial computable function. Kleene actually proved a more general version of this theorem with parameters:

Theorem 1.1

(Recursion theorem with parameters, Kleene [8]) Let h(n, x) be a computable binary function. Then there exists a computable function f such that for all n, \(\varphi _{f(n)} =\varphi _{h(n,f(n))}\).

This result shows that the recursion theorem is effective, in the sense that the fixed points of a computable sequence of functions can be found in a uniformly computable way. We refer to Moschovakis [10] for an overview of some of the applications of this classic result. (Note that Kleene referred to Theorem 1.1 as the first recursion theorem, whereas Moschovakis proposes to call it the second recursion theorem, to contrast it with the simpler nonuniform statement.)
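As a side illustration (not taken from the original text), the self-reference guaranteed by the recursion theorem is exactly what makes programs that operate on their own code possible; the simplest instance is a quine, a program that outputs its own source. A minimal Python sketch:

```python
# A quine: by the recursion theorem, a program may use its own code;
# here the source is reconstructed from the template string `src`.
src = 'src = %r\nprint(src %% src)'
print(src % src)
```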

The recursion theorem has been extended in several ways. We refer the reader to Soare [12] for a general discussion. In this paper we discuss the effectivity of two extensions, namely Arslanov’s completeness criterion (Sects. 2 and 3) and Visser’s ADN theorem (Sects. 4 and 5). In particular we show that the parameterized versions of these extensions, analogous to Theorem 1.1, fail. Finally, we discuss a joint generalization of Arslanov’s completeness criterion and the ADN theorem from [13]. Though the ADN theorem does not have a parameterized version, it is uniform in certain other respects. In Sect. 6 we show that this uniformity does not hold for the joint generalization.

Our notation from computability theory is mostly standard. Partial computable (p.c.) functions are denoted by lower case Greek letters, and (total) computable functions by lower case Roman letters. \(\omega \) denotes the natural numbers, \(\varphi _e\) denotes the e-th p.c. function, and \(W_e\) denotes the domain of \(\varphi _e\). We write \(\varphi _e(n)\downarrow \) if the computation \(\varphi _e(n)\) is defined, and \(\varphi _e(n)\uparrow \) otherwise. \(\emptyset '\) denotes the halting set. For unexplained notions we refer to Odifreddi [11] or Soare [12].

In the discussion below we will use the following notions from the literature:

  • A function f is called fixed point free, or simply FPF, if \(W_{f(n)}\ne W_n\) for every n. We will also use this terminology for partial functions (see Definition 4.1 below), but by FPF function we will always mean a total function, unless explicitly stated otherwise.

  • A function g is called diagonally noncomputable, or DNC, if \(g(e)\ne \varphi _e(e)\) for every e.

Though the notions of FPF and DNC function are different, it is well-known that they coincide on Turing degrees, cf. Jockusch et al. [5]. Namely, a set computes a FPF function if and only if it computes a DNC function. Moreover, this is also equivalent to computing a function f such that \(\varphi _{f(e)}\ne \varphi _e\) for every e.

DNC functions played an important role in Kučera’s alternative solution to Post’s problem [9]. In the paper by Kjos-Hanssen et al. [7], the notion of DNC function is linked to sets with high initial segment Kolmogorov complexity.

2 Arslanov’s completeness criterion

By the recursion theorem, and the equivalence quoted above, no FPF function is computable. It is easy to see that the halting set \(\emptyset '\) computes a FPF function, as \(\emptyset '\) can list all computable functions. However, by the low basis theorem [6], there also exist FPF functions of low degree. The next result shows that FPF functions cannot have incomplete c.e. degree. (On the other hand, by Kučera [9], any FPF degree below \(\emptyset '\) bounds a noncomputable c.e. degree.) This shows that the recursion theorem can be extended from computable functions to functions bounded by an incomplete c.e. degree.
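To make the first claim concrete, here is a minimal sketch (not from the original) of how a DNC function, and hence by the equivalence quoted above a function of FPF degree, is computed from a halting oracle. The interfaces `halts(e, n)` and `phi(e, n)` are assumed stand-ins for an enumeration of the partial computable functions; they are of course not computable.

```python
# Sketch: a DNC function relative to a (hypothetical) halting oracle.
def dnc(e, halts, phi):
    if halts(e, e):            # oracle query: does phi_e(e) converge?
        return phi(e, e) + 1   # differ from the convergent value phi_e(e)
    return 0                   # phi_e(e) diverges, so any value is safe
```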

Theorem 2.1

(Arslanov completeness criterion [1]) A c.e. set A is Turing complete if and only if A computes a \(\mathrm{FPF}\) function.

Proof

Suppose A is c.e. and incomplete, and \(f\leqslant _T A\). Since A is c.e., f has a computable approximation \(\hat{f}(n,s)\), and there is an A-computable modulus function m(n) such that \(\forall s\geqslant m(n) \big (f(n) = \hat{f}(n,s)\big )\) (cf. the modulus lemma in Soare [12]). By the recursion theorem with parameters (Theorem 1.1), let h be a computable function such that

$$\begin{aligned} W_{h(n)} = \left\{ \begin{array}{ll} W_{\hat{f}(h(n),s_n)} &\quad \text {if } n\in \emptyset ' \text { and } s_n \text { is minimal such that } n\in \emptyset '_{s_n}, \\ \emptyset &\quad \text {otherwise.} \end{array}\right. \end{aligned}$$

Then there exists \(n\in \emptyset '\) such that \(\hat{f}(h(n),s_n) = f(h(n))\), so that h(n) is a fixed point of f. Namely, if this were not the case, then we would have that for all n, if \(n\in \emptyset '\), then \(m(h(n))>s_n\), and hence \(n\in \emptyset '_{m(h(n))}\). Thus we would have \(\emptyset '\leqslant _T A\), contrary to assumption. \(\square \)
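To spell out the reduction used in this last step, here is a minimal sketch (with hypothetical interfaces, not part of the original proof) of how membership in \(\emptyset '\) would be decided from A if no h(n) were a fixed point: `h` and `m` stand for the functions above (m is A-computable), and `halting_by(s)` returns the finite set \(\emptyset '_s\), a computable operation.

```python
# Sketch of the implicit reduction: under the (contradictory) assumption that
# no h(n) is a fixed point, n is in the halting set iff it has appeared by
# stage m(h(n)).
def in_halting_set(n, h, m, halting_by):
    return n in halting_by(m(h(n)))
```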

The proof given here is basically the contrapositive of the proof in Soare [12]. The proof above already suggests that the result is not effective: It does not give a fixed point effectively, but merely produces a c.e. set, namely \(\big \{{h(n)\mid n\in \emptyset '}\big \}\), at least one of the elements of which is a fixed point. That this is necessarily so follows from the result in the next section.

3 The failure of Arslanov’s completeness criterion with parameters

Let h be a computable function of two arguments. Since for every fixed n the function h(n, x) is a computable function of x, by the recursion theorem we have

$$\begin{aligned} \forall n \exists x \; \varphi _x = \varphi _{h(n,x)}. \end{aligned}$$

When we Skolemize this formula we obtain:

$$\begin{aligned} \exists f \forall n \; \varphi _{f(n)} = \varphi _{h(n,f(n))}. \end{aligned}$$

The recursion theorem with parameters tells us that we can take f computable here. In other words, the recursion theorem holds uniformly.

Now consider the Arslanov completeness criterion. Let A be an incomplete c.e. set, and let \(h\leqslant _T A\) be a binary function. By Theorem 2.1 we have

$$\begin{aligned} \forall n \exists x \; \varphi _x = \varphi _{h(n,x)} \end{aligned}$$

and Skolemization gives

$$\begin{aligned} \exists f \forall n \; \varphi _{f(n)} = \varphi _{h(n,f(n))}. \end{aligned}$$

We prove that in general we cannot take f computable in this case. This even fails when A is of low Turing degree. Note that by relativizing the recursion theorem with parameters, there always exists an A-computable Skolem function f.

Theorem 3.1

(Failure of Arslanov with parameters) There exist a low c.e. set A and an A-computable binary function h such that for every computable f, there exists n with

$$\begin{aligned} W_{f(n)} \ne W_{h(n,f(n))}. \end{aligned}$$

Proof

We build a c.e. set A and a total function \(h\leqslant _T A\) using a finite injury construction. The requirements for the construction are:

\(R_e\): \(f=\{e\}\) is total \(\;\Longrightarrow \;\) \(\exists n \; W_{f(n)} \ne W_{h(n,f(n))}\),

\(L_e\): \(\exists ^\infty s \; \{e\}^{A_s}_s(e)\downarrow \;\Longrightarrow \; \{e\}^A(e)\downarrow \).

The requirements \(L_e\) guarantee that A is low (cf. Soare [12]), and clearly the requirements \(R_e\) are sufficient to prove the theorem. We give the requirements the following priority ordering:

$$\begin{aligned} L_0> R_0> L_1> R_1> L_2 > \cdots \end{aligned}$$

To satisfy \(L_e\) we do not have to enumerate anything into A; we only maintain a restraint function \(r(e,s)\) to preserve computations in the usual way. Let us consider the strategy for \(R_e\) in isolation. Suppose we have picked n as a potential witness for \(R_e\).

Step 1. Suppose that at stage s we see \(f(n) = \{e\}_s(n)\downarrow \).

If \(W_{f(n),s}\ne \emptyset \) we let \(W_{h(n,f(n))}=\emptyset \), thus satisfying \(R_e\) forever.

If \(W_{f(n),s}=\emptyset \) we let \(W_{h(n,f(n))}\ne \emptyset \).

Step 2. Suppose that at a later stage \(t>s\) we see \(W_{f(n),t} = W_{h(n,f(n))}\ne \emptyset \).

Now we change h(n, f(n)) so that \(W_{h(n,f(n))}=\emptyset \) by changing A below the use of h.

Since the definition of \(W_{h(n,f(n))}\) needs to be adapted at most twice (from empty to nonempty to empty), we can get by with letting h use only two bits of A. We define h as follows. We use a standard computable pairing function \(\langle \cdot ,\cdot \rangle \) to denote coded pairs and triples. For ease of notation, we write A(x, y, z) instead of \(A(\langle x,y,z\rangle )\). We let h be an A-computable function such that

$$\begin{aligned}&W_{h(n,x)}=\emptyset \iff A(n,x,0)=A(n,x,1),\\&W_{h(n,x)} \ne \emptyset \iff A(n,x,0)\ne A(n,x,1). \end{aligned}$$

Clearly such a function h can be defined from A. (As the computation of h(n, x) uses only two bits from A, this is even a btt-reduction.)
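As an informal illustration (not the paper's formal definition), the coding of h from two bits of A can be pictured as follows, where `A` is a hypothetical membership test on coded triples and `empty_index`, `nonempty_index` are assumed fixed indices of an empty and a nonempty c.e. set, respectively.

```python
# Sketch: h(n, x) queries only the two bits A(n, x, 0) and A(n, x, 1).
def h(A, n, x, empty_index, nonempty_index):
    if A(n, x, 0) == A(n, x, 1):   # bits agree: W_{h(n,x)} is empty
        return empty_index
    return nonempty_index          # bits differ: W_{h(n,x)} is nonempty
```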

We construct A in stages. \(L_e\) requires attention at stage s if \(e<s\), \(\{e\}^{A_s}_s(e)\downarrow \), and \(r(e,s)=0\). (This means that a restraint should be set to preserve the computation.) \(R_e\) requires attention at stage s if \(e<s\) and one of the following holds:

  (a) \(R_e\) does not have a witness at stage s, that is, \(n_{e,s}\) is undefined. Required action in this case: pick n larger than all current restraints r(i, s), \(i\leqslant e\), and also different from all other witnesses \(n_{i,s}\) that are currently defined, and let \(n_{e,s+1}=n\).

  (b) \(n=n_{e,s}\) is defined, \(f(n)=\{e\}_s(n)\downarrow \), and one of the following subcases applies:

    (b.1) \(W_{f(n),s}=\emptyset \) and \(A_s(n,f(n),0)= A_s(n,f(n),1)=0\). Required action: Define \(A(n,f(n),0)=1\).

    (b.2) \(W_{f(n),s}\ne \emptyset \), \(A_s(n,f(n),0)=1\), and \(A_s(n,f(n),1)=0\). Required action: Define \(A(n,f(n),1)=1\).

    Also, if either \(A(n,f(n),0)=1\) or \(A(n,f(n),1)=1\) is set at stage s, we define \(r(i,s+1) =0\) for all \(i>e\). (These actions are sketched in code below.)
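The following rough sketch (with hypothetical interfaces, not part of the original proof) summarizes the action for \(R_e\) at stage s: `converged(e, n, s)` returns \(\{e\}_s(n)\) if it has converged by stage s and None otherwise, `W_nonempty(i, s)` is the stage-s approximation of "\(W_i\ne \emptyset \)", `witnesses` is a dictionary of current witnesses, `restraints` a list of current restraints r(i, s), and A is represented as a set of coded triples.

```python
# Sketch of R_e's required action, following cases (a), (b.1) and (b.2).
def act_R(e, s, witnesses, restraints, A, converged, W_nonempty):
    n = witnesses.get(e)
    if n is None:                          # case (a): appoint a fresh witness
        used = set(witnesses.values())
        n = max(restraints[:e + 1], default=0) + 1
        while n in used:
            n += 1
        witnesses[e] = n
        return
    fn = converged(e, n, s)                # wait for f(n) = {e}_s(n)
    if fn is None:
        return
    if not W_nonempty(fn, s) and (n, fn, 0) not in A and (n, fn, 1) not in A:
        A.add((n, fn, 0))                  # case (b.1): W_{h(n,f(n))} becomes nonempty
    elif W_nonempty(fn, s) and (n, fn, 0) in A and (n, fn, 1) not in A:
        A.add((n, fn, 1))                  # case (b.2): W_{h(n,f(n))} becomes empty again
```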

Construction. Initially A is empty: \(A_0=\emptyset \). At stage \(s>0\), pick the highest priority requirement \(R_e\) or \(L_e\), if any, that requires attention. If there is none, proceed to the next stage. If \(L_e\) is picked, set \(r(e,s+1)\) equal to the use of \(\{e\}^{A_s}_s(e)\) (this computation converges since \(L_e\) requires attention). Also, initialize all lower priority \(R_i\) by letting all witnesses \(n_{i,s+1}\) with \(i\geqslant e\) be undefined, and proceed to the next stage. If \(R_e\) is picked, perform the actions indicated above under (a) and (b). This concludes the construction of \(A=\bigcup _s A_s\).
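The overall shape of the construction is the usual priority loop; a schematic sketch (with hypothetical `requires_attention` and `act` covering both the \(L_e\)- and \(R_e\)-actions described above):

```python
# Schematic stage loop: at each stage the highest priority requirement that
# requires attention (in the order L_0 > R_0 > L_1 > ...) acts, if any.
def construct(max_stage, requirements, requires_attention, act):
    for s in range(1, max_stage):
        for req in requirements:       # listed in priority order
            if requires_attention(req, s):
                act(req, s)
                break                  # only one requirement acts per stage
```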

Verification. We verify that all requirements are satisfied. For \(L_e\), note that the only requirements that can injure it are the \(R_i\) with \(i<e\), and by induction each of these enumerates at most finitely many numbers into A, so \(L_e\) is injured at most finitely often, and hence is eventually satisfied.

For \(R_e\), suppose that \(f=\{e\}\) is total. By induction, assume that no higher priority requirement \(L_i\) or \(R_i\) requires attention after stage t. Let r be the maximum of all higher priority restraints:

$$\begin{aligned} r = \max _{i\leqslant e} \lim _{s\rightarrow \infty } r(i,s). \end{aligned}$$

Note that since by assumption every \(L_i\), \(i\leqslant e\), acts only finitely often, this limit exists and is finite. By the construction and (a) above, at some stage s after the last stage that a requirement \(L_i\) with \(i\leqslant e\) acts, \(n = n_{e,s} >r\) is defined, which is then never redefined later. We have the following cases.

If \(W_{f(n)}=\emptyset \), then \(R_e\) acts exactly once after the stage s where n is defined, the clause (b.1) applies at that stage, and we have \(A(n,f(n),0)=1\) and \(A(n,f(n),1)=0\). Hence \(W_{h(n,f(n))}\ne \emptyset \), and \(R_e\) is satisfied.

If \(W_{f(n)}\ne \emptyset \) then we have two subcases:

  • After the stage s where n is defined, \(R_e\) never requires attention. In this case we have \(A(n,f(n),0)=A(n,f(n),1)=0\), hence \(W_{h(n,f(n))}=\emptyset \), and \(R_e\) is satisfied.

  • In the opposite case, \(R_e\) does require attention after stage s. In this case, \(R_e\) will act precisely twice after stage s. The first time, at stage \(s'\) say, since \(A_{s'}(n,f(n),0)=0\) we will have \(W_{f(n),s'}=\emptyset \) (for otherwise \(R_e\) would not require attention) and case (b.1) will apply. The second time will occur at a stage \(s''>s'\) that is large enough to see that \(W_{f(n),s''}\ne \emptyset \). Since now \(A_{s''}(n,f(n),0)=1\), case (b.2) applies, and we will have \(A(n,f(n),0)=A(n,f(n),1)=1\). Hence \(W_{h(n,f(n))}=\emptyset \), and \(R_e\) is satisfied.

So we see that \(R_e\) acts at most twice after the last time it is initialized, and is eventually satisfied. \(\square \)

4 The ADN theorem

It is well-known that Kleene found the recursion theorem by studying the \(\lambda \)-calculus. (See for example Crossley [4] for some historical comments.) Also motivated by the \(\lambda \)-calculus, arithmetic provability, and the theory of numerations, Visser [14] proved the following generalization of the recursion theorem. It has interesting applications in the theory of numerations, see for example Bernardi and Sorbi [3] and Barendregt [2]. ADN theorem stands for “anti diagonal normalization theorem”.

Definition 4.1

We extend the definition of FPF function to partial functions. We call a partial function \(\delta \) FPF if it is fixed point free on its domain, i.e. for every n,

$$\begin{aligned} \delta (n)\downarrow \; \Longrightarrow \; W_{\delta (n)}\ne W_n. \end{aligned}$$
(4.1)

Theorem 4.2

(ADN theorem, Visser [14]) Suppose that \(\delta \) is a partial computable FPF function. Then for every partial computable function \(\psi \) there exists a computable function f such that for every n,

$$\begin{aligned}&\psi (n)\downarrow \; \Longrightarrow \; W_{f(n)}= W_{\psi (n)} \end{aligned}$$
(4.2)
$$\begin{aligned}&\psi (n)\uparrow \; \Longrightarrow \; \delta (f(n))\uparrow \end{aligned}$$
(4.3)

If (4.2) holds for every n, we say that f totalizes \(\psi \), and if in addition (4.3) holds, we say that f totalizes \(\psi \) avoiding \(\delta \).

Just as the Arslanov completeness criterion extends the recursion theorem from computable functions to functions computable from any incomplete c.e. degree, Theorem 4.2 can be extended to such degrees. This gives the following joint generalization of the ADN theorem and the Arslanov completeness criterion:

Theorem 4.3

(Joint generalization [13]) Suppose A is a c.e. set such that \(A <_T \emptyset '\). Suppose that \(\delta \) is a partial A-computable \(\mathrm{FPF}\) function. Then for every partial computable function \(\psi \) there exists a computable function f totalizing \(\psi \) avoiding \(\delta \), i.e. such that for every n (4.2) and (4.3) above hold.

Note that Theorem 4.3 implies Theorem 2.1, because if \(\delta \) were total then (4.3) could not hold for a nontotal \(\psi \). Hence no total FPF function of incomplete c.e. degree can exist.

Thus we have the picture of generalizations of the recursion theorem from Fig. 1. All of these generalizations can be proved using the recursion theorem with parameters (Theorem 1.1). This prompts the question whether any of these generalizations have a parameterized version. The negative answer for Arslanov’s completeness criterion was already given in Sect. 3. We discuss the ADN theorem in the next section.

Fig. 1: Generalizations of the recursion theorem

5 The ADN theorem with parameters

The ADN theorem is uniform in codes of \(\psi \), as is easy to see, cf. [14]. In fact, one may assume without loss of generality that the function \(\psi \) is universal. Also, from the proof of the ADN theorem from the recursion theorem with parameters, given in [13], it is clear that the code of the function f depends effectively on a code for \(\delta \). Hence the result is uniform in both \(\psi \) and \(\delta \). However, that the result is effective in this sense does not mean it has a parameterized version analogous to the recursion theorem with parameters. As the ADN theorem is a statement about partial FPF functions, and hence in a way a contrapositive of the recursion theorem, it is not even immediately clear what the statement of the ADN theorem with parameters should be. At least it should imply Theorem 1.1.

To formulate the analog of Theorem 1.1 for the ADN theorem, we define the following notion.

Definition 5.1

A partial binary function \(\delta (n,x)\) is \(\mathrm{FPF}^+\) if for every computable function g there exists n such that either \(\delta (n,g(n))\uparrow \) or \(\varphi _{g(n)}\ne \varphi _{\delta (n,g(n))}\).

Note that by negating the property from the definition, \(\delta \) is not \(\mathrm{FPF}^+\) if there exists a computable function g such that for every n, \(\delta (n,g(n))\) is defined and \(\varphi _{g(n)}=\varphi _{\delta (n,g(n))}\). This expresses that g uniformly computes fixed points for the family of functions \(\delta (n,x)\). By the recursion theorem with parameters, every total computable \(\delta \) is not \(\mathrm{FPF}^+\).

We can now formulate the analog of the recursion theorem with parameters as follows.

Statement 5.2

(ADN theorem with parameters) Suppose that \(\delta \) is a binary partial computable \(\mathrm{FPF}^+\) function. Then for every partial computable function \(\psi \) there exists a computable function f such that for every n,

$$\begin{aligned}&\psi (n)\downarrow \; \Longrightarrow \; W_{f(n)}= W_{\psi (n)} \end{aligned}$$
(5.1)
$$\begin{aligned}&\psi (n)\uparrow \; \Longrightarrow \; \delta (n,f(n))\uparrow \end{aligned}$$
(5.2)

To show that this is the proper analog of Theorem 1.1 for the ADN theorem, we show that Statement 5.2 implies both Theorem 1.1 and the ADN theorem. We then proceed to show that it is false.

Statement 5.2 implies Theorem 1.1: Note that for the statement to hold, \(\delta \) cannot be total (for then (5.2) could not hold in case \(\psi \) is nontotal). So if \(\delta \) is total, it is not \(\mathrm{FPF}^+\). As already observed above, this means that there is a computable function g such that for every n, \(\varphi _{g(n)}=\varphi _{\delta (n,g(n))}\), which is the statement of Theorem 1.1.

Statement 5.2 implies Theorem 4.2: Given a unary p.c. FPF function \(\delta \), consider the function defined as \(\hat{\delta }(n,x) = \delta (x)\) for every n and x. Note that \(\hat{\delta }\) is \(\mathrm{FPF}^+\): For every computable function g and every n, \(\hat{\delta }(n,g(n)) = \delta (g(n))\uparrow \) or \(\varphi _{g(n)}\ne \varphi _{\delta (g(n))}\) since \(\delta \) is FPF. Applying Statement 5.2 to \(\hat{\delta }\) gives, for a given p.c. \(\psi \), a computable f totalizing \(\psi \) such that

$$\begin{aligned} \psi (n)\uparrow \; \Longrightarrow \; \hat{\delta }(n,f(n))\uparrow \; \Longrightarrow \; \delta (f(n))\uparrow \end{aligned}$$

for every n, hence Theorem 4.2 holds for \(\delta \).
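The padding step in this argument is trivial to write down; a purely illustrative sketch:

```python
# delta_hat simply ignores the parameter n: delta_hat(n, x) = delta(x).
def delta_hat_from(delta):
    return lambda n, x: delta(x)
```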

Proposition 5.3

Statement 5.2 is false.

Proof

We construct \(\delta \) p.c. and \(\mathrm{FPF}^+\) and \(\psi \) p.c. to diagonalize against all computable \(f=\varphi _e\), ensuring that (5.2) fails. Constructing a \(\mathrm{FPF}^+\) function is very easy: According to Definition 5.1 we simply have to make sure that for every computable g there is a point n such that \(\delta (n,g(n))\) is undefined. The construction is as follows. Let \(\psi \) be totally undefined. For every \(f = \varphi _e\) pick two witnesses \(n_e\) and \(m_e\) such that all \(n_e\) and \(m_e\) are different, e.g. \(n_e = 2e\) and \(m_e = 2e+1\). Define \(\delta \) to be a partial computable function such that

$$\begin{aligned} \delta (n,x)\downarrow \;\; \Longleftrightarrow \;\; n=n_e \wedge \varphi _e(n_e)\downarrow =x. \end{aligned}$$
(5.3)
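For illustration (not part of the original proof), the diagonalizing \(\delta \) of (5.3) can be sketched as follows, with witnesses \(n_e = 2e\) and \(m_e = 2e+1\). The call `phi(e, n)` is an assumed interface to a universal machine running \(\varphi _e(n)\) (it may itself fail to halt), and divergence of \(\delta \) is modeled by an infinite loop; the value returned where \(\delta \) converges is irrelevant, only (un)definedness matters.

```python
# Sketch of (5.3): delta(n, x) is defined iff n = n_e = 2e and phi_e(n_e) = x.
def delta(n, x, phi):
    if n % 2 == 1:              # n = m_e: delta(m_e, x) always diverges
        while True:
            pass
    e = n // 2                  # n = n_e = 2e
    if phi(e, n) == x:          # defined only if phi_e(n_e) converges with value x
        return 0
    while True:                 # otherwise diverge as well
        pass
```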

Now suppose that \(f = \varphi _e\) is total. Then \(f(n_e)\downarrow \), so by (5.3) we have \(\delta (n_e,f(n_e))\downarrow \). Hence f fails to satisfy (5.2), because \(\psi (n_e)\uparrow \).

To finish the proof, all that remains is to verify that \(\delta \) is \(\mathrm{FPF}^+\). Note that (5.3) implies that \(\delta (m_e,x)\uparrow \) for every e and x. So if \(f = \varphi _e\) is total, we have in particular that \(\delta (m_e, f(m_e))\uparrow \), which by Definition 5.1 makes \(\delta \) an \(\mathrm{FPF}^+\) function. \(\square \)

6 The nonuniformity of the joint generalization

As remarked in Sect. 5, the dependence of f on \(\psi \) and \(\delta \) in Theorem 4.2 is uniform in codes of \(\psi \) and \(\delta \). This prompts the question whether a similar uniformity holds for the joint generalization Theorem 4.3. Indeed, Theorem 4.3 is also uniform in \(\psi \), as is easy to check, using the same argument as for Theorem 4.2. As for \(\delta \), since it is no longer a p.c. function, we first have to specify what exactly we mean by uniformity in this case. The weakest form of uniformity, using the strongest possible assumption, would be to give f codes of both A and \(\delta \), i.e. codes a and d such that \(A=W_a\) and \(\delta = \{d\}^A\). Uniformity then means that there is a computable function h such that an f as in the theorem is given by

$$\begin{aligned} f = \varphi _{h(a,d,b)}, \end{aligned}$$
(6.1)

where b is a code such that \(\psi = \varphi _b\). Note that h is total, but f only has to satisfy the theorem in case A is incomplete and \(\delta \) is FPF. Instead of issuing f with a code b of \(\psi \), we could alternatively simply assume that \(\psi \) is universal. This amounts to the same thing, but in the construction below it will be easier to work with b.

The proof of the joint generalization in [13] is not uniform. For A and \(\delta \) as in the theorem, the proof provides a family of functions \(f_x\), \(x\in \omega \), at least one of which satisfies the theorem. That the proof is necessarily nonuniform is confirmed by the next result.

Theorem 6.1

Uniformity of Theorem 4.3 in the sense of (6.1) does not hold.

Proof

Assume for a contradiction that a computable function h as in (6.1) exists. We will prove the existence of codes a, d, and b such that \(A = W_a\) is Turing incomplete, \(\delta =\{d\}^A\) is a partial A-computable FPF function, and \(\psi =\varphi _b\) is partial computable, and such that \(f = \varphi _{h(a,d,b)}\) does not satisfy Theorem 4.3, contradicting the assumption.

The code d for \(\delta \) will depend effectively on a and b, so that we have only two parameters a and b in the construction. We will construct computable functions p and q such that \(A = W_{p(a,b)}\) and \(\psi = \varphi _{q(a,b)}\). An application of the double recursion theorem will provide us with codes a and b such that \(W_a = W_{p(a,b)}\) and \(\varphi _b =\varphi _{q(a,b)}\).

Construction of A, \(\delta \), and \(\psi \). We use a coding of \(\delta \) in A similar to the one used for h in the proof of Theorem 3.1. Namely, we let

$$\begin{aligned}&\delta (x)\uparrow \iff A(x,0)=A(x,1),\\&\delta (x)\downarrow \;\wedge \; W_{\delta (x)} \ne \emptyset \iff A(x,0)=1 \wedge A(x,1)=0. \end{aligned}$$

Note that a code d of \(\delta \) effectively depends on a code of A. Since \(A = W_{p(a,b)}\), there is a computable function d such that d(a, b) is a code of \(\delta \).

Our assumption is that \(f = \varphi _{h(a,d(a,b),b)}\) satisfies Theorem 4.3. At the beginning of the construction A is empty and \(\psi \) is totally undefined.

Step 1. Wait for f(0) to become defined. If this never happens, we automatically win, and do not have to take further action.

Step 2. If \(f(0)\downarrow \), we let \(\delta (f(0))\downarrow \) such that \(W_{\delta (f(0))}\ne \emptyset \) by defining \(A(f(0),0)=1\). This action would kill f by making (4.3) fail, but now there is the threat that \(W_{f(0)} = W_{\delta (f(0))}\), so that \(\delta \) may fail to be FPF; hence we may have to take further action to prevent this.

Step 3. Wait for \(W_{f(0)}\ne \emptyset \). We take this as a sign that \(W_{f(0)}\) might follow \(W_{\delta (f(0))}\), so we redefine \(\delta (f(0))\uparrow \), by defining \(A(f(0),1)=1\). Also, we define \(\psi (0)\) so that \(W_{\psi (0)}=\emptyset \). This kills f by making (4.2) fail (see the sketch below).
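All interfaces in the following sketch are hypothetical and not part of the original proof: `f0_at(s)` returns f(0) if it has converged by stage s and None otherwise, `W_nonempty(i, s)` is the stage-s approximation of "\(W_i\ne \emptyset \)", and `EMPTY` is an assumed index of the empty c.e. set. The function returns the (finite) set A, represented as a set of pairs, and the code assigned to \(\psi (0)\), if any.

```python
# Sketch of the three-step strategy against f.
def run_strategy(max_stage, f0_at, W_nonempty, EMPTY):
    A, psi0 = set(), None
    for s in range(max_stage):
        f0 = f0_at(s)
        if f0 is None:
            continue                  # step 1: wait for f(0) to converge
        if (f0, 0) not in A:
            A.add((f0, 0))            # step 2: delta(f(0)) defined, W_{delta(f(0))} nonempty
        elif (f0, 1) not in A and W_nonempty(f0, s):
            A.add((f0, 1))            # step 3: redefine delta(f(0)) to diverge ...
            psi0 = EMPTY              # ... and let W_{psi(0)} be empty
            break
    return A, psi0
```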

This completes the construction. Note that the construction depends effectively on the parameters a and b, so that there exist computable functions p and q such that \(A = W_{p(a,b)}\) and \(\psi = \varphi _{q(a,b)}\). By the double recursion theorem there exist a and b such that \(W_a = W_{p(a,b)}\) and \(\varphi _b =\varphi _{q(a,b)}\). We verify that \(A = W_a\) is incomplete, \(\delta =\{d(a,b)\}^A\) is FPF, and that \(f = \varphi _{h(a,d(a,b),b)}\) does not satisfy the statement of Theorem 4.3.

First note that A is finite, since at most the two numbers \(\langle f(0),0\rangle \) and \(\langle f(0),1\rangle \) are enumerated into A. In particular A is Turing incomplete.

If f(0) fails to become defined in step 1, it obviously fails the theorem by not being total, so assume that \(f(0)\downarrow \).

In case \(W_{f(0)}=\emptyset \), by step 2 we have \(W_{\delta (f(0))}\ne \emptyset \), hence \(\delta \) is FPF (note that it is not defined on any other point). Also, the construction ends with this step, and f fails to satisfy (4.3) since \(\psi (0)\uparrow \) and \(\delta (f(0))\downarrow \).

If \(W_{f(0)}\ne \emptyset \), by step 3 we have \(\delta (f(0))\uparrow \), so again \(\delta \) is FPF. Also, we now have \(W_{\psi (0)}=\emptyset \), so f fails to totalize \(\psi \). Thus we see that f fails to satisfy the theorem in every case. \(\square \)

The set A in the proof above is actually computable, and hence \(\delta \) is p.c. This does not contradict the fact that Theorem 4.2 is uniform in a code of \(\delta \). Namely, \(A=W_a\) may be computable, but not via the code a that is provided.