Abstract
Discussions of Ryle’s regress argument against the “intellectualist legend” have largely focused on whether it is effective against a certain view about knowledge how, namely, the view that knowledge how is a species of propositional knowledge. This is understandable, as this is how Ryle himself framed the issue. Nevertheless, this focus has tended to obscure some different concerns which are no less pressing—either for Ryle or for us today. More specifically, I argue that a version of Ryle’s regress confronts any view according to which the intelligence manifested in action must be inherited from purely inner mental causes. I recommend an alternative account of the metaphysics of intelligent action, which avoids this commitment.
1 Introduction
Since the publication of Stanley and Williamson’s (2001) article on knowing-how, Gilbert Ryle’s (1945, [1949] 2002) infinite regress argument against the so-called “intellectualist legend” has received much attention. Nevertheless, I believe that both those sympathetic to Ryle and those opposed to him have missed some of the deeper implications of Ryle’s argument.Footnote 1
In this, they may have been led astray by Ryle himself. While Ryle’s intentions are not always easy to interpret, he frequently presents the regress as an argument against a very specific view—namely, the view that knowledge how (the kind of knowledge manifested in intelligent or skilled activity) is propositional knowledge, or knowledge that.Footnote 2 For instance, both the chapter and the section in which the argument appears in The Concept of Mind are titled “Knowing How and Knowing That”. And when we turn to Ryle’s own positive view of knowledge how, what we get is an account in terms of multi-track dispositions, presumably meant to replace the propositional account (ibid., pp. 40–45). Understandably, contemporary discussion of Ryle’s regress argument has focused on its implications for just this debate. In particular, Ryle’s critics argue that, if the argument had any bite at all, it would threaten dispositionalism about knowledge how no less than propositionalism (Cath 2013; Stanley and Williamson 2001; Stanley 2011). The implication is that, since these are the only two options available, the argument must lack bite.
In this paper I argue in the opposite direction. Ryle’s critics are wrong to think that the regress argument has no bite: on the contrary, Ryle puts his finger on problems that afflict even sophisticated contemporary accounts of intelligent action. At the same time, however, Ryle’s critics are correct that these problems have little to do with whether knowledge how is propositional or not. The deeper concerns underlying the regress argument threaten any account that sees the intelligence manifested in action as merely inherited from mental states that are entirely distinct from them—regardless of whether those states are propositional or not.
Since my proposed reading of the regress does not discriminate between these two views about knowledge how, it may seem contrary to Ryle’s own intent. It is worth recalling, however, that the skirmish over knowledge how is only part of Ryle’s larger campaign against “Descartes’ Myth”, or the myth of the mind as a “ghost in the machine”. One aspect of this myth, according to Ryle, is the idea that mental states and events are purely inner entities, stopping short of the overt bodily events that are the observable manifestations of our agency (ibid., pp. 11–12).Footnote 3 While Ryle’s regress may not tell one way or another with regard to propositionalism about knowledge how, it has clear implications for this aspect of the Cartesian myth: fully exorcising the ghost, it seems, requires taking our actions not just as the causal effects of intelligent operations, but as themselves the “workings of our minds”, as Ryle puts it at one point (ibid., p. 33). As I hope to show, this is a powerful and attractive idea, which has been underexplored in the literature. My positive aim in this paper is to sketch an account of action that can make sense of it.
2 Revisiting the regress
My aim in this section is to develop a version of Ryle’s regress argument that brings out what I think are the deeper concerns that underlie it. Although my aim is not exegetical, I will begin by working through Ryle’s argument itself, as this can help clarify where my understanding of the regress differs from other accounts.
Let us begin by clarifying the scope of the debate. The class of actions Ryle is concerned with includes those to which “epithets of intelligence” might apply. Thus, for instance, Ryle mentions actions “exhibiting carefulness, judgment, wit” (ibid., p. 33). Now, while most of Ryle’s examples involve positive “epithets of intelligence”, his full list (ibid., p. 25) also includes negative ones, such as “silly” and “careless”. It seems pretty clear, therefore, that his real topic is not only actions that score well along dimensions of intelligence, but rather actions that are apt for assessment in terms that implicate personal-level intelligence at all. According to Ryle, such assessment is inappropriate not only when it comes to things done by parts of the agent (such as digestion), but also to actions which, although performed by the agent herself, are still not done “on purpose”, such as actions done “absent-mindedly” (ibid., p. 25). Ryle’s topic, in other words, seems close to the one usually discussed under the rubric of “intentional action”, although I will not attempt to determine how exact the match is.Footnote 4 I will continue to speak of intelligent actions, but—like Ryle—I intend this as shorthand for “actions apt for assessment in terms implicating personal-level intelligence”.
Ryle’s question about such actions, then, is this: what do they owe their intelligence to? Ryle starts with the observation that we often attribute the intelligence of intelligent actions to the fact that one is “thinking what one is doing” as one is doing it, and in the right sort of way (ibid., p. 29). Ryle does not object to this way of speaking. But he objects to the “intellectualist legend”, which puts the following gloss on it:
To do something thinking what one is doing is, according to this legend, always to do two things; namely, to consider certain appropriate propositions, or prescriptions, and to put into practice what those propositions or prescriptions enjoin. (ibid., p. 29)
This view, then, is argued to lead to an infinite regress:
The consideration of propositions is itself an operation the execution of which can be more or less intelligent, less or more stupid. But, if for any operation to be intelligently executed, a prior intelligent operation had first to be performed and performed intelligently, it would be a logical impossibility for anyone ever to break into the circle. (ibid., p. 30)
Now, the quotes from Ryle we just discussed make no direct mention of knowledge how. But it is not hard to see how Ryle thinks his arguments connect to that topic: he takes it as common ground between himself and his opponents that if an action is intelligent, this must be because it manifests such knowledge.Footnote 5
Let us, then, examine the argument more carefully. Ryle attributes to his opponents the following premisesFootnote 6:
1. If you F intelligently, then your F-ing is guided by knowledge how to F.

2. If your F-ing is guided by knowledge how to F, then you must have considered propositions about how to F.
Premise (1), as we saw, is supposed to be common ground for everyone in this debate. Premise (2) is the “intellectualist legend”—the view Ryle seeks to reduce to absurdity. To do so, Ryle adds the following assumption:
3. Considering propositions about how to F is an intelligent action.
It is not hard to see how these three premises give rise to a regress. For, from premise (1), to F intelligently your F-ing must be guided by your knowledge how to F. From (2), this means that your F-ing intelligently presupposes an act of considering propositions about how to F (which, presumably, is an act distinct from your F-ing itself). But then, from (3), we get that this act of considering propositions about how to F is also intelligent, and thus [from (1)] must itself be guided by knowledge how, and thus [from (2)] presuppose an act of considering propositions. Assuming that the propositions whose consideration is required to guide an act of F-ing are distinct from those required to guide an act of considering propositions about how to F, we seem to have started on an infinite regress of more and more propositions about how to do various things, and of acts of considering such propositions.
Who, if anyone, should be worried about this argument? To begin with, note that the immediate target of the argument is not propositionalism about knowledge how as such; rather, it is the intellectualist legend, as stated in premise (2). That is not a view about the nature of knowledge how, but about its employment. The question, then, is whether we should think that propositionalists must be committed to such a view about the employment of knowledge how.
Unsurprisingly, propositionalists deny that they must be.Footnote 7 And it is, indeed, clear that from the claim that knowledge how has propositional content it does not immediately follow that to be guided by such knowledge you must first consider the relevant propositions. Even more strongly—and in order to avoid a debate regarding the nature of “considering” or “contemplating” propositions—propositionalists argue that they can appeal to the idea that some employments of propositional knowledge in action are direct, in the sense that they involve no other mental operations at all (Cath 2013, pp. 367–369). Thus, for example, you may employ your knowledge that you can open the door by turning the doorknob directly, by doing just that. After all, as Cath (2013) and Stanley (2011) point out, dispositionalists seem to feel free to appeal to the idea that some employments of knowledge how are direct; so why shouldn’t propositionalists do the same?
In response, one might try to show that, despite appearances, there is some asymmetry between the two views that makes this move unavailable to propositionalists (e.g., Fantl 2011; Hetherington 2011). For present purposes, I will not try to assess such arguments. This is because, as I will argue, there may be a deeper problem in the area that afflicts propositionalists and dispositionalists alike.
To see this, let us revisit the regress argument, this time starting with Ryle’s own opening statement in Chapter 2 of The Concept of Mind:
In this chapter I try to show that when we describe people as exercising qualities of mind, we are not referring to occult episodes of which their overt acts and utterances are effects; we are referring to those overt acts and utterances themselves. (ibid., p. 25, emphasis mine)
Taken at face-value, this suggests that the deeper target of Ryle’s arguments is not propositionalism about knowledge how, but rather the “mythical bifurcation of unwitnessable mental causes and witnessable physical effects” (ibid., p. 33). This would clearly seem to be a target broader than just propositionalism.
To see how Ryle’s argument might be understood as aiming at this broader target, let us first try to get clearer on what exactly the “bifurcation” in question is. Consider Wittgenstein’s famous question: “What is left over if I subtract the fact that my arm goes up from the fact that I raise my arm?” (1958, sec. 621). The implication here is that your arm’s going up is just a brute physical movement. If intelligence is present in your act of raising your arm at all, then this must be in virtue of something else, which does not entail the occurrence of the overt physical movement itself—so that it can be “left over”, after the movement is “subtracted”. More generally, on the conception of agency suggested by Wittgenstein’s question, the intelligence of intelligent physical actions can only be inherited from mental states that are purely “inner”, in the sense that they fall short of entailing the occurrence of any particular physical movement.
This conception of human agency is not just of historical interest. Contemporary theorists tend to focus on intentions as the mental antecedents of action, conceived of as states that combine both motivation and plans for action (e.g., Bratman 1987; Mele 1992; Enç 2006). Details to the side, such views clearly conform to the Wittgensteinian template, in the sense that they portray the intelligence of intelligent actions as inherited from purely inner mental states. Mental states such as intentions, plans, and states of knowledge how provide blueprints for an action, but are not themselves sufficient for any particular action to actually be carried out.Footnote 8
So, what might be wrong with this picture? Let us begin by considering an ordinary action, such as tying your shoelaces. On the sort of view under consideration here, such an action would count as intelligent in virtue of manifesting a mental state of yours, such as perhaps an intention you would express by the sentence “I shall now tie my shoelaces”. The crucial point, now, is that on the Wittgensteinian picture there is always going to be some distance between such a mental state and the particular movements of your hands and fingers that tie your shoelaces.Footnote 9 For one thing, there is simply no guarantee that once you have acquired this intention (or other guiding mental state), any movements at all will ensue. For another, a mental state such as an intention to now tie your shoelaces is clearly compatible with a vast number of more fine-grained movement-types that would bring about the intended result. Could this gap between a mental state and its bodily implementation raise the prospect of a regress?
Some recent work suggests that it could. For instance, Fridland (2012, 2015) draws on similar considerations to argue against propositionalism regarding knowledge how. On one way of reconstructing Fridland’s reasoning, she is arguing that propositionalists face regress at at least two points [see also Löwenstein (2017, 278)]. First, if an act is to be intelligent, it cannot be guided by just any proposition; it must be guided by some appropriate propositions. But, Fridland suggests, how are we to understand the processes or mechanisms that select which propositions to employ on a given occasion? If this selection is itself an intelligent action, then regress looms (ibid., pp. 706–707). Second, Fridland claims that propositional knowledge necessarily possesses some degree of generality and context-independence (ibid., pp. 720–721). This generality, however, means that if such knowledge is to be applied in action, a further step of selecting how to implement it in the particular circumstances of the action will need to take place. Since such selection would seem itself to be the sort of thing that one can do more or less intelligently, it would seem that propositionalists face regress at this point as well.
There are, however, reasons to be dissatisfied with Fridland’s argument. For one thing, if there really is a problem in this area, it would seem to be entirely independent of whether the mental states in question have propositional content or not. On a non-propositional conception of the relevant mental states, being in such a state would presumably involve standing in some non-propositional relation to an action-type, such as shoelace-tying. Different views might appeal to different relations to play this role, ranging from heavily intellectualized ones—like “understanding” [as in Bengson and Moffett (2012)]—to mere dispositions to act. No matter the details, however, it remains true that actually tying your shoelaces will always involve some fine-grained way of implementing the action-type of tying your shoelaces. (This point is, of course, entirely general: the disposition of a crystal vase to shatter is consistent with a huge multitude of fine-grained ways for it to shatter, only one of which is actualized on the occasion of its shattering.) Rejecting propositionalism does not make the need for selection disappear.
Moreover, one might even doubt that the alleged problem of selection really is a genuine problem at all. This is because, in order to reinstate the regress, we would need to show not just that operations of selection are necessary for intelligent actions, but that such operations are themselves intelligent actions. Critics of the regress argument, however, are unlikely to concede that they are, any more than they concede that Ryle’s acts of “contemplating propositions” are.Footnote 10
Even if Fridland’s argument does not work as it stands, however, this does not mean that no threat of regress exists in this area. I suggest that we can bring the problem into sharper focus by looking not at the mental operation of selecting a fine-grained sequence of actions, but rather at the overt actions that implement our ordinary intentions themselves. For instance, part of how you carry out your intention to tie your shoelaces might be that you pick up the left free end of the shoelace with your right hand and bring it across the right free end, which you hold in your left hand. Intuitively, these would also seem to be actions of yours—and, indeed, intelligent actions, if your shoelace-tying as a whole is. But what mental state of yours do these actions inherit their intelligence from? By hypothesis, your intention to tie your shoelaces did not specify any particular way of tying your shoelaces. It seems, therefore, that that intention cannot be the answer: we must look for further mental states that do specify those actions. And there is no reason to stop here, of course: picking up the left free end of the shoelace with your right hand is itself an action-type that may be implemented in many different ways: you may use different fingers, for example. Your implementation of the action-type of picking up the left free end of the shoelace with your right hand itself appears to be an action of yours, subject to assessments of intelligence. And we seem to be off on a regress again.
Let me state the argument a bit more formally, before turning to consider possible responses. Suppose you perform some action of type F, which is a candidate for being an intelligent action. Then, on the Wittgensteinian picture, the intelligence of your F-ing must be due to its being guided by an appropriate mental state, such as an intention to F:
1*. If your F-ing is intelligent, then it must be guided by an appropriate purely inner mental state M(F), which specifies your F-ing.Footnote 11
Furthermore, our considerations suggest that, no matter what we take M(F) to consist in, F-ing on any particular occasion will involve a particular way of implementing M(F):
2*. On any occasion in which you intelligently F, your F-ing consists in an implementation I of M(F).
Moreover, as suggested above, it seems that if you F by I-ing, and your F-ing is a manifestation of your intelligence, then so must be your I-ing. After all, if you F by I-ing, isn’t your I-ing an expression of your ability to solve the practical problem of how to F?Footnote 12 Thus, the following seems prima facie plausible:
3*. Your I-ing is intelligent.
The trouble for the Wittgensteinian picture is that accepting this, in the context of (1*) and (2*), would seem to expose us to a regress very much like Ryle’s own. Suppose that your I-ing is something that you do intelligently; then, by (1*) and (2*), this must be because your I-ing is guided by some purely inner mental state. Moreover, as discussed, the mental state M(F) would seem to leave it entirely open whether you F by I-ing, or in some other way. But then, it would seem, the mental state that explains the intelligence of your I-ing must be other than M(F). Let’s call it M(I). But, of course, any purely inner mental state M(I) itself admits of being implemented in a large number of different ways. So, once again, we need to ask about the intelligence of the implementation that actually does take place. How can adherents of the Wittgensteinian picture respond to this regress?Footnote 13
Let us begin with premise (1*) of the argument: could proponents of the Wittgensteinian picture reject this premise? I do not think that they could. To see why, begin by considering an alternative view, according to which your F-ing may count as an intelligent action of yours just in virtue of being causally initiated and sustained by some suitable mental state or other, regardless of content. This proposal will clearly not work: your sweating and your hyperventilating may be initiated and sustained by your intention to run up a hill (via your actually running up the hill); but this does not make them into intelligent actions of yours. But it is instructive to ask: why exactly are your sweating and your hyperventilating not intelligent actions of yours? The reason, I believe, is simply that they do not reflect your solution to any practical problem, or your assessment of what to do.Footnote 14 For example, even if your hyperventilating is caused and sustained by your intention to run up the hill, it does not in any way depend upon or express your appreciation that hyperventilating is a way to get more oxygen to your tiring muscles, and so make it up the hill. Cases of this sort need to be ruled out, and something along the lines of (1*) seems like the obvious way to do so, for views that follow the Wittgensteinian template.
One might protest that the above examples are unfair, because the intention to run up the hill may causally initiate and sustain your sweating and hyperventilating, but it does not “guide” them. The problem with this suggestion, however, is that it is hard to see what, for a view that rejects (1*), guidance could come to. Guidance in the present context is typically linked to “closed-loop” control systems (e.g., Adams and Mele 1989; Jeannerod 1997). Roughly, a goal-representation F is said to have a guiding role in a system S’s process of F-ing just in case:
(i) S has some way of using the goal representation and feedback about its own present state to determine how far it is from successfully F-ing;

(ii) S has some way of using this information to adjust its present state to be closer to successfully F-ing.
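By way of illustration only—no such formalism appears in the accounts cited here—conditions (i) and (ii) can be rendered as a toy feedback controller. The function name, the one-dimensional state, and the gain and tolerance values are all assumptions made for the sketch:

```python
# A toy closed-loop control system, in the sense of conditions (i) and (ii):
# the system uses a goal representation plus feedback about its present state
# to measure its distance from success, and adjusts itself accordingly.
# Everything here (names, 1-D state, gain, tolerance) is illustrative.

def closed_loop_reach(goal: float, state: float, gain: float = 0.5,
                      tolerance: float = 0.01, max_steps: int = 100) -> float:
    """Drive `state` toward `goal` by repeated error-correction."""
    for _ in range(max_steps):
        error = goal - state         # (i) goal representation + feedback -> distance from success
        if abs(error) <= tolerance:  # close enough to count as successfully F-ing
            break
        state += gain * error        # (ii) adjust present state to be closer to the goal
    return state
```

The point of the sketch is simply that guidance, so understood, presupposes a determinate goal representation against which feedback can be compared—which is just what premise (1*) supplies, and what rejecting it takes away.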
It is hard to see how this concept of guidance could help those who reject (1*), however. On such a view, your F-ing might be intelligent even if it is not specified by an intention (or other appropriate mental state) of yours. A fortiori, then, no such mental state would be available to guide your F-ing. At most, your F-ing would be a by-product of something else that you are doing, and which is specified (and guided) by an appropriate intention. This, however, is a condition that your sweating and hyperventilating also meet.
Now, adherents of the Wittgensteinian template may point out that, in addition to personal-level states such as intentions, contemporary psychology and neuroscience also posit a hierarchy of lower-level “motor representations” that figure in the control of bodily movements, and which specify the fine detail of those movements (Jeannerod 1997, 2006). Could it be that our actions count as intelligent in virtue of their being guided by such lower-level representations, rather than more familiar mental states such as intentions? I think the answer to this has to be “no”, however. This is because the representations in question are sub-personal, in the sense that they are not available to the agent herself for the purposes of deliberation and reasoning.Footnote 15 They do not, therefore, embody the agent’s own solution to a practical problem, or her own assessment of what to do. To see the point, note that appeal to lower-level representations is still not going to distinguish between the movements of your fingers while tying your shoelaces and your sweating or hyperventilating while running up the hill—after all, the latter responses are also subject to control by sub-personal mechanisms, presumably also relying on low-level representations.
Of course, the regress could be avoided by rejecting (3*), or the claim that the actions that implement our ordinary intentions are themselves intelligent, in the relevant sense. And, indeed, many philosophers of action appear to do just that. Enç, for example, writes:
[T]he goal directed system commands a package behavior, the way one orders a packed lunch from a hotel for a day’s hike, being confident that what goes into the package will be selected by competent personnel. (2006, p. 65)
The “competent personnel” in Enç’s analogy consists of automatic mechanisms, whose operations, once set in motion, are no longer under the online guidance of personal-level mental states. In other words, the fine-grained implementation of our intelligent actions is not under the control of our personal-level mental states. Given the Wittgensteinian template, this suggests that the implementation of our actions is not itself intelligent.
In a similar spirit, Papineau (2015) introduces a distinction between basic actions—which, although capable of being executed without deliberation, are nonetheless intentional and expressive of personal-level intelligence—and their components, which are automatic and not guided by personal-level mental states at all. For example, a skilled shoelace tier might be able to tie her shoelaces without having to deliberate about how to tie her shoelaces. Nevertheless, her shoelace tying is an intentional action of hers, manifesting personal-level intelligence. By contrast, each of the components of this basic action (e.g., her picking up the left free end of her shoelace with the right index finger and thumb) is automatic in that it does not normally manifest personal-level thought (ibid., pp. 298–299). Very similar views are expressed by many others, including Dretske (1988), Fodor (1968), and Stanley (2011).
What should we make of such views? On the one hand, by denying (3*), they avoid the regress. Nevertheless, they carry their own costs. On the views in question it might well be that, from your own point of view, it is a complete accident that you move your fingers the way you do while tying your shoelaces. Of course, assuming that your movements really constitute a way for you to tie your shoelaces, there will also be another, external or third-personal, sense in which this is not an accident at all: your movements are under the control of “competent personnel”, as Enç puts it. But there is nothing in the views in question to guarantee that to you this is anything other than a lucky accident. Intuitively, however, this seems hard to accept. For example, if you were asked why you are moving your fingers in the way you are, you would have no problem responding by giving a reason: “because I’m tying my shoelaces”. (This is so even if you are only able to refer to the movements indexically, lacking an adequate verbal description of them.Footnote 16) Intuitively, in other words, we do treat your movements as expressing your appreciation or recognition of the fact that they constitute a way for you to tie your shoelaces. This is hard to square with views that reject (3*).
Relatedly, on such views your mind’s contribution to your actions is limited to determining what actions to perform, but not how you perform them. We treat you as responsible not just for what you ordered for your lunch, but also for how the lunch was prepared. Again, it is hard to see how we can do justice to this practice, unless we take the implementation of your actions—and not just the intentions with which you act—to express your intelligence.Footnote 17
In response to this type of problem, some recent authors offer accounts that supplement ordinary intentions with more fine-grained, but still personal-level mental states. Consider the “motor intentions”, suggested by Brozzo (2017) and Blomberg and Brozzo (2017). Motor intentions specify not just broad action goals such as tying your shoelaces, but more fine-grained ways of moving your body so as to achieve those goals. Motor intentions are, in turn, matched by sub-personal representations in the motor control system, which ultimately issues the motor commands that contract your muscles and move your limbs.Footnote 18
But it is hard to see why motor intentions should make a difference to the underlying problem. After all, such intentions are still action-independent, or purely inner, mental states. At some point, even motor intentions will give out, and at that point we will still need to ask how those motor intentions get implemented.Footnote 19 Is the way you implement your motor intentions not expressive of intelligence on your part?
In response, one might argue that introducing non-intelligent processes to block the regress at this point is not a problem. One might argue that, although on the views in question the mind does not reach all the way to the action itself, it gets as close as it is reasonable to expect—in specifying, for example, which fingers to use to tie your shoelaces on a given occasion. This response may appear to be strengthened by the observation that agents can be mistaken about sufficiently low-level motor properties of their actions: if agents’ reports about the fine details of their actions can be systematically wrong, then perhaps we should simply accept that personal-level thought does not extend beyond motor intentions.Footnote 20
I think, however, that this response subtly misses the point. Insisting that the movements implementing your intelligent actions are “the workings of your mind”, as Ryle puts it, does not require that you possess maximally fine-grained descriptions of those movements. [For an analogy, consider the objections to descriptive theories of reference and singular thought: holding that our thoughts can refer to particular individuals does not require holding that we possess maximally fine-grained descriptions of those particulars (see, e.g., Kripke 1980; Evans 1981; McDowell 1986).] The need for such fine-grained descriptions is only generated because we have been taking the Wittgensteinian template for granted. The reason why we have been struggling to find mental states with sufficiently fine-grained descriptive content, in other words, is because we have assumed that the intelligence of intelligent actions must be accounted for in terms of purely inner mental states that provide “blueprints” or specifications for them.
Conversely, of course, if we give up on the Wittgensteinian template, we might hope to account for the intelligence of overt action in a different way—one that does not require personal-level mental states with such fine-grained content. Such a view would also be able to avoid the regress, by rejecting (1*). The rest of this paper is devoted to developing a view along precisely these lines.
3 An alternative picture: thinking by doing
I argued in the last section that views that adopt the Wittgensteinian template threaten either to land us in a regress, or else to force us to accept that we are much less involved in the execution of our own actions than we intuitively assume. My goal in this section is to argue that we can avoid this choice, by rejecting the “mythical bifurcation of unwitnessable mental causes and witnessable physical effects” that Ryle himself sought to argue against. In a sense that I hope to make clearer below, we should consider our physical actions as “workings of our minds”, just as much as our beliefs, desires and intentions.Footnote 21
This thought is not contrary to common sense. Suppose you are in a classroom full of students taking a test, and you notice a student in the far corner asking for your attention. In walking over to the student, you have to navigate various obstacles, including the classroom furniture, the other students’ bags and other possessions strewn across the floor, and the students themselves. This is a task most of us accomplish easily and, in a sense, unthinkingly. Nevertheless, it clearly involves relatively complex problem-solving on the agent’s part: it is clearly an action apt for assessment along dimensions of intelligence, in the sense that Ryle intends. But, following Ryle’s lead, we should resist the idea that the problem-solving activity involved is an inner process distinct from the walking; the walking itself is an instance of intelligent problem-solving by the walker.Footnote 22
We can develop this idea further, adapting a framework drawn from Michael Thompson’s discussion of intentional action (2008, chap. 2). Thompson’s concerns are somewhat different from mine, as his interest is in the teleology of action, rather than its intelligence. However, there is a clear convergence on a core point: according to Thompson, it is a mistake to think of the goal-directedness of actions as inherited from inner mental causes.
Thompson’s argument for this view begins with the observation that, when we give rationalizing explanations, we often explain some actions in terms of other actions. For example, if you see me breaking an egg over a bowl in the kitchen, I might explain myself by saying that I am making an omelet. My breaking of the egg is explained as a means, or instrumental part, of the “bigger” action of making an omelet. Other approaches in the philosophy of action would seek to explain this in terms of inner mental states (such as plans or intentions) that represent the omelet-making as an end, and egg-breaking as a means. Thompson, by contrast, suggests that we take instrumental explanation in terms of actions as the metaphysically most fundamental type of rationalizing explanation. The egg-breaking is, as such, directed towards omelet-making.Footnote 23
This approach gives us a neat way to handle the case of “component actions”, in Papineau’s sense. According to Papineau, as we saw, when an agent ties her shoelaces, the action as a whole may be a (basic) intentional action, while its component actions (the finger movements that get one end of the lace over the other, and so on) are not. From the present point of view, however, we can resist this conclusion: these component actions are intentional actions—in the stringent sense of actions done for a reason—because they are instrumental parts of the bigger action of tying her shoelaces: the agent moves her fingers in just this way because she is tying her shoelaces. Importantly, this account does not require the agent to have a descriptive grasp of how she moves her fingers while tying her shoelaces: just as egg-breaking as such may be directed towards omelet-making, an agent’s finger-movements as such may be directed towards shoelace-tying, even if the agent is not able to give an informative description of them.
Can we adapt this framework for the purposes of capturing the intelligence manifested in action? The first thing to note is that, while Thompson states his view in terms of reasons and rationalization, these terms are meant to be understood in their subjective sense: you may be G-ing because you are F-ing even though G-ing is, in fact, wholly inadequate as a means to F-ing. For example, your going shopping for groceries might rationalize your current walking even if, unbeknownst to you, your current walking is actually taking you further away from the store.Footnote 24 This means, however, that Thompson’s proposal does not, as it stands, suffice for our purposes: since your making an omelet can equally rationalize (in the subjective sense) either your breaking some eggs or your hopping on one leg, it does not capture the sense in which intelligence constitutes a dimension along which actions can be assessed.
I see no other way of addressing this except by making explicit a hitherto implicit epistemic dimension of doing one thing because you are doing another. When you are doing one thing because you are doing another, you thereby take it that the former thing is a (partial) way for you to do the latter. Since your taking may be right or wrong, clever or foolish, this does allow for the required dimension of evaluation. We can put the idea as follows:
If you are G-ing because you are F-ing, then you are taking it, of your G-ing, that it is (part of) a way for you to F.
If all goes well, your G-ing because you are F-ing manifests your knowledge that your G-ing is (part of) a way for you to F. By contrast, if you are hopping on one leg because you are making an omelet, then you must be taking your hopping to be a way for you to make an omelet. Since it is hard to see how this can be a reasonable thing to think, your action would count as silly or irrational.
It might seem that the view I am proposing represents a massive departure from Ryle. After all, am I not suggesting that the intelligence of intelligent action derives from “takings”—items, that is, with propositional content? And doesn’t this also open up my account to the threat of regress? Despite appearances, however, the present proposal can be understood very much in the Rylean spirit, and is immune from the threat of regress.
The reason is this. Although actions embody or manifest such takings (and, when all goes well, such knowledge), these takings do not guide these actions, in the way suggested by the Wittgensteinian template.Footnote 25 For example, the intelligence of your finger movements as you tie your shoelaces is not due to their being guided by some purely inner mental state that specifies just these movements. Rather, it is in the making of those movements that you manifest that you take them to constitute a way for you to tie your shoelaces. Moreover, since the intelligence of intelligent actions is not, on the present point of view, due to their being guided by mental states that are constitutively independent of them, we can block the regress by rejecting premise (1*).
How should we understand this idea of thoughts embodied in actions? There could be different ways to develop this idea, but I suggest that the analogy with debates regarding singular thought mentioned above can be useful here too. Familiarly, many philosophers hold that some of our thoughts (de re thoughts) are about some particular or other not in virtue of our deploying any identifying descriptions of that particular, but rather in virtue of our being in some form of sensory or informational contact—an “information link” (Evans 1982)—with that particular. Importantly, the relations we stand in to such particulars are not merely an enabling condition of the thoughts in question, but rather partially constitutive of them: they constitute our ways of thinking of the relevant particulars, or their Fregean “modes of presentation”.Footnote 26 As a result, for example, the thought that I would express by the utterance “this cup is green”, grounded in my visually attending to a green cup on my desk, would not be available to me to think at all, were I not in visual contact with this particular cup.
I cannot attempt to defend this kind of account of de re thought here. For present purposes, my suggestion is only that, insofar as we countenance the existence of such de re thoughts grounded in perceptual information links, we should also countenance de re thoughts about particular ongoing bodily actions of ours, grounded in our performing them. As in the case of perceptually based de re thoughts, such thoughts constitutively require an information link to their objects. But the nature of the link in the two cases is crucially different. In the perceptual case, the link carries information from the object to the subject of the thought. In the action case, by contrast, the link consists in bringing about the action that the thought concerns: it is in virtue of moving your fingers that you are in a position to have the relevant type of de re thoughts about those movements.Footnote 27
Nevertheless, and despite this crucial disanalogy with perceptually based de re thoughts, such thoughts share with their perceptual cousins the crucial feature that they are constitutively dependent on the existence of the relevant information links to the objects they are about. Since in the action case the relevant information links consist in your performing the relevant actions, there is a clear sense in which these thoughts are “embodied” in the actions themselves. Thus, they are not suited to play the role of “unwitnessable mental causes” of our actions that Ryle decries.
On the view just sketched, for example, your finger movements as you are tying your shoelaces embody the thought that this is a way for you to tie your shoelaces, where the “this” would express the relevant sort of de re way of thinking about the particular finger-movements you are currently performing. But since this thought is dependent upon your actually performing these finger movements, it does not guide them from the outside, as the Wittgensteinian picture would suggest. Bodily actions can embody or manifest de re instrumental thoughts, without these thoughts being their unwitnessable mental causes.Footnote 28
Notes
This is the view that, in the contemporary debate, has come to be called “intellectualism”, though, as we shall see, it is not the same as what Ryle dubbed the “intellectualist legend”. For this reason, I will use the term “propositionalism” to refer to the view that knowledge how is a species of propositional knowledge, and contrast it with “dispositionalism”.
The view that mental states and events are purely inner, in the sense that they are constitutively independent of bodily states and events, is quite plausibly a consequence of Cartesian immaterialism about the mind. Importantly, however, the converse entailment does not hold: a materialist can be an “internalist” in the relevant sense, by taking mental states and events to strictly supervene on brain states, for example. This, incidentally, is the key to answering the puzzlement Snowdon (2004, p. 19) expresses, regarding why Ryle thought that propositionalism about knowledge how has anything to do with Cartesianism: Ryle seems to have thought (wrongly, as we shall see below) that propositionalism is entailed by the relevant kind of internalism, which he (again wrongly, as I just explained) associated with Cartesianism.
Löwenstein (2013, 2017) and Weatherson (2017) defend versions of Ryle’s argument, in part by arguing that the class of intelligent actions is larger than the class of intentional or voluntary actions. In light of Ryle’s insistence on things done “on purpose”, I am not sure he would have agreed. In any case, as we will see, my version of the regress argument does not depend on taking a stand on this.
This may appear puzzling, in light of the fact that “intelligent” in this context does not imply “not silly”. But do silly instances of F-ing manifest knowledge how to F? I assume that Ryle would answer “yes” to this question. To count as F-ing “on purpose” at all, Ryle might say, you need to manifest at least a rudimentary knowledge how to F. For example, even if my driving is terrible, it manifests some knowledge how to drive.
These two premises are slightly different from the ones Stanley and Williamson (2001) attribute to Ryle in the article that began the contemporary debate, but not in ways that will matter. Regarding (1), Stanley and Williamson (2001, pp. 414–415) attribute to Ryle the claim that if you F, then you exercise knowledge how to F—whatever F might be. However, as mentioned above, Ryle is explicit that his focus is only on actions done “on purpose”. Instead of (2), Stanley and Williamson attribute to Ryle a general claim about the employment of propositional knowledge, rather than specifically about action-guidance. I do not doubt that Ryle was committed to that general claim; I use a more specific one only to simplify the exposition.
What explains the popularity of this conception of human agency? It is not often explicitly argued for. When its proponents turn to its defense, they typically focus on specific challengers, rather than providing positive reasons for its endorsement [see, e.g., Adams (2010); Blomberg and Brozzo (2017); Clarke (2010)]. The following classic argument by Davidson (1980), however, may be in the background. The very same bodily movement, physically described, may sometimes be (part of) an intelligent action, while at other times it is not. For example, the very same movement of my hand may be, on one occasion, a simple muscle-spasm while on another it is an instance of signaling to an accomplice. Since the movements are the same on both occasions, it is natural to think that any difference in whether they express intelligence or not must lie in their inner causal antecedents. The thing to note here, however, is that appealing to inner causal antecedents need not be the only way to mark such differences. I will return to this point in Sect. 3.
Löwenstein (2013, 2017) and Weatherson (2017) try to parry this type of response by arguing that Ryle’s conception of intelligent action is broader than that of intentional action, applying also to operations performed “automatically”. It is not clear that this parry is dialectically effective, however. In order to re-instate the regress, we would need to argue that the intelligence of these automatic selection operations is to be explained in the same way as the intelligence of intentional actions; otherwise no regress ensues, even if selection is intelligent in some other sense. But opponents of the regress, like Stanley and Williamson, are unlikely to concede this point. My version of the regress does not hinge on this.
There is a debate in the philosophy of action, spurred by Bratman (1984, 1987), concerning whether intentionally F-ing necessarily involves an intention specifically to F (this is known as the “simple view”). The formulation in the text is somewhat weaker than this, requiring only mental states that in some appropriate way specify your F-ing (an intention to G, and a belief that F-ing is a way for you to G, might count, for example). We will see reasons below for insisting on at least this much.
As an anonymous referee suggests, one might use this point to connect the present version of the regress with the one sketched above, by noting that your I-ing expresses your selection of a way to F. This is correct, but taking selection in this context to be an intelligent operation distinct from your I-ing seems like a further commitment, and one which (as we saw) may be open to objection.
One might wonder whether the regress just described really is vicious. Given that variation beyond a certain fineness of grain is surely irrelevant for discussions of human agency, one might argue that after a finite number of steps the regress will stop, because the corresponding version of (2*) will fail: the distinction between F-ing and a way of F-ing will no longer meaningfully apply. I do not think this response would help, however, even if sound: simply arguing that the regress is not literally infinite does not show that it is acceptable. A finite regress of this sort would still leave adherents of the Wittgensteinian template having to postulate mental states that specify actions with maximal fineness of grain—i.e., grain so fine that the distinction between F-ing and a way of F-ing loses its significance. It seems clear, however, that personal-level mental states do not contain action-specifications with anything like this fineness of grain. This is not to deny that our motor control systems make use of fine-grained specifications of our bodily movements; it is, rather, to deny that such lower-level representations can play the role required of them by the Wittgensteinian template. More on this below.
Notice, in particular, that the examples here are not cases of causal deviance. For one thing, the fact that your sweating and hyperventilating are the results of something else that you do (running up the hill) does not constitute deviance, since most of what we do we do by doing other things. More fundamentally, however, cases of causal deviance typically presuppose that something like (1*) is a genuine condition on action. Thus, in typical causal deviance scenarios, there is a match between what you do and a relevant intention, but the match is brought about in a deviant way. In the present case, however, there is no such match to begin with.
A referee suggests that views that reject (3*) may be able to account for our taking agents to be responsible for the execution of their actions, indirectly. On such views, the agent is responsible for the execution of her action insofar as she is responsible for cultivating reliable sub-personal action-execution mechanisms. This indeed seems like a viable fall-back position. Still, having to fall back to an indirect account of responsibility for action-execution seems like a cost for such views.
Importantly, no one is suggesting that the content of motor intentions is maximally fine-grained, in the sense sketched in n. 13 above. Questions of how motor intentions are to be implemented are assumed to be sensible.
Fridland (2014, 2017) has criticized some of the views I have been arguing against as well, and on related grounds. Nevertheless, Fridland seems to end up at a somewhat different place from where I do. In particular, while Fridland emphasizes the intelligence of sub-personal action control systems, she does not explain how this accounts for the sense in which the execution of your actions manifests your intelligence. So far as I can tell, Fridland at this point seems to fall back on the idea that overt action is intelligent (or, at least, controlled by the agent) to the extent that it reliably matches the agent’s personal-level mental states (ibid., 2017, p. 1558). But then, it is hard to see how her view constitutes an alternative to the views she criticizes.
An anonymous referee asks why we should attribute the problem solving to the walker, as opposed to (merely) sub-personal mechanisms inside of her. The reason, I think, is just that the case appears to be very different from uncontroversial cases of merely sub-personal problem-solving, such as that involved in our bodies’ maintaining a stable internal temperature, for example. Perhaps we could learn to live with views that erase this intuitive difference; but this is still an intuitive cost we should seek to avoid, if possible.
An anonymous referee wonders how this proposal addresses the Davidsonian challenge (mentioned in n. 8 above) of how to determine, given some bodily movements, what (if any) intentional action is performed. On views that conform to the Wittgensteinian template, this challenge is answered by looking at the inner causes of the relevant bodily movements: if my egg-breaking is caused (in the right way) by a desire or intention to make an omelet, then this is what I am intentionally doing. In an entirely parallel way, on Thompson’s view we appeal to the “bigger” action by means of which my egg-breaking is explained (or towards which it is directed): since it is my omelet-making (rather than, say, my writing a paper about action) that explains why I am breaking the eggs, this is what I am intentionally doing in breaking the eggs.
Of course, if the means you take are inadequate, you may never end up having F-ed. However, as Thompson (2008, pp. 120–146) rightly emphasizes, the fact that you never end up having F-ed is consistent with your F-ing for a time. Some actions are botched or abandoned, and so never reach completion.
For a similar emphasis on the distinction between action manifesting knowledge and action being guided by that knowledge, see Dickie (2012). Dickie uses this distinction for a somewhat different purpose, namely, giving an account of how skill and knowledge how relate to propositional knowledge. On her view, a skilled agent’s actions manifest or embody propositional knowledge, of roughly the same sort as we are considering here. This type of approach to skill and knowledge how would, broadly speaking, be congenial to the view of intelligent action that I am suggesting (though a lot of details would need to be worked out, of course). For related points see also Löwenstein (2017, pp. 250–256) and Kremer (2017).
For such views of singular thought, see Evans (1981, 1982), McDowell (1984, 1986), Campbell (2002), Stanley (2011). Some authors draw a distinction between “content externalism”, which concerns the determination of the contents of our thoughts, and “vehicle” or “active externalism”, which concerns the thoughts themselves (Clark and Chalmers 1998; Hurley 1998; Rowlands 2011). As these authors intend the distinction, the labels “passive” or “content” externalism apply to views on which the content of (some of) our thoughts is determined by their causal history (e.g., Putnam 1975; Burge 1979). Regardless of the merits of this distinction in general, the views on singular thought I am appealing to here include present relations to the objects of your thoughts in their identity conditions. They would thus seem to meet the conditions for “active” or “vehicle” externalism as well as for content externalism. Wilson (1989) appeals to a similar notion of “act-relational” intention, and puts it to similar use in criticizing causal accounts of action. Interestingly, McDowell (2011) argues against this view, but I cannot consider this argument here. Butterfill and Sinigaglia (2014) also draw upon accounts of de re (or demonstrative) thought in their solution to what they call the “interface problem”—i.e., the problem of explaining how personal level states such as intentions are integrated with lower-level motor representations. Their concerns, however, are different from my own, leading them to focus on thoughts about action types, rather than particular actions (ibid. 2014, pp. 133–134).
Some support for this claim may be provided by recent work that links the phenomenology of agency to the neuroscience of motor control (Bayne and Pacherie 2007; Blakemore et al. 1998; Frith et al. 2000a, b; Frith 2005; Marcel 2003; Pacherie 2008). While this is not the place to enter this debate in any depth, what all these views have in common is that they suggest that our awareness of our own actions depends upon the mechanisms in us that produce the movements that constitute our actions. Such accounts, therefore, seem to support the idea of distinctively productive information links to our own actions. This notion of a distinctively productive way of thinking about our own actions echoes Anscombe’s (1957) famous distinction between “contemplative” and “practical” knowledge. The parallel is worth exploring further, but I cannot do so here.
In writing this paper, I benefited from discussions with Olle Blomberg, Yair Levy, and Melissa Merritt. I would also like to thank two anonymous referees for Synthese, for their patience and their thoughtful and penetrating comments.
References
Adams, F. (2010). Action theory meets embodied cognition. In J. H. Aguilar & A. A. Buckareff (Eds.), Causing human actions: New perspectives on the causal theory of action (pp. 229–252). Cambridge, MA: A Bradford Book.
Adams, F., & Mele, A. (1989). The role of intention in intentional action. Canadian Journal of Philosophy,19, 511–532.
Anscombe, G. E. M. (1957). Intention. Cambridge, MA: Harvard University Press.
Bayne, T., & Pacherie, E. (2007). Narrators and comparators: The architecture of agentive self-awareness. Synthese,159, 475–491.
Bengson, J., & Moffett, M. A. (2012). Non-propositional intellectualism. In J. Bengson & M. A. Moffett (Eds.), Knowing how: Essays on knowledge, mind, and action (pp. 162–193). New York: Oxford University Press.
Blakemore, S.-J., Wolpert, D. M., & Frith, C. D. (1998). Central cancellation of self-produced tickle sensation. Nature Neuroscience,1(7), 635–640.
Blomberg, O., & Brozzo, C. (2017). Motor intentions and non-observational knowledge of action: A standard story. Thought: A Journal of Philosophy,6(3), 137–146.
Bratman, M. (1984). Two faces of intention. The Philosophical Review,93(3), 375–405.
Bratman, M. (1987). Intention, plans and practical reason. Cambridge, MA: Harvard University Press.
Brownstein, M. (2014). Rationalizing flow: Agency in skilled unreflective action. Philosophical Studies,168(2), 545–568.
Brownstein, M., & Michaelson, E. (2016). Doing without believing: Intellectualism, knowledge-how, and belief-attribution. Synthese,193(9), 2815–2836.
Brozzo, C. (2017). Motor intentions: How intentions and motor representations come together. Mind and Language,32(2), 231–256.
Burge, T. (1979). Individualism and the mental. Midwest Studies In Philosophy,4(1), 73–121.
Butterfill, S. A., & Sinigaglia, C. (2014). Intention and motor representation in purposive action. Philosophy and Phenomenological Research,88(1), 119–145.
Campbell, J. (2002). Reference and consciousness. Oxford: Oxford University Press.
Cath, Y. (2013). Regarding a regress. Pacific Philosophical Quarterly,94(3), 358–388.
Clark, A., & Chalmers, D. J. (1998). The extended mind. Analysis,58(1), 7–19.
Clarke, R. (2010). Skilled activity and the causal theory of action. Philosophy and Phenomenological Research,80(3), 523–550.
Davidson, D. (1980). Actions, reasons and causes. In Essays on actions and events (pp. 3–21). New York: Oxford University Press.
Dickie, I. (2012). Skill before knowledge. Philosophy and Phenomenological Research,85(3), 737–745.
Dretske, F. (1988). Explaining behavior: Reasons in a world of causes. Cambridge: MIT Press.
Enç, B. (2006). How we act: Causes, reasons, and intentions. New York: Oxford University Press.
Evans, G. (1981). Understanding demonstratives. In G. Evans (Ed.), Collected papers (pp. 291–321). Oxford: Clarendon Press.
Evans, G. (1982). The varieties of reference. Oxford: Oxford University Press.
Fantl, J. (2011). Ryle’s regress defended. Philosophical Studies,156(1), 121–130.
Fodor, J. (1968). The appeal to tacit knowledge in psychological explanation. The Journal of Philosophy,65, 627–640.
Fridland, E. (2012). Problems with Intellectualism. Philosophical Studies,165(3), 879–891.
Fridland, E. (2014). They’ve lost control: Reflections on skill. Synthese,191(12), 2729–2750.
Fridland, E. (2015). Knowing-how: Problems and considerations. European Journal of Philosophy,23(3), 703–727.
Fridland, E. (2017). Skill and motor control: Intelligence all the way down. Philosophical Studies,174(6), 1539–1560.
Frith, C. (2005). The self in action: Lessons from delusions of control. Consciousness and Cognition,14, 752–770.
Frith, C., Blakemore, S.-J., & Wolpert, D. (2000a). Abnormalities in the awareness and control of action. Philosophical Transactions: Biological Sciences,355(1404), 1771–1788.
Frith, C., Blakemore, S.-J., & Wolpert, D. (2000b). Explaining the symptoms of schizophrenia: Abnormalities in the awareness of action. Brain Research Reviews,31(2–3), 357–363.
Ginet, C. (1975). Knowledge, perception, and memory. Boston: Reidel.
Hetherington, S. (2011). How to know: A practicalist conception of knowledge. Malden, MA: Wiley.
Hurley, S. (1998). Consciousness in action (1st ed.). Cambridge: Harvard University Press.
Jeannerod, M. (1997). The cognitive neuroscience of action. Oxford: Blackwell.
Jeannerod, M. (2006). Motor cognition: What actions tell the self. Oxford: Oxford University Press.
Kremer, M. (2017). A capacity to get things right: Gilbert Ryle on knowledge. European Journal of Philosophy,25(1), 25–46.
Kripke, S. A. (1980). Naming and necessity. Cambridge: Harvard University Press.
Levy, N. (2015). Embodied savoir-faire: Knowledge-how requires motor representations. Synthese, 194, 1–20.
Löwenstein, D. (2013). Why know-how and propositional knowledge are mutually irreducible. In M. Hoeltje, T. Spitzley, & W. Spohn (Eds.), Was Dürfen Wir Glauben? Was Sollen Wir Tun? - Sektionsbeiträge Des Achten Internationalen Kongresses Der Gesellschaft Für Analytische Philosophie E.V (pp. 365–371). Cambridge: DuEPublico.
Löwenstein, D. (2017). Know-how as competence: A Rylean responsibilist account. Frankfurt am Main: Vittorio Klostermann.
Marcel, A. (2003). The sense of agency: Awareness and ownership of action. In J. Roessler & N. Eilan (Eds.), Agency and self-awareness: Issues in philosophy and psychology (pp. 48–93). New York: Oxford University Press.
McDowell, J. (1984). De re senses. Philosophical Quarterly,34(136), 283–294.
McDowell, J. (1986). Singular thought and the extent of ‘inner space’. In J. McDowell & P. Pettit (Eds.), Subject, thought, and context. Oxford: Clarendon Press.
McDowell, J. (2011). Some remarks on intention in action. The Amherst Lecture in Philosophy,6, 1–18.
Mele, A. (1992). Springs of action. Oxford: Oxford University Press.
Mylopoulos, M., & Pacherie, E. (2016). Intentions and motor representations: The interface challenge. Review of Philosophy and Psychology,8, 317–336.
Pacherie, E. (2008). The phenomenology of action: A conceptual framework. Cognition,107(1), 179–217.
Papineau, D. (2015). Choking and the yips. Phenomenology and the Cognitive Sciences,14(2), 295–308.
Putnam, H. (1975). The meaning of ‘meaning’. Minnesota Studies in the Philosophy of Science,7, 131–193.
Rowlands, M. (2011). Body language: Representation in action. Cambridge, MA: A Bradford Book.
Ruben, D.-H. (2003). Action and its explanation. Oxford: Oxford University Press.
Ryle, G. (1945). Knowing how and knowing that: The presidential address. Proceedings of the Aristotelian Society,46, 1–16.
Ryle, G. (2002). The concept of mind. Chicago: University of Chicago Press.
Snowdon, P. (2004). Knowing how and knowing that: A distinction reconsidered. Proceedings of the Aristotelian Society,104, 1–29.
Stanley, J. (2011). Know how. Oxford: Oxford University Press.
Stanley, J., & Williamson, T. (2001). Knowing how. Journal of Philosophy,98(8), 411–444.
Thompson, M. (2008). Life and action. Cambridge, MA: Harvard University Press.
Valaris, M. (2015). The instrumental structure of actions. The Philosophical Quarterly,65(258), 64–83.
Weatherson, B. (2017). Intellectual skill and the Rylean regress. Philosophical Quarterly,67(267), 370–386.
Wilson, G. (1989). The intentionality of human action. Palo Alto: Stanford University Press.
Wittgenstein, L. (1958). Philosophical Investigations (G. E. M. Anscombe, Trans.). Englewood Cliffs, NJ: Prentice Hall.
Valaris, M. Thinking by doing: Rylean regress and the metaphysics of action. Synthese 197, 3395–3412 (2020). https://doi.org/10.1007/s11229-018-1893-6