Rationality as Effective Organisation of Interaction and Its Naturalist Framework

  • Original Paper, published in Axiomathes

Abstract

The point of this paper is to provide a principled framework for a naturalistic, interactivist-constructivist model of rational capacity and a sketch of the model itself, indicating its merits. Being naturalistic, it takes its orientation from scientific understanding. In particular, it adopts the developing interactivist-constructivist understanding of the functional capacities of biological organisms as a useful naturalistic platform for constructing such higher order capacities as reason and cognition. Further, both the framework and model are marked by the finitude and fallibility that science attributes to organisms, with their radical consequences, and also by the individual and collective capacities to improve their performances that learning organisms display. Part A prepares the ground for the exposition through a critique of the dominant Western analytic tradition in rationalising science, followed by a brief exposition of the naturalist framework that will be employed to frame the construction. This results in two sets of guidelines for constructing an alternative. Part B provides the new conception of reason as a rich complex of processes of improvement against epistemic values, and argues its merits. It closes with an account of normativity and our similarly developing rational knowledge of it, including (reflexively) of reason itself.


Notes

  1. For proper scientific grounding naturalism ultimately requires provision of a more detailed neuro-cognitive functional account of our rational capacities, complementing what is provided here. That task is put off to another place and time, both from lack of space and as yet lack of a settled scientific basis for doing so.

  2. This is well illustrated by Feyerabend's presentation of Galileo versus the Aristotelians at the dawn of the age of modern science. Feyerabend argues with considerable plausibility that naive induction from common sense experience overwhelmingly supports the Aristotelian generalisations about matter and motion. To take but one feature from common experience, bodies do not continue in motion except in three special cases: if they are 'spontaneously' rising (e.g. fire) or falling (e.g. rocks), or if they are driven by an inner motor (e.g. pushed by living creatures). To obtain any kind of observational evidence for a Galilean or Newtonian law of perpetual inertial motion requires a bold and elaborate set of specific and unusual experiments designed to reveal such counter-intuitive and counter-inductive results within the welter of common sense experiences which fly in their face. For Feyerabend’s discussion see e.g. his 1978, but cf. his delightful 1961 for its roots, and see Popper (1972) and later e.g. Brown (1979, 1987, 1988).

  3. For example, Popper’s individual random trial and error evolutionary model is far too crude and slow an adaptive process. In genetic evolution cross-over and transposition processes do much of the adaptive work, not mutation, because they preserve past successes while random change does not (Holland 1992). But this is equivalent to requiring something like inductions as partial guides to generating trials, e.g. by retaining and re-using successful measuring instruments and methods.

  4. And there are further general costs. For general discussion see Brown (1979, 1988) and Hooker (1981a, 1995, chap. 3). When it comes to identifying errors, as Bickhard (1993, 2005b) correctly points out, to know that an error has occurred requires only a judgement that a failure of anticipated (e.g. predicted) outcome has occurred. But to correctly locate the source and nature of the error requires a much more sophisticated understanding of the action situation and discrimination of the factors required for successful action in it. The young cheetah, e.g., will know, simply by continuing hunger, that her hunt failed. But learning to discriminate wind direction, camouflage, terrain kinds, etc. as factors in successful hunting and hence also as sources of hunting errors requires much more than this. For discussion see Christensen and Hooker (2000a, b, 2002) and for a detailed scientific application (ape language research) see Farrell and Hooker (2007a, b, and especially 2009).

  5. Note that unity involves, but is not reducible to, explanatory width (whether applied inter- or intra- theoretically), and that other potential, but unlisted values, e.g. testability, are taken to be determined by these 12. For earlier commentary see Hooker (1991b, following 1976, 1987b).

  6. See, e.g., Rescher (1977), cf. Hooker (1995, chap. 4) and Farrell and Hooker, note 4, for supporting analysis. Moreover, if we conceive of science as a dynamic self-improving system, in which data, theory, method, values and institutional design and roles simply constitute various interacting factors, then method development falls naturally into place as a part of the overall dynamics—see Hooker (1995), cf. Shapere’s (1984) notion of the historical development of scientific rationality.

  7. The general ideas here are well known (though perhaps not the multiple roles for the theory officially under test), see e.g. Duhem (1962) and Campbell (1957). An example of all these complications is worked out in moderate detail in Hooker [1975 (1987a, chap. 4)] and another in Hooker (1989), cf. (1994c), cf. also e.g. Galison (1987).

  8. A simple class of cases is formed by conjoining an argument of the form ‘x% of A’s are C’s, so the probability of a, an A, being a C is x/100’ with one of the form ‘y% of B’s are C’s, so the probability of a, a B, being a C is y/100’ to yield ‘x% of A’s are C’s and y% of B’s are C’s, so the probability of a, an A, being a C is x/100 and, being also a B, is y/100’. Choose, e.g., real A, B and C so that x < 5 but y > 90 to clearly see the problem.
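The conflict described in the note can be exhibited with small finite sets. A minimal sketch in Python, with hypothetical populations chosen (as the note suggests) so that x < 5 while y > 90:

```python
# Hypothetical finite populations illustrating the conflicting
# statistical syllogisms of note 8: under 5% of A's are C's,
# over 90% of B's are C's, yet individual a belongs to both A and B.
A = set(range(0, 100))            # 100 A's
B = set(range(99, 199))           # 100 B's; member 99 lies in both A and B
C = {99} | set(range(100, 190))   # the C's: one drawn from A, ninety more from B

x = 100 * len(A & C) / len(A)     # percentage of A's that are C's -> 1.0
y = 100 * len(B & C) / len(B)     # percentage of B's that are C's -> 91.0
a = 99                            # an individual that is both an A and a B

# The first syllogism assigns a a probability x/100 = 0.01 of being a C;
# the second assigns y/100 = 0.91 to the very same individual.
print(x, y)
```

Both inferences are individually of the accepted statistical-syllogism form, yet conjoined they assign the same individual two wildly different probabilities of the same property, which is the problem the note points to.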

  9. This happens every time context-changing information is introduced where dynamics is physically context-dependent, see Hooker (2010, sect. 6), and Russell’s chicken, Hooker (1995, chap. 3).

  10. For instance, with Newton-Smith (1981) in mind, compare choosing just among the roughly contemporary (Giere 1988; Glymour 1980; Howson and Urbach 1989; Thagard 1998) as the normative philosophy of science.

  11. These and other criticisms made here are drawn from Hooker (1981a, 1995, chap. 3). There are a variety of other difficulties in specifying Popperian method. For instance, a bold conjecture will often conflict with some part of the background into which it is introduced. What has to be revised, and how? And how is severity of test to be defined (is improbability to be assessed against the original or the revised background)? Formalist method also cannot acknowledge as reasonable the widespread scientific practice of containing or confining errors (anomalies) and even contradictions, by applying the afflicted theory only within some safely usable domain, until some illuminating resolution of the problems is found (e.g. the initially anomalous motions of Uranus and our moon). In all of these cases, too, non-formal judgement looms large in the conduct of science.

  12. One obvious problem is that perception is not in fact always reliable; carelessness and distractions, illusions and hallucinations all reduce accuracy. Nor is there any way to 'read off from' perceptual experiences themselves which observational reports are reliable and which ought to be discounted; it requires good theoretical guidance to do that. This difficulty (for empiricism) is a model for the next problem: all the scientific evidence we have points to the view that perception is itself an activity essentially cognitively similar to theory construction; the mind forms the 'best' model it can of the scene before it on the basis of memory, stored information processing methods and current information input. In both of these cases, observations, which are the end-products of this process, cannot have any privileged cognitive status and so cannot provide foundations for knowledge. These features of perception also require the introduction of fundamental non-logical decisions. If perception is not reliable then it must be decided whether this particular observation, taken in these circumstances, is reliable, and in what respects and to what degree. And so on. Given that human bodies are essentially instruments, and given the fundamental role of theories in the design and evaluation of measuring instruments and experimental methods, e.g., it is not at all likely that a purely formal account could be given of the range of important decisions involved in accepting an observation, and it is clear that these decisions, like those about inductive argument, will be conditioned by accepted theory.

  13. Lakatos (1970) says that it is the community of scientists that makes such decisions, but those decisions, along with decisions about membership in the relevant scientific community and the relevant decision criteria, are all context-dependent non-formal judgements. To meet this objection Popper would need to provide formal logical criteria for the objective resolution of each of the individual and social decisions facing scientists, thereby rendering psycho-social considerations once again superfluous. This seems quite impossible. There are various moves Popper has made to try to ameliorate these problems. Conjectures and Refutations 1972 emphasises problem solving and criticism as the essential features of rationality. This proves a relevant response to criticisms of traditional empiricism for its simplistic progressive conception of the history of science (e.g. Feyerabend 1978; Kuhn 1962), for problems may be dissolved as well as solved, abandoned and re-evaluated. But the difficulty with thus weakening the conception of rationality is that it throws yet more weight on to decisions made by the community of scientists. What is to count as a problem? Which are worth trying to solve? When can a problem be dissolved rather than solved? And so on. [Compare in this respect the criticisms which Laudan (1977) received when promoting a related approach, e.g. Newton-Smith (1981).] Popper, and formalists generally, have no useful answers to these questions.

    Still later, in Objective Knowledge (1979), Popper introduces an abstract World 3 of ideas and logical structures, distinguishing it sharply from Worlds 1 and 2, the realms of nature and mind (psychological states). Objectivity belongs to World 3 and the structure of objective science is found there. But shifting statements to World 3 cannot in itself contribute to their objectivity. Feyerabend (1974) argues that the addition of World 3 represents in effect the Lakatosian degenerating phase of the Popperian research programme, that World 3 merely labels Popper's desire to provide an objective account of knowledge but does not actually solve any of the outstanding problems (e.g. choice of test, allocation of falsification, justifying method, providing rational cut-off to pursuit of research programmes). The shift to World 3 is a shift within the formalist framework, and we are evidently left still in need of a substantive account of rational procedure. The lesson is only reinforced by following through Popper's unsuccessful attempt to resolve the difficulties by importing an artificial evolutionary analogy into World 3—see Hooker (1995, chap. 3).

  14. Involuntariness per se is necessary but not sufficient. Not every aspect of what is pushed involuntarily on us conveys truth. Many perceptual illusions and hallucinations, e.g., cannot be voluntarily corrected. It is indeed an example par excellence of the exercise of reason to learn to distinguish which respects of which involuntary signals reliably convey information. This issue is finessed by the foundationalism of empiricism (and rationalism—see note 16), but quite unconvincingly (cf. note 12). It is avoided by Popper’s fallibilism only at the expense of enlarging the dimensions in which potential errors are to be investigated.

  15. Popper is conventionalist in that the adoption of the maximally informative ideal for science is non-rational, but he smacks of rationalism in the way that he then tacitly constrains rational process to deductive logic as the foundation for falsificationist method. Conversely, empiricists may either deny having explicit ideals or regard them as conventional decisions, and thus support conventionalism, but similarly smack of rationalism by accepting a priori that logic expresses rationality and consider truth, which logically valid argument necessarily preserves, to thus be the rational goal of science. We consider normative knowledge further in Part V.

  16. Accepting the sceptical assumption also requires a content foundationalism (see below) and, for formalism (logic) also a rule foundationalism (cf. Brown 1988). Philosophers like Popper are fallibilists about content and reject this aspect of the sceptical assumption. Yet they seem to accept foundationalism for reason itself. This is presumably because logic itself seems so clearly to require, and satisfy, rule foundationalism (see below). Why settle for less? Again, this shows the hold which the formalist account of reason has.

  17. This is in general a trivial issue for a foundational epistemology, one simply explains how the foundational data can be increased and improved knowledge follows. For empiricism, e.g., the foundational data were observations and are increased simply by adding more observations; induction then ensures that knowledge improves. But for Popper this becomes a major, non-trivial issue. It is a particularly important one, since much of the force of Popper's position derives from the argument that it was through highly counter-intuitive, counter-inductive theories that science has made its most striking progress historically.

  18. Naturalism is concerned with unity of understanding, but is not inherently concerned with promulgating any narrow materialism. In a non-linear dynamical world emergence is pervasive. Conversely, as we have penetrated more deeply into the world’s constituents they have grown more complex and strange, not less, until we now face the unlimited complexity and non-locality of relativistic quantum fields. The nature of the world is their nature and, whatever that is, the old materialisms and immaterialisms alike look simplistic. The naturalist commitment is to understanding ourselves as one with nature, as differentiated from within a common natural framework, and to seeing our capacities as differentiated capacities within that framework.

  19. Among the many further consequences is that any concept introduced should ‘grade back’ sufficiently smoothly across the evolution of complex life forms to simpler physico-chemical conditions. Physically, for instance, hormone regulation grades back from its manifold mammalian regulatory roles to elementary rate regulation of various aqueous reactions. Conceptually, ‘salience’ and ‘selection’, e.g., grade back in designation respectively from human significance and choice ultimately to respectively something like unicellular selective surface biochemical reactivity and selective internal biochemical state transition.

  20. This account is given a preliminary development in Hooker (1991b), which has its roots in turn in Hooker (1987a, subsects 7.8, 8.8.7, 8.3.9), and given fuller exposition in Hooker (1995, chaps. 5, 6).

  21. See further Hooker (1995, chap. 6). Quine’s paper does draw attention to the idea that in a naturalised conception the criteria of cognitively justified action merge much more conspicuously into the general criterion of rational action.

  22. See Putnam (1982). For the extended version of the critique summary to follow see Hooker (1995, chap. 6).

  23. Indeed under a Peircian and attractively naturalistic conception of concepts as open-ended dynamic constructs being constantly adjusted to suit their evolving roles in evolving theories and practices (Legg 2005; cf. Brown 2007), every explicit characterisation of a theoretical concept is tentative.

  24. Naturalism requires doing the same for the other central normative capacities: (epistemically) knowing, and (ethically) valuing. These tasks lie beyond the present paper, but for the beginnings of a story of elaboration and adaptation of proxies for these ideals see references note 41.

  25. For partial discussion of this larger programme, at least for reason, see Hooker (1995, chap. 6, 2009) and Sect. 2.1 below.

  26. Material in this section is improved and re-worked from Hooker (1994b), which contains a more extended discussion of Cherniak (1986). All unattributed page references in this section are to Cherniak (1986).

  27. An idealisation will produce a characteristic pattern of deviations from the empirical data which the non-idealised theory explains. For simplifying idealisations this idealisation/deviation structure is sometimes over-generalised and taken to include all cases of reference-case/deviation structure, increasing the risk of confusion about ideals. Newton's Laws of Motion, e.g., can be presented in this form: reference case 'inertial motion' (First Law), deviation 'forced acceleration' (Second Law). Evidently for this reason inertial (First Law) motion is often referred to as ideal motion. This language may be reinforced by the fact that inertial motion may never, or almost never, actually occur (e.g. for the gravitational force, since there is no cut-off to its range and no shields from it -- but contrast here other forces). It is important to note that this extension is additional to the notion of idealisation specified here. The ideal gas, e.g., is itself dynamically specified by Newton's Laws, but it would have remained an idealisation even had some differently structured dynamical theory applied. And it would remain an idealisation even were its occurrence common. Conversely a relativistic gas is not automatically made an idealisation because relativistic dynamics still exhibits a reference-case/deviation structure. Further, degenerate idealisations, fundamental in science, will not in general exhibit the reference-case/deviation structure at all (see below). So to use the terms ideal or idealised to include all examples of reference-behaviour/deviation explanatory structure, without further distinction, is to invite further confusion.
Correspondingly, to claim that the analytic rationality standard is an idealisation is not to claim that it either has, or must be able to be presented as, a reference-case/deviation structure with the idealised principles playing the role of the reference case, or that if it had that structure the reference case should thereby properly be classed as an idealisation. And, to repeat, it cannot be expected to exhibit that structure if, as will be argued below, the candidate analytic principles represent degenerate idealisations.

  28. Ironically, continuing to insist that a rational agent must nonetheless employ a sound and complete system is not just to divorce rationality theory from reality but to divorce it from the results of rational analysis itself. Nonetheless, it seems to have been overlooked that such performances are often implicit in idealised rationality requirements.

  29. “Suppose”, says Cherniak (p. 93), “that each line of the truth table… could be checked in the time a light ray takes to traverse the diameter of a proton,” roughly 3 × 10−23 s, a ‘super-computer’ indeed. Then for a universe of age 20 billion years, or roughly 6 × 1017 s, this super-computer can check roughly 6 × 1017/3 × 10−23, or 2 × 1040 lines. But a truth table for an argument of just 137 premises contains 2137 lines, which is more than 1041 lines, i.e., more than even this super-computer could check running for the lifetime of the universe. Even where parallel computations permit it, still quite small belief sets would require a computer which also exhausted the known matter of the universe, e.g., belief sets of say 2 × 137 components for a universe of 1050 particles and parallel components each of 1010 particles. Contemporary complexity theory provides many other examples of similarly intractable problems, e.g., the ‘traveling salesman problem’ (find the shortest route connecting N towns once only). Cherniak argues that complexity theory also suggests that such results are quite resilient in the face of alternative computational algorithms. See also Green and Leishman (2010).
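Cherniak's estimate is straightforward to reproduce. A quick check in Python, using only the figures quoted in the note (3 × 10⁻²³ s per line, a universe of roughly 20 billion years):

```python
# Back-of-the-envelope check of Cherniak's 'super-computer' argument (note 29).
SECONDS_PER_LINE = 3e-23    # time for light to cross a proton diameter
UNIVERSE_AGE = 6e17         # seconds, roughly 20 billion years

lines_checkable = UNIVERSE_AGE / SECONDS_PER_LINE   # ~2e40 lines
lines_required = 2 ** 137                           # truth table for 137 sentences

print(f"checkable: {lines_checkable:.0e}")   # ~2e+40
print(f"required:  {lines_required:.1e}")    # ~1.7e+41
# Even this super-computer, running for the lifetime of the universe,
# falls short by nearly an order of magnitude.
```

The point survives any reasonable adjustment of the constants: since the required lines grow as 2^n, adding a handful of further premises swamps any physically conceivable speed-up.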

  30. Cf. choice of game form and of game-altering acts in decision theory. The rational response to prisoners’ dilemma type situations, e.g., is not merely to formally analyse the game theoretic problem, but to non-formally choose institutional and cultural circumstances that enable the game to be transformed into one where the co-operative solution becomes available at minimal collective costs. The costs of not doing this intelligently can become large indeed: the extremes of the pure external and internal coercive solutions bring their respective costs of police forces and repression, or of religious or ideological authoritarianism. In the same way the exercise of reason in response to the unstable coalitions of n-person games, such as those that appeared regularly in the pre-modern wars of competing warlords and appear now in democratic parliaments, lies in creating institutional contexts in which collectively favourable stabilities emerge. Thus choosing the shape of contexts themselves is also context dependent, specifically historically dependent, for doing so expresses the working out of a coherent life, both for individuals and for societies. The formalist idealisation of reason, by contrast, is context-free, throwing away the contextual structure that stands at the heart of rational procedure for us finite creatures. And finitude itself also imposes context dependencies. Consider any device which parcels out its workload among a number of internal processes depending upon their limitations and the sequence of external demands made. (A simple example of this kind was considered in Hooker 1981a, Part III.) This device will inevitably assign the processing of problems or demands by historical context, specifically as a function of the previous history of the demands made on it. Our minds seem quite clearly to be at least partial devices of this kind. (Cherniak recognises “metaheuristic” strategies in a related circumstance at p. 142, note 10.) 
The problems we choose to work on and the methods we choose to employ depend to some extent on how much pressure our various kinds of memory and cognitive processes are currently under, what has already been learned, what skills have been acquired and at what levels, and so on.

  31. Though Cherniak never discusses scientific method explicitly, he does require a very general non-deductive competence for rational agents, the minimal “… requirement must at least be stated as ‘The agent is responsible for some but not all counterpossibilities whose seriousness is implied by his current beliefs.’” Because of the key role of background beliefs, there is an additional minimal ‘input’ requirement: “… the agent is responsible for acquiring some of the beliefs relevant to evaluating counterpossibilities” (p. 121). “In fact, the counterpossibilities that must be considered are relative not to the solitary agent’s own background knowledge but to the shared knowledge of the appropriate community” (p. 115). This underscores the thoroughly context-dependent character of the exercise of non-deductive rational capacities.

  32. Cherniak’s own maximal normative rationality standard requires all feasible sound inferences be made, where ‘feasible’ refers to each inference taken individually. But in the very same discussion Cherniak himself provides the best reason not to demand a condition this strong, namely that the costs of making most inferences collectively or individually overwhelm the benefits from making them, see Cherniak (1986, p. 24). Hence the conception used here.

  33. At best formalism prescribes an algorithm to use, but using an algorithm is a low skill compared with creating it and recognising its appropriate application. Similarly, if we apply an algorithm and it does not work, then typically algorithms by themselves will not instruct us what reasonably to do next, whether e.g. to revise our conception of the situation we are in, or revise our conception of the formalism from which the algorithm is derived, and so on. Once again, it is in the exercise of these latter capacities that the more fundamental exercise of reason lies. What matters for proceeding rationally is the capacity to learn to improve the level of skill in choosing procedures productively in a domain. Later a process for thus bootstrapping our rational performance will be presented—see SDAL, Sects. 2.2, 2.3.

  34. See, for instance, Batterman (2002), Berry (1994), Hooker (2004, Sect. 5).

  35. This extends the consequences of Cherniak’s finitude analyses well beyond their original purview. There is little or no hint in Cherniak’s book that there is anything involved beyond establishing the limitations of existing theory and then weakening the existing rationality conditions in some way. But this misses the main point. So Cherniak's arguments are not taken to provide a conclusive demonstration that current analytic theory of reason must be understood as degenerate idealisation, but as providing so many pieces of evidence which support such an understanding, or, conversely, which such an understanding can fruitfully explain and unify.

  36. For an examination of explanatory adequacy, which includes adequate explanatory scope, precision and depth, see Hooker (1987b). Note that not all idealised conditions will play the role of idealisations, simplifying or degenerate, discussed here. Only those idealisations which have explanatory force will do so. For cases where hitherto basic principles were rejected by subsequent theory, e.g. the Newtonian rejection of the circularity of natural motion in both Ptolemaic and Galilean theory, the original evidence must be explained away, i.e. explained on another basis.

  37. See e.g. Christensen and Hooker (1998, 2000a, b). The use here of the term ‘interactivism’ derives from Bickhard (originally from Vuyk 1981 to describe Piaget’s position), see The Institute for Interactivist Studies, http://www.lehigh.edu/%7einteract/index.html, cf. Bickhard (2005a). Here and elsewhere, I use ‘interactivism’ and ‘interactionism’ interchangeably and to name a doctrine about the universal primacy of organised interaction to all life, but Bickhard, e.g., takes interactivism to be an approach to cognition, and interactionism to be an approach to development—e.g., Piagetian—and, in that sense, takes interactivism to force interactionism.

  38. There is, as Bickhard says, no self-maintenance of their self-maintenance, i.e. no recursive self-maintenance—see Bickhard (1993).

  39. For recent expositions from which the current discussion draws see Hooker (2010), Skewes and Hooker (2009).

  40. In this way naturalism meets its requirement that all capacities attributed to systems should be shown to be dynamically grounded, in particular that adaptive and cognitive capacities should arise from system processes which appeal only to actually available dynamical system processes. Anything else would be non-natural magic. Surprisingly, this rules out many common assumptions, e.g. that proper function for a system is given by selection etiology or that primary signal meaning for a system concerns the state of the sender, since neither of these is a system-available condition.

  41. See further Christensen and Hooker (2000b, 2002, 2004). Of course a much larger story has to be told to capture the rich normative life we humans enjoy. For naturalists this will have to be a constructivist and realist story, in something like the way science is. For elements of this story see Bickhard (2002, 2005b), Hooker (1987b, 1995, chaps. 5, 6, 2009).

  42. For an introduction to mirror neurons see e.g. http://en.wikipedia.org/wiki/Mirror_neuron#cite_note-Dinstein-2, http://lumiere.ens.fr/~alphapsy/blog/?2006/09/29/62-mirror-neurons-a-primer and references. For proposals concerning the neural regulatory origin and roles of emotions see e.g. Bickhard (2000) and Barandiaran and Moreno (2006).

  43. An analogy is the impact of printing on science: printing supported a massive increase in size, supply and accessibility of information and, e.g., of its reproductive fidelity, especially in medical graphics at the time. But while all this was epistemically valuable, and crucial to the social expansion of science, none of it altered the epistemic nature and conduct of science in any deep way.

  44. See Hooker (2009, sect. IIc) for brief elaboration.

  45. The motto is “all learning is in response to anticipation”. The higher order anticipation of successful learning, combined with the availability of masses of incidental information in manual and perceptual activity, results in incidental learning, whose cognitive manifestation is curiosity and related activities such as play and exploration. These characteristics are important to the power of human cognition. See, e.g., Bickhard (2005b, 2006), who also integrates emotion into the process.

  46. See also Hooker (1995, chap. 2, diagram 2.4, p. 89) and text. Hooker et al. (1992a, b) construct an engineering controller schema that is capable of learning from both success and failure. This approach transcends the opposition between induction and falsification, which was an artifact of inappropriate reliance on logic as the structure of method. There is no priority of one over the other, each can de-stabilise the other; this is immediate for error discovery de-stabilising positive knowledge claims, but increasing positive knowledge can also de-stabilise error claims, e.g. by revealing instrument or other methodological error in establishing the original error claim. Modern logic itself expresses hard-won positive knowledge of how to design productive inferential systems while it also provides a way to avoid error in reasoning.

  47. This need not mean abandoning the notion of an evolutionary epistemology, since biological evolutionary processes are not restricted to random trial and error searches (e.g. their most powerful drivers are transposition and cross-over, both significantly non-random processes), rather it means generalising evolutionary processes to include the complete response range; see Christensen and Hooker (1999), Hooker (2009, sects. IIIa, c). To what extent it also involves accepting transmission of learned responses remains to be clarified.

  48. It may be tempting to some to draw a distinction between two senses of rationality here, that of how interactive processes are organised in order to achieve norm satisfaction and that of how these organised processes are then improved. But this is a purely formal distinction and of no actual significance. Coordination of processes so that norm satisfaction increases picks out a single class of organisational improvements, from first engagement of norm satisfaction as playing a practical role at all, to all sorts of more complex processes, some more effective in various respects. The actual learning pattern may define a trajectory that cuts across any a priori division within this class.

  49. According to Piaget development takes the following rough general form: it is initiated by failure of assimilation and (micro-)accommodation, failure that sufficiently disturbs homeostasis (equilibrium); this leads to a search for deficiency among current cognitive operations that in turn stimulates the development of higher order operations over these defective operations; these latter then provide, through reflective abstraction and completing generalisation, a new, improved set of assimilations and accommodations supporting functioning over a wider range of inputs, i.e., across a wider range of environments. Unfortunately, Piaget turned to static formalist constructs (e.g. logic and group theory) in his attempts to insert more detail into this dynamical process conception, but this should not detract from his more basic constructivist biological insights, see Hooker (1994a). For exposition of Piaget’s dynamical process conception of rationality as improvement see Hooker (1995, chap. 5, sect. II).

  50. A prerequisite is that there be a competent internal process of judgement formation. It would take us too far afield to spell out in detail what these conditions are, but we all recognise that, though mosquitoes, swamps, bar-room brawls, many severely mentally handicapped individuals, chat rooms and so on are recognisable systems showing plenty of internal activity, as systems they do not make judgements, either largely or at all. These entities simply are not set up (largely or wholly) to make judgements (though in some cases their component entities might). Briefly, our analysis would run along the following lines: judgement is a process in an autonomous system that combines indications of environmental and of internal conditions and transforms them into autonomy-referenced decisions to change internal condition and/or external action (Skewes and Hooker 2009). But not all systems are autonomous in any relevant sense, e.g. not swamps, bar-room brawls, dinner parties or chat rooms, so they cannot act as genuine individuals with reference to their autonomy, and many autonomous individuals do not make judgements across areas of their lives because the changes occurring there do not constitute autonomy-referenced decisions, e.g. carbon-dioxide tracking in mosquitoes (but not the decision to search for a blood host) and human reflex responses. See further Christensen and Hooker (1998, 2000a, b, 2002, 2004).

  51. Here Brown’s orientation returns us to agent capacities that underlie Aristotle’s ethics as a useful starting point for understanding rationality and its practical application in ethics and law. Thanks to Brown for the following summary. Three concepts from Aristotle's ethics—deliberation, practical wisdom and equity—will help clarify the notion of judgment. According to Aristotle, deliberation is the ability to arrive at reasonable results in situations in which we do not have the grasp of necessity that he holds to be characteristic of science, but in which we are also not totally ignorant. Deliberation is particularly important when we are concerned with human affairs, where the circumstances and the possibilities are too complex to be captured in a set of usable explicit rules. The ability to deliberate well is the central characteristic of what Aristotle calls “practical wisdom”. This is the ability to arrive at fallible but non-arbitrary decisions "about what sorts of thing conduce to the good life in general” (1140a, McKeon 1941, p. 1026). Ex hypothesi, this ability is not exercised according to rules, but by specific individuals who have developed practical wisdom as a result of their experience of human life. The third concept, equity, is the ability to override an established rule in order to deal with the special features of a particular case (a striking example of the need for deliberation of a particular kind). Aristotle was concerned with exceptions to legal rules in situations where the law, exactly because it must be expressed in universal language, gives a clearly incorrect outcome in a specific case. When situations of this sort arise, to deliver equity we turn to those who can exercise practical wisdom. We depend on them “to say what the legislator himself would have said had he been present, and would have put into the law had he known” (1137b, McKeon 1941, p. 1020). 
However, Aristotle’s insights can be generalised; we can develop the ability to deliberate and exercise judgment in many different fields and it is this ability that we draw on when we run into the limitations of our understanding as we have codified it to date, including current rules, and wish to improve upon it.

  52. In these cases pursuit of rule models would quickly lead to an implausible unending regress of higher-order or meta rules to cover required context-dependent changes in rules (cf. Aristotle on legal equity, note 51). In many cases there are in fact some rules for generating a skillful performance; e.g., rules for good chess play or style guides for writers, but these rules are notoriously insufficient to generate a skillful performance. In fact, it is just at the point where available rules cease to be sufficient that differences of skill become particularly apparent. While every step of a valid mathematical proof is rule governed, constructing proofs is not; rather, there is in general provably no sufficient set of rules for deciding what step should be made at any given juncture. As chess and logic illustrate, even activities that are defined by rules can be, often must be, carried out apart from any use of rules. (Current computers do not use rules, or make judgements, at all; they merely conform to rules. Vector-based machines provably don’t use input/output symbolisation internally.)

  53. Note that support for a supra-rule rational capacity does not follow from support for a supra-rule performance capacity. Among the best known instances, Dreyfus and Dreyfus (1986), Dreyfus (1991) have developed an account of physical and cognitive skills that has a rule-transcending orientation in common with the conception developed here, but they turn out to support a narrow, rule-following conception of rationality, despite their insistence on transcending it. They introduce a progression of expertise from novice and advanced novice to competent, proficient and expert, the earlier stages clumsily rule-bound, the later stages increasingly fluid and powerful because skillfully intuitive and not rule-bound. Describing behaviour that is contrary to reason as “irrational” they note that a “vast area exists between irrational and rational that may be called arational” and they conclude thus: “Competent performance is rational; proficiency is transitional; experts act arationally” (Dreyfus and Dreyfus 1986, p. 36, cf. Dreyfus 1991, p. 186). According to the approach here, expertise, although not rule-bound, is a fundamental part of rationality. The contrary assumption, along with an insufficient appreciation of the richness of the rational processes available for improvement, are the roots of the too-limited conception of procedure they present despite their insistence on intelligence beyond rule use. These failures inhibit their approach at just those places where creativity is most relevant. They speak, e.g., of an expert, faced with a sufficiently new situation, being forced to turn to “detached reflection” and “appeal to principles” (i.e. rules) (Dreyfus 1991, p. 247), rather than utilising the creative construction of new criteria and actions his expertise provides (cf. radically new scientific research domains, like quantum theory in early C20).

  54. All constraints have this dual disabling/enabling character. A skeleton, e.g., makes many kinds of movement physically impossible (disabling), and yet it enables us to walk, lift, manipulate and so on, while the grammar of a language rules out many possible word sequences as forming coherent sentences (disabling) yet its very systematic constraints enable deliberate, effective communication. What is more important is the value of what is enabled; to obtain a useful set of constraints is the most important achievement of any evolutionary or developmental process, including scientific learning and ethical improvement. The resources of reason constitute a rich, appropriate set of constraints of high value because they effectively enable open-ended pursuit of rational decision making and improvement of rationality.

  55. Unhappily, it is equally true that, mis-organised, this reciprocity can damage both kinds of process. For instance, the defects of observational practice in particular individuals, e.g., careless and/or corrupt observation processes in those in leadership roles, can become socially entrenched and will then in turn synergistically damage the observational powers of those trained under that regime, compounding and reinforcing the epistemic loss. And either character of this process may in turn be synergistically reinforced by larger social processes, e.g. the economic and cultural rewards flowing from expanded, reliable scientific understanding or, contrariwise, the defence of a public lie as a mechanism to consolidate social power. Thus social institutional design must play a crucial role both in providing epistemic power to observation and in ensuring that there is a beneficial reciprocity between individuals and social processes.

  56. Thus we talk both of grasping the point of a spear and of an argument, of gaining an overview of both the countryside and the calculus, and so on. Much of even our abstract mathematics, e.g., vectors, gradients and integrals, clearly extends spatial metaphors while other components extend action metaphors, e.g., operators and mappings, and it is at least plausible that all of our abstracting is of a similar kind.

  57. Note that this is a much richer notion than the typical one, which is just that of an inner interpretation process corresponding to (c). See e.g. Heelan (1988, 1998).

  58. As this discussion shows, the integration of observation into science depends on manifold judgments. Judgment enters in when scientists must decide whether to pursue an issue experimentally (is the epistemic value it might deliver worth the risks and resource investments required?), what observations to carry out, what means to use for carrying out these observations and under what conditions and with what instrument settings, what material practices to use in carrying them out, how to evaluate the outcome of an observing process, and when to suspend further critical assessment and announce an observation as established. Nonetheless, as we have seen, in judiciously judged ways established observations are used to improve methods, theories and practices, on which subsequent judgements are based, including those theories used in understanding observing instruments, those practices used in operating the instruments and those instrumental methods used in processing their information.

  59. For more on the theme of science as a dynamic system of judgements and practices, and the same for reason itself, see Hooker (1995) passim and Brown (1988, 2006, notes 7, 71, 81).

  60. See, e.g., Carter and Carter (2005) and http://en.wikipedia.org/wiki/Ignaz_Semmelweis.

  61. Logic is least effective in dealing with global properties: reasoning from a large set of interconnected premises quickly overwhelms our capacity, and notions like global system consistency quickly escape our formal control, as Cherniak (and Gödel’s theorem) shows (see Part A).

  62. Notice how, once a fallible process conception of rationality is adopted, logic can be evaluated from a wider perspective than just the necessity or otherwise of its inferential relations, with rather different outcomes for its role in, and centrality to, intelligence.

  63. In this way applications of deductive logic allow us to discover consequences of propositions that are far from obvious when we entertain or accept those propositions (such as the inconsistency of classical set theory, or of classical mechanics and electromagnetism, or of naive quantum field theory) and these discoveries can lead us to accept results that we would previously have rejected out of hand (e.g., relativity theory). And if there is disagreement about an inconsistency or like claim then it too can be systematically and publicly investigated, by both developing theoretical consequences from it to compare with other formal consequences and by applying it in sufficiently observable domains.

  64. On timing see Port and van Gelder (1995) and earlier, e.g., Bickhard and Richie (1983, p. 90).

  65. All of the foregoing capacities must be 'added on' to the basic formal structure of logic in some essentially ad hoc way. But because logical models of cognitive activities must assume that they are to be reduced to sequences of logical rule following, there is a constant stream of logical (usually explicitly computational programming) models of these features. However, these additions have always turned out in practice to be clumsy, fragile and artificial, and in practice they constantly find themselves faced with a pervasive tendency to 'computational explosion', to requiring utterly unrealistic or impossible computational resources for even simple cognitive tasks. On the other hand, the recent development of correlation extraction networks (connectionist or neural nets and the like) has provided roughly brain-like and demonstrably non-logical means for often performing the same tasks much more economically. While present connectionist models are too crude and simplistic to be likely models of real brain function, the foregoing experience should make us realise that logic is but one component tool among the resources of reason.

  66. One standard way of showing that a deductive argument form is invalid is by appealing to convincing counter examples. We have struggled to recognise this test procedure at all until recently (it was given a general formulation only a century ago). Often we have had to change our minds about the basic rules of logic; for example, the rule of subalternation was once widely accepted as a valid rule in logical reasoning and is now just as widely rejected by contemporary logical systems. These points about logic carry over to applications of formal methods in general. For instance, this holds for all cases in which we apply an algorithm as well as to those applications of probability and statistics in which we derive probabilistic conclusions from probabilistic premises.
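
The subalternation example lends itself to a minimal model-checking sketch (the code, names and domain here are mine, purely illustrative): on the modern reading, ‘All S are P’ is vacuously true when S is empty, so it cannot licence ‘Some S are P’.

```python
# Illustrative sketch: evaluate categorical forms over finite sets and
# exhibit the empty-term counter-model to subalternation
# ("All S are P", therefore "Some S are P").

def all_S_are_P(S, P):
    """True iff every member of S is in P (vacuously true when S is empty)."""
    return all(x in P for x in S)

def some_S_are_P(S, P):
    """True iff at least one member of S is in P."""
    return any(x in P for x in S)

# Counter-model: S empty, P arbitrary.
S, P = set(), {1, 2, 3}
premise = all_S_are_P(S, P)       # vacuously True
conclusion = some_S_are_P(S, P)   # False: no witness exists

assert premise and not conclusion  # subalternation fails in this model
```

On the traditional square of opposition terms were tacitly assumed non-empty, which is why the rule once seemed valid; the counter-model makes the hidden assumption explicit.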

  67. The essential idea is to extract the abstract structure of the relationships among propositions specifying the values of properties in quantum theory, e.g., between ‘The momentum of A is p1’ and ‘The momentum of A is between p1 and p2’, ‘The position of A is between x1 and x2’, by associating to each proposition an idempotent quantum operator and with each such operator associate in turn a subspace of a quantum geometry (Hilbert space), namely the subspace where its value is 1, and then formally characterising the geometrical structure thus picked out. If this is done in classical mechanics, where the state space forms a Euclidean geometry, the result is a Boolean algebra, a structure isomorphic to classical logic. For example, set inclusion in the geometry models implication in logic: if spatial region A includes region B as part then ‘If a thing is located in B then it is located in A’ is true, and vice versa. By parity of reasoning the corresponding structure for quantum mechanics is called quantum logic. See Hooker (1973, 1975/79) for early papers, and many other texts thereafter.
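
The construction just described can be condensed into standard notation (a rough sketch in textbook Hilbert-space conventions, not Hooker’s own formalism):

```latex
% To each value-proposition associate an idempotent (projection) operator P,
% and to each P the closed subspace of Hilbert space H on which its value is 1:
\[
  P^{2} = P = P^{\dagger}, \qquad
  S_{P} = \{\, \psi \in \mathcal{H} : P\psi = \psi \,\}.
\]
% Logical relations are then modelled geometrically: implication by subspace
% inclusion, conjunction by intersection, disjunction by closed linear span:
\[
  p \Rightarrow q \;\leftrightarrow\; S_{p} \subseteq S_{q}, \qquad
  p \wedge q \;\leftrightarrow\; S_{p} \cap S_{q}, \qquad
  p \vee q \;\leftrightarrow\; \overline{S_{p} + S_{q}}.
\]
% For a classical (Euclidean) state space the resulting lattice is a Boolean
% algebra, isomorphic to classical logic; for Hilbert space it is
% orthomodular and non-distributive: quantum logic.
```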

  68. This result is congruent with the fact that logical structure is also deeply connected to space–time geometrical structure (note 67) and space–time is our most general arena for delineating possibility relations among material states, reflected, e.g., in such metaphysical principles as that nothing can be in two distinct places at the same time. Putnam (1968) has argued that the relation between quantum theory and logic parallels that between general relativity and Euclidean geometry: In each case a formal structure that was long considered definitive (respectively, classical logic, Euclidean geometry) is challenged as a result of the continuing attempt to develop an adequate overall account of the world (respectively, quantum mechanics, relativity theory).

  69. Finite, fallible agents typically cannot in practice maximise: ignorance of the future (including of how methods and utilities may develop) and resource constraints on both modelling the world and checking for errors, especially for complex situations, make it impracticable. The next best aim is any outcome of sufficiently satisfactory utility (see e.g. Simon 1947, cf. 1996), but in the face of deep uncertainty even that must be re-configured to any outcome of sufficiently resilient sufficient utility, see Brinsmead and Hooker (2007).
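
The contrast drawn here between maximising and satisficing can be sketched in a few lines (a toy illustration; the options, utility function and aspiration level are invented):

```python
# Toy sketch of Simon-style satisficing versus maximising.
# Maximising must survey every option; satisficing accepts the first
# option whose utility meets an aspiration level, and so can stop early.

def maximise(options, utility):
    """Exhaustive search: impracticable for finite agents as options explode."""
    return max(options, key=utility)

def satisfice(options, utility, aspiration):
    """Return the first option of sufficiently satisfactory utility."""
    for option in options:
        if utility(option) >= aspiration:
            return option
    return None  # nothing satisfactory: revise the aspiration level and retry

options = [3, 7, 5, 9, 2]
utility = lambda x: x
assert maximise(options, utility) == 9      # evaluates all five options
assert satisfice(options, utility, 6) == 7  # stops after the second option
```

Brinsmead and Hooker’s further move to sufficiently resilient sufficient utility would replace the fixed aspiration test with one robust across uncertain scenarios; that refinement is not modelled here.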

  70. For instance, we can model the scientist as a risky investor in scientific credit, yielding a model of risky belief commitment that replaces inductive logic in a way that gives explicit recognition to the diversity of inductive situations, indeed that applies in both normal and revolutionary settings. In general, this framework makes the diversity of scientific practices—both those that mutually conflict and those that are simply different—explicable as efficient ways of spreading risky cognitive investment across institutionalised disciplines. It illuminates the conflicting nature of multiple epistemic goals and context-dependent trade-offs among them by modelling them as contextually weighted utilities, and so on. It even allows epistemic rules to emerge (cf. price) from these institutionalised activities, thereby providing a new model for understanding their origin and historical character. For all these reasons the decision theoretic framework makes deep sense of institutional structure as an intrinsic part of the cognitive capacity and coherence of science and makes the design of that institutional structure itself a rational choice, learnably improvable. Logic is inherently unable to do any of these things, denying them any part in being rational. A decision theoretic framework, however, encompasses them naturally and thereby radically expands and enriches our conception of reason and its place in understanding intelligent action. For this general conception see Hooker (1995), its roots in the discussion of goals in science, Hooker (1976), of belief as commitment, Hooker (1987b), especially §8.3.2, and of rules versus utilities in science, Hooker (1981a). For the systematic development of the idea of the scientist as risk-taking investor, including the emergence of epistemic rules, see Shi (2001, cf. note 80).

  71. This is illustrated in the intelligent response to Prisoner’s Dilemma games, note 30. Of course, we could then try to model these ‘meta-game’ choices as further formal games, but this both quickly becomes an unwieldy and artificial epi-cyclical structure, and still faces such issues as interactions among players with incommensurate game models (cf. the exquisite treatment of Othello and Desdemona in Rapoport 1960).

  72. See e.g. Hooker (1973, 1991a), for discussion.

  73. For this example see Schon (1967), Indurkhya (1992).

  74. Again, socially organised construction can be used to distinctively, and typically more powerfully, improve individual construction judgements. This is because socially organised construction: (1) can implement construction processes that are unavailable to most individuals, e.g. complex models or computer programs, (2) can be more discriminating than individual construction processes, either because it includes a suite of specifically skillful individuals spread across construction tasks, broader than any one individual could provide, e.g., disciplinary experts in constructing an interdisciplinary model of a watershed, and/or because it simply focuses more construction resources on a problem than any individual could provide, e.g., through assembling a powerful research team, and (3) is more able to correct errors of construction than are individual checking processes by bringing to bear the foregoing methods to publicly evaluate construction judgements against their constraints, demanding appropriate social agreement, i.e., by requiring suitable social invariance of construction judgements. All of these processes are intensely combined in the regulation of construction judgements in science. As with observational and reasoning judgements, individual and socially organised construction processes strongly interact in ways that can synergistically reinforce improvements in both, or in ways that can equally systematically undermine both (see the discussions above).

  75. Nonetheless, it is acknowledged that at present the precise basis for our creativity—our creative construction of a satisfactory model of creative construction—remains unresolved. The three commonest options are these: (1) Deduction. It has long been hoped to show that the constraints on a creative act are always sufficiently strong to single out a unique creative construction as the only one satisfying them and, hence, that a characterisation of that construct might be derived directly from the constraints. In this way the apparent arbitrariness of the creativity involved would be removed. A simple example of this hope is the inductivist hope that the observational evidence plus inductive logic would entail the larger inductive conclusion, so there is no choice about it. But we have seen that this proves futile. Many common constraint approaches leave the residual discovery process unspecified. Thagard’s notion of explanation as satisfaction of constraints (Thagard 1989; Thagard and Verbeurgt 1998), e.g., quite reasonably relies on satisficing, on accepting any sufficient fit to the constraints that can anyhow be had, while Csikszentmihalyi (1988, 1996, 1999), elaborating the essential social constraints for a creative act to be recognised as such, and even specific attitudes or character traits that support creativity, still lacks any model of the creative construction process that stands at the centre of the process. (2) Emergence. It is also possible that certain processes of synergistic emergence, e.g. adaptation in neural networks, might be at play. This is an attractive notion, e.g. because adaptive networks synergistically bootstrap improved learning capacity for a class of tasks through problem-space construction while learning a specific member task, perforce a general feature of all successful learning [see SDAL, notes 82, 83 and text, and in Christensen and Hooker (2000a), Hooker (2009)].
Nonetheless, these processes do not specify their creative character in any more detail, much less uniquely constrain the outcome, and are in general not interpretable in agency functional terms. The same goes for appeal to self-organisational processes generally. (3) Evolution. An opposite approach to the goal of removing the mystery from the creation of novelty is to attempt to model the process as an evolution-like random search for something satisfying the constraints, but this turns out to be equally implausible. The search would have to be among all the logical possibilities and these are both indeterminate for us ignoramuses, especially since we are also partially mistaken, and so infinitely numerous as to require an impossibly long time to search exhaustively (notes 3, 29). Hofstadter’s copycat model of metaphor construction seeks to reduce the search burden by inferring constraints on search focus and scope (Hofstadter and The Fluid Analogies Research Group 1995). It cannot be ruled out that some class of such processes might contribute to understanding creative processes; but because of the fundamental constraints on inference from constraints—whichever model is chosen—it seems unlikely that inference from constraints could rein in the random search possibilities fast enough to provide, in general, the solution to the nature of creation. This suggests also adding a role for emergence, since the construction of search heuristics across many trials is undoubtedly important (see SDAL in Farrell and Hooker 2007a, b), and perhaps the emergence of problem space structure in nets and like processes across trials can contribute here; nonetheless it leaves unaddressed the difficulties of relying on emergence, noted above. The overall upshot is that, while all three processes potentially play roles in creativity, the ultimate nature of creativity (if it has one) remains unresolved—though it is meanwhile improvable through improving these processes, which is what matters here.

  76. Of course they may also have smaller self-oriented cognitive purposes, e.g. to satisfy one’s curiosity as to how this particular creature lives, irrespective of the value of this information to the larger community. But this is neither necessary nor sufficient to construct an epistemically powerful process.

  77. Compare Kuhn's discussion (1977, chap. 13) of scientists who share a set of criteria for theory evaluation but disagree on the relative importance of these criteria in cases in which these criteria yield conflicting judgments, and the discussion in Bjerring and Hooker (1979) of scientists who all agree on a particular experimental investigation, but for a variety of different, potentially conflicting, reasons. On the mutual shaping of individual and institutional roles of relevance here see e.g. Vickers (1968, 1983) and of course much of the sociology of science literature. The consensus/dissensus structure is a sub-class of the class of positive and negative feedback structures that characterise dynamic complex systems generally, see Hooker (1995).

  78. Hooker (1995, chap. 2), develops this position while Hooker (1991b) considers the specific form of law invariance that it has additionally taken in physics. For both formal and non-formal reason objectivity arises from the method used. But whereas in formal reason it is a by-product of the guaranteed truth of observation and logic, in non-formal rationality the objectivity of the outcome judgements is provided by the checks and balances within the process followed in reaching them, not by any guarantee (initial or final) of their truth.

  79. Cf. ‘evolutionary drive’ in Allen (2010). While the fundamental cognitive character of science arises from the profound consequences of finitude, the diversity will be increased through the impact of various other social factors (pursuit of individual scientific careers, personality conflicts among scientists, etc.) and through other personal impacts (e.g. of family influence on cognitive style, risk attitudes and the like). Across society there is no principled sub-system boundary that can be drawn, based solely on intrinsic sub-system properties, that would differentiate the scientific parts from the rest. Every domain can exhibit information processes ranging from rational through non-rational to irrational. Rather, everywhere the rational cognitive processes need to be demarcated by their contributions to transcending limitations.

  80. The irresolvable and useful diversity in science is as essential to a powerful cognitive process as it is to a powerful evolutionary one (cf. also e.g. Allen 2010 on ‘creative drive’). The diversity of real individuals, each in a variety of interactions with other such individuals, generates complex patterns of consequent individual behaviour and learning and of group dynamics that both constrain individuals and are constrained by them. Cf. Vickers, note 77 and Hooker (2002) on culture. It is from this dynamic social reality that the epistemic character of science emerges. The idea of collective dynamics that is generated by individuals (‘upward causation’) but can also constrain those individuals (‘downward causation’) is commonplace in dynamical analyses of complex non-linear systems throughout science: consider, e.g., the behaviours of molecules in the formation of ice crystals as water freezes. One promising version of it here is that epistemic structure emerges quite literally from strategic cooperative and competitive interactions among scientists, much as price emerges from strategic interactions among buyers and sellers in a market (Shi 2001). Conversely, the epistemic authority and power of science are grounded in the social literalness of these processes. Thus the design of real epistemic institutions (Hooker 1995) becomes key to the success of science. (We might liken epistemic institutions to the external nervous sub-system that organises the body of science, e.g. a computer behaves like a simple ganglion cluster in a distributed nervous system, Hooker 1987a, pp. 220–226, 309–315, Hooker 1995, pp. 96–112.) While it might be claimed that in the long run, when science is completed, individual and collective goals will coincide and institutional design become irrelevant, this ‘in the long run’ can never actually be applied in this universe. 
This is because it would require impossible individual capacities: to acquire and integrate unlimited information, check and resolve inconsistencies within it, investigate unlimited domains, and so on (recall ‘Finitude and idealisation’, Sect. 1.2.2). As always, the ultimate goal is unity, but at each stage there is ineliminable and productive diversity.

  81. For some further elaboration see Hooker (1981b, 2009, 2010).

  82. Bumble bees are simple self-directed agents, since they can learn which flowers currently offer the best nectar rewards (that is, adapt their anticipative flight behaviours) by evaluating the results of their flower searches, but they cannot learn to anticipate flowering patterns (e.g. by species, season and location), or modify or extend their operative norms, or modify their learning strategy. Cheetah cubs can do all of these things. As the cheetah gets better at differentiating the relevant factors in effective hunting (camouflage, wind direction, herd cohesion, prey distance and age, etc.) it not only becomes better at hunting, it also becomes better able to recognise sources of error in its hunting technique and hence improve it.

  83. See Christensen and Hooker (2000a, 2004) and Hooker (2009). The SDAL model has been tested against a new analysis of the early history of ape language research and shown to provide deeper illumination of its processes than does conventional analysis, indeed to lead to an improved account of error management in science (Farrell and Hooker 2007a, b, 2009). It includes ‘double-loop learning’ (Argyris 1999) and ‘expanded rationality’ (Hatchuel 2001; Simon 1982) as special cases—see Brinsmead and Hooker (2010).

  84. Cf. the criterion of progress in Hooker (1995, chap. 4). How this might apply in ethics is hinted at in Hoffmaster and Hooker (2009)—cf. Hooker (1994d) and the suggested parallels between science and ethics in Sects. 2.3 and 2.4 below—but awaits further development.

  85. Abstraction of such regulatory mastery can be literal. For instance, briefly and roughly, rotations at the shoulder, elbow and wrist provide a three-parameter phase space of human arm movements; failures in early grasping stimulate control operations over the coordination and correction of these rotation operations; control is slowly refined until an effective operational phase space emerges equipped with higher order control parameters, such as that characterising repetitive movements as eccentricities of ellipses in phase space. [See, e.g., Churchland (1989, pp. 107–108) on the morphing of walking into running as eccentricity shift, and Kelso (1995) and Thelen and Smith (1994) for further examples.] Higher order control allows the elegance of sporting and dancing performances, which could not be achieved through segment-by-segment incremental movement control. Here the norm of a perfect repetition could be set as constancy of the eccentricity parameter. This would provide one partial proxy for an ideal of perfectly controlled arm movement. For Piaget the ideal of complete truth arises as the reflective abstraction of successively broader operational closures of this kind, thence to its completing generalisation over all such practical actions, and finally to that of the perfect closure that nothing (in this cosmos) can dis-equilibrate. This is a highly condensed, abstract summary; for an exposition of Piaget from this perspective see Hooker (1995), chap. 5, sect. II.7, cf. further Hooker (1994a), but none of this is a substitute for familiarity with Piaget’s own analysis of development (even with ‘stages’ omitted, cf. Hooker 1994a).
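
The eccentricity example admits a small numerical illustration (all names and parameter values here are mine; a simple sinusoidal joint oscillation stands in for real arm dynamics): a repetitive movement traces an ellipse in (angle, angular-velocity) phase space, and its eccentricity is a single higher-order parameter characterising the whole movement pattern.

```python
# Toy sketch: a repetitive joint oscillation traced in phase space, with
# the orbit's eccentricity serving as one higher-order control parameter.
import math

def phase_orbit(amplitude, frequency, steps=1000):
    """Sample (angle, angular velocity) points for sinusoidal joint motion."""
    points = []
    for i in range(steps):
        t = 2 * math.pi * i / steps
        theta = amplitude * math.sin(frequency * t)              # joint angle
        omega = amplitude * frequency * math.cos(frequency * t)  # its velocity
        points.append((theta, omega))
    return points

def eccentricity(points):
    """Eccentricity of the axis-aligned phase-space ellipse through the orbit."""
    a = max(abs(theta) for theta, _ in points)  # semi-axis along angle
    b = max(abs(omega) for _, omega in points)  # semi-axis along velocity
    major, minor = max(a, b), min(a, b)
    return math.sqrt(1 - (minor / major) ** 2)

# Changing movement frequency reshapes the whole orbit: one scalar norm
# (constancy of eccentricity) can then stand proxy for a perfect repetition,
# rather than controlling the movement segment by segment.
slow = eccentricity(phase_orbit(amplitude=1.0, frequency=0.5))
fast = eccentricity(phase_orbit(amplitude=1.0, frequency=3.0))
assert abs(slow - fast) > 1e-3
```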

  86. Were it to turn out, e.g., that there was no theoretical basis for this kind of self-interaction within self-reproducing, self-organising systems, or even simply not within those of reasoned intelligence, and that the contrary appearance derived from scaffolding of behaviour by reaction to organised features of the contexts involved, as happens for ant nest activity, then this would make a strong argument for restricting reason to simple economy or generalised efficiency and removing reason itself as a regulatory ideal in the theorising of intelligence.

  87. The same dichotomy lies behind the objection that empirical experience supplies only facts, not information about norms, and the reply is to reject it on the following grounds. (A) From the principle that ‘ought’ implies ‘can’ it follows that ‘cannot’ implies ‘not ought’, and we are thus not obligated to meet the demands of any norms that go beyond our capacities. Whence an empirical study of our capabilities can lead us to challenge previously accepted norms, as in Sect. 1.2. (B) It is not true that the empirical world is devoid of norms, for there is a place in the empirical world where norms naturally emerge, namely in the autonomy constitution of all living things (Sect. 2.1). (C) We have a basic experience of choice and each choice must express the operation of one or more norms, often accompanied by a basic experience of the norm as well, e.g. of hunger and thirst, fear and love. All told, and contra Hume, it is most plausible to accept that our empirical experience is of both the normative and the factual, and indeed that separating these for regulatory purposes is itself a sophisticated adult achievement, always pro tem, rather than an a priori condition. Hume argued for the dichotomy on the basis that we cannot deduce ‘ought’ from ‘is’ (Treatise, Book III, Part I, §1, p. 469), but while this is a trivial truth about the form of deduction (no validly deduced conclusion can have terms in it that do not appear in the premises), that truth says nothing about the substantive issue of whether experience is relevant to the choice of normative premises for a deduction, i.e. to knowledge of norms. The history of science’s unfolding norms says it is.

  88. Thus it is clear that this is not a merely instrumental account of reason, such as that rationality is simply being efficient. One way to proceed in an instrumentalist direction would be to look for reductions of the regulatory ideals to more clearly naturalist features. We might, e.g., consider the ideal of truth as emerging out of the requirements for efficient communication, with our desire for this latter driven by its immediate material benefits. Similarly, we could understand the operation of the ideal of goodness as deriving from the requirements for a negotiated stable society, perhaps along Hobbesian lines. And these approaches do offer some fundamental insights into the genesis and function of the regulatory ideals. However, no one context or function is rich enough to account for their regulatory roles, in particular for the open-ended, self-correcting character of intelligence—cf. those for truth, Hooker (1995, sect. 6.I.2). So it would appear that no convincing reduction can be achieved. Moreover, these theories in turn both tacitly embed and also presuppose norms and ideals for their critical development, so there is a question of reflexive consistency here. Nonetheless, Stich has argued, e.g., that truth is not intrinsically valuable and should not play a fundamental role in a theory of reason—see Stich (1989). Similar arguments would apply to the other regulatory ideals. The basic response is already available here: without this internal structure to reason we cannot adequately explain our actual improvement capacities. The theory of reason is, qua theory, itself accepted under the explanatory ideal and proxies. See earlier Hooker (1995, sect. 6.II.6). For the same reason we should not leap, as perhaps Laudan (1990) does, from the epistemic inaccessibility of an ideal to its irrelevance, because its relevance is determined by its structuring of our efforts at improvement rather than by our actually achieving the ideal. 
Also, the construction of ideals provides a non-ad hoc naturalist account of their felt involuntary character: they present a highest-order regulatory achievement, a compelling status for creatures whose continued existence depends on sufficiently coherent interactive regulation. Their necessity is akin to that of energy in a coherent dynamics.

  89. Aspects of the following discussion of ideals, and of the roles of judgement more generally, have greatly benefited from contributions by Hal Brown, both through his published work [especially Brown (1988, 2006), which develops an account of the nature of reason complementary to that in Hooker (1995)] and via unpublished joint work by Brown and Hooker. Though the present text develops its own distinctive position, his invaluable contributions are respectfully acknowledged.

  90. Here the fundamental mistakes of the formalist position are, first, to assume that formally characterised rules are paradigms for these tools, whereas they are at best partially useful, though severe, simplifications of them, and second, to assume in particular that logic can serve to specify both an ideal knowledge destination and an ideal rational process, when it can at best serve only very partial roles in either. It has been the burden of Part A to uncover the nature and origins of these mistakes.

  91. This formulation builds two dimensions into the ideal: the completeness of the component tools available (e.g., the completed development of statistical inference tools) and their unrestrained use. The ‘all’ here is intended to include not only the observational data but also the presuppositions of the observation regime, and so on, plus the decisions to terminate each of these inquiries and to commence new inquiries, and so on.

  92. This is an important aspect of the unities of rational science (and rational ethics), but does not remove the incompatibilities in practice among pursuit of differing proxies, for the same or different ideals (see below). Learning simultaneously both to solve a problem and to become better at solving problems of that class is a mark of mature, open-ended intelligence, naturally expressed in autonomous organisms (Sect. 2.1) by self-directed anticipative learning (SDAL) and permitting the solution of open (initially ill-specified) problems, e.g. how to conduct scientific research in a radically new domain—see note 83 and text.

  93. Of course, they can act as an outer constraint only up to logical coherence; the Gödel etc. results still render the original analytic standard incoherent as a constraint, though a modified version can be constructed. The requirement that (degenerately) idealised principles be satisfied as limiting constraints is a meta-theoretical requirement that, from a naturalist position, will be equally as fallible and open to revision as any other principle, not imposed a priori.

  94. This conception of the relation of idealised rationality theory to finite agency rationality theory is not the one which Cherniak implicitly offers. (He does not discuss the issue explicitly.) Cherniak’s response to his own well-argued consequences of finitude is to weaken the idealised conditions, and in a manner which leaves their formal character essentially the same. His formal recipe is simply to replace ‘all’ by ‘some’. This essentially represents traditional analytic rationality theory as a simplifying idealisation: finite agency rationality is idealised rationality corrected for finite costs and capacities, and ‘some’ goes over smoothly into ‘all’ as capacities are enlarged and costs reduced. (This is not a completely fair representation of Cherniak, since e.g. psychological structure and enquiry selection figure in his analysis, dropping out in the idealised limit; but it does capture Cherniak’s formalist tendencies: consider e.g. his favouring of deduction despite the enquiry selection considerations he himself introduces.) This conception of rationality theory, I have argued, is inadequate. In Cherniak’s ‘some’ version of the minimal normative rationality conditions the fundamental features of context-dependence, psychological organisation-dependence, risk-taking heuristics and non-formal judgement remain effectively suppressed.

  95. And given the continuing role of these derivative ideals in the new account of rationality (Sect. 1.2), we can see how they contribute specific framework structure. The idealised constraints on General Relativity theory [GR] deriving from Newtonian theory, e.g., provide a framework of space–time categories and dynamical relations which structures, without determining, the categories and relations appearing in GR, the less degenerate theory. Similarly, in their role as limiting constraints on less degenerate theories of epistemology and rationality the idealised constraints deriving from formalist analytic theory help to provide a framework within which to categorise and measure costs and benefits of alternative procedures and the levels of risk that are run, and to define the directions and measures of improvement which are possible. The precise details will depend on the specifics of the domain under consideration and, as with their counterparts in science, their relation to non-degenerate rationality theory will be complex, much more complex than the traditional position envisages. Similarly, there is much more to a theory of rationality than can appear here, e.g. a consideration of the nature and limits of reason's tools and of their proper inter-relations. But pursuing all that complexity here would take us too far afield.

  96. This account shows how to respond to a common objection from those who adopt the idealised analytic standard, namely, that rationality theory is concerned with the normative assessment of rational belief systems, with the constraints they should satisfy, not with how reasonably they are arrived at. But, we have just seen, making this objection is just to revert to employing the constituents of the knowledge ideal, not of a rationality ideal. It is certainly rational (at least up to logical coherence: Gödel etc.) to want to aim at developing a set of beliefs that satisfy the analytic rationality standard, but this in itself says nothing about how to go about doing so in practice, nor about any ideal way of pursuing it. To nonetheless insist that it does pertain to rationality confuses how idealisations function. Consider a physicist who says “Let's not confuse the criteria for something's being an idealised gas with the practical question of under what conditions real systems behave as idealised gases or through what processes a real system might come to behave increasingly like an idealised gas; the idealised gas criteria are determined by mathematical formalism and expressed in the idealised gas law independently of any other considerations while the latter practical questions are determined by a series of complex physical processes that are to be explained in other terms. These are two separate issues entirely.” But this makes no sense; to the contrary, it is only the ability to help us understand real gas processes that makes the ideal gas behaviour relevant at all. While ideal gases can be studied as mathematical objects independently of their explanatory role, in the absence of the latter they have no scientific standing. Just so with idealised rationality theory: complete consistency and the like can be studied as mathematical objects independently of their explanatory role, but in the absence of the latter they have no cognitive standing.
Only their pretense to a priori status seduces us into thinking otherwise.
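The gas analogy can be put in numbers. The sketch below is my own illustration, not from the paper; it uses the standard tabulated van der Waals constants for CO2 to show the sense in which the idealisation earns its standing: real (van der Waals) pressure converges on the ideal gas law in the dilute limit and departs from it as the gas is compressed.

```python
R = 8.314  # molar gas constant, J/(mol*K)

def p_ideal(n, V, T):
    """Ideal gas law: P V = n R T."""
    return n * R * T / V

def p_vdw(n, V, T, a, b):
    """Van der Waals equation: (P + a n^2/V^2)(V - n b) = n R T."""
    return n * R * T / (V - n * b) - a * n * n / (V * V)

# Standard van der Waals constants for CO2, in SI units
a, b = 0.3640, 4.267e-5
n, T = 1.0, 300.0

def rel_gap(V):
    """Relative disagreement between ideal and van der Waals pressure."""
    return abs(p_ideal(n, V, T) - p_vdw(n, V, T, a, b)) / p_ideal(n, V, T)

dilute = rel_gap(1.0)   # 1 mol in 1 m^3: tiny disagreement
dense = rel_gap(1e-3)   # 1 mol in 1 L: disagreement of order 10%
```

The ideal gas criteria are indeed fixed by the formalism alone, but it is this limiting agreement with real gas behaviour that gives the idealisation its scientific standing, which is the point being pressed against the analogous move in rationality theory.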

  97. Perhaps this is because proxies of the above sorts are so very theory-dependent, which means they are cognitive-context dependent. What counts as total relevant evidence, e.g., will be a function not only of the methods used but also of what our deepest theories say about the connectedness of the world. (Compare in this regard the very different connectednesses provided by classical and quantum mechanics, or by classical Darwinian and non-linear dynamic systems biology.) Similarly, methodological power dictates choosing different statistical methods, from among the competing varieties, according to the circumstances.

References

  • Allen PM (2010) Complexity and management. In: Hooker CA (ed) Philosophy of complex systems. Handbook of the philosophy of science, vol 10. Elsevier, Amsterdam

  • Argyris C (1999) On organisational learning. Blackwell Business, Malden

  • Barandiaran X, Moreno A (2006) On what makes certain dynamical systems cognitive: a minimally cognitive organization program. Adapt Behav 14(2):171–185

  • Batterman RW (2002) The devil in the details: asymptotic reasoning in explanation, reduction and emergence. MIT, Cambridge

  • Berry MV (1994) Asymptotics, singularities and the reduction of theories. In: Prawitz D, Skyrms B, Westerstahl D (eds) Logic and philosophy of science in Uppsala. Ninth international congress on logic, methodology and philosophy of science. Kluwer, Dordrecht, pp 597–607

  • Bickhard MH (1993) Representational content in humans and machines. J Exp Theor Artif Intell 5:285–333

  • Bickhard MH (2000) Motivation and emotion: an interactive process model. In: Ellis RD, Newton N (eds) The caldron of consciousness. J. Benjamins, Amsterdam, pp 161–178

  • Bickhard MH (2002) Critical principles: on the negative side of rationality. New Ideas Psychol 20:1–34

  • Bickhard MH (2005a) Interactivism: a manifesto. Available at http://www.lehigh.edu/~mhb0/

  • Bickhard MH (2005b) The whole person: toward a naturalism of persons. Available at http://www.lehigh.edu/~mhb0/

  • Bickhard MH (2006) Developmental normativity and normative development. In: Smith L, Voneche J (eds) Norms in human development. Cambridge University Press, Cambridge, pp 57–76

  • Bickhard MH, Campbell RL (1996) Topologies of learning and development. New Ideas Psychol 14(2):111–156

  • Bickhard MH, Richie DM (1983) On the nature of representation: a case study of James Gibson’s theory of perception. Praeger, New York

  • Bickhard MH, Terveen L (1995) Foundational issues in artificial intelligence and cognitive science—impasse and solution. Elsevier, Amsterdam

  • Bjerring AK, Hooker CA (1979) Process and progress: the nature of systematic inquiry. In: Barmark J (ed) Perspectives in metascience. Berlings, Lund

  • Brinsmead TS, Hooker CA (2007) Adaptive backcasting: a method of possibility and design. Cooperative Research Centre for Coal in Sustainable Development, Brisbane

  • Brinsmead TS, Hooker CA (2010) Complex systems dynamics and sustainability: conception, method and policy. In: Hooker CA (ed) Philosophy of complex systems. Handbook of the philosophy of science, vol 10. Elsevier, Amsterdam

  • Brown HI (1979) Perception, theory and commitment: the new philosophy of science. University of Chicago Press, Chicago

  • Brown HI (1987) Observation and objectivity. Oxford University Press, Oxford

  • Brown HI (1988) Rationality. Routledge, London

  • Brown HI (2006) More about judgment and reason. Metaphilosophy 37:646–651

  • Brown HI (2007) Conceptual systems. Routledge, London

  • Campbell NR (1957) Foundations of science. Dover, New York

  • Carter KC, Carter BR (2005) Childbed fever. A scientific biography of Ignaz Semmelweis. Transaction Publishers, Piscataway

  • Cherniak C (1986) Minimal rationality. Bradford/MIT, Cambridge

  • Christensen WD (2004) Self-directedness, integration and higher cognition. Lang Sci 26(6):661–692 (Special Issue on Distributed Cognition and Integrationist Linguistics)

  • Christensen WD, Hooker CA (1998) From cell to scientist: toward an organisational theory of life and mind. In: Bigelow J (ed) Our cultural heritage. Australian Academy of Humanities, University House, Canberra, pp 275–326

  • Christensen WD, Hooker CA (1999) The organization of knowledge: beyond Campbell’s evolutionary epistemology. Philos Sci 66:S237–S249 (Proceedings, PSA 1998)

  • Christensen WD, Hooker CA (2000a) Organised interactive construction: the nature of autonomy and the emergence of intelligence. In: Etxebberia A, Moreno A, Umerez J (eds) The contribution of artificial life and the sciences of complexity to the understanding of autonomous systems. Communication and Cognition 17(3–4):133–158 (special edition)

  • Christensen WD, Hooker CA (2000b) An interactivist-constructivist approach to intelligence: self-directed anticipative learning. Philos Psychol 13(1):5–45

  • Christensen WD, Hooker CA (2002) Self-directed agents. In: MacIntosh JS (ed) Contemporary naturalist theories of evolution and intentionality. Can J Philos (special supplementary volume):19–52

  • Christensen WD, Hooker CA (2004) Representation and the meaning of life. In: Clapin H, Staines P, Slezak P (eds) Representation in mind: new approaches to mental representation. Elsevier, Sydney, pp 41–69

  • Churchland PM (1989) A neurocomputational perspective. Bradford/MIT, Cambridge

  • Csikszentmihalyi M (1988) Society, culture, and person: a systems view of creativity. In: Sternberg RJ (ed) The nature of creativity: contemporary psychological perspectives. Cambridge University Press, New York, pp 325–339

  • Csikszentmihalyi M (1996) Creativity: flow and the psychology of discovery and invention. Harper Perennial, New York

  • Csikszentmihalyi M (1999) Implications of a systems perspective for the study of creativity. In: Sternberg RJ (ed) Handbook of creativity. Cambridge University Press, Cambridge, pp 313–335

  • Dreyfus HL (1991) Being-in-the-world: a commentary on Heidegger’s being and time, division I. MIT, Cambridge

  • Dreyfus HL, Dreyfus SE (1986) Mind over machine. Blackwell, Oxford

  • Duhem P (1962) The aim and structure of physical theory. Atheneum, New York

  • Farrell R, Hooker CA (2007a) Applying self-directed anticipative learning to science I: agency and the interactive exploration of possibility space in ape language research. Perspect Sci 15(1):86–123

  • Farrell R, Hooker CA (2007b) Applying self-directed anticipative learning to science II: learning how to learn across ‘revolutions’. Perspect Sci 15(2):220–253

  • Farrell R, Hooker CA (2009) Error, error-statistics and self-directed anticipative learning. Found Sci 14(4):249–271

  • Feyerabend PK (1961) Knowledge without foundations. Oberlin College (mimeographed)

  • Feyerabend PK (1974) Popper’s objective knowledge. Inquiry 17:475–507

  • Feyerabend PK (1978) Against method. Verso, London

  • Galison P (1987) How experiments end. University of Chicago Press, Chicago

  • Giere R (1988) Explaining science: a cognitive approach. University of Chicago Press, Chicago

  • Glymour C (1980) Theory and evidence. Princeton University Press, Princeton

  • Green D, Leishman T (2010) Computing and complexity—networks, nature and virtual worlds. In: Hooker CA (ed) Philosophy of complex systems. Handbook of the philosophy of science, vol 10. Elsevier, Amsterdam

  • Harnad S (1990) The symbol grounding problem. Physica D 42:335–346

  • Hatchuel A (2001) Toward design theory and expandable rationality: the unfinished programme of Herbert Simon. J Manage Govern 5(3–4):260–271

  • Heelan P (1988) Space-perception and the philosophy of science. University of California Press, Berkeley

  • Heelan P (1998) Scope of hermeneutics in the philosophy of natural science. Stud Hist Philos Sci 29:273–298

  • Hoffmaster B, Hooker CA (2009) How ethics confronts experience. Bioethics 23(4):214–225

  • Hofstadter D, The Fluid Analogies Research Group (1995) Fluid concepts and creative analogies: computer models of the fundamental mechanisms of thought. Basic Books, New York

  • Holland JH (1992) Adaptation in natural and artificial systems. Bradford/MIT, Cambridge

  • Hooker CA (1973) Contemporary research in the foundations and philosophy of quantum theory. D. Reidel Publishing Co, Dordrecht

  • Hooker CA (1975/1979) The logico-algebraic approach to quantum mechanics, 2 volumes—Historical evolution, vol I, 1975; Contemporary consolidation, vol II, 1979. D. Reidel Publishing Co., Dordrecht

  • Hooker CA (1975) Global theories. Philos Sci 42:152–179 (Reprinted in Hooker 1981a)

  • Hooker CA (1976) Methodology and systematic philosophy. In: Butts RE, Hintikka J (eds) Proceedings, 5th international congress on logic, methodology and philosophy of science, vol III. Reidel, Dordrecht, pp 3–23 (Reprinted in Hooker 1987a)

  • Hooker CA (1981a) Formalist rationality: the limitations of Popper’s theory of reason. Metaphilosophy 12:247–266

  • Hooker CA (1981b) Towards a general theory of reduction. Dialogue XX, part I: historical framework, pp 38–59; part II: identity and reduction, pp 201–236; part III: cross-categorial reduction, pp 496–529

  • Hooker CA (1987a) A realistic theory of science. State University of New York Press, Albany

  • Hooker CA (1987b) Evolutionary naturalist realism: circa 1985 (in Hooker 1987a), pp 255–357

  • Hooker CA (1989) From logical formalism to control system. In: Fine A, Forbes M (eds) PSA 1988. Philosophy of Science Association, East Lansing

  • Hooker CA (1991a) Physical intelligibility, projection, objectivity and completeness: the divergent ideals of Bohr and Einstein. Br J Philos Sci 42:491–511

  • Hooker CA (1991b) Between formalism and anarchism: a reasonable middle way. In: Munevar G (ed) Beyond reason: essays on the philosophy of Paul Feyerabend. Kluwer, Boston, pp 41–107

  • Hooker CA (1994a) Regulatory constructivism: on the relation between evolutionary epistemology and Piaget’s genetic epistemology. Biol Philos 9:197–244

  • Hooker CA (1994b) Idealisation, naturalism, and rationality: some lessons from minimal rationality. Synthese 99:181–231

  • Hooker CA (1994c) From phenomena to metaphysics. In: Prawitz D, Westerstahl D (eds) Logic and philosophy of science in Uppsala. Kluwer, Dordrecht, pp 159–184

  • Hooker CA (1994d) Value and system, notes toward the definition of agri-culture. J Agric Environ Ethics 7:1–84 (special supplement)

  • Hooker CA (1995) Reason, regulation and realism: toward a naturalistic, regulatory systems theory of reason. State University of New York Press, Albany

  • Hooker CA (2000) Unity of science. In: Newton-Smith WH (ed) A companion to the philosophy of science. Blackwell, Oxford, pp 540–549

  • Hooker CA (2002) An integrating scaffold: toward an autonomy-theoretic modelling of cultural change. In: Wheeler M, Ziman J (eds) The evolution of cultural entities. British Academy of Science, Oxford, pp 67–86

  • Hooker CA (2004) Asymptotics, reduction and emergence. Br J Philos Sci 55:435–479

  • Hooker CA (2009) Interaction and bio-cognitive order. Synthese 166(3):513–546 (special edition on interactivism, M. Bickhard, ed.)

  • Hooker CA (2010) Introduction to philosophy of complex systems. Part B: scientific paradigm + philosophy of science for complex systems: a first presentation c. 2009. In: Hooker CA (ed) Philosophy of complex systems. Handbook of the philosophy of science, vol 10. Elsevier, Amsterdam

  • Hooker CA, Penfold HB, Evans RJ (1992a) Control, connectionism and cognition: toward a new regulatory paradigm. Br J Philos Sci 43:517–536

  • Hooker CA, Penfold HB, Evans RJ (1992b) Cognition under a new control paradigm. Topoi 11:71–88

  • Howson C, Urbach P (1989) Scientific reasoning: the Bayesian approach. Open Court, LaSalle

  • Indurkhya B (1992) Metaphor and cognition: an interactionist approach. Kluwer, Dordrecht

  • Kelso JAS (1995) Dynamic patterns: the self-organization of brain and behavior. MIT, Cambridge

  • Kuhn TS (1962) The structure of scientific revolutions. University of Chicago Press, Chicago

  • Kuhn TS (1977) The essential tension: selected studies in scientific tradition and change. University of Chicago Press, Chicago

  • Lakatos I (1970) Falsification and the methodology of scientific research programmes. In: Lakatos I, Musgrave A (eds) Criticism and the growth of knowledge. Cambridge University Press, Cambridge

  • Lakatos I (1976) Proofs and refutations. Cambridge University Press, Cambridge

  • Laudan L (1977) Progress and its problems. University of California Press, Berkeley

  • Laudan L (1990) Science and relativism. University of Chicago Press, Chicago

  • Legg C (2005) The meaning of meaning-fallibilism. Axiomathes 15(2):293–318

  • McKeon R (ed) (1941) The basic works of Aristotle. Random House, New York

  • Mill JS (2002) A system of logic. University Press of the Pacific, Honolulu (First published 1843; for C19 editions see on-line books at http://en.wikipedia.org/wiki/A_System_of_Logic)

  • Newton-Smith WH (1981) The rationality of science. Routledge and Kegan Paul, London

  • Popper KR (1972) Conjectures and refutations. Routledge and Kegan Paul, London

  • Popper KR (1979) Objective knowledge: an evolutionary approach (rev edn). Oxford University Press, Oxford

  • Port R, van Gelder T (eds) (1995) Mind as motion: explorations in the dynamics of cognition. MIT, Cambridge

  • Putnam H (1968) Is logic empirical? In: Cohen RS, Wartofsky MW (eds) Boston studies in the philosophy of science, vol 5. D. Reidel, Dordrecht, pp 216–241 [Reprinted as “The logic of quantum mechanics”. In: Putnam H (1975) Mathematics, matter and method. Philosophical papers, vol 1. Cambridge University Press, Cambridge, pp 174–197]

  • Putnam H (1982) Why reason can’t be naturalised. Synthese 52:3–23

  • Quine WVO (1969) Epistemology naturalised. In: Quine WVO (ed) Ontological relativity and other essays. Columbia University Press, New York

  • Rapoport A (1960) Fights, games and debates. University of Michigan Press, Ann Arbor (5th edn 1997)

  • Rescher N (1977) Methodological pragmatism. Blackwell, London

  • Ruse M (1986) Taking Darwin seriously. Blackwell, Oxford

  • Schon D (1967) Invention and the evolution of ideas. Tavistock, London

  • Shapere D (1984) Reason and the search for knowledge: investigations in the philosophy of science. Reidel, Dordrecht

  • Shi Y (2001) The economics of scientific knowledge: a rational choice institutionalist theory of science. Edward Elgar, Cheltenham

  • Simon HA (1947) Administrative behavior: a study of decision-making processes in administrative organizations. Free Press, New York

  • Simon HA (1982) Models of bounded rationality, vol 3. MIT, Cambridge

  • Simon HA (1996) The sciences of the artificial, 3rd edn. MIT, Cambridge

  • Skewes J, Hooker CA (2009) Bio-agency and the problem of action. Biol Philos 24(3):283–300

  • Stich S (1989) The fragmentation of reason. Bradford/MIT, Cambridge

  • Thagard P (1989) Explanatory coherence. Behav Brain Sci 12:435–502

  • Thagard P (1998) Computational philosophy of science. MIT, Cambridge

  • Thagard P, Verbeurgt K (1998) Coherence as constraint satisfaction. Cogn Sci 22:1–24

  • Thelen E, Smith LB (1994) A dynamical systems approach to the development of cognition and action. Bradford/MIT, Cambridge

  • Vickers G (1968) Value systems and social process. Penguin, London

  • Vickers G (1983) Human systems are different. Harper and Row, London

  • Vuyk R (1981) Piaget’s genetic epistemology 1965–1980, vol II. Academic Press, New York

Acknowledgments

Comments on an earlier draft by Mark Bickhard and by Robert Farrell improved the paper at several points and they are thanked for their particular contributions to its rational development.

Author information

Correspondence to Cliff Hooker.

Additional information

This essay will use ‘reason’ and ‘rationality’ as synonyms and their derivatives likewise, e.g. ‘reason through’ and ‘rationalise’.

About this article

Cite this article

Hooker, C. Rationality as Effective Organisation of Interaction and Its Naturalist Framework. Axiomathes 21, 99–172 (2011). https://doi.org/10.1007/s10516-010-9131-y
