Developmental Systems Theory (DST) emphasises the importance of non-genetic factors in development and their relevance to evolution. A common, deflationary reaction is that it has long been appreciated that non-genetic factors are causally indispensable. This paper argues that DST can be reformulated to make a more substantive claim: that the special role played by genes is also played by some (but not all) non-genetic resources. That special role is to transmit inherited representations, in the sense of Shea (2007: Biology and Philosophy, 22, 313-331). Formulating DST as the claim that there are non-genetic inherited representations turns it into a striking, empirically-testable hypothesis, driving the sort of investigations that are only now beginning to appear in the scientific literature. DST’s characteristic rejection of a gene vs. environment dichotomy is preserved, but without dissolving all potentially explanatory distinctions into an interactionist causal soup, as some have alleged.
Abstract Galileo Then and Now (Draft of paper to be discussed at the Conference, HPD1, to be held at the Center for Philosophy of Science, University of Pittsburgh, 11-14 October 2007) William R. Shea, University of Padua The aim of this paper is to stimulate discussion on how shifts in philosophical fashion and societal moods tell us not only what to read but how to go about it, and how history and philosophy of science can jointly deepen our grasp of the issues at stake. The first part highlights some of the things that have occurred in the field of Galileo studies between the monumental edition of Galileo's Opere in twenty volumes, edited by Antonio Favaro between 1890 and 1909, and the new enlarged edition that will be published from 2009 onwards by a team of scholars working under Paolo Galluzzi. Part One. From Favaro to Galluzzi "Publish or perish" is an injunction that resonated as clearly in the ears of assistant professors at the end of the 19th century as it does in the first decade of the 21st. But publishing can also mean perishing when what is being edited is the work of an eminent scientist of the past. It simply does not do to offer material that is not what readers expect even if it was written by someone as famous as Galileo, and well authenticated sources were sometimes disregarded when they appeared to be of no interest. It is largely for this reason that a new national edition of Galileo's works is required. Of course, over the last hundred years, a number of letters from and to Galileo as well as a few laudatory or damning comments about his personality or his work have been uncovered, but this would not have been enough to drum up financial and scholarly support for a major editorial project. The interesting material is what Favaro had left out. Before mentioning what this material is, allow me a disclaimer.
I'm not focusing on Favaro because he is a singularity, but because he illustrates how a conscientious historian can ride roughshod over evidence because of a philosophical commitment that he is only vaguely aware of, in this case naïve positivism. So what did Favaro leave out? The answer is large chunks of three collections of manuscript notes that are bound in some of the 347 volumes of the Galilean material in the National Library in Florence. The first of these collections deals with logical treatises and related essays on Aristotelian philosophy, the second with Galileo's laboratory notes on the experiments that he carried out on the pendulum and inclined planes; and the third with astrological computations. Favaro rejected the first collection because the notes were "pre-Galilean" and hence could only have been trite scholastic exercises that "poor" young Galileo had to undergo in high school. He neglected the second because he had trouble making sense of them. The third, astrological collection, he set aside with more trepidation since Galileo cast horoscopes for himself (at least twice), his children and his friends. But the fact that they were also, epistemologically speaking, "pre-Galilean", was enough to cast them into the outer darkness (in this case a dimly lit corridor of the National Library in Florence). The Aristotelian notes that Favaro had neglected were made available by William Wallace, who showed that Galileo culled long passages from professors at the Roman College. Galileo attacked several of Aristotle's ideas, but he never queried Aristotle's scientific realism, namely the view that there is a uniquely true physical theory, discovered by human powers of reason and observation, and that alternative theories are consequently false. Wallace made this the basis of his claim that Galileo created, in the heavens above and here on earth, a new science of motion by following the Aristotelian canons laid down in the Posterior Analytics.
On this view, Galileo used Aristotle's logic to subvert Aristotelian physics. It is interesting to contrast Wallace's thesis with that of philosophers of a different allegiance, who offer a reconstruction of Galileo's methodology along lines that are much more modern and in which the epistemological core is no longer Aristotelian logic, but common sense instrumentalism. This is not to deny that experiments sometimes speak with a forked tongue, but to stress that methodological rules have also been known to be no more than clashing cymbals. Recent writers have also stressed that Galileo aimed his arguments at a specific audience, and that we must take cognizance of the values and whims of the society in which he operated. The sociology of science can help us understand the background against which Galileo's arguments were assessed and the reasons why he favored some rhetorical strategies over other ones. Mario Biagioli's Galileo Courtier sheds light on the Tuscan court and the Roman famiglia (as the popes styled their entourage), where Galileo found many of his readers and most of his critics. But Galileo was much more than a courtier, and I shall argue that we should use our enhanced knowledge of Galileo's education, his language, his style, and his emoluments to understand his science, not to supplant it. History and philosophy of science can combine their insights to achieve a more critical and balanced view of what actually occurred and why.
Consciousness in experimental subjects is typically inferred from reports and other forms of voluntary behaviour. A wealth of everyday experience confirms that healthy subjects do not ordinarily behave in these ways unless they are conscious. Investigation of consciousness in vegetative state patients has been based on the search for neural evidence that such broad functional capacities are preserved in some vegetative state patients. We call this the standard approach. To date, the results of the standard approach have suggested that some vegetative state patients might indeed be conscious, although they fall short of being demonstrative. The fact that some vegetative state patients show evidence of consciousness according to the standard approach is remarkable, for the standard approach to consciousness is rather conservative, and leaves open the pressing question of how to ascertain whether patients who fail such tests are conscious or not. We argue for a cluster-based ‘natural kind’ methodology that is adequate to that task, both as a replacement for the approach that currently informs research into the presence or absence of consciousness in vegetative state patients and as a methodology for the science of consciousness more generally. Contents: 1. Introduction; 2. The Vegetative State; 3. The Standard Approach; 4. The Natural Kind Methodology; 5. Is Consciousness a Special Case? (5.1 Is consciousness a natural kind? 5.2 A special obstacle?); 6. Conclusion.
This paper advocates explicitness about the type of entity to be considered as content-bearing in connectionist systems; it makes a positive proposal about how vehicles of content should be individuated; and it deploys that proposal to argue in favour of representation in connectionist systems. The proposal is that the vehicles of content in some connectionist systems are clusters in the state space of a hidden layer. Attributing content to such vehicles is required to vindicate the standard explanation for some classificatory networks’ ability to generalise to novel samples their correct classification of the samples on which they were trained.
One of the great outstanding problems in materialist philosophy of mind is the problem of how there can be space in the material world for intentionality. In the 1980s Ruth Millikan formulated a detailed theory according to which representations are physical particulars and their contents are complex relational properties of those particulars which can be specified in terms of respectable properties drawn from the natural sciences. In particular, she relied on the biological concept of the function of a trait, and the existence of historical conditions which enter into an evolutionary explanation of the operation of that trait. The present article is an introduction to this influential theory of intentionality.
Millikan’s theory of content purports to rely heavily on the existence of isomorphisms between a system of representations and the things in the world which they represent — “the mapping requirement for being intentional signs” (Millikan 2004, p. 106). This paper asks whether those isomorphisms are doing any substantive explanatory work. Millikan’s isomorphism requirement is deployed for two main purposes. First, she claims that the existence of an isomorphism is the basic representing relation, with teleology playing a subsidiary role — to account for misrepresentation (the possibility of error). Second, Millikan relies on an isomorphism requirement in order to guarantee that a system of representations displays a kind of productivity. This seemingly strong reliance on isomorphism has prompted the objection that isomorphism is too liberal to be the basic representing relation: there are isomorphisms between any system of putative representations and any set, of the same cardinality, of items putatively represented. This paper argues that all the work in fixing content is in fact done by the teleology. Deploying Millikan’s teleology-based conditions to ascribe contents will ensure that there is an isomorphism between representations and the things they represent, but the isomorphism ‘requirement’ is playing no substantive explanatory role in Millikan’s account of content determination. So an objection to her theory based on the liberality of isomorphism is misplaced. The second role for isomorphism is to account for productivity. If some kind of productivity is indeed necessary for representation, then functional isomorphism will again be too liberal a constraint to account for that feature. The paper suggests an alternative way of specifying the relation between a system of representations and that which they represent which is capable of playing an explanatory role in accounting for Millikan’s type of productivity.
In short, the liberality of isomorphism is no objection to Millikan’s teleosemantics, since the isomorphism ‘requirement’ need play no independent substantive role in Millikan’s account of representation.
Block’s well-known distinction between phenomenal consciousness and access consciousness has generated a large philosophical literature about putative conceptual connections between the two. The scientific literature about whether they come apart in any actual cases is rather smaller. Empirical evidence gathered to date has not settled the issue. Some put this down to a fundamental methodological obstacle to the empirical study of the relation between phenomenal consciousness and access consciousness. Block (2007) has drawn attention to the methodological puzzle and attempted to answer it. While the evidence Block points to is relevant and important, this paper puts forward a more systematic framework for addressing the puzzle. To give it a label, the approach is to study phenomenal consciousness as a natural kind. The approach allows consciousness studies to move beyond the initial means of identifying instances of the kind, like verbal report, and to find its underlying nature. It is well-recognised that facts about an underlying kind may allow identification of instances of the kind that do not match the initial means of identification (cp. non-liquid samples of water). This paper shows that the same method can be deployed to investigate phenomenal consciousness independently of access consciousness.
The question of whether non-human animals are conscious is of fundamental importance. There are already good reasons to think that many are, based on evolutionary continuity and other considerations. However, the hypothesis is notoriously resistant to direct empirical test. Numerous studies have shown behaviour in animals analogous to consciously-produced human behaviour. Fewer probe whether the same mechanisms are in use. One promising line of evidence about consciousness in other animals derives from experiments on metamemory. A study by Hampton (Proc Natl Acad Sci USA 98(9):5359–5362, 2001) suggests that at least one rhesus macaque can use metamemory to predict whether it would itself succeed on a delayed matching-to-sample task. Since it is not plausible that mere meta-representation requires consciousness, Hampton’s study invites an important question: what kind of metamemory is good evidence for consciousness? This paper argues that if it were found that an animal had a memory trace which allowed it to use information about a past perceptual stimulus to inform a range of different behaviours, that would indeed be good evidence that the animal was conscious. That functional characterisation can be tested by investigating whether successful performance on one metamemory task transfers to a range of new tasks. The paper goes on to argue that thinking about animal consciousness in this way helps in formulating a more precise functional characterisation of the mechanisms of conscious awareness.
Just how far can externalism go? In this exciting new book Ruth Millikan explores a radically externalist treatment of empirical concepts (Millikan 2000). For the last thirty years philosophy of mind’s ties to meaning internalism have been loosened. The theory of content has swung uncomfortably on its moorings in a fickle current, straining against opposing ties to mind and world. In this book Millikan casts conceptual content adrift from the thinker: what determines the content of a concept is not cognitively accessible. She has only the stanchion of the world to hold her theory fast. She hopes that the tide will turn, and the theory of meaning will come stably to rest downstream of this anchor. This book is a bold exploration of how that might be achieved.
The distinction between top-down and bottom-up effects is widely relied on in experimental psychology. However, there is an important problem with the way it is normally defined. Top-down effects are effects of previously-stored information on processing the current input. But on the face of it that includes the information that is implicit in the operation of any psychological process – in its dispositions to transition from some types of representational state to others. This paper suggests a way to distinguish information stored in that way from the kind of influence of prior information that psychologists are concerned to classify as a top-down effect. So drawn, the distinction is not just of service to theoretical psychology. Asking about the extent of top-down processing is one way to pose some of the questions at issue in philosophical debates about cognitive penetrability – about the extent of the influence of cognitive states on perception. The existence of a theoretically-useful perception-cognition distinction has come under pressure, but even if it has to be abandoned, some of the concerns addressed in the cognitive penetrability literature can be recaptured by asking about the extent of top-down influences on any given psychological process. That formulation is more general, since it can be applied to any psychological process, not just those that are paradigmatically perceptual.
In ‘Mental Events’ Donald Davidson argued for the anomalism of the mental on the basis of the operation of incompatible constitutive principles in the mental and physical domains. Many years later, he has suggested that externalism provides further support for the anomalism of the mental. I examine the basis for that claim. The answer to the question in the title will be a qualified ‘Yes’. That is an important result in the metaphysics of mind and an interesting consequence of externalism.
Contents: 1. Introduction; 2. Reward-Guided Decision Making; 3. Content in the Model; 4. How to Deflate a Metarepresentational Reading (Proust and Carruthers on metacognitive feelings); 5. A Deflationary Treatment of RPEs? (5.1 Dispensing with prediction errors; 5.2 What is the use of the RPE focused on? 5.3 Alternative explanations—worldly correlates; 5.4 Contrast cases); 6. Conclusion; Appendix: Temporal Difference Learning Algorithms.
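The appendix topic, temporal difference learning, can be illustrated with a minimal sketch of how a reward prediction error (RPE) drives value learning. The toy chain task, parameter values, and names (`td0_chain`, `alpha`, `gamma`) are illustrative assumptions for exposition, not drawn from the paper itself.

```python
def td0_chain(n_states=5, episodes=500, alpha=0.1, gamma=0.9):
    """TD(0) value learning on a toy left-to-right chain (illustrative).

    The agent starts in state 0 and moves one step right per time step;
    entering the terminal state yields reward 1.0, all other steps 0.
    """
    V = [0.0] * (n_states + 1)  # value estimates; index n_states is terminal
    for _ in range(episodes):
        s = 0
        while s < n_states:
            s_next = s + 1
            r = 1.0 if s_next == n_states else 0.0
            # Reward prediction error: received + discounted predicted
            # future reward, minus the current prediction for this state.
            delta = r + gamma * V[s_next] - V[s]
            V[s] += alpha * delta  # the RPE is what drives the update
            s = s_next
    return V

values = td0_chain()
```

After training, the learned values increase toward the rewarded end of the chain (approximately gamma raised to the number of steps remaining), so the RPE at each state shrinks toward zero as predictions become accurate.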
This paper sets out a view about the explanatory role of representational content and advocates one approach to naturalising content – to giving a naturalistic account of what makes an entity a representation and in virtue of what it has the content it does. It argues for pluralism about the metaphysics of content and suggests that a good strategy is to ask the content question with respect to a variety of predictively successful information processing models in experimental psychology and cognitive neuroscience; and hence that data from psychology and cognitive neuroscience should play a greater role in theorising about the nature of content. Finally, the contours of the view are illustrated by drawing out and defending a surprising consequence: that individuation of vehicles of content is partly externalist.
BBS Commentary on: Susan Carey: The Origin of Concepts. Carey’s book describes many cases where children develop new concepts with expressive power that could not be constructed out of their input. How does she side-step Fodor’s paradox of radical concept nativism? I suggest it is by rejecting the tacit assumption that psychology can only explain concept acquisition when it occurs by rational inference or other transitions that are explicable-by-content.
There is increasing evidence for epigenetically mediated transgenerational inheritance across taxa. However, the evolutionary implications of such alternative mechanisms of inheritance remain unclear. Herein, we show that epigenetic mechanisms can serve two fundamentally different functions in transgenerational inheritance: (i) selection-based effects, which carry adaptive information in virtue of selection over many generations of reliable transmission; and (ii) detection-based effects, which are a transgenerational form of adaptive phenotypic plasticity. The two functions interact differently with a third form of epigenetic information transmission, namely information about cell state transmitted for somatic cell heredity in multicellular organisms. Selection-based epigenetic information is more likely to conflict with somatic cell inheritance than is detection-based epigenetic information. Consequently, the evolutionary implications of epigenetic mechanisms are different for unicellular and multicellular organisms, which underscores the conceptual and empirical importance of distinguishing between these two different forms of transgenerational epigenetic effect.
The success of a piece of behaviour is often explained by its being caused by a true representation (and, similarly, failure by falsity). In some simple organisms, success is just survival and reproduction. Scientists explain why a piece of behaviour helped the organism to survive and reproduce by adverting to the behaviour’s having been caused by a true representation. That usage should, if possible, be vindicated by an adequate naturalistic theory of content. Teleosemantics cannot do so, when it is applied to simple representing systems (Godfrey-Smith 1996). Here it is argued that the teleosemantic approach to content should therefore be modified, not abandoned, at least for simple representing systems. The new ‘infotel-semantics’ adds an input condition to the output condition offered by teleosemantics, recognising that it is constitutive of content in a simple representing system that the tokening of a representation should correlate probabilistically with the obtaining of its specific evolutionary success condition.
The concept of innateness is used to make inferences between various better-understood properties, like developmental canalization, evolutionary adaptation, heritability, species-typicality, and so on (‘innateness-related properties’). This article uses a recently-developed account of the representational content carried by inheritance systems like the genome to explain why innateness-related properties cluster together, especially in non-human organisms. Although inferences between innateness-related properties are deductively invalid, and lead to false conclusions in many actual cases, where some aspect of a phenotypic trait develops in reliance on a genetic representation it will tend, better than chance, to have many of the innateness-related properties. The account also shows why inferences between innateness-related properties sometimes fail and argues that such inferences are especially misleading when applied to human psychology and behaviour because human psychological development is especially reliant on non-genetic inherited representations.
Commentary on Bergstrom and Rosvall, ‘The transmission sense of information’, Biology and Philosophy. In response to worries that uses of the concept of information in biology are metaphorical or insubstantial, Bergstrom and Rosvall have identified a sense in which DNA transmits information down the generations. Their ‘transmission view of information’ is founded on a claim about DNA’s teleofunction. Bergstrom and Rosvall see their transmission view of information as a rival to semantic accounts. This commentary argues that it is complementary. The idea that DNA is transmitting information down the generations only makes sense if it is carrying a message, that is to say if it has semantic content.
Recent theoretical work has identified a tightly-constrained sense in which genes carry representational content. Representational properties of the genome are founded in the transmission of DNA over phylogenetic time and its role in natural selection. However, genetic representation is not just relevant to questions of selection and evolution. This paper goes beyond existing treatments and argues for the heterodox view that information generated by a process of selection over phylogenetic time can be read in ontogenetic time, in the course of individual development. Recent results in evolutionary biology, drawn both from modelling work, and from experimental and observational data, support a role for genetic representation in explaining individual ontogeny: both genetic representations and environmental information are read by the mechanisms of development, in an individual, so as to lead to adaptive phenotypes. Furthermore, in some cases there appears to have been selection between individuals that rely to different degrees on the two sources of information. Thus, the theory of representation in inheritance systems like the genome is much more than just a coherent reconstruction of information talk in biology. Genetic representation is a property with considerable explanatory utility.
What is the evolutionary significance of the various mechanisms of imitation, emulation and social learning found in humans and other animals? This paper presents an advance in the theoretical resources for addressing that question, in the light of which standard approaches from the cultural evolution literature should be refocused. The central question is whether humans have an imitation-based inheritance system—a mechanism that has the evolutionary function of transmitting behavioural phenotypes reliably down the generations. To have the evolutionary power of an inheritance system, an imitation-based mechanism must meet a range of demanding requirements. The paper goes on to review the evidence for and against the hypothesis that there is indeed an imitation-based inheritance system in humans.
There is ongoing controversy as to whether the genome is a representing system (Sterelny K., Smith K.C. and Dickson M. 1996. Biol. Philos. 11: 377–403; Griffiths P.E. 2001. Philos. Sci. 68: 394–412). Although it is widely recognised that DNA carries information, both correlating with and coding for various outcomes, neither of these implies that the genome has semantic properties like correctness or satisfaction conditions (Godfrey-Smith P. 2002. In: Wolenski J. and Kijania-Placek K. (eds), In the Scope of Logic, Methodology, and the Philosophy of Sciences, Vol. II. Kluwer, Dordrecht, pp. 387–400). Here a modified version of teleosemantics is applied to the genome to show that it does indeed have semantic properties – there is representation in the genome. The account differs in three respects from previous attempts to apply teleosemantics to genes. It emphasises the role of the consumer of representations (in addition to their mode of production). It rejects the standard assumption that genetic representation can be used to explain the course of an organism’s development. And it identifies the explanatory role played by representational properties of the genome. A striking consequence of this account is that other inheritance systems could also be representational. Thus, a version of the parity thesis is accepted (Griffiths P.E. 2001. Philos. Sci. 68: 394–412). However, the criteria for being an inheritance system are demanding, so semantic properties are not ubiquitous.
Although predictive coding may offer a computational principle that unifies perception and action, states with different directions of fit are involved (with indicative and imperative contents, respectively). Predictive states are adjusted to fit the world in the course of perception, but in the case of action, the corresponding states act as a fixed target towards which the agent adjusts the world.
The explosion of scientific results about epigenetic and other parental effects appears bewilderingly diverse. An important distinction helps to bring order to the data. Firstly, parents can detect adaptively-relevant information and transmit it to their offspring who rely on it to set a plastic phenotype adaptively. Secondly, adaptively-relevant information may be generated by a process of selection on a reliably transmitted parental effect. The distinction is particularly valuable in revealing two quite different ways in which human cultural transmission may operate.
Humans can think about their conscious experiences using a special class of ‘phenomenal’ concepts. Psychophysical identity statements formulated using phenomenal concepts appear to be contingent. Kripke argued that this intuited contingency could not be explained away, in contrast to ordinary theoretical identities where it can. If the contingency is real, property dualism follows. Physicalists have attempted to answer this challenge by pointing to special features of phenomenal concepts that explain the intuition of contingency. However no physicalist account of their distinguishing features has proven to be satisfactory. Leading accounts rely on there being a phenomenological difference between tokening a physical-functional concept and tokening a phenomenal concept. This paper shows that existing psychological data undermine that claim. The paper goes on to suggest that the recalcitrance of the intuition of contingency may instead be explained by the limited means people typically have for applying their phenomenal concepts. Ways of testing that suggestion empirically are proposed.
We report experimental results showing that participants are more likely to attribute knowledge in familiar Gettier cases when the would-be knowers are performing actions that are negative in some way (e.g. harmful, blameworthy, norm-violating) than when they are performing positive or neutral actions. Our experiments bring together important elements from the Gettier case literature in epistemology and the Knobe effect literature in experimental philosophy and reveal new insights into folk patterns of knowledge attribution.
This article examines perceptions of tax partners and non-partner tax practitioners regarding their CPA firms’ ethical environment, as well as experiences with ethical dilemmas. Prior research emphasizes the importance of executive leadership in creating an ethical climate (e.g., Weaver et al., Acad Manage Rev 42(1):41–57, 1999; Trevino et al., Hum Relat 56(1):5–37, 2003; Schminke et al., Organ Dyn 36(2):171–186, 2007). Thus, it is important to consider whether firm partners and other employees have congruent perceptions and experiences. Based on the responses of 144 tax practitioners employed at CPA firms, the results show that tax partners rate the ethical environment of their firms as stronger than non-partner tax practitioners, particularly among those who describe a self-identified ethical dilemma. Tax partners also report having encountered more of the common examples of researcher-provided ethical dilemmas than non-partner tax practitioners, although non-partners perceive that certain ethical dilemmas occur at a higher rate than partners do. Overall, this study provides evidence of a disconnect between tax partners and non-partner tax practitioners with respect to perceptions of organizational ethics. Suggestions for potential remedies are offered.
Can findings from psychology and cognitive neuroscience about the neural mechanisms involved in decision-making tell us anything useful about the commonly-understood mental phenomenon of making voluntary choices? Two philosophical objections are considered. First, that the neural data is subpersonal, and so cannot enter into illuminating explanations of personal level phenomena like voluntary action. Secondly, that mental properties are multiply realized in the brain in such a way as to make them insusceptible to neuroscientific study. The paper argues that both objections would be weakened by the discovery of empirical generalisations connecting subpersonal properties with the personal level. It gives three case studies that furnish evidence to that effect. It argues that the existence of such interrelations is consistent with a plausible construal of the personal-subpersonal distinction. Furthermore, there is no reason to suppose that the notion of subpersonal representation relied on in cognitive neuroscience illicitly imports personal-level phenomena like consciousness or normativity, or is otherwise explanatorily problematic.
The New Thinking contained in this volume rejects an Evolutionary Psychology that is committed to innate domain-specific psychological mechanisms: gene-based adaptations that are unlearnt, developmentally fixed and culturally universal. But the New Thinking does not simply deny the importance of innate psychological traits. The problem runs deeper: the concept of innateness is not suited to distinguishing between the two positions. That points to a more serious problem with the concept of innateness as it is applied to human psychological phenotypes. This paper argues that the features of recent human evolution highlighted by the New Thinking imply that the concept of inherited representation, set out here, is a better tool for theorising about human cognitive evolution.
This paper examines the metaphysical question of 'ensoulment' in relation to the theory, put forward in an earlier paper, that human life begins when the newly formed body organs and systems of the embryo begin to function as an organised whole, at which stage there is evidence of a change of nature. Although Roman Catholic theology teaches that a human being is a union of physical body and spiritual soul, it is incorrect to interpret this in a dualistic sense. The meaning of 'soul' is considered and the conclusion reached that although both in the religious context and apart from it abortion is difficult to justify at any stage after conception, it does not follow that the use of 'spare' In Vitro Fertilisation (IVF) embryos should be rejected. If 'ensoulment' does not occur until the new organism functions as a whole then a decision not to make use of IVF embryos for medical purposes would be a heavy responsibility and not a 'safe' way out.
We discuss a recent approach to investigating cognitive control, which has the potential to deal with some of the challenges inherent in this endeavour. In a model-based approach, the researcher defines a formal, computational model that performs the task at hand and whose performance matches that of a research participant. The internal variables in such a model might then be taken as proxies for latent variables computed in the brain. We discuss the potential advantages of such an approach for the study of the neural underpinnings of cognitive control and its pitfalls, and we make explicit the assumptions underlying the interpretation of data obtained using this approach.
Louis Bautain (1796–1867) has been described as the “French Newman” because of the resemblances between their lives and writings. This essay compares three aspects of the thought of Newman and Bautain: their respective understanding of faith, reason, and development. Both thinkers understood faith and reason in relation to conversion and the realities of life and viewed faith and reason as functioning in tandem with doctrinal development.