In Experiment, Right or Wrong, Allan Franklin continues his investigation of the history and philosophy of experiment presented in his previous book, The Neglect of Experiment. In this new study, Franklin considers the fallibility and corrigibility of experimental results and presents detailed histories of two such episodes: 1) the experiments on, and development of, the theory of weak interactions from Fermi's theory of 1934 to the V-A theory of 1957, and 2) atomic parity violation experiments and the Weinberg-Salam unified theory of electroweak interactions of the 1970s and 1980s. In these episodes Franklin demonstrates not only that experimental results can be wrong, but also that theoretical calculations and the comparison between experiment and theory can be incorrect. In the second episode, Franklin contrasts his "evidence model" of science, in which questions of theory choice, confirmation, and refutation are decided on the basis of reliable experimental evidence, with the view proposed by the social constructivists.
Conscious events interact with memory systems in learning, rehearsal and retrieval (Ebbinghaus 1885/1964; Tulving 1985). Here we present hypotheses that arise from the IDA computational model (Franklin, Kelemen and McCauley 1998; Franklin 2001b) of global workspace theory (Baars 1988, 2002). Our primary tool for this exploration is a flexible cognitive cycle employed by the IDA computational model and hypothesized to be a basic element of human cognitive processing. Since cognitive cycles are hypothesized to occur five to ten times a second and to include interaction between conscious contents and several of the memory systems, they provide the means for an exceptionally fine-grained analysis of various cognitive tasks. We apply this tool to the small effect size of subliminal learning compared to supraliminal learning, to process dissociation, to implicit learning, to recognition vs. recall, and to the availability heuristic in recall. The IDA model elucidates the role of consciousness in the updating of perceptual memory, transient episodic memory, and procedural memory. In most cases, memory is hypothesized to interact with conscious events for its normal functioning. The methodology of the paper is unusual in that the hypotheses and explanations presented are derived from an empirically based, but broad and qualitative, computational model of human cognition.
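To make the cycle concrete, here is a minimal Python sketch of a single global-workspace cognitive cycle in the spirit of the description above. All names, stores, and numbers are hypothetical illustrations, not the IDA implementation.

class Codelet:
    """A small process championing some content for consciousness."""
    def __init__(self, content, activation):
        self.content = content
        self.activation = activation

def cognitive_cycle(stimuli, memories, codelets):
    # 1. Perception: stimuli are recognized against perceptual memory;
    #    previously learned content carries a stored familiarity strength.
    percepts = [(s, memories["perceptual"].get(s, 0)) for s in stimuli]
    # 2. Competition: codelets compete, and the most active one wins
    #    access to the global workspace.
    winner = max(codelets, key=lambda c: c.activation)
    # 3. Conscious broadcast: the winning content goes to every memory
    #    system, each of which updates itself (on this model, this is
    #    where perceptual, episodic and procedural learning occur).
    for store in memories.values():
        store[winner.content] = store.get(winner.content, 0) + 1
    # 4. Action selection: choose a response informed by the broadcast.
    return ("respond-to", winner.content, percepts)

# The model hypothesizes five to ten such cycles per second.
memories = {"perceptual": {}, "episodic": {}, "procedural": {}}
codelets = [Codelet("seminar-request", 0.9), Codelet("background-noise", 0.2)]
print(cognitive_cycle(["incoming-email"], memories, codelets))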
Recently, there has been a resurgence of interest in general, comprehensive models of human cognition. Such models aim to explain higher-order cognitive faculties, such as deliberation and planning. Given a computational representation, the validity of these models can be tested in computer simulations such as software agents or embodied robots. The push to implement computational models of this kind has created the field of artificial general intelligence (AGI). Moral decision making is arguably one of the most challenging tasks for computational approaches to higher-order cognition. The need for increasingly autonomous artificial agents to factor moral considerations into their choices and actions has given rise to another new field of inquiry variously known as Machine Morality, Machine Ethics, Roboethics, or Friendly AI. In this study, we discuss how LIDA, an AGI model of human cognition, can be adapted to model both affective and rational features of moral decision making. Using the LIDA model, we will demonstrate how moral decisions can be made in many domains using the same mechanisms that enable general decision making. Comprehensive models of human cognition typically aim for compatibility with recent research in the cognitive and neural sciences. Global workspace theory, proposed by the neuropsychologist Bernard Baars (1988), is a highly regarded model of human cognition that is currently being computationally instantiated in several software implementations. LIDA (Franklin, Baars, Ramamurthy, & Ventura, 2005) is one such computational implementation. LIDA is both a set of computational tools and an underlying model of human cognition, which provides mechanisms that are capable of explaining how an agent’s selection of its next action arises from bottom-up collection of sensory data and top-down processes for making sense of its current situation. We will describe how the LIDA model helps integrate emotions into the human decision-making process, and we will elucidate a process whereby an agent can work through an ethical problem to reach a solution that takes account of ethically relevant factors.
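As a toy illustration of the claim that moral decisions can be made with the same mechanisms as general decision making, the following sketch scores candidate behaviors on both an affective and a moral dimension and selects the best. The names, weights, and scoring rule are invented for illustration; they are not LIDA's actual mechanism.

from dataclasses import dataclass

@dataclass
class Behavior:
    name: str
    expected_valence: float  # bottom-up affective appraisal, in [-1, 1]
    moral_weight: float      # top-down weight of ethically relevant factors

def select_action(behaviors, moral_bias=0.7):
    """Pick the behavior with the highest combined affective/moral score."""
    def score(b):
        return (1 - moral_bias) * b.expected_valence + moral_bias * b.moral_weight
    return max(behaviors, key=score)

options = [
    Behavior("report the error", expected_valence=-0.2, moral_weight=0.9),
    Behavior("stay silent",      expected_valence=0.4,  moral_weight=-0.8),
]
print(select_action(options).name)  # prints "report the error"

The point of the design is that nothing in select_action is specific to ethics: moral considerations enter as one more source of activation, just as the abstract suggests.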
Replies to Kevin de Laplante’s ‘Certainty and Domain-Independence in the Sciences of Complexity’ (de Laplante, 1999), defending the thesis of J. Franklin, ‘The formal sciences discover the philosophers’ stone’, Studies in History and Philosophy of Science, 25 (1994), 513-33, that the sciences of complexity can combine certain knowledge with direct applicability to reality.
Replies to O. Hanfling, ‘Healthy scepticism?’, Philosophy 68 (1993), 91-3, which criticized J. Franklin, ‘Healthy scepticism’, Philosophy 66 (1991), 305-324. The symmetry argument for scepticism is defended (that there is no reason to prefer the realist hypothesis to sceptical alternatives).
Decision under conditions of uncertainty is an unavoidable fact of life. The available evidence rarely suffices to establish a claim with complete confidence, and as a result a good deal of our reasoning about the world must employ criteria of probable judgment. Such criteria specify the conditions under which rational agents are justified in accepting or acting upon propositions whose truth cannot be ascertained with certainty. Since the seventeenth century, philosophers and mathematicians have been accustomed to consider belief under uncertainty from the standpoint of the mathematical theory of probability. In 1654, Blaise Pascal entered into correspondence with Pierre de Fermat on two problems in the theory of probability that had been posed by the Chevalier de Méré – the first involved the just division of the stakes in a game of chance that has been interrupted, the second the likelihood of throwing a given number in a fixed number of throws of fair dice. This correspondence produced fundamental results that are now regarded as the foundation of the mathematical approach to probability, and historical studies of probabilistic reasoning almost invariably begin with the Pascal-Fermat correspondence. Franklin has no interest in denying the significance of the mathematical treatment of probability – he is, after all, a professional mathematician – but the principal theme in his book is the gradual “coming to consciousness” of canons of inference governing uncertain cases.
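For concreteness, the standard reconstruction of de Méré's dice question (the usual textbook computation, not quoted from the abstract) runs:

\[
P(\text{at least one six in 4 throws of a die}) = 1 - \left(\tfrac{5}{6}\right)^{4} \approx 0.518,
\]
\[
P(\text{at least one double-six in 24 throws of two dice}) = 1 - \left(\tfrac{35}{36}\right)^{24} \approx 0.491,
\]

which explains why the first bet was profitable for de Méré and the second was not.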
Questions on "animal rights" in a cross-national survey conducted in 1993 provide an opportunity to compare the applicability to this issue of two theories of the socio-political changes summed up in "postmodernity": Inglehart's (1997) thesis of "postmaterialist values" and Franklin's (1999) synthesis of theories of late modernity. Although Inglehart seems not to have addressed human-nonhuman animal relations, it is reasonable to apply his theory of changing values under conditions of "existential security" to "animal rights." Inglehart's postmaterialism thesis argues that (...) new values emerged within specific groups because of the achievement of material security. Although emphasizing human needs, they shift the agenda toward a series of lifestyle choices that favor extending lifestyle choices, rights, and environmental considerations. Franklin's account of nonhuman animals and modern cultures stresses a generalized "ontological insecurity." Under postmodern conditions, changes to core aspects of social and cultural life are both fragile and fugitive. As neighborhood, community, family, and friendship relations lose their normative and enduring qualities, companion animals increasingly are drawn in to those formerly exclusive human emotional spaces. With a method used by Inglehart and a focus in countries where his postmaterialist effects should be most evident, this study derives and tests different expectations from the theories, then tests them against data from a survey supporting Inglehart's theory. His theory is not well supported. We conclude that its own anthropocentrism limits it and that the allowance for hybrids of nature-culture in Franklin's account offers more promise for a social theory of animal rights in changing times. (shrink)
Defends the cosmological argument for the existence of God against Hume's criticisms. Hume objects that since a cause is before its effect, an eternal succession has no cause; but that would rule out by fiat the possibility of God's creating the world from eternity. Hume argues that once a cause is given for each of a collection of objects, there is no need to posit a cause of the whole collection; but that is to assume the universe to be a heap of things arbitrarily grouped rather than a whole arbitrarily divided.
"Does torture work?" is a factual rather than ethical or legal question. But legal and ethical discussions of torture should be informed by knowledge of the answer to the factual question of the reliability of torture as an interrogation technique. The question as to whether torture works should be asked before that of its legal admissibility—if it is not useful to interrogators, there is no point considering its legality in court.
According to Quine’s indispensability argument, we ought to believe in just those mathematical entities that we quantify over in our best scientific theories. Quine’s criterion of ontological commitment is part of the standard indispensability argument. However, we suggest that a new indispensability argument can be run using Armstrong’s criterion of ontological commitment rather than Quine’s. According to Armstrong’s criterion, ‘to be is to be a truthmaker (or part of one)’. We supplement this criterion with our own brand of metaphysics, 'Aristotelian realism', in order to identify the truthmakers of mathematics. We consider in particular as a case study the indispensability to physics of real analysis (the theory of the real numbers). We conclude that it is possible to run an indispensability argument without Quinean baggage.
• It would be a moral disgrace for God (if he existed) to allow the many evils in the world, in the same way it would be for a parent to allow a nursery to be infested with criminals who abused the children.
• There is a contradiction in asserting all three of the propositions: God is perfectly good; God is perfectly powerful; evil exists (since if God wanted to remove the evils and could, he would).
• The religious believer has no hope of getting away with excuses that evil is not as bad as it seems, or that it is all a result of free will, and so on.
Piper avoids mentioning the best solution so far put forward to the problem of evil. It is Leibniz’s theory that God does not create a better world because there isn’t one — that is, that (contrary to appearances) if one part of the world were improved, the ramifications would result in its being worse elsewhere, and worse overall. It is a “bump in the carpet” theory: push evil down here, and it pops up over there. Leibniz put it by saying this is the “Best of All Possible Worlds”. That phrase was a public relations disaster for his theory, suggesting as it does that everything is perfectly fine as it is. He does not mean that, but only that designing worlds is a lot harder than it looks, and determining the amount of evil in the best one is no easy matter. Though humour is hardly appropriate to the subject matter, the point of Leibniz’s idea is contained in the old joke, “An optimist is someone who thinks this is the best of all possible worlds, and a pessimist thinks...
The debate over whether Frankfurt-style cases are counterexamples to the principle of alternative possibilities (PAP) has taken an interesting turn in recent years. Frankfurt originally envisaged his attack as an attempt to show that PAP is false—that the ability to do otherwise is not necessary for moral responsibility. To many, this attack has failed. But Frankfurtians have not conceded defeat. Neo-Frankfurtians, as I will call them, argue that the upshot of Frankfurt-style cases is not that PAP is false, but that it is explanatorily irrelevant. Derk Pereboom and David Hunt’s buffer cases are tailor-made to establish this conclusion. In this paper I come to the aid of PAP, showing that buffer cases provide no reason for doubting either its truth or its relevance with respect to explaining an agent’s moral responsibility.
The winning entry in David Stove's Competition to Find the Worst Argument in the World was: “We can know things only as they are related to us/insofar as they fall under our conceptual schemes, etc., so, we cannot know things as they are in themselves.” That argument underpins many recent relativisms, including postmodernism, post-Kuhnian sociological philosophy of science, cultural relativism, sociobiological versions of ethical relativism, and so on. All such arguments have the same form as ‘We have eyes, therefore we cannot see’, and are equally invalid.
In this paper I seek to defend libertarianism about free will and moral responsibility against two well-known arguments: the luck argument and the Mind argument. Both of these arguments purport to show that indeterminism is incompatible with the degree of control necessary for free will and moral responsibility. I begin the discussion by elaborating these arguments, clarifying important features of my preferred version of libertarianism—features that will be central to an adequate response to the arguments—and showing why a strategy of reconciliation (often referred to as “deliberative libertarianism”) will not work. I then consider four formulations of the luck argument and find them all wanting. This discussion will place us in a favorable position to understand why the Mind argument also fails.
Dispositions, such as solubility, cannot be reduced to categorical properties, such as molecular structure, without some element of dispositionality remaining. Democritus did not reduce all properties to the geometry of atoms - he had to retain the rigidity of the atoms, that is, their disposition not to change shape when a force is applied. So dispositions-not-to, like rigidity, cannot be eliminated. Neither can dispositions-to, like solubility.
Why do students take the instruction "prove" in examinations to mean "go to the next question"? Because they have not been shown the simple techniques of how to do it. Mathematicians meanwhile generate a mystique of proof, as if it requires an inborn and unteachable genius. True, creating research-level proofs does require talent; but reading and understanding the proof that the square of an even number is even is within the capacity of most mortals.
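The proof the abstract mentions really does take only two lines. If $n$ is even, then $n = 2k$ for some integer $k$, so

\[
n^{2} = (2k)^{2} = 4k^{2} = 2(2k^{2}),
\]

which is twice an integer, hence even.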
The logical interpretation of probability, or “objective Bayesianism” – the theory that (some) probabilities are strictly logical degrees of partial implication – is defended. The main argument against it is that it requires the assignment of prior probabilities, and that any attempt to determine them by symmetry via a “principle of insufficient reason” inevitably leads to paradox. Three replies are advanced: that priors are imprecise or of little weight, so that disagreement about them does not matter, within limits; that it is possible to distinguish reasonable from unreasonable priors on logical grounds; and that in real cases disagreement about priors can usually be explained by differences in the background information. It is argued also that proponents of alternative conceptions of probability, such as frequentists, Bayesians and Popperians, are unable to avoid committing themselves to the basic principles of logical probability.
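A standard illustration of the paradox referred to (van Fraassen's cube factory, supplied here as an example; it does not appear in the abstract): a factory produces cubes with side length $s \in (0, 1]$. Indifference over side length gives

\[
P\bigl(s \le \tfrac{1}{2}\bigr) = \tfrac{1}{2},
\]

but the same cubes have volume $v = s^{3} \in (0, 1]$, and indifference over volume gives, for the very same event,

\[
P\bigl(v \le \tfrac{1}{8}\bigr) = \tfrac{1}{8}.
\]

The principle of insufficient reason thus assigns one event two different priors, depending on how the problem is parametrized.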
A crucial question for libertarians about free will and moral responsibility concerns how their accounts secure more control than compatibilism. This problem is particularly exasperating for event-causal libertarianism, as it seems that the only difference between these accounts and compatibilism is that the former require indeterminism. But how can indeterminism, a mere negative condition, enhance control? This worry has led many to conclude that the only viable form of libertarianism is agent-causal libertarianism. In this paper I show that this conclusion is premature. I explain how event-causal libertarianism secures more control than compatibilism by offering a novel argument for incompatibilism. Part of the reason my solution has gone unnoticed is that it is often mistakenly assumed that an agent's control is wholly exhausted by the agent's powers and abilities. I argue, however, that control is constituted not just by what we have the ability to do, but also by what we have the opportunity to do. And it is by furnishing agents with new opportunities that event-causal libertarianism secures enhanced control. In order to defend this claim, I provide an analysis of opportunities and construct a novel incompatibilist argument to show that the opportunity to do otherwise is incompatible with determinism.
The late twentieth century saw two long-term trends in popular thinking about ethics. One was an increase in relativist opinions, with the “generation of the Sixties” spearheading a general libertarianism, an insistence on toleration of diverse moral views (for “Who is to say what is right? – it’s only your opinion.”) The other trend was an increasing insistence on rights – the gross violations of rights in the killing fields of the mid-century prompted immense efforts in defence of the “inalienable” rights of the victims of dictators, of oppressed peoples, of refugees. The obvious incompatibility of those ethical stances, one anti-objectivist, the other objectivist in the extreme, proved no obstacle to their both being held passionately, often by the same people.
Philosophers of experiment have acknowledged that experiments are often more than mere hypothesis-tests, once thought to be an experiment's exclusive calling. Drawing on examples from contemporary biology, I make an additional amendment to our understanding of experiment by examining the way that “wide” instrumentation can, for reasons of efficiency, lead scientists away from traditional hypothesis-directed methods of experimentation and towards exploratory methods.
In 1947 Donald Cary Williams claimed in The Ground of Induction to have solved the Humean problem of induction, by means of an adaptation of reasoning first advanced by Bernoulli in 1713. Later on David Stove defended and improved upon Williams’ argument in The Rationality of Induction (1986). We call this proposed solution of induction the ‘Williams-Stove sampling thesis’. There has been no lack of objections raised to the sampling thesis, and it has not been widely accepted. In our opinion, though, none of these objections has the slightest force, and, moreover, the sampling thesis is undoubtedly true. What we will argue in this paper is that one particular objection that has been raised on numerous occasions is misguided. This concerns the randomness of the sample on which the inductive extrapolation is based.
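The mathematical core of the sampling thesis can be illustrated with a standard concentration bound (a Chebyshev inequality for sampling with replacement; this formulation is an illustration, not Williams' or Stove's own):

\[
P\bigl(|\hat{p} - p| \ge \varepsilon\bigr) \;\le\; \frac{p(1-p)}{n\varepsilon^{2}} \;\le\; \frac{1}{4n\varepsilon^{2}},
\]

where $p$ is the population proportion of F's and $\hat{p}$ the proportion in a random sample of size $n$. Whatever $p$ is, the overwhelming majority of large samples nearly match the population, which is the sense in which a sample can rationally support an inductive extrapolation.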
Mathematicians often speak of conjectures as being confirmed by evidence that falls short of proof. For their own conjectures, evidence justifies further work in looking for a proof. Those conjectures of mathematics that have long resisted proof, such as Fermat's Last Theorem and the Riemann Hypothesis, have had to be considered in terms of the evidence for and against them. It is argued here that it is not adequate to describe the relation of evidence to hypothesis as “subjective”, “heuristic” or “pragmatic”, but that there must be an element of what it is rational to believe on the evidence, that is, of non-deductive logic.
Philosophical discussions of species have focused on multicellular, sexual animals and have often neglected to consider unicellular organisms like bacteria. This article begins to fill this gap by considering what species concepts, if any, apply neatly to the bacterial world. First, I argue that the biological species concept cannot be applied to bacteria because of the variable rates of genetic transfer between populations, depending in part on which gene type is prioritized. Second, I present a critique of phylogenetic bacterial species, arguing that phylogenetic bacterial classification requires a questionable metaphysical commitment to the existence of essential genes. I conclude by considering how microbiologists have dealt with these biological complexities by using more pragmatic and not exclusively evolutionary accounts of species. I argue that this pragmatism is not borne of laziness but rather of the substantial conceptual problems in classifying bacteria based on any evolutionary standard.
The imperviousness of mathematical truth to anti-objectivist attacks has always heartened those who defend objectivism in other areas, such as ethics. It is argued that the parallel between mathematics and ethics is close and does support objectivist theories of ethics. The parallel depends on the foundational role of equality in both disciplines. Despite obvious differences in their subject matter, mathematics and ethics share a status as pure forms of knowledge, distinct from empirical sciences. A pure understanding of principles is possible because of the simplicity of the notion of equality, despite the different origins of our understanding of equality of objects in general and of the equality of the ethical worth of persons.
Interpretations of recollection in the "Phaedo" are divided between ordinary interpretations, on which recollection explains a kind of learning accomplished by all, and sophisticated interpretations, which restrict recollection to philosophers. A sophisticated interpretation is supported by the prominence of philosophical understanding and reflection in the argument. Recollection is supposed to explain the advanced understanding displayed by Socrates and Simmias (74b2-4). Furthermore, it seems to be a necessary condition on recollection that one who recollects also perform a comparison of sensible particulars to Forms (74a5-7). I provide a new ordinary interpretation which explains these features of the argument. First, we must clearly distinguish the philosophical reflection which constitutes the argument for the Theory of Recollection from the ordinary learning which is its subject. The comparison of sensibles to Forms is the reasoning by which we see, as philosophers, that we must recollect. At the same time, we must also appreciate the continuity of ordinary and philosophical learning. Plato wants to explain the capacity for ordinary discourse, but with an eye to its role as the origin of philosophical reflection and learning. In the "Phaedo", recollection has ordinary learning as its immediate explanandum, and philosophical learning as its ultimate explanandum.
A familiar feature of our moral responsibility practices is pleas: considerations, such as “That was an accident” or “I didn’t know what else to do”, that attempt to get agents accused of wrongdoing off the hook. But why do these pleas have the normative force they in fact have? Why does physical constraint excuse one from responsibility, while forgetfulness or laziness does not? I begin by laying out R. Jay Wallace’s (Responsibility and the Moral Sentiments, 1994) theory of the normative force of excuses and exemptions. For each category of plea, Wallace offers a single governing moral principle that explains their normative force. The principle he identifies as governing excuses is the Principle of No Blameworthiness without Fault: an agent is blameworthy only if he has done something wrong. The principle he identifies as governing exemptions is the Principle of Reasonableness: an agent is morally accountable only if he is normatively competent. I argue that Wallace’s theory of exemptions is sound, but that his account of the normative force of excuses is problematic, in that it fails to explain the full range of excuses we offer in our practices, especially the excuses of addiction and extreme stress. I then develop a novel account of the normative force of excuses, which employs what I call the “Principle of Reasonable Opportunity,” and which can explain the full range of excuses we offer and is deeply unified with Wallace’s theory of the normative force of exemptions. An important implication of the theory I develop is that moral responsibility requires free will.
Democracy has difficulties with the rights of non-voters (children, the mentally ill, foreigners, etc.). Democratic leaders have sometimes acted ethically, contrary to the wishes of voters, e.g. in accepting refugees as immigrants.
Throughout history, almost all mathematicians, physicists and philosophers have been of the opinion that space and time are infinitely divisible. That is, it is usually believed that space and time do not consist of atoms, but that any piece of space and time of non-zero size, however small, can itself be divided into still smaller parts. This assumption is included in geometry, as in Euclid, and also in the Euclidean and non-Euclidean geometries used in modern physics. Of the few who have denied that space and time are infinitely divisible, the most notable are the ancient atomists, and Berkeley and Hume. All of these assert not only that space and time might be atomic, but that they must be. Infinite divisibility is, they say, impossible on purely conceptual grounds.
Einstein, like most philosophers, thought that there cannot be mathematical truths which are both necessary and about reality. The article argues against this, starting with prima facie examples such as "It is impossible to tile my bathroom floor with (equally-sized) regular pentagonal tiles." Replies are given to objections based on the supposedly purely logical or hypothetical nature of mathematics.
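The pentagon example can be checked in one line of arithmetic. Each interior angle of a regular pentagon is

\[
\frac{(5-2)\times 180^{\circ}}{5} = 108^{\circ},
\]

and neither $360^{\circ}/108^{\circ} = 10/3$ (tiles meeting at a corner) nor $180^{\circ}/108^{\circ} = 5/3$ (a corner lying on another tile's edge) is a whole number, so regular pentagons cannot fit flatly around any point of the floor.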
Baars (1988, 1997) has proposed a psychological theory of consciousness, called global workspace theory. The present study describes a software agent implementation of that theory, called “Conscious” Mattie (CMattie). CMattie operates in a clerical domain from within a UNIX operating system, sending and interpreting messages in natural language that organize seminars at a university. CMattie fleshes out global workspace theory with a detailed computational model that integrates contemporary architectures in cognitive science and artificial intelligence. Baars (1997) lists the psychological “facts that any complete theory of consciousness must explain” in his appendix to In the Theater of Consciousness; global workspace theory was designed to explain these “facts.” The present article discusses how the design of CMattie accounts for these facts and thereby the extent to which it implements global workspace theory.
The classical arguments for scepticism about the external world are defended, especially the symmetry argument: that there is no reason to prefer the realist hypothesis to, say, the deceitful demon hypothesis. This argument is defended against the various standard objections, such as that the demon hypothesis is only a bare possibility, does not lead to pragmatic success, lacks coherence or simplicity, is ad hoc or parasitic, makes impossible demands for certainty, or contravenes some basic standards for a conceptual or linguistic scheme. Since the conclusion of the sceptical argument is not true, it is concluded that one can only escape the force of the argument through some large premise, such as an aptitude of the intellect for truth, if necessary divinely supported.
Pascal’s wager and Leibniz’s theory that this is the best of all possible worlds are latecomers in the Faith-and-Reason tradition. They have remained interlopers; they have never been taken as seriously as the older arguments for the existence of God and other themes related to faith and reason.
Aristotelian, or non-Platonist, realism holds that mathematics is a science of the real world, just as much as biology or sociology are. Where biology studies living things and sociology studies human social relations, mathematics studies the quantitative or structural aspects of things, such as ratios, or patterns, or complexity, or numerosity, or symmetry. Let us start with an example, as Aristotelians always prefer, an example that introduces the essential themes of the Aristotelian view of mathematics. A typical mathematical truth is that there are six different pairs in four objects. [Figure 1: There are 6 different pairs in 4 objects.] The objects may be of any kind, physical, mental or abstract. The mathematical statement does not refer to any properties of the objects, but only to the patterning of the parts in the complex of the four objects. If that seems to us less a solid truth about the real world than the causation of flu by viruses, that may be simply due to our blindness about relations, or our tendency to regard them as somehow less real than things and properties. But relations (for example, relations of equality between parts of a structure) are as real as colours or causes.
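The arithmetic behind the example: the number of pairs in four objects $a, b, c, d$ is

\[
\binom{4}{2} = \frac{4!}{2!\,2!} = 6:\qquad \{a,b\},\ \{a,c\},\ \{a,d\},\ \{b,c\},\ \{b,d\},\ \{c,d\}.
\]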
Social constructionists believe that experimental evidence plays a minimal role in the production of scientific knowledge, while rationalists such as myself believe that experimental evidence is crucial in it. As one historical example in support of the rationalist position, I trace in some detail the theoretical and experimental research that led to our understanding of beta decay, from Enrico Fermi’s pioneering theory of 1934 to George Sudarshan and Robert Marshak’s and Richard Feynman and Murray Gell-Mann’s suggestion in 1957 and 1958, respectively, of the V–A theory of weak interactions. This is not a history of an unbroken string of successes, but one that includes incorrect experimental results, incorrect experiment-theory comparisons, and faulty theoretical analyses. Nevertheless, we shall see that the constraints that Nature imposed made the V–A theory an almost inevitable outcome of this theoretical and experimental research.