This paper argues that Duhem's thesis does not decisively refute a corroboration-based account of scientific methodology (or 'falsificationism'); rather, auxiliary hypotheses are themselves subject to measures of corroboration, which can be used to inform practice. It argues that, in this respect, a corroboration-based account is on a par with the popular Bayesian alternative, which has received far more attention in recent work.
This article shows that Popper's measure of corroboration is inapplicable if, as Popper argued, the logical probability of synthetic universal statements is zero relative to any evidence that we might possess. It goes on to show that Popper's definition of degree of testability, in terms of degree of logical content, suffers from a similar problem. Sections: 1 The Corroboration Function and P(h|b); 2 Degrees of Testability and P(h|b).
Evidentiary propositions E1 and E2, each p-positively relevant to some hypothesis H, are mutually corroborating if p(H|E1 ∩ E2) > p(H|Ei), i = 1, 2. Failures of such mutual corroboration are instances of what may be called the corroboration paradox. This paper assesses two rather different analyses of the corroboration paradox due, respectively, to John Pollock and Jonathan Cohen. Pollock invokes a particular embodiment of the principle of insufficient reason to argue that instances of the corroboration paradox are of negligible probability, and that it is therefore defeasibly reasonable to assume that items of evidence positively relevant to some hypothesis are mutually corroborating. Taking a different approach, Cohen seeks to identify supplementary conditions that are sufficient to ensure that such items of evidence will be mutually corroborating, and claims to have identified conditions which account for most cases of mutual corroboration. Combining a proposed common framework for the general study of paradoxes of positive relevance with a simulation experiment, we conclude that neither Pollock's nor Cohen's claims stand up to detailed scrutiny. "I am quite prepared to be told… 'oh, that is an extreme case: it could never really happen!' Now I have observed that this answer is always given instantly, with perfect confidence, and without any examination of the proposed case. It must therefore rest on some general principle: the mental process being something like this: 'I have formed a theory. This case contradicts my theory. Therefore, this is an extreme case, and would never occur in practice.'" (Rev. Charles L. Dodgson)
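A small numerical sketch (my own illustration, not drawn from the paper) shows how the defining inequality can fail: in the assumed joint distribution below, E1 and E2 are each positively relevant to H, yet conditioning on both lowers the probability of H.

```python
# Toy joint distribution over (H, E1, E2); the probabilities are assumed
# purely for illustration and sum to 1.
p = {
    (True,  True,  True):  0.05,
    (True,  True,  False): 0.20,
    (True,  False, True):  0.20,
    (True,  False, False): 0.05,
    (False, True,  True):  0.10,
    (False, True,  False): 0.05,
    (False, False, True):  0.05,
    (False, False, False): 0.30,
}

def prob(event):
    """Probability of the event picked out by event(h, e1, e2)."""
    return sum(q for atom, q in p.items() if event(*atom))

def cond(event_a, event_b):
    """Conditional probability p(A | B)."""
    joint = prob(lambda h, e1, e2: event_a(h, e1, e2) and event_b(h, e1, e2))
    return joint / prob(event_b)

p_h      = prob(lambda h, e1, e2: h)                               # 0.50
p_h_e1   = cond(lambda h, e1, e2: h, lambda h, e1, e2: e1)         # 0.625
p_h_e2   = cond(lambda h, e1, e2: h, lambda h, e1, e2: e2)         # 0.625
p_h_e1e2 = cond(lambda h, e1, e2: h, lambda h, e1, e2: e1 and e2)  # 1/3

# Each E_i is positively relevant to H on its own ...
assert p_h_e1 > p_h and p_h_e2 > p_h
# ... yet jointly they fail to corroborate: p(H|E1 and E2) < p(H|E_i).
assert p_h_e1e2 < p_h_e1 and p_h_e1e2 < p_h_e2
```

The particular numbers are one of many distributions exhibiting the paradox; nothing turns on these values.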
This paper presents a new 'discontinuous' view of Popper's theory of corroboration, on which theories cease to have corroboration values when new severe tests are devised that have not yet been performed, on the basis of a passage from The Logic of Scientific Discovery. Through subsequent analysis and discussion, a novel problem for Popper's account of corroboration, which holds also for the standard ('continuous') view, emerges. This is the problem of the Big Test: the severest test of any hypothesis is actually to perform all possible tests (when 'possible' is suitably interpreted). But this means that Popper's demand for 'the severest tests' amounts simply to a demand for 'all possible tests'. The paper closes by considering how this bears on accommodation vs. prediction, with respect to corroboration.
How are we to understand the use of probability in corroboration functions? Popper says logically, but does not show how we could have access to, or even calculate, probability values in a logical sense. This makes the logical interpretation untenable, as Ramsey and van Fraassen have argued. If corroboration functions only make sense when the probabilities employed therein are subjective, however, then what counts as impressive evidence for a theory might be a matter of convention, or even whim. So isn't so-called 'corroboration' just a matter of psychology? In this paper, I argue that we can go some way towards addressing this objection by adopting an intersubjective interpretation, of the form advocated by Gillies, with respect to corroboration. I show why intersubjective probabilities are preferable to subjective ones when it comes to decision making in science: why group decisions are liable to be superior to individual ones, given a number of plausible conditions. I then argue that intersubjective corroboration is preferable to intersubjective confirmation of a Bayesian variety, because there is greater opportunity for principled agreement concerning the factors involved in the former.
We examine three critical aspects of Popper's formulation of the 'Logic of Scientific Discovery'—evidence, content and degree of corroboration—and place these concepts in the context of the Tree of Life (ToL) problem with particular reference to molecular systematics. Content, in the sense discussed by Popper, refers to the breadth and scope of existence that a hypothesis purports to explain. Content, in conjunction with the amount of available and relevant evidence, determines the testability, or potential degree of corroboration, of a statement; content distinguishes scientific hypotheses from metaphysical assertions. Degree of corroboration refers to the relative and tentative confidence assigned to one hypothesis over another, based upon the performance of each under critical tests. Here we suggest that systematists attempt to maximize content and evidence to increase the potential degree of corroboration in all phylogenetic endeavors. Discussion of this "total evidence" approach leads to several interesting conclusions about generating ToL hypotheses.
Chow's one-tailed null-hypothesis significance-test procedure, with its rationale based on the elimination of chance influences, is not appropriate for theory-corroboration experiments. Estimated effect sizes and their associated standard errors or confidence limits will always suffice.
Chow's defense of NHSTP ignores the fact that in psychology it is used to test substantive hypotheses in theory-corroborating research. In this role, NHSTP is not only inadequate, but damaging to the progress of psychology as a science. NHSTP does not fulfill the Popperian requirement that theories be tested severely. It also encourages nonspecific predictions and feeble theoretical formulations.
Watkins proposes a neo-Popperian solution to the pragmatic problem of induction. He asserts that evidence can be used non-inductively to prefer the principle that corroboration is more successful over all human history than that, say, counter-corroboration is more successful either over this same period or in the future. Watkins's argument for rejecting the first counter-corroborationist alternative is beside the point, however, as whatever is the best strategy over all human history is irrelevant to the pragmatic problem of induction, since we are not required to act in the past; and his argument for rejecting the second presupposes induction.
The philosophical significance of Dmitri Mendeleev's successful predictions of the properties of gallium and scandium vis-à-vis the acceptance of the Periodic Table, 1874–1886, has been debated recently. This author presents evidence that De Boisbaudran and Cleve respectively predicted the possible existence of gallium and scandium, but on the basis of the old triad methodology. This suggests that these successful Mendeleev predictions were therefore not independent corroboration of the concept of the Periodic System. Instead, the significantly independent predictive successes for Mendeleev's system were (a) the determination of the atomic weight of the known element uranium as 240 instead of the previously accepted 120 in 1874, and (b) the isolation of germanium by Winkler in 1886.
What role does non-genetic inheritance play in evolution? In recent work we have independently and collectively argued that the existence and scope of non-genetic inheritance systems, including epigenetic inheritance, niche construction/ecological inheritance, and cultural inheritance—alongside certain other theory revisions—necessitates an extension to the neo-Darwinian Modern Synthesis (MS) in the form of an Extended Evolutionary Synthesis (EES). However, this argument has been challenged on the grounds that non-genetic inheritance systems are exclusively proximate mechanisms that serve the ultimate function of calibrating organisms to stochastic environments. In this paper we defend our claims, pointing out that critics of the EES (1) conflate non-genetic inheritance with early 20th-century notions of soft inheritance; (2) misunderstand the nature of the EES in relation to the MS; (3) confuse individual phenotypic plasticity with trans-generational non-genetic inheritance; (4) fail to address the extensive theoretical and empirical literature which shows that non-genetic inheritance can generate novel targets for selection, create new genetic equilibria that would not exist in the absence of non-genetic inheritance, and generate phenotypic variation that is independent of genetic variation; (5) artificially limit ultimate explanations for traits to gene-based selection, which is unsatisfactory for phenotypic traits that originate and spread via non-genetic inheritance systems; and (6) fail to provide an explanation for biological organization. We conclude by noting ways in which we feel that an overly gene-centric theory of evolution is hindering progress in biology and other sciences.
How should we evaluate an argument in which two witnesses independently testify to some claim? In practice, the testimony of the second witness would be taken to corroborate that of the first to some extent, thereby boosting the plausibility of the first argument from testimony. But does that commit the fallacy of double counting, because the second testimony is already taken as independent evidence supporting the claim? Perhaps the corroboration effect should be considered illogical, since each premise should be seen as representing a separate reason in a convergent argument for accepting the claim as plausible. In this paper, we tackle the problem using argumentation schemes and argument diagramming. We examine a number of examples, and come up with two hypotheses that offer methods of analyzing and evaluating this kind of evidence.
Jonathan Cohen has claimed that in cases of witness agreement there is an inverse relationship between the prior probability and the posterior probability of what is being agreed: the posterior rises as the prior falls. As is demonstrated in this paper, this contention is not generally valid. In fact, in the most straightforward case exactly the opposite is true: a lower prior also means a lower posterior. This notwithstanding, there is a grain of truth to what Cohen is saying, as there are special circumstances under which a thesis similar to his holds good. What characterises these circumstances is that they allow for the fact of agreement to be surprising. In making this precise, I draw on Paul Horwich's probabilistic analysis of surprise. I also consider a related claim made by Cohen concerning the effect of lowering the prior on the strength of corroboration. Sections: 1 Introduction; 2 Cohen's claim; 3 A counterexample; 4 A weaker claim; 5 A counterexample to the weaker claim; 6 The grain of truth in Cohen's claim; 7 Prior probability and strength of corroboration; 8 Conclusion.
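The "most straightforward case" can be sketched with a minimal Bayesian model of agreeing witnesses (my own illustration; the reliability figures are assumptions, not values from Cohen or the paper): with witness reliabilities held fixed, the posterior rises and falls with the prior.

```python
def posterior(prior, hit=0.9, false_pos=0.2, n=2):
    """P(claim | n agreeing, conditionally independent witness reports).

    Each witness asserts the claim with probability `hit` if it is true
    and `false_pos` if it is false; these reliability figures are assumed
    purely for illustration.
    """
    num = prior * hit ** n
    return num / (num + (1 - prior) * false_pos ** n)

# With reliabilities fixed, lowering the prior of the agreed claim
# lowers the posterior too, contrary to an unrestricted inverse
# relationship between prior and posterior.
assert posterior(0.01) < posterior(0.1) < posterior(0.3) < posterior(0.5)
```

The surprising-agreement circumstances the paper identifies lie outside this simple fixed-reliability setup.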
Chow sets his version of statistical significance testing in an impoverished context of “theory corroboration” that explicitly excludes well-posed theories admitting of strong support by precise empirical evidence. He demonstrates no scientific usefulness for the problematic procedure he recommends instead. The important role played by significance testing in today's behavioral and brain sciences is wholly inconsistent with the rhetoric he would enforce.
In their book, Relevance, Sperber and Wilson make an important contribution towards constructing a credible theory of this unforthcoming notion. All is not clear sailing, however. If it is accepted as a condition on the adequacy of any account of relevance that it not be derivable either that nothing is relevant to anything or that everything is relevant to everything, it can be shown that Sperber and Wilson come close to violating the condition.
It is shown by means of a simple example that a good explanation of an event is not necessarily corroborated by the occurrence of that event. It is also shown that this contention follows symbolically if an explanation having higher "explicativity" than another is regarded as better.
In an earlier paper, I objected to certain elements of L. Jonathan Cohen's account of corroborating testimony (Olsson). In their response to my article, Bovens, Fitelson, Hartmann and Snyder suggest some significant improvements of the probabilistic model which I used in assessing Cohen's theses and answer some additional questions which my study raised. More problematically, they also seek to defend Cohen against my criticism. I argue, in this reply, that their attempts in this direction are unsuccessful.
Jonathan Weinberg (2007) has argued that we should not appeal to intuition as evidence because it cannot be externally corroborated. This paper argues for the normative claim that Weinberg's demand for external corroboration is misguided. The idea is that Weinberg goes wrong in treating the philosophical appeal to intuition as analogous to the appeal to evidence in the sciences. Traditional practice is defended against Weinberg's critique with the argument that some intuitions are true simply in virtue of being intuited by the majority of people. The argument proceeds by way of examining a paradigm case, Putnam's Twin Earth.
Popper's account of refutation is the linchpin of his famous view that the method of science is the method of conjecture and refutation. This thesis critically examines his account of refutation, and in particular the practice he deprecates as avoiding a refutation. I try to explain how he comes to hold the views that he does about these matters; how he seeks to make them plausible; how he has influenced others to accept his mistakes, and how some of the ideas or responses to Popper of such people are thus similarly mistaken. I draw some distinctions necessary to the provision of an adequate account of the so-called practice of avoiding a refutation, and try to rid the debate about this practice of at least one red herring. I analyse one case of 'avoiding' a refutation in detail to show how the rationality of scientific practice eludes both Popper and many of his commentators. Popper's skepticism about contingent knowledge prevents him from providing an acceptable account of contingent refutation, and so his method is really the method of conjecture and conjecture. He cannot do without the concepts of knowledge and refutation, however, if his account of science is to be plausible or persuasive, and so he equivocates between, amongst other things, refutation as disproof and refutation as the weaker notion of discorroboration. I criticise David Stove's account of this matter, in particular to show how he misses this point. An additional advantage Popper would secure from this equivocation is that if refutations were mere discorroborations they would be easier to achieve, and hence more common in science, than is the case. On Popper's weak notion of refutation, it would be possible to refute true theories since corroboration does not entail truth. There are two other related doctrines Popper holds about refutation which, if accepted, make some refutations seem easier to obtain than is the case.
I call these doctrines 'Strong Popperian Falsificationism' (SPF) and 'Weak Popperian Falsificationism' (WPF). SPF is the false doctrine that if a prediction from some theory is refuted then that theory is refuted. Popper does not always endorse SPF. In particular, when confronted with a counterexample to it, he retreats to WPF, which is the false doctrine that if a prediction from some theory is refuted then that theory is prima facie refuted. WPF, or even SPF, can seem plausible if one has in mind predictions derived from theories in strong or conclusive tests of those theories, which I suggest Popper characteristically does. Popper is disposed to describe any such case of predictive failure which does not lead to the refutation of the theory concerned as one in which that refutation has been avoided. To reinforce his portrayal of the refutation, or the attempted refutation, of major scientific theories as the rational core of scientific practice, Popper treats the so-called practice of avoiding a refutation as untypical of science, and much so-called avoidance he dismisses as unscientific or pseudo-scientific. I argue that his notion of avoiding a refutation is incoherent. Popper is further driven to believe that such avoidance is possible, however, because he conflates sentences with propositions and propositions with propositional beliefs. Also, he wishes to avoid being saddled with the relativism that is a consequence of his weak account of refutation as discorroboration. Popper believes that ad hoc hypotheses are the most important of the unscientific means of avoiding a refutation. I argue that his account of such hypotheses is also incoherent, and that several hypotheses thought to be ad hoc in his sense are not. Such hypotheses appear to be so largely because of Popper's use of rhetoric and partly because these hypotheses are unacceptable for other reasons.
I conclude that to know that a hypothesis is ad hoc in Popper's sense does not illuminate scientific practice. Popper has also attempted to explicate ad hocness in terms of some undesirable, or allegedly undesirable, properties of hypotheses or the explanations they would provide. The first such property is circularity, which is undesirable; the second such property is reduction in empirical content, which is not. In the former case I argue that non-circularity is clearly preferable to non-ad hocness as a criterion for a satisfactory explanation or explanans, as the case may be, and in the latter case that Popper is barking up the wrong tree. Some cases of so-called avoidance are obviously not unscientific. The discovery of Neptune from a prediction based on the reasonable belief that there were residual perturbations in the motion of Uranus is an important case in point, and one that is much discussed in the literature. The manifest failure of astronomers to account for Uranus's motion did not lead to the refutation of Newton's law of gravitation, yet significant scientific progress obviously did result. Retreating to WPF, Popper claims that Newton's law was prima facie refuted. In general, astronomers have never shared this view, and they are correct in not doing so. I argue that the law of gravitation would have been prima facie refuted only if there had been good reason at the time to believe as false what is true, namely, that an unknown trans-Uranian planet was the cause of those Uranian residuals. Knowledge of the trans-Uranian region was then so slight that it was merely a convenient assumption, one which there was little reason to believe was false, that the known influences on Uranus's motion were the only such influences. I conclude that in believing or supposing that it was this assumption that was false, rather than the law of gravitation, Leverrier and Adams, the co-predictors of Neptune, were acting rationally and intelligently.
Popper's commentators offer a variety of accounts of the alleged practice of avoiding a refutation, and of this case in particular. I analyse a sample of their accounts to show how common is the acceptance of some of Popper's basic mistakes, even amongst those who claim to reject his falsificationism, and to display the effects on their accounts of this acceptance of his mistakes. Many commentators recognize that anomalies are typically dealt with by changes in the boundary conditions or in other of the auxiliary propositions employed. Where many still go wrong, however, is in retaining the presupposition of WPF which encouraged Popper to hold the contradictory view about anomalies in the first place. Thus Imre Lakatos and others, for example, have developed a 'siege mentality' about major scientific theories; they see them as under continual threat of refutation from anomalies, and so come to believe that dogmatism is essential in science if such theories are to survive as they do. I examine various such doomed attempts to reconcile Popper with the history of science. It is a common failure in this literature to conflate or to fail to see the need to distinguish a belief from a supposition, and an epistemic reason from a pragmatic reason. I argue that only if one does draw these distinctions can one give an adequate account of how anomalies are rationally dealt with in science. The other important strand in Popper's thinking about 'avoidance' of refutation which has seriously misled some of his commentators is his unfounded belief in the dangers of ad hoc hypotheses. I examine the accounts that a sample of such commentators provide of the trans-Uranian planet hypotheses of Leverrier and Adams. These commentators imply or assert what Popper only hints at, namely, that there is something fishy about this hypothesis. I provide a further defence of the rationality of entertaining this hypothesis at the time. 
I conclude with a few remarks about Popper's dilemma in respect of scientific practice and his long-standing emphasis on refutations.
Popper's Critical Rationalism presents Popper's views on science, knowledge, and inquiry, and examines the significance and tenability of these in light of recent developments in philosophy of science, philosophy of probability, and epistemology. It develops a fresh and novel philosophical position on science, which employs key insights from Popper while rejecting other elements of his philosophy. Central theses include: crucial questions about scientific method arise at the level of the group, rather than that of the individual; although criticism is vital for science, dogmatism is important too; belief in scientific theories is permissible even in the absence of evidence in their favour; the aim of science is to eliminate false theories; critical rationalism can be understood as a form of virtue epistemology. Contents: Ch.1 Comprehensive rationalism, critical rationalism, and pancritical rationalism -- Ch.2 Induction and corroboration -- Ch.3 Corroboration and the interpretation of probability -- Ch.4 Corroboration, tests, and predictivism -- Ch.5 Corroboration and Duhem's thesis -- Ch.6 The roles of criticism and dogmatism in science: a group level view -- Ch.7 The aim of science and its evolution -- Ch.8 Thoughts and findings.
Joseph Agassi. 1. Sir Karl Popper has offered two different theories of scientific progress, his theory of conjectures and refutations and corroboration, as well as his theory of verisimilitude increase. The former was attacked by some old-fashioned inductivists, yet is triumphant; the latter has been refuted by Tichy and by Miller to Popper's own satisfaction. Oddly, however, the theory of verisimilitude was developed because of some deficiency in the theory of corroboration, and though in its present precise formulation it was refuted, Popper still holds it in general terms, and I think he still hopes to find a better precise formulation of it. My aims in the present note are to pinpoint the deficiency of Popper's theory of corroboration and to use this for a precise formulation of verisimilitude increase acceptable to him. For my part, however, I see the situation in a different way, as will be indicated at the end of this note.
Entertaining diverse assumptions about empirical research, commentators give a wide range of verdicts on the NHSTP defence in Statistical significance. The null-hypothesis significance-test procedure (NHSTP) is defended in a framework in which deductive and inductive rules are deployed in theory corroboration in the spirit of Popper's Conjectures and refutations (1968b). The defensible hypothetico-deductive structure of the framework is used to make explicit the distinctions between (1) substantive and statistical hypotheses, (2) statistical alternative and conceptual alternative hypotheses, and (3) making statistical decisions and drawing theoretical conclusions. These distinctions make it easier to show that (1) H0 can be true, (2) the effect size is irrelevant to theory corroboration, and (3) "strong" hypotheses make no difference to NHSTP. Reservations about statistical power, meta-analysis, and the Bayesian approach are still warranted.
Empirical analyses of the ethics of corporations with the aim to improve the state of corporate ethics are rare. This paper develops an integrated, normative model of corporate ethics by conceptualizing the ethical quality of organizations and by relating this contextual quality to various expressions of immoral behavior. This so-called Ethics Qualities Model for organizations, which contains 21 ethical qualities, allows one to assess the ethical content of institutional groups of individuals. A proper conceptualization is highly relevant both for the empirical corroboration of business ethics theories and for managerial purposes, such as judging individual and group performance or informing external stakeholders. The empirical applicability of the model is illustrated by an explorative case study of a large, globally operating financial institution. This case study demonstrates that the corporate ethical qualities differ with respect to their perceived optimality as well as to their estimated impact on (un)ethical conduct. The various results provide managers with many clues to understand their organization and to take effective measures to improve the ethical content of their organization.
A cultural change occurred roughly 40,000 years ago. For the first time, there was evidence of belief in unseen agents and an afterlife. Before this time, humans did not show widespread evidence of being able to think about objects, persons, and other agents that they had not been in close contact with. I argue that one can explain this transition by appealing to a population increase resulting in greater exoteric (inter-group) communication. The increase in exoteric communication triggered the actualization of a dormant potential for greater syntactic computational power; specifically it triggered syntactic movement. Syntactic movement, in turn, made possible variable binding, which crucially figures into cognition by description, a naturalistic analogue of Russell's knowledge by description. Cognition by description made possible the ability to conceive of things one had never experienced, such as mythological beings, places only visited by the dead, and so forth. The Amazonian Pirahã provide some corroboration for this hypothesis, since they exhibit the combination of traits here attributed to Middle Paleolithic individuals, namely exclusively esoteric (intra-group) communication, evident lack of syntactic movement, and a limitation to knowledge (cognition) by acquaintance.
The article shows the affinity of Simmel's formal sociology with Husserl's notion of eidetic science. This thesis is demonstrated by the corroboration of Simmel's revision of neo-Kantian epistemology for sociology with Husserl's phenomenology, and the parallel discussion of Simmel and Husserl concerning cognitive levels and exact and morphological eide. Simmel's analysis of dyads is explored as an exemplar of his eidetic insights. An important consequence of this demonstration is the vindication of the scientific legitimacy of Simmel's methodology regarding the sociology of the forms of association. Woven throughout is discussion concerning the doctrine of the complementarity of eidetic and empirical science. Simmel's methodology is shown to have been ahead of its time through conjoining these two modes of scientific investigation.
Commenting on Atkinson's paper, I argue that leading to a successful real experiment is not the only scale on which a thought experiment's value is judged. Even the path from the original EPR thought experiment to Aspect's verification of the Bell inequalities was long-winded and involved considerable input from the sides of technology and mathematics. Von Neumann's construction of hidden variables was, moreover, a genuinely mathematical thought experiment that was successfully criticized by Bell. Such thought experiments are also possible in string theory, where any (non-trivial) empirical corroboration seems to be out of reach. Yet appraising mathematical thought experiments and their contribution to physical thought experiments requires a dynamical account which, in the spirit of Mach and Lakatos, attributes due weight to informal mathematical reasoning or empirical intuition.
This paper proposes to analyse the process that makes paths of action meaningful. It argues that this process is one of 'figuration'. The term 'figuration' intends to outline how the experience of moral meaning is one that already positively marks out a field, and to identify and analyse the mechanisms used for such marking and selection. It is my contention that these mechanisms predate the persuasion to a moral path; they are the process through which this path is constructed as meaningful. This thesis is elucidated through an analysis of the tactics of meaning in Kant's moral theory. Kant turns to aesthetics as a means of corroboration for his moral theory, but he also attempts to limit the scope of the interactions between his aesthetic and moral theory. For instance, when he writes on the topic of form in aesthetic taste or outlines the technical specifications of aesthetic judgment, it is arguably the arcane peculiarities of his system that are met. For this reason, Kant insists on the merely analogical relations between beauty and morality. However, it is also possible to see how certain aspects of Kant's aesthetic theory execute wider, and potentially more important, functions for his practical philosophy, such as providing meaningful orientation for the ascetic moral attitude of his duty-ethics. In this respect, certain figures of Kant's aesthetic theory may well be viewed as complementing the dependence in his moral philosophy, in the important sections on moral pedagogy and methodology, on appeals to heroic models and stories as ways of shaping and inculcating the moral disposition. This paper considers these aspects of interaction between Kant's aesthetic and moral philosophies as both 1) a problem for the consistency of his philosophy, given his avowed exclusion of aesthetic and religious elements of meaning in his duty-ethics; and 2) a case study for the new, schematic analysis of 'moral figuration' outlined in the paper.