I argue elsewhere (Roche 2014) that evidence of evidence is evidence under screening-off. Tal and Comesaña (2017) argue that my appeal to screening-off is subject to two objections. They then propose an evidence of evidence thesis involving the notion of a defeater. There is much to learn from their very careful discussion. I argue, though, that their objections fail and that their evidence of evidence thesis is open to counterexample.
In many fields of biology, both the phenomena to be explained and the mechanisms proposed to explain them are commonly presented in diagrams. Our interest is in how scientists construct such diagrams. Researchers begin with evidence, typically developed experimentally and presented in data graphs. To arrive at a robust diagram of the phenomenon or the mechanism, they must integrate a variety of data to construct a single, coherent representation. This process often begins as the researchers create a first sketch, and it continues over an extended period as they revise the sketch until they arrive at a diagram they find acceptable. We illustrate this process by examining the sketches developed in the course of two research projects directed at understanding the generation of circadian rhythms in cyanobacteria. One identified a new aspect of the phenomenon itself, whereas the other aimed to develop a new mechanistic account. In both cases, the research resulted in a paper in which the conclusion was presented in a diagram that the authors deemed adequate to convey it. These diagrams violate some of the normative “cognitive design principles” advanced by cognitive scientists as constraints on successful visual communication. We suggest that scientists’ sketching is instead governed by norms of success that are broadly explanatory: conveying the phenomenon or mechanism.
Human connection is universally important, particularly in the context of serious illness and at the end of life. The presence of close family and friends has many benefits when death is close. Hospital visitation restrictions during the Coronavirus pandemic therefore warrant careful consideration to ensure equity, proportionality, and the minimization of harm. The Australian and New Zealand Society for Palliative Medicine COVID-19 Special Interest Group utilized the relevant ethical and public health principles, together with the existing disease outbreak literature and evolving COVID-19 knowledge, to generate a practical framework of visiting restrictions for inpatients receiving palliative and end-of-life care. Expert advice from an Infectious Diseases physician ensured relevance to community transmission dynamics. Three graded levels of visitor restrictions for inpatient settings are proposed, defining an appropriate level of minimum access. These depend upon the level of community transmission of COVID-19, the demand on health services, the potential COVID-19 status of the patient and visitors, and the imminence of the patient’s death. This framework represents a cohesive, considered, proportionate, and ethically robust approach to improve equity and consistency for inpatients receiving palliative care during the COVID-19 pandemic and may serve as a template for future disease outbreaks.
Articulating the good of liberal education—what we should teach and why we should teach it—is necessary to resist the subversion of liberal education to economic or political ends and the mania for measurable skills. I argue that Iris Murdoch's philosophical writings enrich the work of contemporary Aristotelians, such as Joseph Dunne and Alasdair MacIntyre, on these issues. For Murdoch, studies in the arts and intellectual subjects, by connecting students to the inescapable contingency and finitude of human existence, contribute to the cultivation of intellectual and moral virtues and thus to human flourishing.
I develop a probabilistic account of coherence, and argue that at least in certain respects it is preferable to (at least some of) the main extant probabilistic accounts of coherence: (i) Igor Douven and Wouter Meijs’s account, (ii) Branden Fitelson’s account, (iii) Erik Olsson’s account, and (iv) Tomoji Shogenji’s account. Further, I relate the account to an important, but little discussed, problem for standard varieties of coherentism, viz., the “Problem of Justified Inconsistent Beliefs.”
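For orientation, two of the rival accounts named above have compact standard formulations in the literature (these are the published measures of Shogenji and Olsson, stated here for the two-belief case; they are not Roche's own proposal):

\[
C_S(A,B) \;=\; \frac{P(A \wedge B)}{P(A)\,P(B)}, \qquad
C_O(A,B) \;=\; \frac{P(A \wedge B)}{P(A \vee B)}.
\]

Shogenji's measure exceeds 1 just in case the beliefs are positively probabilistically relevant to one another, while Olsson's gauges their degree of overlap; Douven and Meijs's and Fitelson's accounts instead average confirmation scores across subsets of the belief set.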
In his recent article “Out of Control,” George Sher discusses a series of nine cases that he believes illustrate that some agents are uncontroversially morally responsible for actions they “cannot help” but perform (2006: 285). He argues that these agents exert partial control over these actions insofar as the actions are determined by their character; but this is no control at all. Here I argue that in each of these cases the agent exerts morally relevant control over her actions, and that none of these cases are genuine instances of moral luck, nor counterexamples to the control principle.
Ockham's views on many subjects have been misunderstood, including his views on ethics. This book is designed to avoid pitfalls that arise in reading medieval philosophy generally and Ockham in particular.
Helen Frowe (2006/2010) contends that there is a substantial moral difference between killing and letting die, arguing that in Michael Tooley's infamous machine case it is morally wrong to flip a coin to determine who lives or dies. Here I argue that Frowe fails to show that killing and letting die are morally inequivalent. However, I believe that she has succeeded in showing that it is wrong to press the button in Tooley's case, where pressing the button will change who lives and dies. I argue that because killing and letting die are morally equivalent we have no reason to press the button in the machine case. Pressing the button in this case is morally wrong because there is no reason to do it; to press the button is to treat matters of life and death irreverently.
In "Should Race Matter?," David Boonin proposes the compensation principle: When an agent wrongfully harms another person, she incurs a moral obligation to compensate that person for the harms she has caused. Boonin then argues that the United States government has wrongfully harmed black Americans by adopting pro-slavery laws and other discriminatory laws and practices following the end of slavery, and therefore the United States government has an obligation to pay reparations for slavery and discriminatory laws and practices to those (...) who have been harmed by them - in particular, to contemporary black Americans. Here I argue that the compensation principle is false because it violates the control principle, the foundational principle of ethics that states that moral responsibility requires control; for an agent to be morally responsible for something, whether or not she does that thing must be within her control. If the compensation principle creates a moral obligation for an agent to compensate a harmed party, failure to do so will result in that agent's being morally blameworthy for failing in her obligation. Because some harms cannot be compensated for, agents who wrongfully harm others will be required to do something that is outside of their control. (shrink)
There are numerous (Bayesian) confirmation measures in the literature. Festa provides a formal characterization of a certain class of such measures. He calls the members of this class “incremental measures”. Festa then introduces six rather interesting properties called “Matthew properties” and puts forward two theses, hereafter “T1” and “T2”, concerning which of the various extant incremental measures have which of the various Matthew properties. Festa’s discussion is potentially helpful with the problem of measure sensitivity. I argue that, while Festa’s discussion is illuminating on the whole and worthy of careful study, T1 and T2 are strictly speaking incorrect (though on the right track) and should be rejected in favor of two similar but distinct theses.
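For readers new to this literature, an incremental measure is one on which evidence E confirms hypothesis H just in case E raises H's probability. Three standard examples (textbook formulations, not drawn from Festa's paper) are the difference, log-ratio, and log-likelihood measures:

\[
d(H,E) = P(H \mid E) - P(H), \qquad
r(H,E) = \log \frac{P(H \mid E)}{P(H)}, \qquad
l(H,E) = \log \frac{P(E \mid H)}{P(E \mid \neg H)}.
\]

Each is positive exactly when \(P(H \mid E) > P(H)\), zero when E is probabilistically irrelevant to H, and negative otherwise; the problem of measure sensitivity arises because such measures, though they agree on the direction of confirmation, can disagree in their comparative orderings.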
The goal of the Department of Defense Net-Centric Data Strategy is to improve data sharing throughout the DoD. Data sharing is a critical element of interoperability in the emerging system-of-systems. Achieving interoperability requires the elimination of two types of data heterogeneity: differences of syntax and differences of semantics. This paper builds a path toward semantic uniformity through application of a disciplined approach to ontology. An ontology is a consensus framework representing the types of entities within a given domain and the relations between them. The construction of an ontology begins when a Community of Interest (COI) identifies its authoritative data sources (ADS), which are usually manifest in relevant doctrinal publications, glossaries, data dictionaries, and logical data models. The identified terms are then defined in relation to a common logical framework that has been designed to ensure interoperability with other ontologies created on the basis of the same strategy. As will be described, the Command and Control (C2) Ontology will include representations of a substantial number of entities within the C2 domain. If domain ontologies (e.g., Strike and Counterinsurgency) semantically align with the C2 Ontology, then a substantial barrier to systems interoperability is thereby overcome.
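The alignment idea can be illustrated with a minimal sketch: if every domain term is defined as a subtype of a class in one shared upper-level framework, then any two such terms are guaranteed to meet at a common root. The class names below are hypothetical placeholders, not terms from the actual C2 Ontology:

from dataclasses import dataclass

@dataclass(eq=False)
class OntologyClass:
    """A type in an ontology, linked to its parent by an is_a relation."""
    name: str
    parent: "OntologyClass | None" = None

    def root(self) -> "OntologyClass":
        # Walk the is_a chain up to the top of the shared framework.
        node = self
        while node.parent is not None:
            node = node.parent
        return node

# Shared upper-level framework (hypothetical names).
ENTITY = OntologyClass("Entity")
PROCESS = OntologyClass("Process", ENTITY)
INFORMATION_ARTIFACT = OntologyClass("InformationArtifact", ENTITY)

# Terms from two domain ontologies, each defined against the same framework.
c2_order = OntologyClass("C2:Order", INFORMATION_ARTIFACT)
strike_mission = OntologyClass("Strike:Mission", PROCESS)

def semantically_aligned(a: OntologyClass, b: OntologyClass) -> bool:
    """Two terms align when their is_a chains share a single root."""
    return a.root() is b.root()

print(semantically_aligned(c2_order, strike_mission))  # True: both trace to Entity

In practice this role is played by a formal upper-level ontology rather than a toy class hierarchy, but the interoperability payoff is the same: agreement at the upper level propagates down to the domain terms.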
According to Michael Friedman’s theory of explanation, a law X explains laws Y1, Y2, …, Yn precisely when X unifies the Y’s, where unification is understood in terms of reducing the number of independently acceptable laws. Philip Kitcher criticized Friedman’s theory but did not analyze the concept of independent acceptability. Here we show that Kitcher’s objection can be met by modifying an element in Friedman’s account. In addition, we argue that there are serious objections to the use that Friedman makes of the concept of independent acceptability.
This article contends that recent attempts to construct Frankfurt-style cases (FSCs) are irrelevant to the debate over free will. The principle of alternate possibilities (PAP) states that moral responsibility requires indeterminism, or multiple possible futures. Frankfurt's original case purported to demonstrate that PAP is false by showing that an agent can be blameworthy despite not having the ability to choose otherwise; however, Frankfurt admits that the agent can come to that choice freely or by force, and thus has alternate possibilities. Neo-FSCs attempt to show that alternate possibilities are irrelevant to explaining an agent's moral responsibility, but a successful Neo-FSC would be consistent with the truth of PAP, and thus is silent on the big metaphysical issues at the center of the free will debate.
Can a perceptual experience justify (epistemically) a belief? More generally, can a nonbelief justify a belief? Coherentists answer in the negative: Only a belief can justify a belief. A perceptual experience can cause a belief but cannot justify a belief. Coherentists eschew all noninferential justification—justification independent of evidential support from beliefs—and, with it, the idea that justification has a foundation. Instead, justification is holistic in structure. Beliefs are justified together, not in isolation, as members of a coherent belief system. The main question of the paper is whether coherentism is consistent. I set out an apparent inconsistency in coherentism and then give a resolution to that apparent inconsistency.
Ted Poston's Reason and Explanation: A Defense of Explanatory Coherentism is a book worthy of careful study. Poston develops and defends an explanationist theory of (epistemic) justification on which justification is a matter of explanatory coherence, which in turn is a matter of conservativeness, explanatory power, and simplicity. He argues that his theory is consistent with Bayesianism. He argues, moreover, that his theory is needed as a supplement to Bayesianism. There are seven chapters. I provide a chapter-by-chapter summary along with some substantive concerns.
I develop a syntactic concept of circularity, which I call propositional circularity. With respect to a given use of an argument advanced as a statement of inference for the benefit of a reasoner R, if the direct and indirect premises R would have to accept in order to accept the conclusion include the conclusion itself, then the collection of premises is propositionally circular. The argument fails to display a type of inference that R can perform. Appealing to propositional circularity, I articulate a sufficient condition for a use of an argument to beg the question, highlighting why question-begging is a defect.
In 2001, leaders in palliative care convened to discuss the standardization of palliative care and formed the National Consensus Project for Quality Palliative Care. In 2004, the Project produced the first edition of Clinical Guidelines for Quality Palliative Care. The Guidelines were developed by leaders in the field who examined other national and international standards with the intent to promote consistent, accessible, comprehensive, optimal palliative care across the health care spectrum. Within the guidelines there are eight domains of palliative care provision. This article focuses on the last but very significant one, Domain 8: Ethical and Legal Aspects of Care.
Research on religion can advance understanding of social cognition by building connections to sociology, a field in which much cognitively oriented work has been done. Among the schools of sociological thought that address religious cognition are structural functionalism, symbolic interactionism, conflict theory, phenomenology, and, most recently, exchange theory. The gulf between sociology and cognitive science is an unfortunate historical accident.
This monograph explores how seven prominent German and Austrian novelists of the twentieth century—Franz Kafka, Thomas Mann, Anna Seghers, Uwe Johnson, Ingeborg Bachmann, Wolfgang Hilbig, and Marlene Streeruwitz—conveyed their literary figures' time spent waiting. By presenting states of waiting as emblematic of human existence in the turbulent twentieth century, these writers criticized hierarchical power structures in various historical contexts. Killing Time presents fresh readings of seven German-language novels, while providing insights into how and why German and Austrian writers repeatedly turned to the waiting motif to expose the injustices inherent in interpersonal, political, and social hierarchies. In investigating the treatment of waiting in literary texts, William reexamines how prominent philosophers of metaphor and time influenced German and Austrian writers of the past century. This study is underpinned in part by the work of cultural and social theorists who have emphasized how the liminal status of the subjugated within social hierarchies ensures that they are kept perpetually waiting.
1. Dialogus de imperio et pontificia potestate.
2. Compendium errorum Joannis XXII. Opus 90 dierum. Littere Fr. Michaelis de Cesena. Octo questionum decisiones.
3. Super 4 libros sententiarum: In sententiarum I.
4. Super 4 libros sententiarum: In sententiarum II-IV. Centilogium tabule.