Throughout the biological and biomedical sciences, prescriptive ‘minimum information’ (MI) checklists specifying the key information to include when reporting experimental results are beginning to find favor with experimentalists, analysts, publishers and funders alike. Such checklists aim to ensure that methods, data, analyses and results are described to a level sufficient to support the unambiguous interpretation, sophisticated search, reanalysis and experimental corroboration and reuse of data sets, facilitating the extraction of maximum value from them. However, such MI checklists are usually developed independently by groups working within particular biologically or technologically delineated domains. Consequently, an overview of the full range of checklists can be difficult to establish without intensive searching, and even tracking the evolution of a single checklist may be a non-trivial exercise. Checklists are also inevitably partially redundant with one another, and where they overlap is far from straightforward to establish. Furthermore, conflicts in scope and arbitrary decisions on wording and sub-structuring make integration difficult and inhibit their use in combination. Overall, these issues present significant difficulties for the users of checklists, especially those in areas such as systems biology, who routinely combine information from multiple biological domains and technology platforms. To address all of the above, we present MIBBI (Minimum Information for Biological and Biomedical Investigations): a web-based communal resource for such checklists, designed to act as a ‘one-stop shop’ for those exploring the range of extant checklist projects, to foster collaborative, integrative development and, ultimately, to promote the gradual integration of checklists.
There is a widespread attitude in epistemology that, if you know on the basis of perception, then you couldn’t have been wrong as a matter of chance. Despite the apparent intuitive plausibility of this attitude, which I’ll refer to here as “stochastic infallibilism”, it fundamentally misunderstands the way that human perceptual systems actually work. Perhaps the most important lesson of signal detection theory (SDT) is that our percepts are inherently subject to random error, and here I’ll highlight some key empirical research that underscores this point. In doing so, it becomes clear that we are in fact quite willing to attribute knowledge to S that p even when S’s perceptual belief that p could have been randomly false. In short, perceptual processes can randomly fail, and perceptual knowledge is stochastically fallible. The narrow implication here is that any epistemological account that entails stochastic infallibilism, like safety, is simply untenable. More broadly, this myth of stochastic infallibilism provides a valuable illustration of the importance of integrating empirical findings into epistemological thinking.
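The SDT point above can be made concrete with a minimal simulation. The sketch below uses the standard equal-variance Gaussian model; the sensitivity and criterion values are arbitrary assumptions for illustration, not figures drawn from the empirical literature the abstract cites:

```python
import random

# Minimal equal-variance Gaussian signal detection model (illustrative
# sketch only; parameter values are assumed, not taken from the paper).
random.seed(0)

D_PRIME = 1.5      # assumed perceptual sensitivity (signal-noise separation)
CRITERION = 0.75   # assumed decision criterion
N_TRIALS = 100_000

# On signal trials the internal response is centred on d'; on noise
# trials it is centred on 0. A "signal present" report occurs whenever
# the noisy internal response exceeds the criterion.
hits = sum(random.gauss(D_PRIME, 1) > CRITERION for _ in range(N_TRIALS))
false_alarms = sum(random.gauss(0, 1) > CRITERION for _ in range(N_TRIALS))

hit_rate = hits / N_TRIALS
fa_rate = false_alarms / N_TRIALS

# Even with genuine sensitivity (d' > 0), noise alone sometimes crosses
# the criterion: the observer reports a signal that is not there, and
# sometimes misses one that is. Perceptual verdicts are randomly fallible.
print(f"hit rate: {hit_rate:.3f}, false-alarm rate: {fa_rate:.3f}")
```

The point of the sketch is simply that no criterion setting eliminates random error: for any finite d′, both the miss rate and the false-alarm rate remain strictly positive.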
On the standard conception of risk, the degree to which an event is risky is a function of the probability of that event. Recently, Duncan Pritchard has challenged this view, proposing instead a modal account on which risk is conceived of in terms of modal ordering (2015). On this account, the degree of risk for any given event is a function of its modal distance from the actual world, not its likelihood. Pritchard's main motivation for this is that the probabilistic account cannot always explain our judgements about risk. In certain cases, equally probable events are not judged to be equally risky. Here I will argue that Pritchard's account succumbs to a similar problem. Put simply, there are cases in which judgements about risk decouple from both probability and modal ordering. Thus, if we want a theory of risk that can explain our judgements about risk, neither the probabilistic nor modal account is successful.
There are brains in vats in the actual world. These “cerebral organoids” are roughly comparable to the brains of three-month-old foetuses, and conscious cerebral organoids seem only a matter of time. Philosophical interest in conscious cerebral organoids has thus far been limited to bioethics, and the purpose of this paper is to discuss cerebral organoids in an epistemological context. In doing so, I will argue that it is now clear that there are close possible worlds in which we are BIVs. Not only does this solidify our intuitive judgement that we cannot know that we are not BIVs, but it poses a fundamental problem for both the neo-Moorean antisceptical strategy, which purports to allow us to know that we aren’t BIVs, and the safety condition on knowledge itself. Accordingly, this case is especially instructive in illustrating just how epistemologically relevant empirical developments can be.
This unique collection of essays on the late Pierre Hadot’s revolutionary approach to studying and practising philosophy traces the links between his work and that of thinkers from Wittgenstein to the French postmodernists. It shows how his secular spiritual exercises expand our horizons, enabling us to be in a fuller, more authentic way. The volume offers a comprehensive treatment of a neglected theme: philosophy’s practical relevance in our lives. Its interdisciplinary analysis reflects the wide influence of Hadot’s thought; it explores the links between Hadot’s ideas and those of a wealth of ancient and modern thinkers, including the French postmodernists; and it offers a practical ‘third way’ in philosophy beyond the dichotomy of Continental and analytical traditions.
One of the most widely recognised intuitions about knowledge is that knowing precludes believing truly as a matter of luck. On Pritchard’s highly influential modal account of epistemic luck, luckily true beliefs are, roughly, those for which there are many close possible worlds in which the same belief formed in the same way is false. My aim is to introduce a new challenge to this account. Starting from the observation—as documented by a number of recent EEG studies—that our capacity to detect visual stimuli fluctuates with the phase of our neural oscillations, I argue that there can be very close possible worlds in which an actual-world detectable stimulus is undetectable. However, this doesn’t diminish our willingness to attribute knowledge in the case that the stimulus is detectable, even when undetectability would result in the same belief formed in the same way being false. As I will argue at length, the modal account appears unable to accommodate this result.
The emerging neurocomputational vision of humans as embodied, ecologically embedded, social agents—who shape and are shaped by their environment—offers a golden opportunity to revisit and revise ideas about the physical and information-theoretic underpinnings of life, mind, and consciousness itself. In particular, the active inference framework (AIF) makes it possible to bridge connections from computational neuroscience and robotics/AI to ecological psychology and phenomenology, revealing common underpinnings and overcoming key limitations. AIF opposes the mechanistic to the reductive, while staying fully grounded in a naturalistic and information-theoretic foundation, using the principle of free energy minimization. The latter provides a theoretical basis for a unified treatment of particles, organisms, and interactive machines, spanning from the inorganic to organic, non-life to life, and natural to artificial agents. We provide a brief introduction to AIF, then explore its implications for evolutionary theory, ecological psychology, embodied phenomenology, and robotics/AI research. We conclude the paper by considering implications for machine consciousness.
Multiple epistemological programs make use of intuitive judgments pertaining to an individual’s ability to gain knowledge from exclusively probabilistic/statistical information. This paper argues that these judgments likely form without deference to such information, instead being a function of the degree to which having knowledge is representative of an agent. Thus, these judgments fit the pattern of formation via a representativeness heuristic, like that famously described by Kahneman and Tversky to explain similar probabilistic judgments. Given this broad insensitivity to probabilistic/statistical information, it directly follows that these epistemic judgments are insensitive to a given agent’s epistemic status. From this, the paper concludes that, breaking with common epistemological practice, we cannot assume that such judgments are reliable.
The problem of historiographical evaluation is simply this: By what evaluative criteria might we say that certain works of historiography are better than others? One recently proposed solution to this problem comes by way of Kuukkanen’s postnarrativist philosophy of historiography. Kuukkanen argues that because many historiographically interesting statements lack truth-values, we cannot evaluate historiographical claims on a truth-functional basis. In the place of truth, Kuukkanen suggests that we evaluate historiographical claims in terms of justification. The problem with this proposal, as I will argue here, is that it isn’t at all clear what it means for a neither-true-nor-false claim to be justified. Moreover, this proposal also runs into trouble with the factivity of knowledge. The solution I propose here might be called “two-valued” postnarrativism, which retains Kuukkanen’s framework, except with a stricter ontology devoid of neither-true-nor-false historiographical statements.
Despite the ubiquity of knowledge attribution in human social cognition, its associated neural and cognitive mechanisms are poorly documented. A wealth of converging evidence in cognitive neuroscience has identified independent perspective-taking and inhibitory processes for belief attribution, but the extent to which these processes are shared by knowledge attribution isn't presently understood. Here, we present the findings of an EEG study designed to directly address this shortcoming. These findings suggest that belief attribution is not a component process in knowledge attribution, contra a standard attitude taken by philosophers. Instead, observed differences in P3b amplitude indicate that knowledge attribution doesn't recruit the strong self-perspective inhibition characteristic of belief attribution. However, both belief and knowledge attribution were observed to display a late slow wave widely associated with mental state attribution, indicating that knowledge attribution also shares in more general processing of others' mental states. These results provide a new perspective on how we think about knowledge attribution, as well as on Theory of Mind processes generally.
It is all but universally accepted in epistemology that knowledge is factive: S knows that p only if p. The purpose of this thesis is to present an argument against the factivity of knowledge and in doing so develop a non-factive approach to the analysis of knowledge. The argument against factivity presented here rests largely on empirical evidence, especially extant research into visuomotor noise, which suggests that the beliefs that guide everyday motor action are not strictly true. However, as we still want to attribute knowledge on the basis of successful motor action, I argue that the best option is to replace factivity with a weaker constraint on knowledge, one on which certain false beliefs might still be known. In defence of this point, I develop the non-factive analysis of knowledge, which demonstrates that a non-factive constraint might do the same theoretical work as factivity.
_Paradoxes from A to Z, Third edition_ is the essential guide to paradoxes, and takes the reader on a lively tour of puzzles that have taxed thinkers from Zeno to Galileo, and Lewis Carroll to Bertrand Russell. Michael Clark uncovers an array of conundrums, such as Achilles and the Tortoise, Theseus’ Ship, and the Prisoner’s Dilemma, taking in subjects as diverse as knowledge, science, art and politics. Clark discusses each paradox in non-technical terms, considering its significance and looking at likely solutions. This third edition is revised throughout, and adds nine new paradoxes that have important bearings in areas such as law, logic, ethics and probability. Paradoxes from A to Z, Third edition is an ideal starting point not just for those interested in philosophical puzzles and conundrums, but for anyone seeking to hone their thinking skills.
There is currently an explosion of interest in grounding. In this article we provide an overview of the debate so far. We begin by introducing the concept of grounding, before discussing several kinds of scepticism about the topic. We then identify a range of central questions in the theory of grounding and discuss competing answers to them that have emerged in the debate. We close by raising some questions that have been relatively neglected but which warrant further attention.
This essential guide to paradoxes takes the reader on a lively tour of puzzles that have taxed thinkers from Zeno to Galileo and Lewis Carroll to Bertrand Russell. Michael Clark uncovers an array of conundrums, such as Achilles and the Tortoise, Theseus' Ship, Hempel's Raven, and the Prisoners' Dilemma, taking in subjects as diverse as knowledge, ethics, science, art and politics. Clark discusses each paradox in non-technical terms, considering its significance and looking at likely solutions.
The possibility of extended cognition invites the possibility of extended knowledge. We examine what is minimally required for such forms of technologically extended knowledge to arise and whether existing and future technologies can allow for such forms of epistemic extension. Answering in the positive, we explore some of the ensuing transformations in the ethical obligations and personal rights of the resulting ‘new humans’.
First, a theoretical background to the volume’s topic, extended epistemology, is provided by a brief outline of its cross-disciplinary theoretical lineage and some key themes. In particular, it is shown how and why the emergence of recent and more egalitarian thinking in the cognitive sciences about the nature of human cognizing and its bounds—viz., the so-called ‘extended cognition’ program, and the related idea of an ‘extended mind’—has important and interesting ramifications in epistemology. Second, an overview is provided of the papers included as chapters in the volume. The sixteen contributions are divided into two categories: those that engage with foundational issues to do with extended epistemology, and those that pursue applications of extended epistemology to new areas of research.
Previous claims to have resolved the two-envelope paradox have been premature. The paradoxical argument has been exposed as manifestly fallacious if there is an upper limit to the amount of money that may be put in an envelope; but the paradoxical cases which can be described if this limitation is removed do not involve mathematical error, nor can they be explained away in terms of the strangeness of infinity. Only by taking account of the partial sums of the infinite series of expected gains can the paradox be resolved.
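Why partial sums matter here can be illustrated with a small computation. The sketch below uses a Broome-style unbounded distribution as an assumed example (not necessarily the construction used in the paper): pair n holds amounts (2^n, 2^(n+1)) with probability 2^n / 3^(n+1). The series of expected gains from swapping is divergent, so its partial sums depend on how the terms are grouped, which is precisely the kind of behaviour the abstract appeals to:

```python
from fractions import Fraction

# Assumed example distribution (Broome-style): envelope pair n contains
# (2^n, 2^(n+1)) with probability 2^n / 3^(n+1); these probabilities sum to 1.
N = 20  # number of terms taken in each partial sum

def pair_prob(n):
    return Fraction(2**n, 3**(n + 1))

# Grouping 1: by envelope pair. Within pair n you hold each envelope with
# probability 1/2, so swapping gains +2^n or loses 2^n — every term is 0.
by_pair = [Fraction(1, 2) * pair_prob(n) * (2**n - 2**n) for n in range(N)]

# Grouping 2: by amount held. Holding 2^k (k >= 1) means you hold either
# the smaller envelope of pair k (swapping gains 2^k) or the larger
# envelope of pair k-1 (swapping loses 2^(k-1)) — every term is positive.
by_amount = [
    Fraction(1, 2) * (pair_prob(k) * 2**k - pair_prob(k - 1) * 2**(k - 1))
    for k in range(1, N)
]

# Same series of swap gains, two groupings: one sequence of partial sums
# sits at 0, the other grows without bound.
print("partial sum grouped by pair:  ", sum(by_pair))
print("partial sum grouped by amount:", float(sum(by_amount)))
```

Exact rational arithmetic via `fractions.Fraction` is used so the by-pair partial sum is exactly 0 rather than a floating-point near-zero; the contrast between the two groupings is the whole point of attending to partial sums rather than to the (undefined) total.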
Strange inversions occur when things work in ways that turn received wisdom upside down. Hume offered a strangely inverted story about causation, and Darwin, about apparent design. Dennett suggests that a strange inversion also occurs when we project our own reactive complexes outward, painting our world with elusive properties like cuteness, sweetness, blueness, sexiness, funniness, and more. Such properties strike us as experiential causes, but they are really effects—a kind of shorthand for whole sets of reactive dispositions rooted in the nuts and bolts of human information processing. Understanding the nature and origins of that strange inversion, Dennett believes, is thus key to understanding the nature and origins of human experience itself. This paper examines this claim, paying special attention to recent formulations that link that strange inversion to the emerging vision of the brain as a Bayesian estimator, constantly seeking to predict the unfolding sensory barrage.
The question “What is humour?” has exercised in varying degrees such philosophers as Aristotle, Hobbes, Hume, Kant, Schopenhauer and Bergson and has traditionally been regarded as a philosophical question. And surely it must still be regarded as a philosophical question at least in so far as it is treated as a conceptual one. Traditionally the question has been regarded as a search for the essence of humour, whereas nowadays it has become almost a reflex response among some philosophers to dismiss the search for essences as misconceived. Humour, it will be said, is a family-resemblance concept: no one could hope to compile any short list of essential properties abstracted from all the many varieties of humour—human misfortune and clumsiness, obscenity, grotesqueness, veiled insult, nonsense, wordplay and puns, human misdemeanours and so on, as manifested in forms as varied as parody, satire, drama, clowning, music, farce and cartoons. Yet even if the search for the essence of humour seems at first sight unlikely to succeed, I do not see how we can be sure in advance of any conceptual investigation; and in any case we might do well to start with the old established theories purporting to give the essence of humour, for even if they are wrong they may be illuminatingly wrong and may help us to compile a list of typical characteristics.
In May 2010, philosophers, family and friends gathered at the University of Notre Dame to celebrate the career and retirement of Alvin Plantinga, widely recognized as one of the world's leading figures in metaphysics, epistemology, and the philosophy of religion. Plantinga has earned particular respect within the community of Christian philosophers for the pivotal role that he played in the recent renewal and development of philosophy of religion and philosophical theology. Each of the essays in this volume engages with some particular aspect of Plantinga's views on metaphysics, epistemology, or philosophy of religion. Contributors include Michael Bergman, Ernest Sosa, Trenton Merricks, Richard Otte, Peter van Inwagen, Thomas P. Flint, Eleonore Stump, Dean Zimmerman and Nicholas Wolterstorff. The volume also includes responses to each essay by Bas van Fraassen, Stephen Wykstra, David VanderLaan, Robin Collins, Raymond VanArragon, E. J. Coffman, Thomas Crisp, and Donald Smith.
Chapter 1 presents BS, a basic syllogistic system based on Aristotle's logic, in natural deduction form. Chapters 2 and 3 treat the metatheory of BS: consistency, soundness, independence, and completeness. Chapters 4 and 5 deal with syllogistic and, in turn, propositional and predicate logic; chapter 6 is on existential import, chapter 7 on subject and predicate, and chapter 8 on classes. Chapter 9 adds negative variables to BS, and proves its soundness and completeness.
I argue that plausible claims in the logic of partial grounding, when combined with a plausible analysis of that concept, entail the falsity of plausible grounding claims. As our account of the concept of partial grounding and its logic should be consistent with plausible grounding claims, this is problematic. The argument hinges on the idea that some facts about what grounds what are grounded in others, which is an idea the paper aims to motivate.
Recently, Kroedel and Schulz have argued that the exclusion problem—which states that certain forms of non-reductive physicalism about the mental are committed to systematic and objectionable causal overdetermination—can be solved by appealing to grounding. Specifically, they defend a principle that links the causal relations of grounded mental events to those of grounding physical events, arguing that this renders mental–physical causal overdetermination unproblematic. Here, we contest Kroedel and Schulz’s result. We argue that their causal-grounding principle is undermotivated, if not outright false. In particular, we contend that the principle has plausible counterexamples, resulting from the fact that some mental states are not fully grounded by goings on ‘in our heads’ but also require external factors to be included in their full grounds. We draw the sceptical conclusion that it remains unclear whether non-reductive physicalists can plausibly respond to the exclusion argument by appealing to considerations of grounding.
If there are facts about what grounds what, are there any grounding relations between them? This paper suggests so, arguing that transitivity and amalgamation principles in the logic of grounding yield facts of grounding that are grounded by others. I develop and defend this view and note that combining it with extant accounts of iterated grounding commits us to seemingly problematic instances of ground-theoretic overdetermination. Taking the superinternality thesis as a case study, I discuss how defenders of this thesis should respond. It emerges that our discussion puts pressure on superinternalists to make an interesting qualification to their view: to only regard as a fundamental metaphysical law a version of the superinternality thesis that is restricted to minimal and immediate grounding.
We introduce the CUBISM system for the analysis and deep understanding of multi-participant dialogues. CUBISM brings together two typically separate forms of discourse analysis: semantic analysis and sociolinguistic analysis. In the paper proper, we describe and illustrate major components of the CUBISM system, and discuss the challenge posed by the system’s ultimate purpose, which is to automatically detect anomalous changes in participants’ expressed or implied beliefs about the world and each other, including shifts toward or away from cultural and community norms.
Extended Cognition examines the way in which features of a subject's cognitive environment can become constituent parts of the cognitive process itself. This volume explores the epistemological ramifications of this idea, bringing together academics from a variety of different areas, to investigate the very idea of an extended epistemology.
This volume explores the epistemology of distributed cognition, the idea that groups of people can generate cognitive systems that consist of all participating members. Can distributed cognitive systems generate knowledge in a similar way to individuals? If so, how does this kind of knowledge differ from normal, individual knowledge?
Background: In 2006, the Centers for Disease Control and Prevention (CDC) recommended three changes to HIV testing methods in US healthcare settings: (1) an opt-out approach, (2) removal of separate signed consent, and (3) optional HIV prevention counseling. These recommendations led to a public debate about their moral acceptability. Methods: We interviewed 25 members from the fields of US HIV advocacy, care, policy, and research about the ethical merits and demerits of the three changes to HIV testing methods. We performed a qualitative analysis of the participant responses in the interviews and summarized the major themes. Results: In general, arguments in favor of the methods were based upon their ultimate contribution to increasing HIV testing and permitting the consequent benefits of identifying those who are HIV infected and linking them to further care. Conclusions: The prevailing theme of ethical concern focused on suspicions that the methods might not be properly implemented, and that further safeguards might be needed.
For many, the case of the Exxon Valdez oil spill has become a symbol of unethical corporate behavior. Had Exxon’s managers not callously pursued their own interests at the expense of the environment and other parties, the accident would not have happened. In this paper, we (1) present a short case study of the Valdez incident; (2) argue that many analyses of the case either ignore or fail to give sufficient weight to the uncertainties managers often face when they make decisions; and (3) propose a framework for moral management grounded in principles of communicative ethics, moral dialogue, and in the non-traditional ideas of many current management and behavioral decision theorists. From this view, the moral manager is not expected to know the “correct” answer to every ethical issue, but rather to participate responsibly in an open dialogue with other interested parties.
As the functional capabilities of high-tech medical products converge, supplying organizations seek new opportunities to differentiate their offerings. Embracing product sustainability-related differentiators provides just such an opportunity. This study examines the challenge organizations face when attempting to understand how customers perceive environmental and social dimensions of sustainability by exploring and defining both dimensions on the basis of a review of extant literature and focus group research with a leading supplier of magnetic resonance imaging (MRI) scanning equipment. The study encompasses seven hospitals and one private imaging center in the Netherlands and identifies five social aspects that cover 11 indicators. The authors conduct 22 customer perception interviews with key decision-making stakeholders involved in purchasing MRI scanning equipment. Respondents find environmental and social sustainability dimensions personally relevant but professionally secondary to cost, performance, and ability to use the equipment in their organizations' physical infrastructure. Finally, incorporating a product's environmental and social credentials within the marketing of MRI scanning equipment enhances the perception of the product offering in decision-making stakeholders' minds and provides a means of differentiation.