Immunology researchers are beginning to explore the possibilities of reproducibility, reuse and secondary analyses of immunology data. Open-access datasets are being used to validate the methods of the original studies, to leverage studies for meta-analysis, and to generate new hypotheses. To promote these goals, the ImmPort data repository was created for the broader research community to explore the wide spectrum of clinical and basic research data and associated findings. The ImmPort ecosystem consists of four components—Private Data, Shared Data, Data Analysis, and Resources—for data archiving, dissemination, analysis, and reuse. To date, more than 300 studies have been made freely available through the ImmPort Shared Data portal, which allows research data to be repurposed to accelerate the translation of new insights into discoveries.
According to the PubMed resource from the U.S. National Library of Medicine, over 750,000 scientific articles were published in the ~5,000 biomedical journals worldwide in the year 2007 alone. The vast majority of these publications include results from hypothesis-driven experimentation in overlapping biomedical research domains. Unfortunately, the sheer volume of information being generated by the biomedical research enterprise has made it virtually impossible for investigators to stay aware of the latest findings in their domain of interest, let alone to assimilate and mine data from related investigations for purposes of meta-analysis. While computers have the potential to assist investigators in the extraction, management and analysis of these data, the information contained in the traditional journal publication consists largely of unstructured, free-text descriptions of study design, experimental application and results interpretation, making it difficult for computers to access the content being conveyed without significant manual intervention. To circumvent these roadblocks and make the most of the output of the biomedical research enterprise, a variety of related standards in knowledge representation are being developed, proposed and adopted in the biomedical community. In this chapter, we explore the current status of efforts to develop minimum information standards for the representation of a biomedical experiment, ontologies composed of shared vocabularies assembled into subsumption hierarchies, and extensible relational data models that link these information components together in a machine-readable and human-usable framework for data mining.
A conditional is natural if it fulfils the three following conditions: it coincides with the classical conditional when restricted to the classical values T and F; it satisfies Modus Ponens; and it is assigned a designated value whenever the value assigned to its antecedent is less than or equal to the value assigned to its consequent. The aim of this paper is to provide a ‘bivalent’ Belnap-Dunn semantics for all natural implicative expansions of Kleene's strong 3-valued matrix with two designated elements.
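The three conditions above are concrete enough to check mechanically. Below is a minimal sketch (our own illustration, not code from the paper) that tests whether a 3-valued implication table is natural relative to a chosen set of designated values; the encoding of the values as 0 (F), 1 (intermediate), 2 (T) and the names `is_natural`, `luk`, and `kleene` are our own conventions.

```python
def is_natural(arrow, designated):
    """arrow: dict mapping (a, b) -> value of a -> b; designated: set of values."""
    vals = (0, 1, 2)
    # (i) coincides with the classical conditional on the classical values 0 and 2
    classical = {(0, 0): 2, (0, 2): 2, (2, 0): 0, (2, 2): 2}
    if any(arrow[pair] != v for pair, v in classical.items()):
        return False
    for a in vals:
        for b in vals:
            # (ii) Modus Ponens: designated antecedent and designated
            # conditional force a designated consequent
            if a in designated and arrow[(a, b)] in designated and b not in designated:
                return False
            # (iii) designated whenever antecedent <= consequent
            if a <= b and arrow[(a, b)] not in designated:
                return False
    return True

# Lukasiewicz implication: a -> b = min(2, 2 - a + b)
luk = {(a, b): min(2, 2 - a + b) for a in (0, 1, 2) for b in (0, 1, 2)}
# Kleene's material conditional: a -> b = max(2 - a, b)
kleene = {(a, b): max(2 - a, b) for a in (0, 1, 2) for b in (0, 1, 2)}
```

Under this encoding, Łukasiewicz implication comes out natural when only the top value is designated, while Kleene's material conditional fails Modus Ponens when the two non-false values are designated.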
This paper provides a naturalistic account of inference. We posit that the core of inference is constituted by bare inferential transitions (BITs), transitions between discursive mental representations guided by rules built into the architecture of cognitive systems. In further developing the concept of BITs, we provide an account of what Boghossian calls ‘taking’—that is, the appreciation of the rule that guides an inferential transition. We argue that BITs are sufficient for implicit taking, and then, to analyse explicit taking, we posit rich inferential transitions, which are transitions that the subject is disposed to endorse.
Perceptual systems respond to proximal stimuli by forming mental representations of distal stimuli. A central goal for the philosophy of perception is to characterize the representations delivered by perceptual systems. It may be that all perceptual representations are in some way proprietarily perceptual and differ from the representational format of thought (Dretske 1981; Carey 2009; Burge 2010; Block ms.). Or it may instead be that perception and cognition always trade in the same code (Prinz 2002; Pylyshyn 2003). This paper rejects both approaches in favor of perceptual pluralism, the thesis that perception delivers a multiplicity of representational formats, some proprietary and some shared with cognition. The argument for perceptual pluralism marshals a wide array of empirical evidence in favor of iconic (i.e., image-like, analog) representations in perception as well as discursive (i.e., language-like, digital) perceptual object representations.
This study provides a comprehensive reinterpretation of the meaning of Locke's political thought. John Dunn restores Locke's ideas to their exact context, and so stresses the historical question of what Locke in the Two Treatises of Government was intending to claim. By adopting this approach, he reveals the predominantly theological character of all Locke's thinking about politics and provides a convincing analysis of the development of Locke's thought. In a polemical concluding section, John Dunn argues that liberal and Marxist interpretations of Locke's politics have failed to grasp his meaning. Locke emerges as not merely a contributor to the development of English constitutional thought, or as a reflector of socio-economic change in seventeenth-century England, but as essentially a Calvinist natural theologian.
Dispositionalism about belief has had a recent resurgence. In this paper we critically evaluate a popular dispositionalist program pursued by Eric Schwitzgebel. Then we present an alternative: a psychofunctional, representational theory of belief. This theory of belief has two main pillars: that beliefs are relations to structured mental representations, and that the relations are determined by the generalizations under which beliefs are acquired, stored, and changed. We end by describing some of the generalizations regarding belief acquisition, storage, and change.
This paper is a sequel to ‘Belnap-Dunn semantics for natural implicative expansions of Kleene's strong three-valued matrix with two designated values’, where a ‘bivalent’ Belnap-Dunn semantics is provided for all the expansions referred to in its title. The aim of the present paper is to carry out a parallel investigation for all natural implicative expansions of Kleene's strong 3-valued matrix now with only one designated value.
This paper explores allowing truth value assignments to be undetermined or "partial" and overdetermined or "inconsistent", thus returning to an investigation of the four-valued semantics that I initiated in the sixties. I examine some natural consequence relations and show how they are related to existing logics, including Łukasiewicz's three-valued logic, Kleene's three-valued logic, Anderson and Belnap's relevant entailments, Priest's "Logic of Paradox", and the first-degree fragment of the Dunn-McCall system "R-mingle". None of these systems has nested implications, and I investigate twelve natural extensions containing nested implications, all of which can be viewed as coming from natural variations on Kripke's semantics for intuitionistic logic. Many of these logics exist antecedently in the literature, in particular Nelson's "constructible falsity".
J. Michael Dunn’s Theorem in 3-Valued Model Theory and Graham Priest’s Collapsing Lemma provide the means of constructing first-order, three-valued structures from classical models while preserving some control over the theories of the ensuing models. The present article introduces a general construction that we call a Dunn–Priest quotient, providing a more general means of constructing models for arbitrary many-valued, first-order logical systems from models of any second system. This technique not only counts Dunn’s and Priest’s techniques as special cases, but also provides a generalized Collapsing Lemma for Priest’s more recent plurivalent semantics in general. We examine when and how much control may be exerted over the resulting theories in particular cases. Finally, we expand the utility of the construction by showing that taking Dunn–Priest quotients of a family of structures commutes with taking an ultraproduct of that family, increasing the versatility of the tool.
Short‐term memory in vision is typically thought to divide into at least two memory stores: a short, fragile, high‐capacity store known as iconic memory, and a longer, durable, capacity‐limited store known as visual working memory (VWM). This paper argues that iconic memory stores icons, i.e., image‐like perceptual representations. The iconicity of iconic memory has significant consequences for understanding consciousness, nonconceptual content, and the perception–cognition border. Steven Gross and Jonathan Flombaum have recently challenged the division between iconic memory and VWM by arguing against the idea of capacity limits in favor of a flexible resource‐based model of short‐term memory. I argue that, while VWM capacity is probably governed by flexible resources rather than a sharp limit, the two memory stores should still be distinguished by their representational formats. Iconic memory stores icons, while VWM stores discursive (i.e., language‐like) representations. I conclude by arguing that this format‐based distinction between memory stores entails that prominent views about consciousness and the perception–cognition border will likely have to be revised.
This comprehensive text shows how various notions of logic can be viewed as notions of universal algebra, providing more advanced concepts for those who have an introductory knowledge of algebraic logic, as well as for those wishing to delve into more theoretical aspects.
According to one important proposal, the difference between perception and cognition consists in the representational formats used in the two systems (Carey, 2009; Burge, 2010; Block, 2014). In particular, it is claimed that perceptual representations are iconic, or image-like, while cognitive representations are discursive, or language-like. Taking object perception as a test case, this paper argues on empirical grounds that it requires discursive label-like representations. These representations segment the perceptual field, continuously pick out objects despite changes in their features, and abstractly represent high-level features, none of which appears possible for purely iconic representations.
It is an orthodoxy in cognitive science that perception can occur unconsciously. Recently, Hakwan Lau, Megan Peters and Ian Phillips have argued that this orthodoxy may be mistaken. They argue that many purported cases of unconscious perception fail to rule out low degrees of conscious awareness while others fail to establish genuine perception. This paper presents a case of unconscious perception that avoids these problems. It also advances a general principle of ‘phenomenal coherence’ that can insulate some forms of evidence for unconscious perception from the methodological critiques of Lau, Peters and Phillips.
We often evaluate belief-forming processes, agents, or entire belief states for reliability. This is normally done with the assumption that beliefs are all-or-nothing. How does such evaluation go when we’re considering beliefs that come in degrees? I consider a natural answer to this question that focuses on the degree of truth-possession had by a set of beliefs. I argue that this natural proposal is inadequate, but for an interesting reason. When we are dealing with all-or-nothing belief, high reliability leads to high levels of truth-possession. However, when it comes to degrees of belief, reliability and truth-possession part ways. The natural answer thus fails to be a good way to evaluate degrees of belief for reliability. I propose and develop an alternative method based on the notion of calibration, suggested by Frank Ramsey, which does not have this problem and consider why we should care about such assessments of reliability even if they are not tied directly to truth-possession.
According to a classic but nowadays discarded philosophical theory, perceptual experience is a complex of nonconceptual sensory states and full-blown propositional beliefs. This classical dual-component theory of experience is often taken to be obsolete. In particular, there seem to be cases in which perceptual experience and belief conflict: cases of known illusions, wherein subjects have beliefs contrary to the contents of their experiences. Modern dual-component theories reject the belief requirement and instead hold that perceptual experience is a complex of nonconceptual sensory states and some other sort of conceptual state. The most popular modern dual-component theory appeals to sui generis propositional attitudes called ‘perceptual seemings’. This article argues that the classical dual-component theory has the resources to explain known illusions without giving up the claim that the conceptual components of experience are beliefs. The classical dual-component view, though often viewed as outdated and implausible, should be regarded as a serious contender in contemporary debates about the nature of perceptual experience.
We study an application of gaggle theory to unary negative modal operators. First we treat negation as impossibility and get a minimal logic system Ki that has a perp semantics. Dunn's kite of different negations can be dealt with in the extensions of this basic logic Ki. Next we treat negation as “unnecessity” and use a characteristic semantics for different negations in a kite which is dual to Dunn's original one. Ku is the minimal logic that has a characteristic semantics. We also show that Shramko's falsification logic FL can be incorporated into some extension of this basic logic Ku. Finally, we unite the two basic logics Ki and Ku together to get a negative modal logic K-, which is dual to the positive modal logic K+. Shramko has suggested an extension of Dunn's kite and also a dual version, and he has suggested combining them into a “united” kite. We give a united semantics for this united kite of negations.
Reliabilism -- the view that a belief is justified iff it is produced by a reliable process -- is often characterized as a form of consequentialism. Recently, critics of reliabilism have suggested that, since it is a form of consequentialism, reliabilism condones a variety of problematic trade-offs, involving cases where someone forms an epistemically deficient belief now that will lead her to more epistemic value later. In the present paper, we argue that the relevant argument against reliabilism fails because it equivocates. While there is a sense in which reliabilism is a kind of consequentialism, it is not of a kind on which we should expect problematic trade-offs.
Stakeholder theory, as a method of management based on morals and behavior, must be grounded by a theory of ethics. However, traditional ethics of justice and rights cannot completely ground the theory. Following and expanding on the work of Wicks, Gilbert, and Freeman (1994), we believe that feminist ethics, invoking principles of caring, provides the missing element that allows moral theory to ground the stakeholder approach to management. Examples are given to support the suggested general principle for making business decisions under feminist moral theory.
Fichte's reputation at the present time is in some respects a curious one. On the one hand, he is by common consent acknowledged to have exercised a dominant influence upon the development of German thought during the opening decades of the nineteenth century. Thus from a specifically philosophical point of view he is regarded as an innovator who played a decisive role in transforming Kant's transcendental idealism into the absolute idealism of his immediate successors, while at a more general level he is customarily seen as having put into currency certain persuasive conceptions which contributed—less directly but no less surely—to the emergence and spread of romanticism in some of its varied and ramifying forms. On the other hand, however, it is noticeable that detailed consideration of his work has not figured prominently in the recent revival of concern with post-Kantian thought as a whole which has been manifested by philosophers of the English-speaking world. Although his name is frequently mentioned in that connection, one suspects that his books may not be so often read. In part this may be due to his particular mode of expounding his views, which at times attains a level of opacity that can make even Hegel's obscurest passages seem comparatively tractable. It is also true that Fichte's principal theoretical works—if not his semipopular writings—are largely devoid of the allusions to scientific, historical, psychological or cultural matters with which his German contemporaries were prone to illustrate their philosophical doctrines and enliven their more abstract discussions: there is a daunting aridity about much of what he wrote which can raise nagging doubts in the modern reader's mind about the actual issues that are in question. Yet the fact remains that by the close of the eighteenth century his ideas had already made a profound impact, capturing the imagination of a host of German thinkers and intellectuals.
The problem therefore arises as to what preoccupations, current at the time, they owed their indubitable appeal and to what puzzles they were welcomed as proffering a solution. If these can be identified, it may become at least partially intelligible that Fichte should have been widely regarded as having provided a framework within which certain hitherto intractable difficulties could be satisfactorily reformulated and resolved. Let me accordingly begin by saying something about them.
The question of whether perception is encapsulated from cognition has been a major topic in the study of perception in the past decade. One locus of debate concerns the role of attention. Some theorists argue that attention is a vehicle for widespread violations of encapsulation; others argue that certain forms of cognitively driven attention are compatible with encapsulation, especially if attention only modulates inputs. This paper argues for an extreme thesis: no effect of attention, whether on the inputs to perception or on perceptual processing itself, constitutes a violation of the encapsulation of perception.
Russian public opinion in the first half of the nineteenth century was buffeted by a complex of cultural, psychological, and historiosophical dilemmas that destabilized many conventions about Russia's place in universal history. This article examines one response to these dilemmas: the Slavophile reconfiguration of Eastern Christianity as a modern religion of theocentric freedom and moral progress. Drawing upon methods of contextual analysis, the article challenges the usual scholarly treatment of Slavophile religious thought as a vehicle to address extrahistorical concerns by placing the writings of A. S. Khomiakov and I. V. Kireevskii in the discursive and ideological framework in which they originated and operated. As such, the article considers the atheistic revolution in consciousness advocated by Russian Hegelians, the Schellingian proposition that human freedom and moral advancement were dependent upon the living God, P. Ia. Chaadaev's contention that a people's religious orientation determined its historical potential, and the Slavophile appropriation of Russia's dominant confession to resolve the problem of having attained historical consciousness in an age of historical stasis.
As the COVID-19 pandemic impacts on health service delivery, health providers are modifying care pathways and staffing models in ways that require health professionals to be reallocated to work in critical care settings. Many of the roles that staff are being allocated to in the intensive care unit and emergency department pose additional risks to themselves, and new policies for staff reallocation are causing distress and uncertainty to the professionals concerned. In this paper, we analyse a range of ethical issues associated with changes to staff allocation processes in the face of COVID-19. In line with a dominant view in the medical ethics literature, we claim, first, that no individual health professional has a specific, positive obligation to treat a patient when doing so places that professional at risk of harm, and so there is a clear ethical tension in any reallocation process in this context. Next, we argue that the changing asymmetries of health needs in hospitals means that careful consideration needs to be given to a stepwise process for deallocating staff from their usual duties. We conclude by considering how a justifiable process of reallocating professionals to high-risk clinical roles should be configured once those who are ‘fit for reallocation’ have been identified. We claim that this process needs to attend to three questions that we consider in detail: how the choice to make reallocation decisions is made, what justifiable models for reallocation might look like and what is owed to those who are reallocated.
David Lewis ([1986b]) gives an attractive and familiar account of counterfactual dependence in the standard context. This account has recently been subject to a counterexample from Adam Elga. In this article, I formulate a Lewisian response to Elga’s counterexample. The strategy is to add an extra criterion to Lewis’s similarity metric, which determines the comparative similarity of worlds. This extra criterion instructs us to take special science laws into consideration as well as fundamental laws. I argue that the Second Law of Thermodynamics should be seen as a special science law, and give a brief account of what Lewisian special science laws should look like. If successful, this proposal blocks Elga’s counterexample.
Rationalization through reduction of cognitive dissonance does not have the function of representational exchange. Instead, cognitive dissonance is part of the “psychological immune system” and functions to protect the self-concept against evidence of incompetence, immorality, and instability. The irrational forms of attitude change that protect the self-concept in dissonance reduction are useful primarily for maintaining motivation.
We give a set of postulates for the minimal normal modal logic K+ without negation or any kind of implication. The connectives are simply ∧, ∨, □, and ◇. The postulates (and theorems) are all deducibility statements. The only postulates that might not be obvious are.
We shall be concerned with the modal logic BK, which is based on the Belnap–Dunn four-valued matrix and can be viewed as being obtained from the least normal modal logic K by adding ‘strong negation’. Though all four values ‘truth’, ‘falsity’, ‘neither’ and ‘both’ are employed in its Kripke semantics, only the first two are expressible as terms. We show that expanding the original language of BK to include constants for ‘neither’ and/or ‘both’ leads to quite unexpected results. To be more precise, adding one of these constants has the effect of eliminating the respective value at the level of BK-extensions. In particular, if one adds both of these, then the corresponding lattice of extensions turns out to be isomorphic to that of ordinary normal modal logics.
Epistemic Consequentialism. Consequentialism is the view that, in some sense, rightness is to be understood in terms of conduciveness to goodness. Much of the philosophical discussion concerning consequentialism has focused on moral rightness or obligation or normativity. But there is plausibly also epistemic rightness, epistemic obligation, and epistemic normativity. Epistemic rightness is often denoted with talk …
Two types of criticism are frequently levelled at the history of ideas in general and the history of political theory in particular. The first is very much that of historians practising in other fields: that it is written as a saga in which all the great deeds are done by entities which could not, in principle, do anything. In it, Science is always wrestling with Theology, Empiricism with Rationalism, monism with dualism, evolution with the Great Chain of Being, artifice with nature, Politik with political moralism. Its protagonists are never humans, but only reified abstractions—or, if humans by inadvertence, humans only as the loci of these abstractions. The other charge, one more frequently levelled by philosophers, is that it is insensitive to the distinctive features of ideas, unconcerned with, or more often ineffectual in its concern with, truth and falsehood, its products more like intellectual seed-catalogues than adequate studies of thought. In short, it is characterised by a persistent tension between the threats of falsity in its history and incompetence in its philosophy.
Both Belnap and I motivated the "Belnap-Dunn 4-valued Logic" by talk of the reasoner being simply "told true" (T) and simply "told false" (F), which leaves the options of being neither "told true" nor "told false" (N), and being both "told true" and "told false" (B). Belnap motivated these notions by consideration of unstructured databases that allow for negative information as well as positive information (even when they conflict). We now experience this on a daily basis with the Web. But the 4-valued logic is deductive in nature, and its matrix is discrete: there are just four values. In this paper I investigate embedding the 4-valued logic into a context of probability. Jøsang's Subjective Logic introduced uncertainty to allow for degrees of belief, disbelief, and uncertainty. We extend this so as to allow for two kinds of uncertainty: that in which the reasoner has too little information (ignorance) and that in which the reasoner has too much information (conflicted). Jøsang's "Opinion Triangle" becomes an "Opinion Tetrahedron", and the 4 values can be seen as its vertices. I make and prove various observations concerning the relation of non-classical "probability" to non-classical logic.
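The four values can be represented concretely as sets of the two classical "signs", told true and told false. The following sketch (our own illustration, not code from the paper; the names are our own) shows how negation, conjunction, and disjunction fall out of the sign conditions.

```python
# Belnap-Dunn values as sets of signs: 't' = told true, 'f' = told false.
# T = {'t'}, F = {'f'}, N = {} (neither), B = {'t', 'f'} (both).
T, F, N, B = frozenset('t'), frozenset('f'), frozenset(), frozenset('tf')

def neg(a):
    # Negation swaps the signs: told true becomes told false, and vice versa.
    return frozenset(('t' if s == 'f' else 'f') for s in a)

def conj(a, b):
    # Told true iff both conjuncts are told true; told false iff either is.
    signs = set()
    if 't' in a and 't' in b:
        signs.add('t')
    if 'f' in a or 'f' in b:
        signs.add('f')
    return frozenset(signs)

def disj(a, b):
    # Disjunction is dual to conjunction via De Morgan.
    return neg(conj(neg(a), neg(b)))
```

On this representation the classical values behave classically, N and B are fixed points of negation, and, for example, B conjoined with N is F while B disjoined with N is T.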
The law of informed consent to medical treatment has recently been extensively overhauled in England. The 2015 Montgomery judgment has done away with the long-held position that the information to be disclosed by doctors when obtaining valid consent from patients should be determined on the basis of what a reasonable body of medical opinion agree ought to be disclosed in the circumstances. The UK Supreme Court concluded that the information that is material to a patient’s decision should instead be judged by reference to a new two-limbed test founded on the notions of the ‘reasonable person’ and the ‘particular patient’. The rationale outlined in Montgomery for this new test of materiality, and academic comment on the ruling’s significance, has focused on the central ethical importance that the law now accords to respect for patient autonomy in the process of obtaining consent from patients. In this paper, we dispute the claim that the new test of materiality articulated in Montgomery equates with respect for autonomy being given primacy in re-shaping the development of the law in this area. We also defend this position, arguing that our revised interpretation of Montgomery’s significance does not equate with a failure by the courts to give due legal consideration to what is owed to patients as autonomous decision-makers in the consent process. Instead, Montgomery correctly implies that doctors are ethically obliged to attend to a number of relevant ethical considerations in framing decisions about consent to treatment, which include subtle interpretations of the values of autonomy and well-being. Doctors should give appropriate consideration to how these values are fleshed out and balanced in context in order to specify precisely what information ought to be disclosed to a patient as a requirement of obtaining consent, and as a core component of shared decision-making within medical encounters more generally.
In this paper we introduce canonical extensions of partially ordered sets and monotone maps and a corresponding discrete duality. We then use these to give a uniform treatment of completeness of relational semantics for various substructural logics with implication as the residual(s) of fusion.
J. David Velleman casts foreknowledge of one's own next move as psychologically active. As agents, we form prior intentions about what we will do next. Such prior intentions are licensed self-fulfilling beliefs or directive cognitions. These cognitions differ from ordinary predictions in their psychological relation to the evidence, in that they precede that crucial part of the evidence which consists in the fact that they have been formed. However, once formed, these cognitions are epistemologically unremarkable: they are directly justified by evidence, which saliently includes the fact of their own existence. I argue that Velleman distorts both the epistemology and the etiology of self-knowing agency. Self-knowing agents typically know what they will do next non-evidentially, and yet their knowledge of their own next move is formed in response to their (perspective-relative) epistemic grounds. Velleman's account of self-knowing agency is doubly distortive because it ignores the role of the purely first-person point of view which typically characterizes such agency. In developing an alternative account of self-knowing agency, I argue that the kind of knowledge that we typically have of what we are about to do is like the kind of knowledge we have when we non-evidentially know what our own current, conscious propositional thoughts are. We can non-evidentially know what we think in virtue of having made up our minds what to think. Likewise, we can non-evidentially know what we are about to do in virtue of having settled on what to do next.