Current approaches to cyber-security are not working. Rather than producing more security, we seem to be facing less and less. The reason for this is a multi-dimensional and multi-faceted security dilemma that extends beyond the state and its interaction with other states. It will be shown how the focus on the state and “its” security crowds out consideration for the security of the individual citizen, with detrimental effects on the security of the whole system. The threat arising from cyberspace to (national) security is presented as possible disruption to a specific way of life, one building on information technologies and critical functions of infrastructures, with relatively little consideration for humans directly. This non-focus on people makes it easier for state actors to militarize cyber-security and (re-)assert their power in cyberspace, thereby overriding the different security needs of human beings in that space. Paradoxically, the use of cyberspace as a tool for national security, both in the dimension of war fighting and the dimension of mass-surveillance, has detrimental effects on the level of cyber-security globally. A solution out of this dilemma is a cyber-security policy that is decidedly anti-vulnerability and at the same time based on strong considerations for privacy and data protection. Such a security policy would have to be informed by an ethics of the infosphere that is based on the dignity of information related to human beings.
Perceptual systems respond to proximal stimuli by forming mental representations of distal stimuli. A central goal for the philosophy of perception is to characterize the representations delivered by perceptual systems. It may be that all perceptual representations are in some way proprietarily perceptual and differ from the representational format of thought (Dretske 1981; Carey 2009; Burge 2010; Block ms.). Or it may instead be that perception and cognition always trade in the same code (Prinz 2002; Pylyshyn 2003). This paper rejects both approaches in favor of perceptual pluralism, the thesis that perception delivers a multiplicity of representational formats, some proprietary and some shared with cognition. The argument for perceptual pluralism marshals a wide array of empirical evidence in favor of iconic (i.e., image-like, analog) representations in perception as well as discursive (i.e., language-like, digital) perceptual object representations.
This paper provides a naturalistic account of inference. We posit that the core of inference is constituted by bare inferential transitions (BITs), transitions between discursive mental representations guided by rules built into the architecture of cognitive systems. In further developing the concept of BITs, we provide an account of what Boghossian calls ‘taking’—that is, the appreciation of the rule that guides an inferential transition. We argue that BITs are sufficient for implicit taking, and then, to analyse explicit taking, we posit rich inferential transitions, which are transitions that the subject is disposed to endorse.
A conditional is natural if it fulfils the three following conditions: it coincides with the classical conditional when restricted to the classical values T and F; it satisfies Modus Ponens; and it is assigned a designated value whenever the value assigned to its antecedent is less than or equal to the value assigned to its consequent. The aim of this paper is to provide a ‘bivalent’ Belnap-Dunn semantics for all natural implicative expansions of Kleene's strong 3-valued matrix with two designated elements.
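The three defining conditions are concrete enough to check mechanically. The following sketch is my own illustration, not the paper's construction: it encodes the values as 0 (F), 1 (intermediate), 2 (T), designates 1 and 2, and brute-forces every 3-valued implication table to find those that count as natural.

```python
from itertools import product

# Values: 0 = F, 1 = intermediate, 2 = T; both non-false values designated.
VALUES = (0, 1, 2)
DESIGNATED = {1, 2}

def is_natural(table):
    """table[a][b] gives the value of the conditional a -> b."""
    # 1. Coincides with the classical conditional on the classical values.
    if not (table[0][0] == 2 and table[0][2] == 2 and
            table[2][0] == 0 and table[2][2] == 2):
        return False
    # 2. Modus Ponens: a designated and a -> b designated force b designated.
    for a, b in product(VALUES, repeat=2):
        if a in DESIGNATED and table[a][b] in DESIGNATED and b not in DESIGNATED:
            return False
    # 3. a -> b is designated whenever a <= b.
    for a, b in product(VALUES, repeat=2):
        if a <= b and table[a][b] not in DESIGNATED:
            return False
    return True

# Enumerate all 3^9 candidate tables and keep the natural ones.
natural = [t for t in product(product(VALUES, repeat=3), repeat=3)
           if is_natural(t)]
print(len(natural))  # 24 natural conditionals in this setting
```

The only forced entry beyond the classical corners is 1 -> 0 = 0 (required by Modus Ponens); the familiar RM3 conditional, for instance, passes the test.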
Dispositionalism about belief has had a recent resurgence. In this paper we critically evaluate a popular dispositionalist program pursued by Eric Schwitzgebel. Then we present an alternative: a psychofunctional, representational theory of belief. This theory of belief has two main pillars: that beliefs are relations to structured mental representations, and that the relations are determined by the generalizations under which beliefs are acquired, stored, and changed. We end by describing some of the generalizations regarding belief acquisition, storage, and change.
Most theories of concepts take concepts to be structured bodies of information used in categorization and inference. This paper argues for a version of atomism, on which concepts are unstructured symbols. However, traditional Fodorian atomism is falsified by polysemy and fails to provide an account of how concepts figure in cognition. This paper argues that concepts are generative pointers, that is, unstructured symbols that point to memory locations where cognitively useful bodies of information are stored and can be deployed to resolve polysemy. The notion of generative pointers allows for unresolved ambiguity in thought and provides a basis for conceptual engineering.
Short‐term memory in vision is typically thought to divide into at least two memory stores: a short, fragile, high‐capacity store known as iconic memory, and a longer, durable, capacity‐limited store known as visual working memory (VWM). This paper argues that iconic memory stores icons, i.e., image‐like perceptual representations. The iconicity of iconic memory has significant consequences for understanding consciousness, nonconceptual content, and the perception–cognition border. Steven Gross and Jonathan Flombaum have recently challenged the division between iconic memory and VWM by arguing against the idea of capacity limits in favor of a flexible resource‐based model of short‐term memory. I argue that, while VWM capacity is probably governed by flexible resources rather than a sharp limit, the two memory stores should still be distinguished by their representational formats. Iconic memory stores icons, while VWM stores discursive (i.e., language‐like) representations. I conclude by arguing that this format‐based distinction between memory stores entails that prominent views about consciousness and the perception–cognition border will likely have to be revised.
It is an orthodoxy in cognitive science that perception can occur unconsciously. Recently, Hakwan Lau, Megan Peters and Ian Phillips have argued that this orthodoxy may be mistaken. They argue that many purported cases of unconscious perception fail to rule out low degrees of conscious awareness while others fail to establish genuine perception. This paper presents a case of unconscious perception that avoids these problems. It also advances a general principle of ‘phenomenal coherence’ that can insulate some forms of evidence for unconscious perception from the methodological critiques of Lau, Peters and Phillips.
J. Michael Dunn’s Theorem in 3-Valued Model Theory and Graham Priest’s Collapsing Lemma provide the means of constructing first-order, three-valued structures from classical models while preserving some control over the theories of the ensuing models. The present article introduces a general construction that we call a Dunn–Priest quotient, providing a more general means of constructing models for arbitrary many-valued, first-order logical systems from models of any second system. This technique not only counts Dunn’s and Priest’s techniques as special cases, but also provides a generalized Collapsing Lemma for Priest’s more recent plurivalent semantics in general. We examine when and how much control may be exerted over the resulting theories in particular cases. Finally, we expand the utility of the construction by showing that taking Dunn–Priest quotients of a family of structures commutes with taking an ultraproduct of that family, increasing the versatility of the tool.
This study provides a comprehensive reinterpretation of the meaning of Locke's political thought. John Dunn restores Locke's ideas to their exact context, and so stresses the historical question of what Locke in the Two Treatises of Government was intending to claim. By adopting this approach, he reveals the predominantly theological character of all Locke's thinking about politics and provides a convincing analysis of the development of Locke's thought. In a polemical concluding section, John Dunn argues that liberal and Marxist interpretations of Locke's politics have failed to grasp his meaning. Locke emerges as not merely a contributor to the development of English constitutional thought, or as a reflector of socio-economic change in seventeenth-century England, but as essentially a Calvinist natural theologian.
According to a classic but nowadays discarded philosophical theory, perceptual experience is a complex of nonconceptual sensory states and full-blown propositional beliefs. This classical dual-component theory of experience is often taken to be obsolete. In particular, there seem to be cases in which perceptual experience and belief conflict: cases of known illusions, wherein subjects have beliefs contrary to the contents of their experiences. Modern dual-component theories reject the belief requirement and instead hold that perceptual experience is a complex of nonconceptual sensory states and some other sort of conceptual state. The most popular modern dual-component theory appeals to sui generis propositional attitudes called ‘perceptual seemings’. This article argues that the classical dual-component theory has the resources to explain known illusions without giving up the claim that the conceptual components of experience are beliefs. The classical dual-component view, though often viewed as outdated and implausible, should be regarded as a serious contender in contemporary debates about the nature of perceptual experience.
According to one important proposal, the difference between perception and cognition consists in the representational formats used in the two systems (Carey, 2009; Burge, 2010; Block, 2014). In particular, it is claimed that perceptual representations are iconic, or image-like, while cognitive representations are discursive, or language-like. Taking object perception as a test case, this paper argues on empirical grounds that it requires discursive label-like representations. These representations segment the perceptual field, continuously pick out objects despite changes in their features, and abstractly represent high-level features, none of which appears possible for purely iconic representations.
This paper explores allowing truth value assignments to be undetermined or "partial" and overdetermined or "inconsistent", thus returning to an investigation of the four-valued semantics that I initiated in the sixties. I examine some natural consequence relations and show how they are related to existing logics, including Łukasiewicz's three-valued logic, Kleene's three-valued logic, Anderson and Belnap's relevant entailments, Priest's "Logic of Paradox", and the first-degree fragment of the Dunn-McCall system "R-mingle". None of these systems has nested implications, and I investigate twelve natural extensions containing nested implications, all of which can be viewed as coming from natural variations on Kripke's semantics for intuitionistic logic. Many of these logics exist antecedently in the literature, in particular Nelson's "constructible falsity".
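The four-valued semantics can be made concrete in a few lines. The sketch below is my own illustration (not the paper's notation): it uses the standard encoding of the four values as told-true/told-false pairs, designates T and B, and checks that conjunction elimination holds while explosion fails.

```python
from itertools import product

# Belnap-Dunn values: T (told true), F (told false), N (neither), B (both).
VALUES = ("T", "B", "N", "F")
DESIGNATED = {"T", "B"}

NEG = {"T": "F", "F": "T", "B": "B", "N": "N"}

# Encode each value as a (told-true, told-false) pair; in the truth order
# F < B, N < T, conjunction is meet and disjunction is join.
ENC = {"T": (1, 0), "B": (1, 1), "N": (0, 0), "F": (0, 1)}
DEC = {v: k for k, v in ENC.items()}

def AND(x, y):
    (t1, f1), (t2, f2) = ENC[x], ENC[y]
    return DEC[(min(t1, t2), max(f1, f2))]

def OR(x, y):
    (t1, f1), (t2, f2) = ENC[x], ENC[y]
    return DEC[(max(t1, t2), min(f1, f2))]

def entails(premise, conclusion):
    """premise/conclusion map a (p, q) valuation to a value; entailment
    is preservation of designated value over all valuations."""
    return all(conclusion(p, q) in DESIGNATED
               for p, q in product(VALUES, repeat=2)
               if premise(p, q) in DESIGNATED)

print(entails(lambda p, q: AND(p, q), lambda p, q: p))       # True
print(entails(lambda p, q: AND(p, NEG[p]), lambda p, q: q))  # False
```

The failing case is the paraconsistent signature of the logic: setting p to B makes p-and-not-p designated while q can still be F.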
Focusing on the interactions between people suffering from neuromuscular diseases and their wheelchairs, the author raises the question of action: how is action made possible for people suffering from neuromuscular diseases? Starting with actor-network theory, the author shows that action not only results from distribution and delegation to heterogeneous entities but emerges from hard and lengthy work that makes the relation between them possible and transforms the entities involved. The author describes this work, called the process of adjustment, as work on the links making a person, his or her body, and his or her world. Through this work, new possibilities of action emerge for the person, but also new abilities; the person’s identity is transformed and shaped. This analysis leads to a particular conception of the person as made up through his or her relations to other entities.
This paper is a sequel to ‘Belnap-Dunn semantics for natural implicative expansions of Kleene's strong three-valued matrix with two designated values’, where a ‘bivalent’ Belnap-Dunn semantics is provided for all the expansions referred to in its title. The aim of the present paper is to carry out a parallel investigation for all natural implicative expansions of Kleene's strong 3-valued matrix now with only one designated value.
The question of whether perception is encapsulated from cognition has been a major topic in the study of perception in the past decade. One locus of debate concerns the role of attention. Some theorists argue that attention is a vehicle for widespread violations of encapsulation; others argue that certain forms of cognitively driven attention are compatible with encapsulation, especially if attention only modulates inputs. This paper argues for an extreme thesis: no effect of attention, whether on the inputs to perception or on perceptual processing itself, constitutes a violation of the encapsulation of perception.
Reliabilism -- the view that a belief is justified iff it is produced by a reliable process -- is often characterized as a form of consequentialism. Recently, critics of reliabilism have suggested that, since it is a form of consequentialism, reliabilism condones a variety of problematic trade-offs, involving cases where someone forms an epistemically deficient belief now that will lead her to more epistemic value later. In the present paper, we argue that the relevant argument against reliabilism fails because it equivocates. While there is a sense in which reliabilism is a kind of consequentialism, it is not of a kind on which we should expect problematic trade-offs.
This comprehensive text shows how various notions of logic can be viewed as notions of universal algebra providing more advanced concepts for those who have an introductory knowledge of algebraic logic, as well as those wishing to delve into more theoretical aspects.
Rationalization through reduction of cognitive dissonance does not have the function of representational exchange. Instead, cognitive dissonance is part of the “psychological immune system” and functions to protect the self-concept against evidence of incompetence, immorality, and instability. The irrational forms of attitude change that protect the self-concept in dissonance reduction are useful primarily for maintaining motivation.
We study an application of gaggle theory to unary negative modal operators. First we treat negation as impossibility and get a minimal logic system Ki that has a perp semantics. Dunn's kite of different negations can be dealt with in the extensions of this basic logic Ki. Next we treat negation as “unnecessity” and use a characteristic semantics for different negations in a kite which is dual to Dunn's original one. Ku is the minimal logic that has a characteristic semantics. We also show that Shramko's falsification logic FL can be incorporated into some extension of this basic logic Ku. Finally, we unite the two basic logics Ki and Ku to get a negative modal logic K-, which is dual to the positive modal logic K+. Shramko has suggested an extension of Dunn's kite and also a dual version, and has suggested combining them into a “united” kite. We give a united semantics for this united kite of negations.
Stakeholder theory, as a method of management based on morals and behavior, must be grounded by a theory of ethics. However, traditional ethics of justice and rights cannot completely ground the theory. Following and expanding on the work of Wicks, Gilbert, and Freeman (1994), we believe that feminist ethics, invoking principles of caring, provides the missing element that allows moral theory to ground the stakeholder approach to management. Examples are given to support the suggested general principle for making business decisions under feminist moral theory.
We often evaluate belief-forming processes, agents, or entire belief states for reliability. This is normally done with the assumption that beliefs are all-or-nothing. How does such evaluation go when we’re considering beliefs that come in degrees? I consider a natural answer to this question that focuses on the degree of truth-possession had by a set of beliefs. I argue that this natural proposal is inadequate, but for an interesting reason. When we are dealing with all-or-nothing belief, high reliability leads to high levels of truth-possession. However, when it comes to degrees of belief, reliability and truth-possession part ways. The natural answer thus fails to be a good way to evaluate degrees of belief for reliability. I propose and develop an alternative method based on the notion of calibration, suggested by Frank Ramsey, which does not have this problem and consider why we should care about such assessments of reliability even if they are not tied directly to truth-possession.
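Ramsey-style calibration admits a toy implementation. The sketch below is mine, under simplifying assumptions not in the abstract (finitely many judgments, exact frequency matching): a body of credences is perfectly calibrated when, among the claims believed to degree c, a proportion c are true, for every degree c that occurs.

```python
from collections import defaultdict

def calibration_profile(judgments):
    """judgments: list of (credence, actually_true) pairs.
    Returns {credence: observed frequency of truth at that credence}."""
    buckets = defaultdict(list)
    for credence, truth in judgments:
        buckets[credence].append(truth)
    return {c: sum(ts) / len(ts) for c, ts in buckets.items()}

def is_perfectly_calibrated(judgments, tolerance=1e-9):
    """Perfect calibration: each credence matches its truth-frequency."""
    profile = calibration_profile(judgments)
    return all(abs(freq - c) <= tolerance for c, freq in profile.items())

# A perfectly calibrated believer:
data = [(0.5, True), (0.5, False), (1.0, True), (0.0, False)]
print(is_perfectly_calibrated(data))  # True

# Calibration without much truth-possession: a hundred coin-flip
# credences of 0.5 are perfectly calibrated yet maximally undecided.
spread = [(0.5, True), (0.5, False)] * 50
print(is_perfectly_calibrated(spread))  # True
```

The second example illustrates the abstract's point that, for degrees of belief, this kind of reliability comes apart from truth-possession.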
As the COVID-19 pandemic impacts on health service delivery, health providers are modifying care pathways and staffing models in ways that require health professionals to be reallocated to work in critical care settings. Many of the roles that staff are being allocated to in the intensive care unit and emergency department pose additional risks to themselves, and new policies for staff reallocation are causing distress and uncertainty to the professionals concerned. In this paper, we analyse a range of ethical issues associated with changes to staff allocation processes in the face of COVID-19. In line with a dominant view in the medical ethics literature, we claim, first, that no individual health professional has a specific, positive obligation to treat a patient when doing so places that professional at risk of harm, and so there is a clear ethical tension in any reallocation process in this context. Next, we argue that the changing asymmetries of health needs in hospitals mean that careful consideration needs to be given to a stepwise process for deallocating staff from their usual duties. We conclude by considering how a justifiable process of reallocating professionals to high-risk clinical roles should be configured once those who are ‘fit for reallocation’ have been identified. We claim that this process needs to attend to three questions that we consider in detail: how the choice to make reallocation decisions is made, what justifiable models for reallocation might look like and what is owed to those who are reallocated.
David Lewis ([1986b]) gives an attractive and familiar account of counterfactual dependence in the standard context. This account has recently been subject to a counterexample from Adam Elga. In this article, I formulate a Lewisian response to Elga’s counterexample. The strategy is to add an extra criterion to Lewis’s similarity metric, which determines the comparative similarity of worlds. This extra criterion instructs us to take special science laws into consideration as well as fundamental laws. I argue that the Second Law of Thermodynamics should be seen as a special science law, and give a brief account of what Lewisian special science laws should look like. If successful, this proposal blocks Elga’s counterexample.
We give a set of postulates for the minimal normal modal logic K+ without negation or any kind of implication. The connectives are simply ∧, ∨, □, ◇. The postulates (and theorems) are all deducibility statements of the form A ⊢ B. The only postulates that might not be obvious are those governing the interaction of □ and ◇ with ∧ and ∨.
Mainly perceived today as great Jewish thinkers, Hermann Cohen (1842-1918) and Franz Rosenzweig (1886-1929) were also first-rate specialists in German idealist thought: Cohen was one of the founders of ...
The law of informed consent to medical treatment has recently been extensively overhauled in England. The 2015 Montgomery judgment has done away with the long-held position that the information to be disclosed by doctors when obtaining valid consent from patients should be determined on the basis of what a reasonable body of medical opinion agree ought to be disclosed in the circumstances. The UK Supreme Court concluded that the information that is material to a patient’s decision should instead be judged by reference to a new two-limbed test founded on the notions of the ‘reasonable person’ and the ‘particular patient’. The rationale outlined in Montgomery for this new test of materiality, and academic comment on the ruling’s significance, has focused on the central ethical importance that the law now accords to respect for patient autonomy in the process of obtaining consent from patients. In this paper, we dispute the claim that the new test of materiality articulated in Montgomery equates with respect for autonomy being given primacy in re-shaping the development of the law in this area. We also defend this position, arguing that our revised interpretation of Montgomery’s significance does not equate with a failure by the courts to give due legal consideration to what is owed to patients as autonomous decision-makers in the consent process. Instead, Montgomery correctly implies that doctors are ethically obliged to attend to a number of relevant ethical considerations in framing decisions about consent to treatment, which include subtle interpretations of the values of autonomy and well-being. Doctors should give appropriate consideration to how these values are fleshed out and balanced in context in order to specify precisely what information ought to be disclosed to a patient as a requirement of obtaining consent, and as a core component of shared decision-making within medical encounters more generally.
This is a study of the choices faced by socialist movements as they developed within capitalist societies. Professor Przeworski examines the three principal choices confronted by socialism: whether to work through elections; whether to rely exclusively on the working class; and whether to try to reform or abolish capitalism. He brings to his analysis a number of abstract models of political and economic structure, and illustrates the issues in the context of historical events, tracing the development of socialist strategies since the mid-nineteenth century. Several of the conclusions are novel and provocative. Professor Przeworski argues that economic issues cannot justify a socialist programme, and that the workers had good reasons to struggle for the improvement of capitalism. Therefore, the project of a socialist transformation, and the fight for economic advancement, were separate historical phenomena.
Two types of criticism are frequently levelled at the history of ideas in general and the history of political theory in particular. The first is very much that of historians practising in other fields: that it is written as a saga in which all the great deeds are done by entities which could not, in principle, do anything. In it, Science is always wrestling with Theology, Empiricism with Rationalism, monism with dualism, evolution with the Great Chain of Being, artifice with nature, Politik with political moralism. Its protagonists are never humans, but only reified abstractions—or, if humans by inadvertence, humans only as the loci of these abstractions. The other charge, one more frequently levelled by philosophers, is that it is insensitive to the distinctive features of ideas, unconcerned with, or more often ineffectual in its concern with, truth and falsehood, its products more like intellectual seed-catalogues than adequate studies of thought. In short, it is characterised by a persistent tension between the threats of falsity in its history and incompetence in its philosophy.
Epistemic Consequentialism. Consequentialism is the view that, in some sense, rightness is to be understood in terms of conduciveness to goodness. Much of the philosophical discussion concerning consequentialism has focused on moral rightness or obligation or normativity. But there is plausibly also epistemic rightness, epistemic obligation, and epistemic normativity.
Belnap and I both motivated the "Belnap-Dunn 4-valued Logic" by talk of the reasoner being simply "told true" (T) and simply "told false" (F), which leaves the options of being neither "told true" nor "told false" (N), and being both "told true" and "told false" (B). Belnap motivated these notions by consideration of unstructured databases that allow for negative information as well as positive information (even when they conflict). We now experience this on a daily basis with the Web. But the 4-valued logic is deductive in nature, and its matrix is discrete: there are just four values. In this paper I investigate embedding the 4-valued logic into a context of probability. Jøsang's Subjective Logic introduced uncertainty to allow for degrees of belief, disbelief, and uncertainty. We extend this so as to allow for two kinds of uncertainty: that in which the reasoner has too little information (ignorance) and that in which the reasoner has too much information (conflicted). Jøsang's "Opinion Triangle" becomes an "Opinion Tetrahedron" and the 4 values can be seen as its vertices. I make and prove various observations concerning the relation of non-classical "probability" to non-classical logic.
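The move from Jøsang's Opinion Triangle to an Opinion Tetrahedron can be pictured with a minimal sketch. Everything below is my own illustration of the idea, not Dunn's actual formalism: an opinion is a point of the probability simplex over four poles (belief, disbelief, ignorance, conflict), and the four logical values sit at the vertices; the `expected_truth` collapse, which treats conflict as half-for and half-against, is an added assumption.

```python
# Vertices of the sketched "Opinion Tetrahedron": pure versions of the
# four Belnap-Dunn values as (belief, disbelief, ignorance, conflict).
VERTICES = {
    "T": (1.0, 0.0, 0.0, 0.0),  # simply told true
    "F": (0.0, 1.0, 0.0, 0.0),  # simply told false
    "N": (0.0, 0.0, 1.0, 0.0),  # told neither: too little information
    "B": (0.0, 0.0, 0.0, 1.0),  # told both: conflicting information
}

def is_opinion(op, tolerance=1e-9):
    """An opinion is a point of the probability simplex over the four poles."""
    return (len(op) == 4 and all(w >= 0 for w in op)
            and abs(sum(op) - 1.0) <= tolerance)

def expected_truth(op):
    """Collapse an opinion to one expected degree of truth (an assumed
    convention): conflict counts half for and half against, ignorance
    contributes nothing."""
    belief, disbelief, ignorance, conflict = op
    return belief + 0.5 * conflict

for vertex in VERTICES.values():
    assert is_opinion(vertex)
print(expected_truth((0.4, 0.2, 0.3, 0.1)))
```

Interior points of the tetrahedron then represent graded mixtures of the four discrete values, which is the continuity the abstract is after.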
Consider the Evidence Question: when, and under what conditions, does an agent have proposition E as evidence (at t)? Timothy Williamson's (2000) answer to this question is the well-known E = K thesis: E is a member of S's evidence set at t iff S knows E at t. I will argue that this answer is inconsistent with the version of Bayesianism that Williamson advocates. This is because E = K allows an agent to garner evidence via inductive inference whereas standard Bayesian views disallow such a thing. Since Williamson's version of Bayesianism shares the key features with the standard Bayesian view, there is an inconsistency.
We shall be concerned with the modal logic BK—which is based on the Belnap–Dunn four-valued matrix, and can be viewed as being obtained from the least normal modal logic K by adding ‘strong negation’. Though all four values ‘truth’, ‘falsity’, ‘neither’ and ‘both’ are employed in its Kripke semantics, only the first two are expressible as terms. We show that expanding the original language of BK to include constants for ‘neither’ or/and ‘both’ leads to quite unexpected results. To be more precise, adding one of these constants has the effect of eliminating the respective value at the level of BK-extensions. In particular, if one adds both of these, then the corresponding lattice of extensions turns out to be isomorphic to that of ordinary normal modal logics.