Perceptual systems respond to proximal stimuli by forming mental representations of distal stimuli. A central goal for the philosophy of perception is to characterize the representations delivered by perceptual systems. It may be that all perceptual representations are in some way proprietarily perceptual and differ from the representational format of thought (Dretske 1981; Carey 2009; Burge 2010; Block ms.). Or it may instead be that perception and cognition always trade in the same code (Prinz 2002; Pylyshyn 2003). This paper rejects both approaches in favor of perceptual pluralism, the thesis that perception delivers a multiplicity of representational formats, some proprietary and some shared with cognition. The argument for perceptual pluralism marshals a wide array of empirical evidence in favor of iconic (i.e., image-like, analog) representations in perception as well as discursive (i.e., language-like, digital) perceptual object representations.
Dispositionalism about belief has had a recent resurgence. In this paper we critically evaluate a popular dispositionalist program pursued by Eric Schwitzgebel. Then we present an alternative: a psychofunctional, representational theory of belief. This theory of belief has two main pillars: that beliefs are relations to structured mental representations, and that the relations are determined by the generalizations under which beliefs are acquired, stored, and changed. We end by describing some of the generalizations regarding belief acquisition, storage, and change.
This paper provides a naturalistic account of inference. We posit that the core of inference is constituted by bare inferential transitions (BITs): transitions between discursive mental representations guided by rules built into the architecture of cognitive systems. In further developing the concept of BITs, we provide an account of what Boghossian [2014] calls ‘taking’—that is, the appreciation of the rule that guides an inferential transition. We argue that BITs are sufficient for implicit taking, and then, to analyse explicit taking, we posit rich inferential transitions, which are transitions that the subject is disposed to endorse.
Most theories of concepts take concepts to be structured bodies of information used in categorization and inference. This paper argues for a version of atomism, on which concepts are unstructured symbols. However, traditional Fodorian atomism is falsified by polysemy and fails to provide an account of how concepts figure in cognition. This paper argues that concepts are generative pointers, that is, unstructured symbols that point to memory locations where cognitively useful bodies of information are stored and can be deployed to resolve polysemy. The notion of generative pointers allows for unresolved ambiguity in thought and provides a basis for conceptual engineering.
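The pointer picture can be made concrete with a small sketch (an illustration of the proposal only; the symbol names, toy memory contents, and lookup scheme are my assumptions, not the paper's implementation):

```python
# Sketch of a generative pointer: the concept is a bare, unstructured
# symbol that points to a memory location where several cognitively
# useful bodies of information are stored. The symbol itself carries
# no structure; retrieval supplies the information that resolves
# polysemy in context.

MEMORY = {
    "BOOK": {  # one address, multiple stored bodies of information
        "physical": {"kind": "artifact", "can_be_heavy": True},
        "content": {"kind": "abstract", "has_author": True},
    },
}

def resolve(symbol: str, context: str) -> dict:
    """Deploy stored information to resolve a polysemous use of the
    atomic symbol; with no disambiguating context, the ambiguity is
    left unresolved and all senses are returned."""
    senses = MEMORY[symbol]
    return senses.get(context, senses)

print(resolve("BOOK", "physical"))  # the 'heavy book' reading
print(resolve("BOOK", "content"))   # the 'interesting book' reading
print(resolve("BOOK", "unknown"))   # ambiguity left unresolved in thought
```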
Short‐term memory in vision is typically thought to divide into at least two memory stores: a short, fragile, high‐capacity store known as iconic memory, and a longer, durable, capacity‐limited store known as visual working memory (VWM). This paper argues that iconic memory stores icons, i.e., image‐like perceptual representations. The iconicity of iconic memory has significant consequences for understanding consciousness, nonconceptual content, and the perception–cognition border. Steven Gross and Jonathan Flombaum have recently challenged the division between iconic memory and VWM by arguing against the idea of capacity limits in favor of a flexible resource‐based model of short‐term memory. I argue that, while VWM capacity is probably governed by flexible resources rather than a sharp limit, the two memory stores should still be distinguished by their representational formats. Iconic memory stores icons, while VWM stores discursive (i.e., language‐like) representations. I conclude by arguing that this format‐based distinction between memory stores entails that prominent views about consciousness and the perception–cognition border will likely have to be revised.
According to one important proposal, the difference between perception and cognition consists in the representational formats used in the two systems (Carey, 2009; Burge, 2010; Block, 2014). In particular, it is claimed that perceptual representations are iconic, or image-like, while cognitive representations are discursive, or language-like. Taking object perception as a test case, this paper argues on empirical grounds that it requires discursive label-like representations. These representations segment the perceptual field, continuously pick out objects despite changes in their features, and abstractly represent high-level features, none of which appears possible for purely iconic representations.
The notion of an object file figures prominently in recent work in philosophy and cognitive science. Object files play a role in theories of singular reference, object individuation, perceptual memory, and the development of cognitive capacities. However, the philosophical literature lacks a detailed, empirically informed theory of object files. In this paper, we articulate and defend the multiple-slots view, which specifies both the format and architecture of object files. We argue that object files represent in a non-iconic, propositional format that incorporates discrete symbols for separate features. Moreover, we argue that features of separate categories are stored in separate memory slots within an object file. We supplement this view with a computational framework that characterizes how information about objects is stored and retrieved.
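The multiple-slots architecture lends itself to a schematic rendering (a minimal sketch under my own assumptions about slot categories and update behaviour, not the authors' computational framework):

```python
# Toy object file on the multiple-slots view: discrete symbols for
# separate features, with features of separate categories kept in
# separate, independently updatable memory slots.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ObjectFile:
    index: int                    # sticks to the object through feature changes
    colour: Optional[str] = None  # one slot per feature category
    shape: Optional[str] = None
    category: Optional[str] = None

    def update(self, **features) -> None:
        """Write each feature into its own slot; revising one slot
        leaves the others untouched."""
        for slot, value in features.items():
            setattr(self, slot, value)

f = ObjectFile(index=1)
f.update(colour="red", shape="circle")
f.update(colour="green")  # colour slot revised; shape slot persists
print(f)  # ObjectFile(index=1, colour='green', shape='circle', category=None)
```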
The question of whether perception is encapsulated from cognition has been a major topic in the study of perception in the past decade. One locus of debate concerns the role of attention. Some theorists argue that attention is a vehicle for widespread violations of encapsulation; others argue that certain forms of cognitively driven attention are compatible with encapsulation, especially if attention only modulates inputs. This paper argues for an extreme thesis: no effect of attention, whether on the inputs to perception or on perceptual processing itself, constitutes a violation of the encapsulation of perception.
According to a classic but nowadays discarded philosophical theory, perceptual experience is a complex of nonconceptual sensory states and full-blown propositional beliefs. This classical dual-component theory of experience is often taken to be obsolete. In particular, there seem to be cases in which perceptual experience and belief conflict: cases of known illusions, wherein subjects have beliefs contrary to the contents of their experiences. Modern dual-component theories reject the belief requirement and instead hold that perceptual experience is a complex of nonconceptual sensory states and some other sort of conceptual state. The most popular modern dual-component theory appeals to sui generis propositional attitudes called ‘perceptual seemings’. This article argues that the classical dual-component theory has the resources to explain known illusions without giving up the claim that the conceptual components of experience are beliefs. The classical dual-component view, though often viewed as outdated and implausible, should be regarded as a serious contender in contemporary debates about the nature of perceptual experience.
It is an orthodoxy in cognitive science that perception can occur unconsciously. Recently, Hakwan Lau, Megan Peters and Ian Phillips have argued that this orthodoxy may be mistaken. They argue that many purported cases of unconscious perception fail to rule out low degrees of conscious awareness while others fail to establish genuine perception. This paper presents a case of unconscious perception that avoids these problems. It also advances a general principle of ‘phenomenal coherence’ that can insulate some forms of evidence for unconscious perception from the methodological critiques of Lau, Peters and Phillips.
Unconscious logical inference seems to rely on the syntactic structures of mental representations (Quilty-Dunn & Mandelbaum 2018). Other transitions, such as transitions using iconic representations and associative transitions, are harder to assimilate to syntax-based theories. Here we tackle these difficulties head on in the interest of a fuller taxonomy of mental transitions. Along the way we discuss how icons can be compositional without having constituent structure, and expand and defend the “symmetry condition” on Associationism (the idea that associative links and transitions are perfectly symmetric). In the end, we show how a BIT (“bare inferential transition”) theory can cohabitate with these other non-inferential mental transitions.
Many contemporary epistemologists take rational inference to be a conscious action performed by the thinker (Boghossian 2014; 2018; Valaris 2014; Malmgren 2018). It is tempting to think that rational evaluability requires responsibility, which in turn requires conscious action. In that case, unconscious cognition involves merely associative or otherwise arational processing. This paper argues instead for deep rationalism: unconscious inference often exhibits the same rational status and richly structured logical character as conscious inference. The central case study is rationalization, in which people shift their attitudes in logically structured, reason-responsive ways in response to evidence of their own incompetence or immorality. These attitude shifts are irrational in a way that reflects on the thinker. Thus rationally evaluable inference extends downward into the unconscious. Many take the sole aim of belief to be truth (Velleman 2000) or knowledge (Williamson 2000), but the prevalence of rationalization suggests that belief updating often aims instead at preserving our positive conceptions of ourselves—that is, belief updating is part of a psychological immune system (Gilbert 2006; Mandelbaum 2019). This paper argues that the psychological immune system comprises a suite of distinct cognitive mechanisms, some (ir)rational and some arational, which are united by a common function of avoiding the maladaptive predomination of negative affect and maintaining stable motivation. Other aspects of the psychological immune system include (i) a domain-general positive bias in evaluative attitudes and (ii) “terror management,” i.e., the systematic strengthening of meaning-conferring beliefs to avoid death anxiety. The multiplicity of processes underlying the psychological immune system points toward an irrational but adaptive function of cognition to keep us motivated in a world rife with negativity and death.
This paper reports the first empirical investigation of the hypothesis that epistemic appraisals form part of the structure of concepts. To date, studies of concepts have focused on the way concepts encode properties of objects and the way those features are used in categorization and in other cognitive tasks. Philosophical considerations show the importance of also considering how a thinker assesses the epistemic value of beliefs and other cognitive resources and, in particular, concepts. We demonstrate that there are multiple, reliably judged, dimensions of epistemic appraisal of concepts. Four of these dimensions are accounted for by a common underlying factor capturing how well people believe they understand a concept. Further studies show how dimensions of concept appraisal relate to other aspects of concepts. First, they relate directly to the hierarchical organization of concepts, reflecting the increase in specificity from superordinate to basic and subordinate levels. Second, they predict inductive choices in category-based induction. Our results suggest that epistemic appraisals of concepts form a psychologically important yet previously overlooked aspect of the structure of concepts. These findings will be important in understanding why individuals sometimes abandon and replace certain concepts; why social groups do so, for example, during a “scientific revolution”; and how we can facilitate such changes when we engage in deliberate “conceptual engineering” for epistemic, social, and political purposes.
Rationalization through reduction of cognitive dissonance does not have the function of representational exchange. Instead, cognitive dissonance is part of the “psychological immune system” and functions to protect the self-concept against evidence of incompetence, immorality, and instability. The irrational forms of attitude change that protect the self-concept in dissonance reduction are useful primarily for maintaining motivation.
Perceptual representations pick out individuals and attribute properties to them. This paper considers the role of perceptual attribution in determining or guiding perceptual reference to objects. We consider three extant models of the relation between perceptual attribution and perceptual reference (all attribution guides reference, no attribution guides reference, or a privileged subset of attributions guides reference) and argue that empirical evidence undermines all three. We then defend a flexible-attributives model, on which the range of perceptual attributives used to guide reference shifts adaptively with context. This model underscores the remarkable and dynamic intelligence of our perceptual capacities. We elucidate implications of the model for the boundary between perception and propositional thought.
Perceptual experiences justify beliefs. A perceptual experience of a dog justifies the belief that there is a dog present. But there is much evidence that perceptual states can occur without being conscious, as in experiments involving masked priming. Do unconscious perceptual states provide justification as well? The answer depends on one’s theory of justification. While most varieties of externalism seem compatible with unconscious perceptual justification, several recent theories afford consciousness a special role in perceptual justification. We argue that such views face a dilemma: either consciousness should be understood in functionalist terms, in which case our best current theories of consciousness do not seem to imbue consciousness with any special epistemic features, or it should not, in which case it is mysterious why only conscious states are justificatory. We conclude that unconscious perceptual justification is quite plausible.
There are issues in Reid scholarship, as well as in the primary texts, that seem to suggest that Reid is not a direct realist about visual perception. In this paper, I examine two key issues, colour perception and visible figure, and attempt to defend the direct realism of Reid's theory through an interpretation of ‘directness’ as well as what Reid calls ‘acquired perception’, which is ‘mediate’ in that it requires prior perception of signs, but nonetheless constitutes direct perception.
Reid endorsed a doxastic theory of perception, on which beliefs are constituents of perceptual experiences. This theory faces the problem of known illusions: we can perceive that p while believing that not-p. Some scholars argue that the problem of known illusions and other problems entail that Reid’s view cannot be charitably interpreted as a doxastic theory. This paper explores Reid’s theoretical commitments with respect to belief acquisition and uses textual evidence to show that his theory is genuinely doxastic. It then argues that a Reidian response to the problem of known illusions can be formulated by appeal to the thesis that perceptual beliefs are formed noninferentially. Reid can also resist the intuition that we lack illusory beliefs in known-illusion cases given his independent reasons for doubting our capacity to identify perceptual beliefs by introspection. The paper then surveys other problems raised in the secondary literature and argues that none decisively undermine the doxastic interpretation of Reid.
Ensemble perception—the encoding of objects by their group properties—is known to be resistant to outlier noise. However, this resistance is somewhat paradoxical: how can the visual system determine which stimuli are outliers without already having derived statistical properties of the ensemble? A natural solution is that ensemble perception is not a one-step process; instead, outliers are detected through iterative computations that identify items with high deviance from the mean and reduce their weight in the representation over time. Here we tested this hypothesis. In Experiment 1, we found evidence that outliers are discounted from mean orientation judgments, extending previous results from ensemble face perception. In Experiment 2, we tested the timing of outlier rejection by having participants perform speeded judgments of sets with or without outliers. We observed significant increases in reaction time (RT) when outliers were present, but a decrease compared to no-outlier sets of matched range, suggesting that range alone did not drive RTs. In Experiment 3, we tested the time course over which outlier noise is reduced: we presented sets for variable exposure durations and found that noise decreases linearly over time. Altogether, these results suggest that ensemble representations are optimized through iterative computations aimed at reducing noise. The finding that ensemble perception is an iterative process provides a useful framework for understanding contextual effects on ensemble perception.
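The hypothesized iterative computation can be sketched as reweighted averaging (the weighting function and its parameter below are placeholder assumptions, not the fitted model; circularity of orientation is ignored for simplicity):

```python
# Iterative outlier discounting: estimate the ensemble mean, down-weight
# items with high deviance from it, and repeat, so outliers lose
# influence over successive iterations.

import numpy as np

def iterative_ensemble_mean(values, n_iter=5, sigma=15.0):
    values = np.asarray(values, dtype=float)
    weights = np.ones_like(values)
    for _ in range(n_iter):
        mean = np.average(values, weights=weights)
        deviance = np.abs(values - mean)
        weights = np.exp(-(deviance / sigma) ** 2)  # high deviance -> low weight
    return np.average(values, weights=weights)

orientations = [42, 45, 44, 47, 43, 120]  # degrees; one extreme outlier
print(round(float(np.mean(orientations)), 1))           # 56.8 without discounting
print(round(iterative_ensemble_mean(orientations), 1))  # 44.2: outlier discounted
```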
Stand‐up comedy is often viewed in two contrary ways. In one view, comedians are hailed as providing genuine social insight and telling truths. In the other, comedians are seen as merely trying to entertain and not to be taken seriously. This tension raises a foundational question for the aesthetics of stand‐up: Do stand‐up comedians perform genuine assertions in their performances? This article considers this question in the light of several theories of assertion. We conclude that comedians on stage do not count as making genuine assertions—rather, much like actors on a stage, they merely pretend to perform speech acts. However, due to norms of authenticity that govern stand‐up comedy, performers can nonetheless succeed in conveying genuine insights. Thus, our account accommodates both of the seemingly incompatible aspects of our ordinary appreciation of stand‐up comedy and points toward a deeper philosophical understanding of stand‐up comedy as a unique art form.
A recent study has established that thinkers reliably engage in epistemic appraisals of concepts of natural categories. Here, five studies are reported which investigated the effects of different manipulations of category learning context on appraisal of the concepts learnt. It was predicted that dimensions of concept appraisal could be affected by manipulating either procedural factors or declarative factors. While known effects of these manipulations on metacognitive judgements such as category learning judgements and confidence at test were replicated, procedural factors had no reliable effects on the dimensions of concept appraisal. Effects of declarative manipulations on some forms of concept appraisal were observed.
Inference has long been a concern in epistemology, as an essential means by which we extend our knowledge and test our beliefs. Inference is also a key notion in influential psychological or philosophical accounts of mental capacities, from perception via utterance comprehension to problem-solving. Consciousness, on the other hand, has arguably been the defining interest of philosophy of mind over recent decades. Comparatively little attention, however, has been devoted to the significance of consciousness for the proper understanding of the nature and role of inference. It is commonly suggested that inference may be either conscious or unconscious. Yet how unified are these various supposed instances of inference? Does either enjoy explanatory priority in relation to the other? In what ways or senses can an inference be conscious, or fail to be conscious, and how does this matter? This book brings together original essays from established scholars and emerging theorists that illustrate how several current debates in epistemology, philosophy of psychology, and philosophy of mind can benefit from reflections on these and related questions about the significance of consciousness for inference. Contributors include: Kirk Ludwig and Wade Munroe; Michael Rescorla; Federico Bongiorno and Lisa Bortolotti; Berit Brogaard; Nicholas Allott; Jake Quilty-Dunn and Eric Mandelbaum; Corine Besson; Anders Nes; David Henderson, Terry Horgan, and Matjaž Potrč; Elijah Chudnoff; and Ram Neta.
A wealth of cases – most notably blindsight and priming under inattention or suppression – have convinced philosophers and scientists alike that perception occurs outside awareness. In recent work, I dispute this consensus, arguing that any putative case of unconscious perception faces a dilemma. The dilemma divides over how absence of awareness is established. If subjective reports are used, we face the problem of the criterion: the concern that such reports underestimate conscious experience. If objective measures are used, we face the problem of attribution: the concern that the case does not involve genuine individual-level perception. Quilty-Dunn presents an apparently compelling example of unconscious perception due to Mitroff et al., which, he contends, evades this dilemma. The case is fascinating. However, as I here argue, it does not escape the dilemma’s clutches.
A collection of new articles in philosophy of perception, experimental psychology, and cognitive neurosciences written by world experts of the domain: Mohan Matthen, Scott Johnson, Berit Brogaard & Thomas Sørensen, EJ Green, Jake Quilty-Dunn, Brian Scholl, Philip Kellman, Frédérique de Vignemont, Mazviita Chirimuuta, Bence Nanay & Nick Young, Yale Cohen, William Lycan, Jonas Olofsson, Clare Batty & Barry Smith, Benjamin Young, Aleksandra Mroczko-Wąsowicz, Błażej Skrzypulec, Jonathan Cohen, Charles Spence, Casey O’Callaghan, Simon Lacey & Krish Sathian, Fabrizio Calzavarini & Alberto Voltolini, and Matthew Fulkerson among others.
This book by leading international scholars in the fields of history, philosophy and politics restores the subject to a place at the very centre of political theory and practice.
A survey of the recent literature suggests that physicians should engage religious patients on religious grounds when the patient cites religious considerations for a medical decision. We offer two arguments that physicians ought to avoid engaging patients in this manner. The first is the Public Reason Argument. We explain why physicians are relevantly akin to public officials. This suggests that it is not the physician’s proper role to engage in religious deliberation. This is because the public character of a physician’s role binds him/her to public reason, which precludes the use of religious considerations. The second argument is the Fiduciary Argument. We show that the patient-physician relationship is a fiduciary relationship, which suggests that the patient has the clinical expectation that physicians limit themselves to medical considerations. Since engaging in religious deliberations lies outside this set of considerations, such engagement undermines trust and therefore damages the patient-physician relationship.
Innovative practice occurs when a clinician provides something new, untested, or nonstandard to a patient in the course of clinical care, rather than as part of a research study. Commentators have noted that patients engaged in innovative practice are at significant risk of suffering harm, exploitation, or autonomy violations. By creating a pathway for harmful or nonbeneficial interventions to spread within medical practice without being subjected to rigorous scientific evaluation, innovative practice poses similar risks to the wider community of patients and society as a whole. Given these concerns, how should we control and oversee innovative practice, and in particular, how should we coordinate innovative practice and clinical research? In this article, I argue that an ethical approach to overseeing innovative practice must encourage the early transition to rigorous clinical research without delaying or deferring the development of beneficial innovations or violating the autonomy rights of clinicians and their patients.
John Blund's Treatise on the Soul is probably the earliest text of its kind: a witness to the first reception of Greek and Arabic psychology at Oxford and a foundation for a new area of medieval philosophical speculation. This book contains Hunt's Latin edition with a new English translation and a new introduction to the text by Michael Dunne.
Aulisio and Arora argue that the moral significance of value imposition explains the moral distinction between traditional conscientious objection and non-traditional conscientious objection. The former objects to directly performing actions, whereas the latter objects to indirectly assisting actions on the grounds that indirectly assisting makes the actor morally complicit. Examples of non-traditional conscientious objection include objections to the duty to refer. Typically, we expect physicians who object to a practice to refer, but the non-traditional conscientious objector physician refuses to refer. Aulisio and Arora argue that physicians have a duty to refer because refusing to do so violates the patient’s values. While we agree with Aulisio and Arora’s conclusions, we argue that value imposition cannot adequately explain the moral difference between traditional conscientious objection and non-traditional conscientious objection. Treating autonomy as the freedom to live in accordance with one’s values, as Aulisio and Arora do, is a departure from traditional liberal conceptions of autonomy and consequently fails to explain the moral difference between the two kinds of objection. We outline how a traditional liberal understanding of autonomy would help in this regard, and we make two additional arguments—one that maintains that non-traditional conscientious objection undermines society’s autonomy, and another that maintains that it undermines the physician-patient relationship—to establish why physicians have a duty to refer.
It is well known that classical, aka ‘sharp’, Bayesian decision theory, which models belief states as single probability functions, faces a number of serious difficulties with respect to its handling of agnosticism. These difficulties have led to the increasing popularity of so-called ‘imprecise’ models of decision-making, which represent belief states as sets of probability functions. In a recent paper, however, Adam Elga has argued in favour of a putative normative principle of sequential choice that he claims to be borne out by the sharp model but not by any promising incarnation of its imprecise counterpart. After first pointing out that Elga has fallen short of establishing that his principle is indeed uniquely borne out by the sharp model, I cast aspersions on its plausibility. I show that a slight weakening of the principle is satisfied by at least one, but interestingly not all, varieties of the imprecise model and point out that Elga has failed to motivate his stronger commitment.
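The sharp/imprecise contrast is easy to render in miniature (a toy illustration with arbitrary numbers, not Elga's sequential-choice problem):

```python
# A sharp agent assigns one probability to an event; an imprecise agent
# entertains a set of probability functions, so a bet's expected value
# becomes an interval rather than a single number.

def expected_value(p, win=1.0, lose=-1.0):
    return p * win + (1 - p) * lose

sharp = 0.5
imprecise = [0.3, 0.4, 0.5, 0.6, 0.7]  # stand-in for a set of probability functions

print(expected_value(sharp))  # 0.0: a single verdict on the bet
evs = [expected_value(p) for p in imprecise]
print(round(min(evs), 2), round(max(evs), 2))  # -0.4 0.4: a range of verdicts
```

Once each bet gets a range of verdicts rather than a single one, it is no longer obvious which sequences of choices an agent is rationally required to make, and that is the opening that principles of sequential choice like Elga's exploit.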
I argue that sports clubs should be punished for bad behaviour by their fans in a way that affects the club’s sporting success: for example, we are justified in imposing points deductions and competition disqualifications on the basis of racist chanting. This is despite a worry that punishing clubs in such a way is unfair because it targets the sports team rather than the fans who misbehaved. I argue that this betrays a misunderstanding of the nature of sports clubs and of the nature of sporting success. Further, I argue that fans should want to be held responsible in such a way because it vindicates the significant role that they play in the life of their club.
A conditional is natural if it fulfils the three following conditions: it coincides with the classical conditional when restricted to the classical values T and F; it satisfies Modus Ponens; and it is assigned a designated value whenever the value assigned to its antecedent is less than or equal to the value assigned to its consequent. The aim of this paper is to provide a ‘bivalent’ Belnap-Dunn semantics for all natural implicative expansions of Kleene's strong 3-valued matrix with two designated elements.
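The three conditions are concrete enough to check mechanically; a brute-force enumeration (my own orientation sketch, not the paper's Belnap-Dunn semantics) recovers the class of conditionals in question:

```python
# Enumerate the natural conditionals over Kleene's strong 3-valued
# matrix with two designated values. Values: F < I < T, with I and T
# designated.

from itertools import product

F, I, T = 0, 1, 2
VALS = (F, I, T)
DESIGNATED = {I, T}

def is_natural(table):
    # (i) coincides with the classical conditional on {F, T}
    if (table[(T, T)], table[(T, F)], table[(F, T)], table[(F, F)]) != (T, F, T, T):
        return False
    for a, b in product(VALS, VALS):
        # (ii) Modus Ponens: a designated antecedent and a designated
        # conditional force a designated consequent
        if a in DESIGNATED and table[(a, b)] in DESIGNATED and b not in DESIGNATED:
            return False
        # (iii) designated value whenever v(antecedent) <= v(consequent)
        if a <= b and table[(a, b)] not in DESIGNATED:
            return False
    return True

tables = [dict(zip(list(product(VALS, VALS)), out))
          for out in product(VALS, repeat=9)]
print(sum(map(is_natural, tables)))  # 24 tables satisfy all three conditions
```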
Organizations are making massive investments in artificial intelligence, and recent demonstrations and achievements highlight the immense potential for AI to improve organizational and human welfare. Yet realizing the potential of AI necessitates a better understanding of the various ethical issues involved with deciding to use AI, training and maintaining it, and allowing it to make decisions that have moral consequences. People want organizations using AI and the AI systems themselves to behave ethically, but ethical behavior means different things to different people, and many ethical dilemmas require trade-offs such that no course of action is universally considered ethical. How should organizations using AI—and the AI itself—process ethical dilemmas where humans disagree on the morally right course of action? Though a variety of ethical AI frameworks have been suggested, these approaches do not adequately address how people make ethical evaluations of AI systems or how to incorporate the fundamental disagreements people have regarding what is and is not ethical behavior. Drawing on moral foundations theory, we theorize that a person will perceive an organization’s use of AI, its data procedures, and the resulting AI decisions as ethical to the extent that those decisions resonate with the person’s moral foundations. Since people hold diverse moral foundations, this highlights the crucial need to consider individual moral differences at multiple levels of AI. We discuss several unresolved issues and suggest potential approaches for thinking about conflicts in moral judgments concerning AI.
I argue that campus closures and shifts to online instruction in the early stages of the COVID-19 pandemic created an obligation to offer courses asynchronously. This is because some students could not have reasonably foreseen circumstances making continued synchronous participation impossible. Offering synchronous participation options to students who could continue to participate in that way would have been unfair to students who could not participate synchronously. I also discuss why ex post facto consideration of this decision is warranted, noting that similar actions may be necessary in the future and that other tough pedagogical cases share important similarities with this case.
J. Michael Dunn’s Theorem in 3-Valued Model Theory and Graham Priest’s Collapsing Lemma provide the means of constructing first-order, three-valued structures from classical models while preserving some control over the theories of the ensuing models. The present article introduces a general construction that we call a Dunn–Priest quotient, providing a more general means of constructing models for arbitrary many-valued, first-order logical systems from models of any second system. This technique not only counts Dunn’s and Priest’s techniques as special cases, but also provides a generalized Collapsing Lemma for Priest’s more recent plurivalent semantics in general. We examine when and how much control may be exerted over the resulting theories in particular cases. Finally, we expand the utility of the construction by showing that taking Dunn–Priest quotients of a family of structures commutes with taking an ultraproduct of that family, increasing the versatility of the tool.
Introductory students regularly endorse naïve skepticism—unsupported or uncritical doubt about the existence and universality of truth—for a variety of reasons. Though some of the reasons for students’ skepticism can be traced back to the student—for example, a desire to avoid engaging with controversial material or a desire to avoid offense—naïve skepticism is also the result of how introductory courses are taught, deemphasizing truth to promote students’ abilities to develop basic disciplinary skills. While this strategy has a number of pedagogical benefits, it prevents students in early stages of intellectual development from understanding truth as a threshold concept. Using philosophy as a case study, I argue that we can make progress against naïve skepticism by clearly discussing how metadisciplinary aims differ at the disciplinary and course levels in a way that is meaningful, reinforced, and accessible.
We critically investigate and refine Dunn's relevant predication, his formalisation of the notion of a real property. We argue that Dunn's original dialectical moves presuppose some interpretation of relevant identity, though none is given. We then re-motivate the proposal in a broader context, considering the prospects for a classical formalisation of real properties, particularly of Geach's implicit distinction between real and ‘Cambridge’ properties. After arguing against these prospects, we turn to relevance logic, re-motivating relevant predication with Geach's distinction in mind. Finally, we draw out some consequences of Dunn's proposal for the theory of identity in relevance logic.
This essay takes up a challenge recently posed by Graham Oppy: to clearly express, in premise-conclusion form, Hegel's version of the ontological argument. In addition to employing this format, it seeks to supplement existing treatments by locating a core component of Hegel's argument in a slightly different place than is common. Whereas some prominent recent treatments focus on Hegel's definition of the Absolute as the Concept, from the third part of his Science of Logic, mine focuses on earlier definitions from the first. As I hope to show, there are even more resources in Hegel's Logic for an ontological argument than those emphasized in recent treatments: the concept, the Idea, etc. Already in the first third of the Logic, we find a compelling response to a famous Kantian counter-argument to the ontological proof. The counter-argument is summed up in the phrase ‘existence [Sein] is not a real predicate’. Hence, Hegel's response as I interpret it will take the form of a competing analysis of Being, a Lehre vom Sein. What do we learn when we put the ontology back into Hegel's ontological argument? That Being is neither predicate, nor subject, nor copula, but a monist category. The larger importance of this exercise to our understanding of Hegel's thought lies in the way it clarifies his profound debt to even non-idealist conceptions of God, such as the one espoused by Spinoza.
We shall be concerned with the modal logic BK—which is based on the Belnap–Dunn four-valued matrix, and can be viewed as being obtained from the least normal modal logic K by adding ‘strong negation’. Though all four values ‘truth’, ‘falsity’, ‘neither’ and ‘both’ are employed in its Kripke semantics, only the first two are expressible as terms. We show that expanding the original language of BK to include constants for ‘neither’ and/or ‘both’ leads to quite unexpected results. To be more precise, adding one of these constants has the effect of eliminating the respective value at the level of BK-extensions. In particular, if one adds both of these, then the corresponding lattice of extensions turns out to be isomorphic to that of ordinary normal modal logics.
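The underlying four-valued matrix can be sketched by coding each value as a pair (supports-truth, supports-falsity), with strong negation swapping the coordinates (an illustration of the matrix only, not of BK's modal apparatus; this pair encoding is a standard presentation of Belnap–Dunn values, not taken from this paper):

```python
# Belnap-Dunn values as pairs: T = (1, 0), F = (0, 1),
# N = (0, 0) 'neither', B = (1, 1) 'both'.

T, F, N, B = (1, 0), (0, 1), (0, 0), (1, 1)
NAME = {T: "T", F: "F", N: "N", B: "B"}

def neg(v):
    """Strong negation: swap what supports truth with what supports falsity."""
    return (v[1], v[0])

def conj(v, w):
    """Conjunction: true if both conjuncts are; false if either is."""
    return (v[0] & w[0], v[1] | w[1])

print(NAME[neg(T)], NAME[neg(F)])  # F T: negation swaps the classical values
print(NAME[neg(N)], NAME[neg(B)])  # N B: 'neither' and 'both' are fixed points
print(NAME[conj(B, N)])            # F: 'both' and 'neither' conjoin to falsity
```

That ‘neither’ and ‘both’ are fixed points of strong negation is at least suggestive of why they resist expression as terms of the original language, the gap the added constants are meant to fill.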