Locke establishes the primary-secondary quality distinction in two steps. First, he identifies the primary qualities by means of a separability argument that involves transdictive inference about the properties of the minute, imperceptible parts of matter. Second, he identifies the secondary qualities by means of a dispensability argument that relies on the principle that bodies normally act by ‘impulse.’ I suggest this principle is also justified through transdictive inference. This allows us to see Locke’s claims about primary and secondary qualities as unified, fallibilist and rooted in empiricism and the methodology of Newtonian science.
Locke’s argument for God’s immateriality in _Essay_ IV x is usually interpreted as involving a principle that in some way prohibits the causation of thought by matter. I reject these causal readings in favor of one that involves a principle which says a thinking being cannot be composed out of unthinking parts. This Composition Principle, as I call it, is crucial to understanding how Locke’s theistic argument can succeed in the face of his skepticism about the substance of matter and the cause of thought, as well as his belief in the possibility of thinking matter. It also explains why Locke held the soul’s immateriality to be highly probable.
Various writers in the Western liberal and libertarian tradition have challenged the argument that enforcement of law and protection of property rights are public goods that must be provided by governments. Many of these writers argue explicitly for the provision of law enforcement services through private market relations.
Tyler Burge presents an original study of the most primitive ways in which individuals represent the physical world. By reflecting on the science of perception and related psychological and biological sciences, he gives an account of constitutive conditions for perceiving the physical world, and thus aims to locate origins of representational mind.
In Burge 2005, Tyler Burge reads disjunctivism as the denial that there are explanatorily relevant states in common between veridical perceptions and corresponding illusions. He rejects the position as plainly inconsistent with what is known about perception. I describe a disjunctive approach to perceptual experience that is immune to Burge's attack. The main positive moral concerns how to think about fallibility.
Non‐Humean theories of natural necessity invoke modally‐laden primitives to explain why nature exhibits lawlike regularities. However, they vary in the primitives they posit and in their subsequent accounts of laws of nature and related phenomena (including natural properties, natural kinds, causation, counterfactuals, and the like). This article provides a taxonomy of non‐Humean theories, discusses influential arguments for and against them, and describes some ways in which differences in goals and methods can motivate different versions of non‐Humeanism (and, for that matter, Humeanism). In short, this article provides an introduction to non‐Humeanism concerning the metaphysics of laws of nature and natural necessity.
Increased investment in ethics education has prompted a variety of instructional objectives and frameworks. Yet, no systematic procedure to classify these varying instructional approaches has been attempted. In the present study, a quantitative clustering procedure was conducted to derive a typology of instruction in ethics education. In total, 330 ethics training programs were included in the cluster analysis. The training programs were appraised with respect to four instructional categories including instructional content, processes, delivery methods, and activities. Eight instructional approaches were identified through this clustering procedure, and these instructional approaches showed different levels of effectiveness. Instructional effectiveness was assessed based on one of nine commonly used ethics criteria. With respect to specific training types, Professional Decision Processes Training and Field-Specific Compliance Training appear to be viable approaches to ethics training based on Cohen’s d effect size estimates. By contrast, two commonly used approaches, General Discussion Training and Norm Adherence Training, were found to be considerably less effective. The implications for instruction in ethics training are discussed.
In all probability, future generations will outnumber us by thousands or millions to one. In the aggregate, their interests therefore matter enormously, and anything we can do to steer the future of civilization onto a better trajectory is of tremendous moral importance. This is the guiding thought that defines the philosophy of longtermism. Political science tells us that the practices of most governments are at stark odds with longtermism. But the problems of political short-termism are neither necessary nor inevitable. In principle, the state could serve as a powerful tool for positively shaping the long-term future. In this chapter, we make some suggestions about how to align government incentives with the interests of future generations. First, in Section II, we explain the root causes of political short-termism. Then, in Section III, we propose and defend four institutional reforms that we think would be promising ways to increase the time horizons of governments: 1) government research institutions and archivists; 2) posterity impact assessments; 3) futures assemblies; and 4) legislative houses for future generations. Section IV concludes with five additional reforms that are promising but require further research: to fully resolve the problem of political short-termism we must develop a comprehensive research program on effective longtermist political institutions.
The concept of artificial intelligence is not new, nor is the notion that it should be granted legal protections given its influence on human activity. What is new, on a relative scale, is the notion that artificial intelligence can possess citizenship—a concept reserved only for humans, as it presupposes the idea of possessing civil duties and protections. Where there are several decades’ worth of writing on the concept of the legal status of computational artificial artefacts in the USA and elsewhere, it is surprising that lawmakers internationally have come to a standstill in protecting our silicon brainchildren. In this essay, it will be assumed that future artificial entities, such as Sophia the Robot, will be granted citizenship on an international scale. With this assumption, an analysis of rights will be made with respect to the needs of a non-biological intelligence possessing legal and civic duties akin to those possessed by humanity today. This essay does not present a full set of rights for artificial intelligence—instead, it aims to provide international jurisprudence evidence aliunde ab extra de lege lata for any future measures made to protect non-biological intelligence.
According to the Nomological Argument, observed regularities in nature are best explained by an appeal to a supernatural being. A successful explanation must avoid two perils. Some explanations provide too little structure, predicting a universe without regularities. Others provide too much structure, thereby precluding an explanation of certain types of lawlike regularities featured in modern scientific theories. We argue that an explanation based in the creative, intentional action of a supernatural being avoids these two perils whereas leading competitors do not. Although our argument falls short of a full defense, it does suggest that the Nomological Argument is worthy of philosophical attention.
David Armstrong accepted the following three theses: universals are immanent, laws are relations between universals, and laws govern. Taken together, they form an attractive position, for they promise to explain regularities in nature—one of the most important desiderata for a theory of laws and properties—while remaining compatible with naturalism. However, I argue that the three theses are incompatible. The basic idea is that each thesis makes an explanatory claim, but the three claims can be shown to run in a problematic circle. I then consider which thesis we ought to reject and suggest some general lessons for the metaphysics of laws.
According to structuralism, all natural properties are individuated by their roles in causal/nomological structures. According to quidditism, at least some natural properties are individuated in some other way. Because these theses deal with the identities of natural properties, this distinction cuts to the core of a serious metaphysical dispute: Are the intrinsic natures of all natural properties essentially causal/nomological in character? I'll argue that the answer is ‘no’, or at least that this answer is more plausible than many critics of quidditism have recognized. In section 1, I distinguish between two versions of quidditism. Bare quidditism holds that worlds with distinct properties and isomorphic structures must be qualitatively identical in the following sense: inhabiting one world would be indistinguishable from inhabiting the other. In contrast, qualitative quidditism allows such worlds to have qualitative differences. In section 2, I discuss an epistemological position that allows us to bett...
Tyler Burge presents a collection of his seminal essays on Gottlob Frege (1848-1925), who has a strong claim to be seen as the founder of modern analytic philosophy, and whose work remains at the centre of philosophical debate today. Truth, Thought, Reason gathers some of Burge's most influential work from the last twenty-five years, and also features important new material, including a substantial introduction and postscripts to four of the ten papers. It will be an essential resource for any historian of modern philosophy, and for anyone working on philosophy of language, epistemology, or philosophical logic.
Waiting time is widely used in health and social policy to make resource allocation decisions, yet no general account of the moral significance of waiting time exists. We provide such an account. We argue that waiting time is not intrinsically morally significant, and that the first person in a queue for a resource does not ipso facto have a right to receive that resource first. However, waiting time can and sometimes should play a role in justifying allocation decisions. First, there is a duty of fairness prohibiting line-cutting where a sufficiently just queue exists. Second, waiting time has several morally attractive features that can justify its incorporation into allocation schemes. Where candidates are in relevantly similar circumstances, allocating by waiting time is relatively efficient, maximizes distribution equality relative to other Pareto efficient distributions, and treats candidates fairly. The claim that allocation using waiting time is fair is controversial. Some have claimed that formal lotteries are a fairer way to select among equal beneficiaries. We argue that lotteries are no fairer than allocation based on waiting time when it is equiprobable how a prospective queue will be ordered. In practice, lotteries share many of the disadvantages of queues; which is fairer will depend on contingent features of the allocation scenario. The upshot is that first-come, first-served is in fact a just way to allocate resources in many of the cases where it seems pre-theoretically compelling, and waiting time has unique normative properties which frequently justify its incorporation into resource allocation schemes.
Do philosophic views affect job performance? The authors found that possessing a belief in free will predicted better career attitudes and actual job performance. The effects of free will beliefs on job performance indicators were over and above well-established predictors such as conscientiousness, locus of control, and Protestant work ethic. In Study 1, stronger belief in free will corresponded to more positive attitudes about expected career success. In Study 2, job performance was evaluated objectively and independently by a supervisor. Results indicated that employees who espoused free will beliefs were given better work performance evaluations than those who disbelieved in free will, presumably because belief in free will facilitates exerting control over one’s actions.
One reason to posit governing laws is to explain the uniformity of nature. Explanatory power can be purchased by accepting new primitives, and scientists invoke laws in their explanations without providing any supporting metaphysics. For these reasons, one might suspect that we can treat laws as wholly unanalyzable primitives. (John Carroll’s *Laws of Nature* (1994) and Tim Maudlin’s *The Metaphysics Within Physics* (2007) offer recent defenses of primitivism about laws.) Whatever defects primitive laws might have, explanatory weakness should not be one of them. However, in this essay I’ll argue that wholly primitive laws cannot explain the uniformity of nature. The basic argument is based on the following idea: though a primitive law that P makes P likely, the primitive status of the law provides no reason to think that P must describe (or otherwise give rise to) a natural regularity. After identifying the problem for primitive laws, I consider an extension of the objection to all theories of governing laws and suggest that it may be avoided by a version of the Dretske/Tooley/Armstrong theory according to which laws are relations between universals.
The necessitarian solution to the problem of induction involves two claims: first, that necessary connections are justified by an inference to the best explanation; second, that the best theory of necessary connections entails the timeless uniformity of nature. In this paper, I defend the second claim. My arguments are based on considerations from the metaphysics of laws, properties, and fundamentality.
In McDowell, I responded to Burge's attack on disjunctivism. In Burge, Burge rejects my response. He stands by his main claim that disjunctivism is incompatible with the science of perception, and in a supplementary spirit he argues against the detail of my attempt to defend disjunctivism. Here I explain how disjunctivism is compatible with the science, and I respond to some of Burge's supplementary arguments.
The aim of this paper is to offer a formal criterion for physical computation that allows us to objectively distinguish between competing computational interpretations of a physical system. The criterion construes a computational interpretation as an ordered pair of functions mapping (1) states of a physical system to states of an abstract machine, and (2) inputs to this machine to interventions in this physical system. This interpretation must ensure that counterfactuals true of the abstract machine have appropriate counterparts which are true of the physical system. The criterion proposes that rival interpretations be assessed on the basis of simplicity. Simplicity is construed as the Kolmogorov complexity of the interpretation. This approach is closely related to the notion of algorithmic information distance and draws on earlier work on real patterns.
You’re imagining, in the course of a different game of make-believe, that you’re a bank robber. You don’t believe that you’re a bank robber. You are moved to point your finger, gun-wise, at the person pretending to be the bank teller and say, “Stick ‘em up! This is a robbery!”.
One widely used method for allocating health care resources involves the use of cost-effectiveness analysis (CEA) to rank treatments in terms of quality-adjusted life-years (QALYs) gained. CEA has been criticized for discriminating against people with disabilities by valuing their lives less than those of non-disabled people. Avoiding discrimination seems to lead to the ‘QALY trap’: we cannot value saving lives equally and still value raising quality of life. This paper reviews existing responses to the QALY trap and argues that all are problematic. Instead, we argue that adopting a moderate form of prioritarianism avoids the QALY trap and disability discrimination.
Tyler conducted a longitudinal study of 1,575 Chicago inhabitants to determine why people obey the law. His findings show that the law is obeyed primarily because people believe in respecting legitimate authority, not because they fear punishment. The author concludes that lawmakers and law enforcers would do much better to make legal systems worthy of respect than to try to instill fear of punishment.
Consequentialism is thought to be in significant conflict with animal rights theory because it does not regard activities such as confinement, killing, and exploitation as in principle morally wrong. Proponents of the “Logic of the Larder” argue that consequentialism results in an implausibly pro-exploitation stance, permitting us to eat farmed animals with positive well-being to ensure future such animals exist. Proponents of the “Logic of the Logger” argue that consequentialism results in an implausibly anti-conservationist stance, permitting us to exterminate wild animals with negative well-being to ensure future such animals do not exist. We argue that this conflict is overstated. Once we have properly accounted for indirect effects, such as the role that our policies play in shaping moral attitudes and behavior and the importance of accepting policies that are robust against deviation, we can see that consequentialism may converge with animal rights theory significantly, even if not entirely.
This contribution proposes a framework for articulating different kinds of “normativities” that are and can be attributed to “algorithmic systems.” The technical normativity manifests itself through the lineage of technical objects. The norm expresses a technical scheme’s becoming as it mutates through, but also resists, inventions. The genealogy of neural networks shall provide a powerful illustration of this dynamic by engaging with their concrete functioning as well as their unsuspected potentialities. The socio-technical normativity accounts for the manners in which engineers, as actors folded into socio-technical networks, willingly or unwittingly, infuse technical objects with values materialized in the system. Surveillance systems’ design will serve here to instantiate the ongoing mediation through which algorithmic systems are endowed with specific capacities. The behavioral normativity is the normative activity, in which both organic and mechanical behaviors are actively participating, undoing the identification of machines with “norm following,” and organisms with “norm institution.” This proposition productively accounts for the singularity of machine learning algorithms, explored here through the case of recommender systems. The paper will provide substantial discussions of the notions of “normative” by cutting across history and philosophy of science, legal, and critical theory, as well as “algorithmics,” and by confronting our studies led in engineering laboratories with critical algorithm studies.
This article is concerned with the relationship between scientific practice and the metaphysics of laws of nature and natural properties. I begin by examining an argument by Michael Townsen Hicks and Jonathan Schaffer that an important feature of scientific practice—namely, that scientists sometimes invoke non-fundamental properties in fundamental laws—is incompatible with metaphysical theories according to which laws govern. I respond to their argument by developing an epistemology for governing laws that is grounded in scientific practice. This epistemology is of general interest for non-Humean theories of laws, for it helps to explain our epistemic access to non-Humean theoretical entities such as governing laws or fundamental powers.
The paper develops a conception of epistemic warrant as applied to perceptual belief, called "entitlement", that does not require the warranted individual to be capable of understanding the warrant. The conception is situated within an account of animal perception and unsophisticated perceptual belief. It characterizes entitlement as fulfillment of an epistemic norm that is apriori associated with a certain representational function that can be known apriori to be a function of perception. The paper connects anti-individualism, a thesis about the nature of mental states, and perceptual entitlement. It presents an argument that explains the objectivity and validity of perceptual entitlement partly in terms of the nature of perceptual states, and hence the nature of perceptual beliefs, which are constitutively associated with perceptual states. The paper discusses ways that an individual can be entitled to perceptual belief through its connection to perception, and ways that entitlement to perceptual belief can be undermined.
What does free will mean to laypersons? The present investigation sought to address this question by identifying how laypersons distinguish between free and unfree actions. We elicited autobiographical narratives in which participants described either free or unfree actions, and the narratives were subsequently subjected to impartial analysis. Results indicate that free actions were associated with reaching goals, high levels of conscious thought and deliberation, positive outcomes, and moral behavior (among other things). These findings suggest that lay conceptions of free will fit well with the view that free will is a form of action control.
According to the Fittingness Defense, even if the consequences of anger are overall bad, it does not follow that we should aim to avoid it. This is because fitting anger involves an accurate appraisal of wrongdoing and is essential for appreciating injustice and signaling our disapproval. My aim in this paper is to show that the Fittingness Defense fails. While accurate appraisals are prima facie rational and justified on epistemic grounds, I argue that this type of fittingness does not vindicate anger because there are alternative modes of recognizing and appreciating wrongdoing that can generate the benefits of anger without the harmful effects. Moreover, anger involves more than its appraisal of wrongdoing—it also consists of attitudes and motivations that are arguably of intrinsic disvalue.
The canonical history of mathematics suggests that the late 19th-century “arithmetization” of calculus marked a shift away from spatial-dynamic intuitions, grounding concepts in static, rigorous definitions. Instead, we argue that mathematicians, both historically and currently, rely on dynamic conceptualizations of mathematical concepts like continuity, limits, and functions. In this article, we present two studies of the role of dynamic conceptual systems in expert proof. The first is an analysis of co-speech gesture produced by mathematics graduate students while proving a theorem, which reveals a reliance on dynamic conceptual resources. The second is a cognitive-historical case study of an incident in 19th-century mathematics that suggests a functional role for such dynamism in the reasoning of the renowned mathematician Augustin Cauchy. Taken together, these two studies indicate that essential concepts in calculus that have been defined entirely in abstract, static terms are nevertheless conceptualized dynamically, in both contemporary and historical practice.
Although the free-will reply to divine hiddenness is often associated with Kant, the argument typically presented in the literature is not the strongest Kantian response. Kant’s central claim is not that knowledge of God would preclude the possibility of transgression, but rather that it would preclude one’s viewing adherence to the moral law as a genuine sacrifice of self-interest. After explaining why the Kantian reply to hiddenness is superior to standard formulations, I argue that, despite Kant’s general skepticism about theodicy, his insights pertaining to hiddenness also provide the foundation for a new theodicy that merits serious attention.
This addendum expands upon the arguments made in the author’s 2020 essay, “Legal Personhood for Artificial Intelligence: Citizenship as the Exception to the Rule”, in an effort to display the significance human augmentation technologies will have on (feasibly) inadvertently providing legal protections to artificial intelligence systems (AIS)—a topic only briefly addressed in that work. It will also further discuss the impacts popular media have on imprinting notions of computerised behaviour and its subsequent consequences on the attribution of legal protections to AIS and on speculative technological advancement that would aid the sophistication of AIS.
I want to reflect on some functions of memory and their relations to traditional issues about personal identity. I try to elicit ways in which having memory, with its presupposition of agent identity over time, is integral to being a person, indeed to having a representational mind.
This essay is a long one. It is not meant to be read in a single sitting. Its structure is as follows. In section I, I explicate perceptual anti-individualism. Section II centers on the two aspects of the representational content of perceptual states. Sections III and IV concern the nature of the empirical psychology of vision, and its bearing on the individuation of perceptual states. Section V shows how what is known from empirical psychology undermines disjunctivism and hence certain further views that entail it, including naive realism. In Section VI, I raise a further point against disjunctivism. Section VII indicates how general reflection on perceptual perspective and epistemic ability supports the constraints from empirical psychology. It also explains how reflection on veridicality conditions, psychological explanation, and cognitive ability conspire to force recognition of the two kinds of representation mentioned in the preceding paragraph. In the Appendix, I criticize attempts to support disjunctivism.
Commentators such as Terence Irwin (1999) and Christopher Shields (2006) claim that the Ring of Gyges argument in Republic II cannot demonstrate that justice is chosen only for its consequences. This is because valuing justice for its own sake is compatible with judging its value to be overridable. Through examination of the rational commitments involved in valuing normative ideals such as justice, we aim to show that this analysis is mistaken. If Glaucon is right that everyone would endorse Gyges’ behavior, it follows that nobody values justice intrinsically. Hence, the Gyges story constitutes a more serious challenge than critics maintain.