Yujin Nagasawa presents a new, stronger version of perfect being theism, the conception of God as the greatest possible being. Nagasawa argues that God should be understood, not as omniscient, omnipotent, and omnibenevolent, but rather as a being that has the maximal consistent set of knowledge, power, and benevolence.
To use Kantian ethics in an applied context, decision makers typically try to determine whether the “maxim” of their possible action conforms to Kant’s supreme principle of morality: “I ought never to act except in such a way that I could also will that my maxim should become a universal law” (4:402). The action’s maxim is a way of expressing the decision maker’s (a) putative action and (b) conditions that prompt the action in a (c) proposition of a form that will allow her to perform (d) tests, specified by Kant’s moral principle, which determine the action’s moral permissibility. Despite the clear importance of crafting this maxim—e.g., so it meets standards (a)-(d)—for using Kantian ethics to answer real and pressing ethical questions, existing accounts do not offer sustained guidance about maxim crafting. This paper offers such guidance. The paper also shows that properly crafting maxims, using guidelines that Kant implies but does not himself set forth, allows Kantian ethics to defeat classic objections. Both projects contribute to the aim of showing that Kantian ethics should play a role both in debating current ethical challenges and in teaching ethics.
Abstract: In this article, I offer a proposal to clarify what I believe is the proper relation between value maximization and stakeholder theory, which I call enlightened value maximization. Enlightened value maximization utilizes much of the structure of stakeholder theory but accepts maximization of the long-run value of the firm as the criterion for making the requisite tradeoffs among its stakeholders, and specifies long-term value maximization or value seeking as the firm’s objective. This proposal therefore solves the problems that arise from the multiple objectives that accompany traditional stakeholder theory. I also discuss the Balanced Scorecard, the managerial equivalent of stakeholder theory, explaining how this theory is flawed because it presents managers with a scorecard that gives no score—that is, no single-valued measure of how they have performed. Thus managers evaluated with such a system (which can easily have two dozen measures and provides no information on the tradeoffs between them) have no way to make principled or purposeful decisions. The solution is to define a true (single-dimensional) score for measuring the performance of the organization or division, one that is consistent with the organization’s strategy; as long as this score is defined properly (and for lower levels in the organization it will generally not be value), it will enhance managers’ contribution to the firm.
The Lockean Thesis says that you must believe p iff you’re sufficiently confident of it. On some versions, the 'must' asserts a metaphysical connection; on others, it asserts a normative one. On some versions, 'sufficiently confident' refers to a fixed threshold of credence; on others, it varies with proposition and context. Claim: the Lockean Thesis follows from epistemic utility theory—the view that rational requirements are constrained by the norm to promote accuracy. Different versions of this theory generate different versions of Lockeanism; moreover, a plausible version of epistemic utility theory meshes with natural language considerations, yielding a new Lockean picture that helps to model and explain the role of beliefs in inquiry and conversation. Your beliefs are your best guesses in response to the epistemic priorities of your context. Upshot: we have a new approach to the epistemology and semantics of belief. And it has teeth. It implies that the role of beliefs is fundamentally different than many have thought, and in fact supports a metaphysical reduction of belief to credence.
Human activity is permeated by norms of all sorts: moral norms provide the 'code' for what we ought to do, norms of logic regulate how we ought to reason, scientific norms set the standards for what counts as knowledge, legal norms determine what is lawfully permitted and what isn't, aesthetic norms establish canons of beauty and shape artistic trends and practices, and socio-cultural norms provide criteria for what counts as tolerable, just, praiseworthy, or unacceptable in a community or milieu. Given the diversity of norm-governed phenomena prevailing in our everyday experience, it is not surprising that the question of normativity has recently generated important debates in philosophy. However, the more specific question concerning the nature and function of 'norms in perceptual experience' has received comparatively little attention. This volume brings together scholars from philosophy of mind, philosophy of perception, and phenomenology to explore this question.
Although written in Japanese, 意志・格率・道徳法則 (Will, Maxim and the Moral Law) pursues the logical connections among will, maxims, and the moral law as tools in Kant's ethics. Note: the structure of the uploaded document is not the same as that of the published one.
This paper argues in favor of a particular account of decision‐making under normative uncertainty: that, when it is possible to do so, one should maximize expected choice‐worthiness. Though this position has often been suggested in the literature and is often taken to be the ‘default’ view, it has so far received little in the way of positive argument in its favor. After dealing with some preliminaries and giving the basic motivation for taking normative uncertainty into account in our decision‐making, we consider and provide new arguments against two rival accounts that have been offered—the accounts that we call ‘My Favorite Theory’ and ‘My Favorite Option’. We then give a novel argument for comparativism—the view that, under normative uncertainty, one should take into account both probabilities of different theories and magnitudes of choice‐worthiness. Finally, we further argue in favor of maximizing expected choice‐worthiness and consider and respond to five objections.
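The decision rule defended here can be illustrated with a small numerical sketch: weight each option's choice-worthiness under each moral theory by one's credence in that theory, then pick the option with the highest expected value. The theories, credences, options, and scores below are hypothetical illustrations, not values from the paper.

```python
# Sketch: maximizing expected choice-worthiness under normative uncertainty.
# All numbers here are hypothetical, chosen only to make the rule concrete.

credences = {"utilitarianism": 0.6, "deontology": 0.4}

# Choice-worthiness of each option according to each theory (arbitrary scale).
choice_worthiness = {
    "donate":  {"utilitarianism": 10, "deontology": 5},
    "abstain": {"utilitarianism": 2,  "deontology": 6},
}

def expected_choice_worthiness(option):
    """Credence-weighted average of an option's choice-worthiness."""
    return sum(credences[t] * choice_worthiness[option][t] for t in credences)

best = max(choice_worthiness, key=expected_choice_worthiness)
print(best, expected_choice_worthiness(best))  # donate 8.0
```

Note that, unlike 'My Favorite Theory', this rule lets a low-credence theory swing the verdict when the stakes it assigns are large enough.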
In set theory, a maximality principle is a principle that asserts some maximality property of the universe of sets or some part thereof. Set theorists have formulated a variety of maximality principles in order to settle statements left undecided by current standard set theory. In addition, philosophers of mathematics have explored maximality principles whilst attempting to prove categoricity theorems for set theory or providing criteria for selecting foundational theories. This article reviews recent work concerned with the formulation, investigation and justification of maximality principles.
Maximize Presupposition! is an economy condition that adjudicates between contextually equivalent competing structures. Building on data discovered by O. Percus, I will argue that the constraint is checked in the local contexts of embedded constituents. I will argue that this architecture leads to a general solution to the problem of antipresupposition projection, and also allows I. Heim’s ‘Novelty/Familiarity Condition’ to be eliminated as a constraint on operations of context change.
A property, F, is maximal iff, roughly, large parts of an F are not themselves Fs. Maximality makes trouble for a recent analysis of intrinsicality by Rae Langton and David Lewis.
How do we think about what we plan to do? One dominant answer is that we select the best possible option available. However, a growing number of philosophers would offer a different answer: since we are not equipped to maximize, we often choose the next best alternative, one that is no more than satisfactory. This strategy choice is called satisficing. This collection of essays explores both these accounts of practical reason, examining the consequences of adopting one or the other for moral theory in general and the theory of practical rationality in particular. It aims to address a constituency larger than contemporary moral philosophers and bring these questions to the attention of those interested in the applications of decision theory in economics, psychology and political science.
A property, F, is maximal iff, roughly, large parts of an F are not themselves Fs. Maximal properties are typically extrinsic, for their instantiation by x depends on what larger things x is part of. This makes trouble for a recent argument against microphysical supervenience by Trenton Merricks. The argument assumes that consciousness is an intrinsic property, whereas consciousness is in fact maximal and extrinsic.
In the region where some cat sits, there are many very cat-like items that are proper parts of the cat (or otherwise mereologically overlap the cat), but which we are inclined to think are not themselves cats, e.g. all of Tibbles minus the tail. The question is, how can something be so cat-like without itself being a cat? Some have tried to answer this “Problem of the Many” (a problem that arises for many different kinds of things we regularly encounter, including desks, persons, rocks, and clouds) by relying on a mereological maximality principle, according to which, something cannot be a member of a kind K if it is a large proper part of, or otherwise greatly mereologically overlaps, a K. It has been shown, however, that a maximality constraint of this type, i.e. one that restricts mereological overlap, is open to strong objections. Inspired by the insights of, especially, Sutton and Madden, I develop a type of functional-maximality principle that avoids these objections (and has other merits), and thereby provides a better answer to the Problem of the Many.
The paper generalizes Vann McGee's well-known result that there are many maximal consistent sets of instances of Tarski's schema to a number of non-classical theories of truth. It is shown that if a non-classical theory rejects some classically valid principle in order to avoid the truth-theoretic paradoxes, then there will be many maximal non-trivial sets of instances of that principle that the non-classical theorist could in principle endorse. On the basis of this it is argued that the idea of classical recapture, which plays such an important role for non-classical logicians, can only be pushed so far.
During the Zimbabwean crisis, millions crossed through the apartheid-era border fence, searching for ways to make ends meet. Maxim Bolt explores the lives of Zimbabwean migrant labourers, of settled black farm workers and their dependants, and of white farmers and managers, as they intersect on the border between Zimbabwe and South Africa. Focusing on one farm, this book investigates the role of a hub of wage labour in a place of crisis. A close ethnographic study, it addresses the complex, shifting labour and life conditions in northern South Africa's agricultural borderlands. Underlying these challenges are the Zimbabwean political and economic crisis of the 2000s and the intensified pressures on commercial agriculture in South Africa following market liberalization and post-apartheid land reform. But, amidst uncertainty, farmers and farm workers strive for stability. The farms on South Africa's margins are centers of gravity, islands of residential labour in a sea of informal arrangements.
Recent semantic research has made increasing use of a principle, Maximize Presupposition, which requires that under certain circumstances the strongest possible presupposition be marked. This principle is generally taken to be irreducible to standard Gricean reasoning because the forms that are in competition have the same assertive content. We suggest, however, that Maximize Presupposition might be reducible to the theory of scalar implicatures. (i) First, we consider a special case: the speaker utters a sentence with a presupposition p which is not initially taken for granted by the addressee, but the latter takes the speaker to be an authority on the matter. Signaling the presupposition provides new information to the addressee; but it also follows from the logic of presupposition qua common belief that the presupposition is thereby satisfied (Stalnaker, Ling Philos 25(5–6):701–721, 2002). (ii) Second, we generalize this solution to other cases. We assume that even when p is common belief, there is a very small chance that the addressee might forget it (‘Fallibility’); in such cases, marking a presupposition will turn out to generate new information by re-establishing part of the original context. We also adopt from Raj Singh (Nat Lang Semantics 19(2):149–168, 2011) the hypothesis that presupposition maximization is computed relative to local contexts—and we assume that these too are subject to Fallibility; this accounts for cases in which the information that justifies the presupposition is linguistically provided. (iii) Finally, we suggest that our assumptions have benefits in the domain of implicatures: they make it possible to reinterpret Magri’s ‘blind’ (i.e. context-insensitive) implicatures as context-sensitive implicatures which just happen to be misleading.
The article is a reappraisal of the requirement of maximal specificity (RMS) proposed by the author as a means of avoiding "ambiguity" in probabilistic explanation. The author argues that RMS is not, as he had held in one earlier publication, a rough substitute for the requirement of total evidence, but is independent of it and has quite a different rationale. A group of recent objections to RMS is answered by stressing that the statistical generalizations invoked in probabilistic explanations must be lawlike, and by arguing that predicates fit for occurrence in lawlike statistical probability statements must meet two conditions, at least one of which is violated in each of the counterexamples adduced in the objections. These considerations suggest the conception that probabilistic-statistical laws concern the long-run frequency of some characteristic within a reference class as characterized by some particular "description" or predicate expression, and that replacement of such a description by a coextensive one may turn a statement that is lawlike into another that is not. Finally, to repair a defect noted by Grandy, the author's earlier formulation of RMS is replaced by a modified version.
I argue that many of the priority rankings that have been proposed by effective altruists seem to be in tension with apparently reasonable assumptions about the rational pursuit of our aims in the face of uncertainty. The particular issue on which I focus arises from recognition of the overwhelming importance and inscrutability of the indirect effects of our actions, conjoined with the plausibility of a permissive decision principle governing cases of deep uncertainty, known as the maximality rule. I conclude that we lack a compelling decision theory that is consistent with a longtermist perspective and does not downplay the depth of our uncertainty, while also supporting orthodox effective altruist conclusions about cause prioritization.
Maximizing act consequentialism holds that actions are morally permissible if and only if they maximize the value of consequences—if and only if, that is, no alternative action in the given choice situation has more valuable consequences. It is subject to two main objections. One is that it fails to recognize that morality imposes certain constraints on how we may promote value. Maximizing act consequentialism fails to recognize, I shall argue, that the ends do not always justify the means. Actions with maximally valuable consequences are not always permissible. The second main objection to maximizing act consequentialism is that it mistakenly holds that morality requires us to maximize value. Morality, I shall argue, only requires that we satisfice (promote sufficiently) value, and thus leaves us a greater range of options than maximizing act consequentialism recognizes.
The fields of linguistic pragmatics and legal interpretation are deeply interrelated. The purpose of this paper is to show how pragmatics and the developments in argumentation theory can contribute to the debate on legal interpretation. The relation between the pragmatic maxims and the presumptions underlying the legal canons is brought to light, unveiling the principles that underlie the types of argument usually used to justify a construction. The Gricean maxims and the arguments of legal interpretation are regarded as presumptions subject to default used to justify an interpretation. This approach can allow one to trace the different legal interpretive arguments back to their basic underlying presumptions, so that they can be compared, ordered, and assessed according to their defeasibility conditions. This approach allows one to understand the difference between various types of interpretive canons, and their strength in justifying an interpretation.
Utility maximization is a key element of a number of theoretical approaches to explaining human behavior. Among these approaches are rational analysis, ideal observer theory, and signal detection theory. While some examples of these approaches define the utility maximization problem with little reference to the bounds imposed by the organism, others start with, and emphasize, the bounds imposed by the information processing architecture, treating them as an explicit part of the utility maximization problem. These latter approaches are the topic of this issue of the journal.
The notion that natural selection is a process of fitness maximization gets a bad press in population genetics, yet in other areas of biology the view that organisms behave as if attempting to maximize their fitness remains widespread. Here I critically appraise the prospects for reconciliation. I first distinguish four varieties of fitness maximization. I then examine two recent developments that may appear to vindicate at least one of these varieties. The first is the ‘new’ interpretation of Fisher's fundamental theorem of natural selection, on which the theorem is exactly true for any evolving population that satisfies some minimal assumptions. The second is the Formal Darwinism project, which forges links between gene frequency change and optimal strategy choice. In both cases, I argue that the results fail to establish a biologically significant maximization principle. I conclude that it may be a mistake to look for universal maximization principles justified by theory alone. A more promising approach may be to find maximization principles that apply conditionally and to show that the conditions were satisfied in the evolution of particular traits.
This book outlines and circumvents two serious problems that appear to attach to Kant’s moral philosophy, or more precisely to the model of rational agency that underlies that moral philosophy: the problem of experiential incongruence and the problem of misdirected moral attention. The book’s central contention is that both these problems can be sidestepped. In order to demonstrate this, it argues for an entirely novel reading of Kant’s views on action and moral motivation. In addressing the two main problems in Kant’s moral philosophy, the book explains how the first problem arises because the central elements of Kant’s theory of action seem not to square with our lived experience of agency, and moral agency in particular. For example, the idea that moral deliberation invariably takes the form of testing personal policies against the Categorical Imperative seems at odds with the phenomenology of such reasoning, as does the claim that all our actions proceed from explicitly adopted general policies, or maxims. It then goes on to discuss the second problem showing how it is a result of Kant’s apparent claim that when an agent acts from duty, her reason for doing so is that her maxim is lawlike. This seems to put the moral agent’s attention in the wrong place: on the nature of her own maxims, rather than on the world of other people and morally salient situations. The book shows how its proposed novel reading of Kant’s views ultimately paints an unfamiliar but appealing picture of the Kantian good-willed agent as much more embedded in and engaged with the world than has traditionally been supposed.
According to Bayesian epistemology, the epistemically rational agent updates her beliefs by conditionalization: that is, her posterior subjective probability after taking account of evidence X, p_new, is to be set equal to her prior conditional probability p_old(·|X). Bayesians can be challenged to provide a justification for their claim that conditionalization is recommended by rationality—whence the normative force of the injunction to conditionalize? There are several existing justifications for conditionalization, but none directly addresses the idea that conditionalization will be epistemically rational if and only if it can reasonably be expected to lead to epistemically good outcomes. We apply the approach of cognitive decision theory to provide a justification for conditionalization using precisely that idea. We assign epistemic utility functions to epistemically rational agents; an agent’s epistemic utility is to depend both upon the actual state of the world and on the agent’s credence distribution over possible states. We prove that, under independently motivated conditions, conditionalization is the unique updating rule that maximizes expected epistemic utility.
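The update rule at issue, p_new(·) = p_old(·|X), is easy to make concrete on a finite space of worlds: zero out the worlds ruled out by the evidence and renormalize the rest. The three-world prior and evidence proposition below are hypothetical illustrations, not examples from the paper.

```python
# Sketch: Bayesian conditionalization on a finite probability space.
# The prior and evidence here are hypothetical, for illustration only.

prior = {"w1": 0.5, "w2": 0.3, "w3": 0.2}
evidence = {"w1", "w2"}  # the proposition X: the true world is w1 or w2

def conditionalize(p, X):
    """Return p(.|X): zero outside X, renormalized inside X."""
    pX = sum(p[w] for w in X)  # prior probability of the evidence
    return {w: (p[w] / pX if w in X else 0.0) for w in p}

posterior = conditionalize(prior, evidence)
print(posterior)  # {'w1': 0.625, 'w2': 0.375, 'w3': 0.0}
```

The paper's result concerns why this rule, rather than any alternative, maximizes expected epistemic utility; the sketch only fixes what the rule itself says.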
Maximality is a desirable property of paraconsistent logics, motivated by the aspiration to tolerate inconsistencies, but at the same time retain from classical logic as much as possible. In this paper we introduce the strongest possible notion of maximal paraconsistency, and investigate it in the context of logics that are based on deterministic or non-deterministic three-valued matrices. We show that all reasonable paraconsistent logics based on three-valued deterministic matrices are maximal in our strong sense. This applies to practically all three-valued paraconsistent logics that have been considered in the literature, including a large family of logics which were developed by da Costa's school. Then we show that in contrast, paraconsistent logics based on three-valued properly nondeterministic matrices are not maximal, except for a few special cases (which are fully characterized). However, these non-deterministic matrices are useful for representing in a clear and concise way the vast variety of the (deterministic) three-valued maximally paraconsistent matrices. The corresponding weaker notion of maximality, called premaximal paraconsistency, captures the "core" of maximal paraconsistency of all possible paraconsistent determinizations of a non-deterministic matrix, thus representing what is really essential for their maximal paraconsistency.
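One familiar member of the family of three-valued paraconsistent logics discussed here is Priest's LP, whose deterministic three-valued matrix can be checked by brute force. The sketch below (an illustration, not the paper's own formalism) shows what paraconsistency amounts to: explosion fails while some classical inferences survive.

```python
# Sketch: Priest's three-valued logic LP. Truth values: 1 (true),
# 0.5 (both true and false), 0 (false); designated values are {1, 0.5}.
from itertools import product

DESIGNATED = {1, 0.5}

def neg(a): return 1 - a          # negation swaps 1 and 0, fixes 0.5
def conj(a, b): return min(a, b)  # conjunction takes the minimum value

def entails(premises, conclusion, atoms):
    """Check semantic entailment by brute force over all LP valuations."""
    for vals in product([0, 0.5, 1], repeat=len(atoms)):
        v = dict(zip(atoms, vals))
        if all(p(v) in DESIGNATED for p in premises) and conclusion(v) not in DESIGNATED:
            return False  # found a counter-valuation
    return True

# Explosion (A, not-A |= B) fails: a contradiction does not entail everything.
explosion = entails([lambda v: v["A"], lambda v: neg(v["A"])],
                    lambda v: v["B"], ["A", "B"])
print(explosion)  # False

# But conjunction elimination (A and B |= A) still holds.
print(entails([lambda v: conj(v["A"], v["B"])], lambda v: v["A"], ["A", "B"]))  # True
```

The counter-valuation for explosion assigns A the value 0.5, so both A and its negation are designated while B can be plain false; this is the inconsistency-tolerance that the paper's maximality notion asks us to combine with as much classical logic as possible.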
This paper investigates a connection between the semantic notion provided by the ordering * among theories in model theory and the syntactic SOPn hierarchy of Shelah. It introduces two properties which are natural extensions of this hierarchy, called SOP2 and SOP1. It is shown here that SOP3 implies SOP2 implies SOP1. In Shelah's article (229) it was shown that SOP3 implies *-maximality, and we prove here that *-maximality in a model of GCH implies a property called SOP2″. It has been subsequently shown by Shelah and Usvyatsov that SOP2″ and SOP2 are equivalent, so obtaining an implication between *-maximality and SOP2. It is not known if SOP2 and SOP3 are equivalent. Together with the known results about the connection between the SOPn hierarchy and the existence of universal models in the absence of GCH, the paper provides a step toward the classification of unstable theories without the strict order property.
This collection documents the work of the Hyperuniverse Project which is a new approach to set-theoretic truth based on justifiable principles and which leads to the resolution of many questions independent from ZFC. The contributions give an overview of the program, illustrate its mathematical content and implications, and also discuss its philosophical assumptions. It will thus be of wide appeal among mathematicians and philosophers with an interest in the foundations of set theory. The Hyperuniverse Project was supported by the John Templeton Foundation from January 2013 until September 2015.
We show that the maximal linear extension theorem for well partial orders is equivalent over RCA₀ to ATR₀. Analogously, the maximal chain theorem for well partial orders is equivalent to ATR₀ over RCA₀.
Apart from a passing reference to Kant, Grice never explains in his writings how he came to discover his conversational maxims. He simply proclaims them without justification. Yet regardless of how his ingenious invention really came about, one might wonder how the conversational maxims can be detected and distinguished from other sorts of maxims. We argue that the conversational maxims can be identified by the use of a transcendental argument in the spirit of Kant. To this end, we introduce Grice’s account of conversational maxims and categories and compare it briefly with Kant’s thoughts on categories. Subsequently, we pursue a thought experiment concerning what would happen if speakers constantly broke one or another of the maxims. It seems that it would not be possible for children to recognize a significant number of lexical meanings under such circumstances. Hence, the conversational maxims are rules whose occasional application is a necessary condition of language and conversation.
For intermediate logics, an algebraic equivalent of the disjunction property DP is obtained in the paper. It is proved that the logic of finite binary trees is not maximal among intermediate logics with DP. A logic ND is introduced, which has a unique maximal extension with DP, namely, the logic ML of finite problems.
Maximization theory, which is borrowed from economics, provides techniques for predicting the behavior of animals - including humans. A theoretical behavioral space is constructed in which each point represents a given combination of various behavioral alternatives. With two alternatives - behavior A and behavior B - each point within the space represents a certain amount of time spent performing behavior A and a certain amount of time spent performing behavior B. A particular environmental situation can be described as a constraint on available points (a circumscribed area) within the space. Maximization theory assumes that animals always choose the available point with the highest numerical value. The task of maximization theory is to assign to points in the behavioral space values that remain constant across various environmental situations; as those situations change, the point actually chosen is always the one with the highest assigned value.
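The theory's core prediction can be sketched numerically: enumerate the points the environment makes available, assign each a value, and predict the point with the highest value. The value function and the 60-minute time budget below are hypothetical illustrations, not values from the article.

```python
# Sketch: choosing the available point of highest value in a two-behavior
# behavioral space. The constraint and value function are hypothetical.

# Points: (minutes on behavior A, minutes on behavior B) under a fixed
# 60-minute budget, sampled in 5-minute steps.
available = [(a, 60 - a) for a in range(0, 61, 5)]

def value(point):
    a, b = point
    # Hypothetical diminishing-returns value function over the two behaviors.
    return a ** 0.5 + 2 * b ** 0.5

# The theory's prediction: the chosen allocation is the available point
# with the highest assigned value.
chosen = max(available, key=value)
print(chosen)
```

Changing the constraint (say, shrinking the budget) changes which points are available, but on the theory the value function itself stays fixed, which is what makes the predicted choice follow from the constraint alone.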
l investigate versions of the Maximality Principles for the classes of forcings which are <κ-closed. <κ-directed-closed, or of the form Col (κ. <Λ). These principles come in many variants, depending on the parameters which are allowed. I shall write MPΓ(A) for the maximality principle for forcings in Γ, with parameters from A. The main results of this paper are: • The principles have many consequences, such as <κ-closed-generic $\Sigma _{2}^{1}(H_{\kappa})$ absoluteness, and imply. e.g., that ◇κ holds. I give an application (...) to the automorphism tower problem, showing that there are Souslin trees which are able to realize any equivalence relation, and hence that there are groups whose automorphism tower is highly sensitive to forcing. • The principles can be separated into a hierarchy which is strict, for many κ. • Some of the principles can be combined, in the sense that they can hold at many different κ simultaneously. The possibilities of combining the principles are limited, though: While it is consistent that MP<κ-closed(HκT) holds at all regular κ below any fixed α. the "global" maximality principle, stating that MP<κ-closed(Hκ ∪ {κ}) holds at every regular κ. is inconsistent. In contrast to this, it is equiconsistent with ZFC that the maximality principle for directed-closed forcings without any parameters holds at every regular cardinal. It is also consistent that every local statement with parameters from HκT that's provably <κ-closed-forceably necessary is true, for all regular κ. (shrink)
The centerpiece of Jeffrey Bub's book Interpreting the Quantum World is a theorem (Bub and Clifton 1996) which correlates each member of a large class of no-collapse interpretations with some 'privileged observable'. In particular, the Bub-Clifton theorem determines the unique maximal sublattice L(R,e) of propositions such that (a) elements of L(R,e) can be simultaneously determinate in state e, (b) L(R,e) contains the spectral projections of the privileged observable R, and (c) L(R,e) is picked out by R and e alone. In (...) this paper, we explore the issue of maximal determinate sets of observables using the tools provided by the algebraic approach to quantum theory; and we call the resulting algebras of determinate observables, "maximal *beable* subalgebras". The capstone of our exploration is a generalized version of Bub and Clifton's theorem that applies to arbitrary (i.e., both mixed and pure) quantum states, to Hilbert spaces of arbitrary (i.e., both finite and infinite) dimension, and to arbitrary observables (including those with a continuous spectrum). Moreover, in the special case covered by the original Bub-Clifton theorem, our theorem reproduces their result under strictly weaker assumptions. This added level of generality permits us to treat several topics that were beyond the reach of the original Bub-Clifton result. In particular: (a) We show explicitly that a (non-dynamical) version of the Bohm theory can be obtained by granting privileged status to the position observable. (b) We show that Clifton's (1995) characterization of the Kochen-Dieks modal interpretation is a corollary of our theorem in the special case when the density operator is taken as the privileged observable. (c) We show that the 'uniqueness' demonstrated by Bub and Clifton is only guaranteed in certain special cases -- viz., when the quantum state is pure, or if the privileged observable is compatible with the density operator. 
We also use our results to articulate a solid mathematical foundation for certain tenets of the orthodox Copenhagen interpretation of quantum theory. For example, the uncertainty principle asserts that there are strict limits on the precision with which we can know, simultaneously, the position and momentum of a quantum-mechanical particle. However, the Copenhagen interpretation of this fact is not simply that a precision momentum measurement necessarily and uncontrollably disturbs the value of position, and vice-versa; but that position and momentum can never in reality be thought of as simultaneously determinate. We provide warrant for this stronger 'indeterminacy principle' by showing that there is no quantum state that assigns a sharp value to both position and momentum; and, a fortiori, that it is mathematically impossible to construct a beable algebra that contains both the position observable and the momentum observable. We also prove a generalized version of the Bub-Clifton theorem that applies to "singular" states (i.e., states that arise from non-countably-additive probability measures, such as Dirac delta functions). This result allows us to provide a mathematically rigorous reconstruction of Bohr's response to the original EPR argument -- which makes use of a singular state. In particular, we show that if the position of the first particle is privileged (e.g., as Bohr would do in a position measuring context), the position of the second particle acquires a definite value by virtue of lying in the corresponding maximal beable subalgebra. But then (by the indeterminacy principle) the momentum of the second particle is not a beable; and EPR's argument for the simultaneous reality of both position and momentum is undercut.
I argue that although Paul Grice’s picture of conversational maxims and conversational implicature is an immensely useful theoretical tool, his view about the nature of the maxims is misguided. Grice portrays conversational maxims as tenets of rationality, but I will contend that they are best seen as social norms. I develop this proposal in connection with Philip Pettit’s account of social norms, with the result that conversational maxims are seen as grounded in practices of social approval and disapproval within a given group. This shift to seeing conversational maxims as social norms has several advantages. First, it allows us to neatly accommodate possible variation with respect to the maxims across well-functioning linguistic groups. Second, it facilitates a more psychologically plausible account of flouting. And third, it generates insights about the nature of social norms themselves.
In this paper, I confront Parfit’s Mixed Maxims Objection. I argue that recent attempts to respond to this objection fail, and I argue that their failure is compounded by the failure of recent attempts to show how the Formula of Universal Law can be used to demarcate the category of obligatory maxims. I then set out my own response to the objection, drawing on remarks from Kant’s Metaphysics of Morals for inspiration and developing a novel account of how the Formula of Universal Law can be employed to determine the deontic status of action tokens, action types, and maxims.
Plato's dialogues may be interpreted in a number of ways. One interpretation sees Plato's concept of The Good as a precursor of maximization theory, a modern behavioral theory. Plato identifies goodness with an ideal pattern of people's overt choices under the constraints of everyday life. Correspondingly, maximization theory sees goodness (in terms of "value") as a quantifiable function of overt, constrained choices of an animal. In both conceptions goodness may be increased by expanding the temporal extent over which a behavioral pattern is integrated.
According to an influential objection, which Martha Nussbaum has powerfully restated, expressing anger in democratic public discourse is counterproductive from the standpoint of justice. To resist this challenge, this article articulates a crucial yet underappreciated sense in which angry discourse is epistemically productive. Drawing on recent developments in the philosophy of emotion, which emphasize the distinctive phenomenology of emotion, I argue that conveying anger to one’s listeners is epistemically valuable in two respects: first, it can direct listeners’ attention to elusive morally relevant features of the situation; second, it enables them to register injustices that their existing evaluative categories are not yet suited to capturing. Thus, when employed skillfully, angry speech promotes a greater understanding of existing injustices. This epistemic role is indispensable in highly divided societies, where the injustices endured by some groups are often invisible to, or misunderstood by, other groups.
I begin with Kant's notion of a maxim and consider the role which this notion plays in Kant's formulations of the fundamental categorical imperative. This raises the question of what a maxim is, and why there is not the same requirement for resolutions of other kinds to be universalizable. Drawing on Bernard Williams' notion of a thick ethical concept, I proffer an answer to this question which is intended neither in a spirit of simple exegesis nor as a straightforward exercise in moral philosophy but as something that is poised somewhere between the two. My aim is to provide a kind of rational reconstruction of Kant. In the final section of the essay, I argue that this reconstruction, while it manages to salvage something distinctively Kantian, also does justice to the relativism involved in what J. L. Mackie calls 'people's adherence to and participation in different ways of life'.
Generalizations of partial meet contraction are introduced that start out from the observation that only some of the logically closed subsets of the original belief set are at all viable as contraction outcomes. Belief contraction should proceed by selection among these viable options. Several contraction operators that are based on such selection mechanisms are introduced and then axiomatically characterized. These constructions are more general than the belief base approach. It is shown that partial meet contraction is exactly characterized by adding to one of these constructions the condition that all logically closed subsets of the belief set can be obtained as the outcome of a single (multiple) contraction. Examples are provided showing the counter-intuitive consequences of that condition, thus confirming the credibility of the proposed generalization of the AGM framework.
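Partial meet contraction, the construction these generalizations start from, can be sketched concretely for a finite propositional belief base. The Python toy below is my own illustration under simplifying assumptions (a two-variable language, formulas as truth-functions, all names hypothetical); it is not the paper's generalized selection mechanisms. It computes the remainder set K ⊥ p (maximal subsets of K not entailing p) and intersects a selected family of remainders.

```python
from itertools import combinations, product

VARS = ["p", "q"]  # propositional variables of the toy language

def entails(beliefs, goal):
    """Brute-force entailment: does every assignment satisfying all
    formulas in `beliefs` also satisfy `goal`?"""
    for values in product([True, False], repeat=len(VARS)):
        v = dict(zip(VARS, values))
        if all(fn(v) for _, fn in beliefs) and not goal(v):
            return False
    return True

def remainders(beliefs, goal):
    """K ⊥ p: the maximal subsets of `beliefs` that do not entail `goal`."""
    beliefs = list(beliefs)
    non_implying = [frozenset(s)
                    for r in range(len(beliefs) + 1)
                    for s in combinations(beliefs, r)
                    if not entails(list(s), goal)]
    # keep only subsets that are not strictly contained in another
    return [s for s in non_implying
            if not any(s < t for t in non_implying)]

def partial_meet_contraction(beliefs, goal, select=None):
    """K ÷ p = intersection of gamma(K ⊥ p).  The default selection
    function gamma keeps every remainder (full meet contraction)."""
    rem = remainders(beliefs, goal)
    chosen = rem if select is None else select(rem)
    if not chosen:  # goal is a tautology: contraction changes nothing
        return set(beliefs)
    return set.intersection(*map(set, chosen))

# Formulas are (name, truth-function) pairs.
p = ("p", lambda v: v["p"])
q = ("q", lambda v: v["q"])
p_implies_q = ("p->q", lambda v: (not v["p"]) or v["q"])

K = [p, q, p_implies_q]
full_meet = partial_meet_contraction(K, q[1])                      # intersect all remainders
maxichoice = partial_meet_contraction(K, q[1], lambda rs: rs[:1])  # pick a single remainder
```

Here K ⊥ q consists of {p} and {p→q}, so full meet contraction retains nothing while a maxichoice selection keeps one maximal remainder; the abstract's proposal, on this reading, is to let selection range over a more general family of viable logically closed subsets rather than over remainders alone.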
Recent developments in the philosophy of logic suggest that the correct foundational logic is like God in that both are maximally infinite and only partially graspable by finite beings. This opens the door to a new argument for the existence of God, exploiting the link between God and logic through the intermediary of the Logos. This article explores the argument from the nature of God to the nature of logic, and sketches the converse argument from the nature of logic to the existence of God.
The authors propose a model for business ethics that arises directly from business practice. This model is based on a behavioral definition of the economic theory of profit maximization and situates business ethics within opportunity costs. Within that context, they argue that good business and good ethics are synonymous, that ethics is at the heart and center of business, and that profits and ethics are intrinsically related.
Deontic constraints prohibit an agent from performing acts of a certain type even when doing so would prevent more instances of that act from being performed by others. In this article I show how deontic constraints can be interpreted as either maximizing or non-maximizing rules. I then argue that they should be interpreted as maximizing rules, because interpreting them as non-maximizing rules results in a problem with moral advice. Given this conclusion, a strong case can be made that consequentialism provides the best account of deontic constraints.