My goal is to illuminate truth-making by way of illuminating the relation of making. My strategy is not to ask what making is, in the hope of a metaphysical theory about its nature. It is rather to look first to the language of making. The metaphor behind making refers to agency. It would be absurd to suggest that claims about making are claims about agency. It is not absurd, however, to propose that the concept of making somehow emerges from some feature to do with agency. That is the contention to be explored in this paper. The way to do this is through expressivism. Truth-making claims, and making-claims generally, are claims in which we express mental states linked to our manipulation of concepts, like truth. In particular, they express dispositions to undertake derivations using inference rules, in which introduction rules have a specific role. I then show how this theory explains our intuitions about truth's asymmetric dependence on being.
In the proof-theoretic semantics approach to meaning, harmony, requiring a balance between introduction-rules (I-rules) and elimination-rules (E-rules) within a meaning-conferring natural-deduction proof-system, is a central notion. In this paper, we consider two notions of harmony that were proposed in the literature: 1. GE-harmony, requiring a certain form of the E-rules, given the form of the I-rules. 2. Local intrinsic harmony, requiring the existence of certain transformations of derivations, known as reduction and expansion. We propose a construction of the E-rules (in GE-form) from given I-rules, and prove that the constructed rules also satisfy local intrinsic harmony. The construction is based on a classification of I-rules, and constitutes an implementation of Gentzen's (and Prawitz's) remark that E-rules can be "read off" I-rules.
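As a familiar textbook illustration of the "read off" idea (conjunction is chosen here for familiarity and is not necessarily among the paper's own cases), the elimination rule in GE-form lets the conclusion be any formula C derivable from the premisses of the I-rule taken as assumptions:

```latex
% Conjunction: standard I-rule, and its elimination rule in GE-form,
% where C is an arbitrary conclusion derivable from assumptions A and B.
\[
\frac{A \qquad B}{A \wedge B}\;(\wedge\mathrm{I})
\qquad\qquad
\frac{A \wedge B \qquad
      \begin{array}{c}[A,\,B]\\ \vdots\\ C\end{array}}
     {C}\;(\wedge\mathrm{GE})
\]
```

The ordinary projection rules (from A ∧ B infer A, and infer B) are recovered as the special cases where C is A or B itself.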
Is there a universal set of rules for discovering and testing scientific hypotheses? Since the birth of modern science, philosophers, scientists, and other thinkers have wrestled with this fundamental question of scientific practice. Efforts to devise rigorous methods for obtaining scientific knowledge include the twenty-one rules Descartes proposed in his Rules for the Direction of the Mind and the four rules of reasoning that begin the third book of Newton's Principia, and continue today in debates over the very possibility of such rules. Bringing together key primary sources spanning almost four centuries, Science Rules introduces readers to scientific methods that have played a prominent role in the history of scientific practice. Editor Peter Achinstein includes works by scientists and philosophers of science to offer a new perspective on the nature of scientific reasoning. For each of the methods discussed, he presents the original formulation of the method (selections written by a proponent of the method), together with an application to a particular scientific example and a critical analysis of the method that draws on historical and contemporary sources. The methods included in this volume are: Cartesian rationalism, with an application to Descartes' laws of motion; Newton's inductivism and the law of gravity; two versions of hypothetico-deductivism -- those of William Whewell and Karl Popper -- and the nineteenth-century wave theory of light; and Paul Feyerabend's principle of proliferation and Thomas Kuhn's views on scientific values, both of which deny that there are universal rules of method, with an application to Galileo's tower argument.
Included also is a famous nineteenth-century debate about scientific reasoning between the hypothetico-deductivist William Whewell and the inductivist John Stuart Mill, and an account of the realism-antirealism dispute about unobservables in science, with a consideration of Perrin's argument for the existence of molecules in the early twentieth century.
The interplay of introduction and elimination rules for propositional connectives is often seen as suggesting a distinguished role for intuitionistic logic. We prove three formal results concerning intuitionistic propositional logic that bear on that perspective, and discuss their significance. First, for a range of connectives including both negation and the falsum, there are no classically or intuitionistically correct introduction rules. Second, irrespective of the choice of negation or the falsum as a primitive connective, classical and intuitionistic consequence satisfy exactly the same structural, introduction, and elimination (briefly, elementary) rules. Third, for falsum as primitive only, intuitionistic consequence is the least consequence relation that satisfies all classically correct elementary rules.
The intention here is that of giving a formal underpinning to the idea of 'meaning-is-use' which, even if based on proofs, is rather different from proof-theoretic semantics as in the Dummett–Prawitz tradition. Instead, it is based on the idea that the meanings of the logical constants are given by the explanation of immediate consequences, which in formalistic terms means the effect of elimination rules on the result of introduction rules, i.e. the so-called reduction rules. For that we suggest an extension to the Curry–Howard interpretation which draws on the idea of labelled deduction, and brings back Frege's device of variable-abstraction to operate on the labels (i.e., proof-terms) alongside formulas of predicate logic.
This introduction highlights two of Mondzain's contributions in the chapter reproduced here, "Iconic Space and the Rule of Lands." The first is her discussion of a link between images and power, which stresses the formal characteristics of paintings rather than their narratives. The second is her examination of the specific task which representation is called on to perform in religious as opposed to secular contexts, where spiritual, otherworldly figures are given physical shape and form.
The inversion principle for logical rules expresses a relationship between introduction and elimination rules for logical constants. Hallnäs & Schroeder-Heister (1990, 1991) proposed the principle of definitional reflection, which embodies basic ideas of inversion in the more general context of clausal definitions. For the context of admissibility statements, this has been further elaborated by Schroeder-Heister (2007). Using the framework of definitional reflection and its admissibility interpretation, we show that, in the sequent calculus of minimal propositional logic, the left introduction rules are admissible when the right introduction rules are taken as the definitions of the logical constants and vice versa. This generalizes the well-known relationship between introduction and elimination rules in natural deduction to the framework of the sequent calculus.
The introduction of connectionist or parallel distributed processing (PDP) systems to model cognitive functions has raised the question of the possible relations between these models and traditional information processing models which employ rules to manipulate representations. After presenting a brief account of PDP models and two ways in which they are commonly interpreted by those seeking to use them to explain cognitive functions, I present two ways one might relate these models to traditional information processing models and so not totally repudiate the tradition of modelling cognition through systems of rules and representations. The proposal that seems most promising is that PDP-type structures might provide the underlying framework in which a rule and representation model might be implemented. To show how one might pursue such a strategy, I discuss recent research by Barsalou on the instability of concepts and show how that might be accounted for in a system whose microstructure had a PDP architecture. I also outline how adopting a multi-leveled view of the mind, where on one level the mind employed a PDP-type system and at another level constituted a rule processing system, would allow researchers to relocate some problems which seemed difficult to explain at one level, such as the capacity for concept learning, to another level where it could be handled in a straightforward manner.
Introduction -- Part I: Starting points -- Some decisions are easier than others -- Easy decisions -- More difficult decisions -- Moral dilemmas -- The deep basis of the moral life -- Practical decision making -- Why ethics is ultimately religious -- Acceptable and unacceptable forms of revelation -- The useful incompleteness of religious tradition -- Moral virtue and character -- Intuition and deliberation in moral decision-making -- The absolute and the relative in moral life -- Have we become too relativistic? -- The natural law approach -- God as the absolute -- Facts and values -- Individual integrity and communal authority -- The transcendent absolute -- Rules and relationships -- The moral burden of proof -- The legal analogy -- Applying the idea of "presumption" to ethical decision-making -- Moral presumptions as a common starting point -- Basic moral presumptions -- Uses of scripture -- Positive Christian value presumptions -- The limits and flaws in human nature -- Presumptions that preserve balance -- A presumption for Scripture and tradition -- When presumptions are in conflict -- Part II: Applications and illustrations -- Difficult personal decisions -- Sexual intimacy and family life -- Contraception and abortion -- Choosing a spouse -- Divorce -- Vocational choices -- The uses of our money -- Political choices -- Hard choices in the public arena -- Abortion -- Homosexuality -- The dilemma of "affirmative action" -- Securing economic justice -- Environmental policies -- Criminal justice -- Uses of military power -- Hard choices at the global level -- International institution building -- International security and policing -- Nuclear disarmament -- Economic globalization -- Global warming -- Hard choices in communities of faith.
The thesis that, in a system of natural deduction, the meaning of a logical constant is given by some or all of its introduction and elimination rules has been developed recently in the work of Dummett, Prawitz, Tennant, and others, by the addition of harmony constraints. Introduction and elimination rules for a logical constant must be in harmony. By deploying harmony constraints, these authors have arrived at logics no stronger than intuitionist propositional logic. Classical logic, they maintain, cannot be justified from this proof-theoretic perspective. This paper argues that, while classical logic can be formulated so as to satisfy a number of harmony constraints, the meanings of the standard logical constants cannot all be given by their introduction and/or elimination rules; negation, in particular, comes under close scrutiny.
Fuzzy logic has become an important tool for a number of different applications ranging from the control of engineering systems to artificial intelligence. In this concise introduction, the author presents a succinct guide to the basic ideas of fuzzy logic, fuzzy sets, fuzzy relations, and fuzzy reasoning, and shows how they may be applied. The book culminates in a chapter which describes fuzzy logic control: the design of intelligent control systems using fuzzy if-then rules which make use of human knowledge and experience to behave in a manner similar to a human controller. Throughout, the level of mathematical knowledge required is kept basic and the concepts are illustrated with numerous diagrams to aid in comprehension. As a result, all those curious to know more about fuzzy concepts and their real-world application will find this a good place to start.
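To make the idea of a fuzzy if-then rule concrete, here is a minimal sketch of two such rules driving a fan controller. All names, membership ranges, and rpm values are invented for illustration; they are not taken from the book.

```python
# Minimal sketch of fuzzy if-then rules for a fan controller.
# All membership ranges and rpm values are hypothetical.

def tri(x, a, b, c):
    """Triangular membership function: rises from a, peaks at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fan_speed(temp):
    """Apply two fuzzy rules and defuzzify by weighted average.

    Rule 1: IF temperature IS cold THEN fan IS slow (200 rpm).
    Rule 2: IF temperature IS hot  THEN fan IS fast (1800 rpm).
    """
    cold = tri(temp, -10.0, 10.0, 25.0)   # fuzzify: degree of "cold"
    hot = tri(temp, 15.0, 35.0, 55.0)     # fuzzify: degree of "hot"
    num = cold * 200.0 + hot * 1800.0     # weight each rule's consequent
    den = cold + hot
    return num / den if den else 0.0

print(fan_speed(35.0))  # fully "hot"  -> 1800.0
print(fan_speed(20.0))  # partly cold and partly hot -> between 200 and 1800
```

At 20 degrees both rules fire partially, and the output blends the two consequents in proportion to their firing strengths; this smooth interpolation between rules is what lets a handful of if-then statements imitate a human controller.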
Analytical commentary -- Fruits upon one tree -- The continuation of the early draft into philosophy of mathematics -- Hidden isomorphism -- A common methodology -- The flatness of philosophical grammar -- Following a rule 185-242 -- Introduction to the exegesis -- Rules and grammar -- The tractatus and rules of logical syntax -- From logical syntax to philosophical grammar -- Rules and rule-formulations -- Philosophy and grammar -- The scope of grammar -- Some morals -- Exegesis 185-8 -- Accord with a rule -- Initial compass bearings -- Accord and the harmony between language and reality -- Rules of inference and logical machinery -- Formulations and explanations of rules by examples -- Interpretations, fitting and grammar -- Further misunderstandings -- Exegesis 189-202 -- Following rules, mastery of techniques and practices -- Following a rule -- Practices and techniques -- Doing the right thing and doing the same thing -- Privacy and the community view -- On not digging below bedrock -- Private linguists and private linguists : Robinson Crusoe sails again -- Is a language necessarily shared with a community of speakers?
-- Innate knowledge of a language -- Robinson Crusoe sails again -- Solitary cavemen and monolinguists -- Private languages and private languages -- Exegesis 203-37 -- Agreement in definitions, judgements, and form of life -- The scaffolding of facts -- The role of our nature -- Forms of life -- Agreement : consensus of human beings and their actions -- Exegesis 238-42 -- Grammar and necessity -- Setting the stage -- Leitmotifs -- External guidelines -- Necessary propositions and norms of representation -- Concerning the truth and falsehood of necessary propositions -- What necessary truths are about -- Illusions of correspondence : ideal objects, kinds of reality, and ultra-physics -- The psychology of the A priori -- Knowledge -- Belief -- Certainty -- Surprise -- Discoveries and conjectures -- Compulsion -- Propositions of logic and laws of thought -- Alternative forms of representation -- The arbitrariness of grammar -- A kinship to the non-arbitrary -- Proof in mathematics -- Conventionalism.
1. There is only one rule of inference, modus ponens. This is true both in the presentations of Begriffsschrift and Grundgesetze. (But cf. note regarding the latter.) There are other ways of making transitions between propositions in proofs, but these are never labeled by Frege "rules of inference." These pertain to scope of quantification, parsing of formulas (bracketing), introduction of definitions, conventions for the use and replacement of the various letters (variables), and certain structural reorganizations (e.g. amalgamation of horizontals, and of identical subcomponents); cf. the list in Gg §48.
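For readers unfamiliar with the notation, modus ponens is standardly displayed as the following inference-rule schema (a modern textbook rendering, not Frege's own two-dimensional notation):

```latex
% Modus ponens: from A and A -> B, infer B.
\[
\frac{A \qquad A \rightarrow B}{B}\;(\mathrm{MP})
\]
```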
A system of natural deduction rules is proposed for an idealized form of English. The rules presuppose a sharp distinction between proper names and such expressions as the c, a (an) c, some c, any c, and every c, where c represents a common noun. These latter expressions are called quantifiers, and other expressions of the form that c or that c itself, are called quantified terms. Introduction and elimination rules are presented for any, every, some, a (an), and the, and also for any which, every which, and so on, as well as rules for some other concepts. One outcome of these rules is that Every man loves some woman is implied by, but does not imply, Some woman is loved by every man, since the latter is taken to mean the same as Some woman is loved by all men. Also, Jack knows which woman came is implied by Some woman is known by Jack to have come, but not by Jack knows that some woman came.
This book provides a rich, systematic, and accessible introduction to moral psychology, aimed at undergraduate philosophy and psychology majors. There are eight chapters, in addition to a short introduction, prospective conclusion, and extensive bibliography. The recipe for each chapter will be: a) to introduce a philosophical topic (e.g., altruism, virtue, preferences, rules) and some prominent positions on it, without assuming prior acquaintance on the part of the reader; b) to canvass and explain the relevance of a particular domain of empirical inquiry (e.g., evolutionary psychology, behavioral economics, neuroscience) to the topic; c) to argue for some tentative conclusions about the topic; and d) to suggest further avenues for conceptual and empirical research. The guiding theme of the book is that moral philosophy without psychological content is empty, whereas psychological investigation without philosophical insight is blind. Thus, I advocate a holistic approach that pictures moral psychology as a project of collaborative inquiry into the descriptive and normative aspects of the human condition. Ideally, students will come away from (a course built around) the book with the sense that, though philosophy may not be the queen of the sciences, its role is not merely to interpret scientific results.
Proof and Disproof in Formal Logic is a lively and entertaining introduction to formal logic providing an excellent insight into how a simple logic works. Formal logic allows you to check a logical claim without considering what the claim means. This highly abstracted idea is an essential and practical part of computer science. The idea of a formal system-a collection of rules and axioms, which define a universe of logical proofs-is what gives us programming languages and modern-day programming. This book concentrates on using logic as a tool: making and using formal proofs and disproofs of particular logical claims. The logic it uses-natural deduction-is very small and very simple; working with it helps you see how large mathematical universes can be built on small foundations. The book is divided into four parts: Part I "Basics" gives an introduction to formal logic with a short history of logic and explanations of some technical words. Part II "Formal Syntactic Proof" shows you how to do calculations in a formal system where you are guided by shapes and never need to think about meaning. Your experiments are aided by Jape, which can operate as both inquisitor and oracle. Part III "Formal Semantic Disproof" shows you how to construct mathematical counterexamples to show that proof is impossible. Jape can check the counterexamples you build. Part IV "Program Specification and Proof" describes how to apply your logical understanding to a real computer science problem, the accurate description and verification of programs. Jape helps, as far as arithmetic allows. Aimed at undergraduates and graduates in computer science, logic, mathematics and philosophy, the text includes reference to and exercises based on the computer software package Jape, an interactive teaching and research tool designed and hosted by the author that is freely available on the web.
In this paper, I consider some problems concerning the structure of legal systems. In order to do this, I basically analyze the promulgation and derogation of legal rules. Frequently, promulgation has been referred to as the introduction of a rule into, and derogation as the removal of a rule from, a normative system. I try to show that there is more to it than that. One of the main ideas of the paper is that the enactment or derogation of a legal rule by an authority na restricts the competence of all lower authorities: Once a rule R has been enacted by an authority na, authorities inferior to na cannot remove R from the normative system; and when R has been derogated by na, lower authorities do not have the competence to introduce R into the system. Further important questions include: What happens with derogated rules? What is the structure of the set of derogated rules? When does a rule belong to a derogated set, and when is it removed from a derogated set? These questions are very important for a theory of legal systems, and I try to give some possible answers. Perhaps the main conclusion of the paper is that promulgation and derogation can be considered very similar processes with respect to legal systems.
In this paper we start an investigation of a logic called the logic of algebraic rules. The relation of derivability of this logic is defined on universal closures of special disjunctions of equations extending the relation of derivability of the usual equational logic. The paper contains some simple theorems and examples given in justification for the introduction of our logic. A number of open questions are posed.
General introduction -- I. Basic themes. Kant's ethical theory : an overview ; Kantian normative ethics ; Kantian constructivism as normative ethics -- II. Virtue. Finding value in nature ; Kant on weakness of will ; Kantian virtue and "virtue ethics" ; Kant's Tugendlehre as normative ethics -- III. Moral rules and principles. The dignity of persons : Kant, problems, and a proposal ; Assessing moral rules : utilitarian and Kantian perspectives ; The importance of moral rules and principles ; Moral construction as a task : sources and limits -- IV. Practical questions. Questions about Kant's opposition to revolution ; Treating criminals as ends in themselves ; Kant and humanitarian intervention ; Moral responsibilities of bystanders.
A uniform calculus for linear logic is presented. The calculus has the form of a natural deduction system in sequent calculus style with general introduction and elimination rules. General elimination rules are motivated through an inversion principle, the dual form of which gives the general introduction rules. By restricting all the rules to their single-succedent versions, a uniform calculus for intuitionistic linear logic is obtained. The calculus encompasses both natural deduction and sequent calculus, which are obtained as special instances from the uniform calculus. Other instances give all the invertibilities and partial invertibilities for the sequent calculus rules of linear logic. The calculus is normalizing and satisfies the subformula property for normal derivations.
The question whether corporations should be used as a means for administering distributive justice is crucial. There are two fundamental issues associated with this. Firstly, would the introduction of rules have any distributional effect? Secondly, what would be the efficiency cost? In this paper, we explore both questions with reference to a job-security corporate rule. We show that the job-security rule will always produce distributional consequences which are consistent with its objectives. However, whether or not it is a socially desirable policy depends on whether the economy is at an efficient allocation and what motivated the policy. It can be shown that under some conditions, even if the initial allocation is efficient, yet not the socially desirable one, a corporate rule—like the job security one—may produce socially desirable gains. The same can be said in the case where the initial allocation is inefficient even in the presence of competitive markets (due to incompleteness problems). However, these social gains would come at the expense of competitiveness. In an internationally open environment this may offset the initial social gains from this policy. Whether or not this happens would then depend on whether agents have a social dimension to their preferences or not. If they do, their response to the initiative may offset the initial loss of competitiveness. Alternatively, society may face the dilemma of whether to remain open or to uphold local conceptions of distributive justice.
The distinction between ‘partial’ and ‘total’ interpretations (models) is discussed and related to the distinction between proof-theoretical and model-theoretical treatments of logic. It is claimed that there is a parallel between the construction of a proof based on a set of premises and e.g. the production of a natural-language text which is based on information in some kind of data-base. The main part of the paper is devoted to a discussion of the relations between the deduction rules traditionally associated with the existential quantifier and notions pertaining to the theory of reference such as specificity and referentiality/attributivity. Two types of specificity are distinguished, which can be connected with ‘Existential Elimination’ and ‘Existential Introduction’, respectively. A distinction is further made between trivial and non-trivial ‘Existential Introduction’, where only the latter kind involves erasure of ‘coreference links.’ It is argued that an analogous treatment of the referential-attributive distinction is a way of making sense of Donnellan's suggestion that the latter may depend on the description's role in an argument. Finally, the notions of ‘external anchoring’ and ‘stability of individual concepts’ are related to the distinctions made earlier in the paper.
What is it to be scientific? Is there such a thing as scientific method? And if so, how might such methods be justified? Robert Nola and Howard Sankey seek to provide answers to these fundamental questions in their exploration of the major recent theories of scientific method. Although for many scientists their understanding of method is something they just “pick up” in the course of being trained, Nola and Sankey argue that it is possible to be explicit about what this tacit understanding of method is, rather than leave it as some unfathomable mystery. They robustly defend the idea that there is such a thing as scientific method and show how this might be legitimated. The book begins with the question of what methodology might mean and explores the notions of values, rules and principles, before investigating how methodologists have sought to show that our scientific methods are rational. Part 2 of the book sets out some principles of inductive method and examines its alternatives including abduction, IBE, and hypothetico-deductivism. Part 3 introduces probabilistic modes of reasoning, particularly Bayesianism in its various guises, and shows how it is able to give an account of many of the values and rules of method. Part 4 considers the ideas of philosophers who have proposed distinctive theories of method such as Popper, Lakatos, Kuhn and Feyerabend and Part 5 continues this theme by considering philosophers who have proposed “naturalised” theories of method such as Quine, Laudan and Rescher. The book offers readers a comprehensive introduction to the idea of scientific method and a wide-ranging discussion of how historians of science, philosophers of science and scientists have grappled with the question over the last fifty years.
The interpretation of implications as rules motivates a different left-introduction schema for implication in the sequent calculus, which is conceptually more basic than the implication-left schema proposed by Gentzen. Corresponding to results obtained for systems with higher-level rules, it enjoys the subformula property and cut elimination in a weak form.
In his classic work The Mind and its Place in Nature, published in 1925 at the height of the development of quantum mechanics but several years after the chemists Lewis and Langmuir had already laid the foundations of the modern theory of valence with the introduction of the covalent bond, the analytic philosopher C. D. Broad argued for the emancipation of chemistry from the crass physicalism that led physicists then and later—with support from a rabblement of philosophers who knew as much about chemistry as etymologists—to believe that chemistry reduced to physics. Here Broad’s thesis is recast in terms more familiar to chemists. In the hard sell of particle physics, several prominent figures in chemistry—Hoffmann, Primas, and Pauling—have had their views interpreted to imply that they were sympathetic to greedy reductionism when in fact they were not. Indeed, being chemists without physicists as alter egos, they could not but side with Broad’s contention that chemistry, as a science that deals primarily in emergent phenomena which are beyond the purview of physicalism, owes no acquiescence to particle physics and its ethereal wares. Historically, among the most widely used expediencies in chemistry and materials science are additivity or mixture rules and their cohort transferability, all of which are devised and used under the mantle of naive reductionism. Here it is argued that while the transfer of functional groups between molecules works empirically to an extent, it is strictly outlawed by the no-cloning theorem of quantum mechanics. Several illustrative examples related to chemistry’s irreducibility to physics are presented and discussed.
The failure of naive reductionism exhibited by the deep-inelastic scattering of leptons by A > 2 nuclei is traced to the same flawed reasoning that was the original basis of Moffitt’s ‘atoms in molecules’ hypothesis, the neglect of context, nuclei in the case of high-energy physics and molecules in the case of chemistry. A non-exhaustive list of other contexts from physics, chemistry, and molecular biology evidencing similar departures from the ideal of additivity or reductionism is provided for the perusal of philosophers. Had the call by the mathematician J. T. Schwartz for developments in mathematical linguistics possessed of a less single, less literal, and less simple-minded nature been met, perhaps it might have persuaded scientists to abandon their regressive fixation with unphysical reductionism and to adapt to new methodologies that engender a more nuanced handling of ubiquitous emergent phenomena as they arise in Nature than is the case today.
Sport, Rules and Values presents a philosophical perspective on some issues concerning the character of sport. Central questions for the text are motivated from real life sporting examples as described in newspaper reports. For instance, the (supposed) subjectivity of umpiring decisions is explored via an examination of the judging of ice-skating at the Salt Lake City Olympic Games of 2002. Throughout, the presentation is rich in concrete cases from sporting situations, including baseball, football, and soccer. While granting the constitutive nature of the rules of sport, discussion focuses on three broad uses commonly urged for rules: in defining sport; in judging or assessing sport (as deployed by judges or umpires); and in characterizing the value of sport, especially if that value is regarded as moral value. In general, McFee rejects a conception of the determinacy of rules as possible within sport, and a parallel picture of the determinacy assumed to be required by philosophy.
Several recent studies and initiatives have emphasized the importance of a strong ethical organizational DNA (ODNA) to create and promote an effective corporate governance culture of trust, integrity and intellectual honesty. This paper highlights the drawbacks of an excessively heavy reliance on rules-based approaches that increase the cost of doing business, overshadow essential elements of good corporate governance, create a culture of dependency, and can result in legal absolutism. The paper makes the case that the way forward for effective corporate governance is to strike an optimal balance between rules-based and principles-based approaches. The recent corporate scandals have demonstrated that the ethical ODNA is critical to the driving force and basis of legal and regulatory requirements. Effective governance means adhering to ethical principles, not merely complying with rules, and is a crucial guardian of a firm’s reputation and integrity. It is through an effective corporate governance program (that is, one that optimally captures and integrates the appropriate aspects of rules-based and principles-based approaches, and identifies and assesses the related risks) that an organization can reconfigure its ODNA for improved performance. Focusing on the ethical ODNA as the basis of new governance measures provides an opportunity to develop a competitive advantage as it represents a potential source of differentiation, strengthens the relationship with all stakeholders of the organization by building a culture of trust and integrity, and re-instills investor confidence. This paper employs dialectical reasoning that links the ODNA through principles-driven rules in developing a risks-based approach. A comparison from a risk assessment perspective between rules-based and principles-based approaches is presented.
Although there have been few applications employing dialectical reasoning in business research, this methodology can be extremely useful in isolating ethical issues and integrating them into the business process. The risks-based approach captures the benefits of both rules-based and principles-based approaches, and incorporates trust-based principles such as solidarity, subsidiarity and covenantal relationships.
We reflect on lessons that the lottery and preface paradoxes provide for the logic of uncertain inference. One of these lessons is the unreliability of the rule of conjunction of conclusions in such contexts, whether the inferences are probabilistic or qualitative; this leads us to an examination of consequence relations without that rule, the study of other rules that may nevertheless be satisfied in its absence, and a partial rehabilitation of conjunction as a ‘lossy’ rule. A second lesson is the possibility of rational inconsistent belief; this leads us to formulate criteria for deciding when an inconsistent set of beliefs may reasonably be retained.
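The unreliability of conjoining conclusions under uncertainty can be shown with a quick calculation. The following is a minimal numeric sketch of the familiar lottery/preface arithmetic (the threshold and belief count are invented for illustration, not taken from the paper):

```python
# With acceptance threshold 0.99, each of 100 independent beliefs held at
# probability 0.99 clears the threshold on its own, yet their conjunction
# does not -- conjunction of conclusions is a 'lossy' rule here.
threshold = 0.99
p_each = 0.99
n_beliefs = 100

p_conjunction = p_each ** n_beliefs  # probability all 100 beliefs are true

print(p_each >= threshold)         # True: each conjunct is individually acceptable
print(p_conjunction >= threshold)  # False: the conjunction falls below the threshold
print(round(p_conjunction, 3))     # 0.366
```

The same arithmetic drives the preface paradox: a rational author can accept each claim in a book while rationally doubting their conjunction.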
The value of solidarity, which is exemplified in noble groups like the Civil Rights Movement along with more mundane teams, families and marriages, is distinctive in part because people are in solidarity over, for or with regard to something, such as common sympathies, interests, values, etc. I use this special feature of solidarity to resolve a longstanding puzzle about enacted social moral rules, namely: aren’t these things just heuristics, rules of thumb or means of coordination that we ‘fetishize’ or ‘worship’ if we stubbornly insist on sticking to them when we can do more good by breaking them? I argue that when we are in a certain kind of solidarity with others, united by social moral rules that we have established among ourselves, the rules we have developed and maintain are a constitutive part of our solidary relationships with one another; and it is part of being in this sort of solidarity with our comrades that we are presumptively required to follow the social moral rules that join us together. Those in the Polish Revolution, for example, were bound by informally enforced rules about publicity, free speech and the use of violence, so following their own rules became a way of standing in a valuable sort of solidarity with one another. I explain why we can have non-instrumental reasons to follow the social moral rules that exist in our own society, to improve our rules, and even sometimes to break the otherwise good rules that help to unite us.
Rules proliferate; some are kept with a bureaucratic stringency bordering on the absurd, while others are manipulated and ignored in ways that injure our sense of justice. Under what conditions should we make exceptions to rules, and when should they be followed despite particular circumstances? The two dominant models in the current literature on rules are the particularist account and that which sees the application of rules as normative. Taking a position that falls between these two extremes, Alan Goldman is the first to provide a systematic framework to clarify when we need to follow rules in our moral, legal, and prudential decisions, and when we ought not to do so. This book will be of great interest to advanced students and professionals working in philosophy, law, decision theory, and the social sciences.
Genuine rules cannot capture our intuitive moral judgments because, if usable, they mention only a limited number of factors as relevant to decisions. But morally relevant factors are both numerous and unpredictable in the ways they interact to change priorities among them. Particularists have pointed this out, but their account of moral judgment is also inadequate, leaving no room for genuine reasoning or argument. Reasons must be general even if not universal. Particularists can insist that our judgments be reflective, unbiased, informed, and sensitive, requiring a background of experiences that expand sympathy and empathy for others. But beyond this, our judgments must be coherent. This requirement provides a way to reason to the correct answer to a controversial issue—the answer most coherent with our body of settled judgments. Rawls' account of coherence in terms of reflective equilibrium, where we adjust particular judgments to match rules and adjust rules to match judgments, is rejected since rules have no independent force. Instead, the central requirement is that we not judge cases differently without being able to cite a morally relevant difference between them. Such differences must make a difference elsewhere as well, although they need not do so universally. Factors cannot be relevant in only one context because they reflect values that must recur to be maintained. The method of moral reasoning based on this requirement is specified as follows: first, the specification of competing values or interests in the problematic case; second, the location of paradigm cases in which these competing values are prioritized, making sure that these settled judgments are reflective, informed, and sensitive; third, the search for relevant differences between the settled and problematic cases or the location of alternative, more closely analogous paradigms. The paper ends with an illustration of the method applied to the issue of doctor-assisted suicide.
I summarise a conception of morality as containing a set of rules which hold ceteris paribus and which impose pro-tanto obligations. I explain two ways in which moral rules are ceteris paribus, according to whether an exception is duty-voiding or duty-overriding. I defend the claim that moral rules are ceteris paribus against two qualms suggested by Luke Robinson’s discussion of moral rules and against the worry that such rules are uninformative. I show that Robinson’s argument that moral rules cannot ground pro-tanto obligations is unsound, because it confuses an absolute reason for an obligation with a reason for an absolute obligation, and because it overlooks the possibility that priority rules may be rules for ordering pro-tanto obligations rather than rules for eliminating contenders for the status of absolute obligation.
This article examines the learning of a scientific procedure, and its connection to the greater scientific community through the notion of Wittgensteinian rules. The analysis reveals this connection by demonstrating that learning in interaction is largely grounded in rule-based community descriptions and judgments rather than any inner process. This same analysis also demonstrates that learning processes are particularly suited for such an analysis because rules and concomitant phenomena comprise a significant portion of any learning interaction. This analysis further reveals the elucidating merit of Wittgensteinian rules, their relation to community and the concept of practice, and promotes the efficacy of participant-generated rule-formulations as analytic descriptors.
This article questions the continued use and application of EVA® (economic value added) because it is epistemologically a non-sequitur, fails to satisfy the requirements of sound research methodology in terms of being a reliable and valid metric, and is unlikely to satisfy the requirements of Rule 702 of the Federal Rules of Evidence. In the light of these insufficiencies, the continued use of EVA® is ethically questionable, and moreover in time is likely to result in class actions.
The article discusses burden of proof rules in social criticism. By social criticism I mean an argumentative situation in which an opponent publicly argues against certain social practices; the examples I consider are discrimination on the basis of species and discrimination on the basis of one's nationality. I argue that burden of proof rules assumed by those who defend discrimination are somewhat dubious. In social criticism, there are no shared values which would uncontroversially determine what the reasonable presumption is and who has the burden of proof, nor are there formal rules which would end the debate and determine the winner at a specific point.
A certain type of inference rule in (multi-)modal logics, generalizing Gabbay's Irreflexivity rule, is introduced, and some general completeness results about modal logics axiomatized with such rules are proved.
Inflectional morphology has been taken as a paradigmatic example of rule-governed grammatical knowledge (Pinker, 1999). The plausibility of this claim may be related to the fact that it is mainly based on studies of English, which has a very simple inflectional system. We examined the representation of inflectional morphology in Serbian, which encodes number, gender, and case for nouns. Linguists standardly characterize this system as a complex set of rules, with disagreements about their exact form. We present analyses of a large corpus of nouns which showed that, as in English, Serbian inflectional morphology is quasiregular: It exhibits numerous partial regularities creating neighborhoods that vary in size and consistency. We then asked whether a simple connectionist network could encode this statistical information in a manner that also supported generalization. A network trained on 3,244 Serbian nouns learned to produce correctly inflected phonological forms from a specification of a word's lemma, gender, number, and case, and generalized to untrained cases. The model's performance was sensitive to variables that also influence human performance, including surface and lemma frequency. It was also influenced by inflectional neighborhood size, a novel measure of the consistency of meaning to form mapping. A word-naming experiment with native Serbian speakers showed that this measure also affects human performance. The results suggest that, as in English, generating correctly inflected forms involves satisfying a small number of simultaneous probabilistic constraints relating form and meaning. Thus, common computational mechanisms may govern the representation and use of inflectional information across typologically diverse languages.
We describe PADUA, a protocol designed to support two agents debating a classification by offering arguments based on association rules mined from individual datasets. We motivate the style of argumentation supported by PADUA, and describe the protocol. We discuss the strategies and tactics that can be employed by agents participating in a PADUA dialogue. PADUA is applied to a typical problem in the classification of routine claims for a hypothetical welfare benefit. We particularly address the problems that arise from the extensive number of misclassified examples typically found in such domains, where the high error rate is a widely recognised problem. We give examples of the use of PADUA in this domain, and explore in particular the effect of intermediate predicates. We have also done a large scale evaluation designed to test the effectiveness of using PADUA to detect misclassified examples, and to provide a comparison with other classification systems.
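The kind of association rule such agents exchange can be sketched with a toy welfare-claims dataset. This is a hedged illustration only: the records, attribute names, and the simple confidence measure are invented here, not drawn from PADUA's actual implementation.

```python
# Toy claims dataset (all attribute names and values are invented).
records = [
    {"age_over_60": True,  "low_income": True,  "entitled": True},
    {"age_over_60": True,  "low_income": False, "entitled": False},
    {"age_over_60": True,  "low_income": True,  "entitled": True},
    {"age_over_60": False, "low_income": True,  "entitled": False},
]

def confidence(records, premises, conclusion):
    """Fraction of records satisfying all premises that also satisfy the conclusion."""
    matching = [r for r in records if all(r[p] for p in premises)]
    if not matching:
        return 0.0
    return sum(r[conclusion] for r in matching) / len(matching)

# A rule one agent might propose, and a weaker rule an opponent could attack:
print(confidence(records, ["age_over_60", "low_income"], "entitled"))  # 1.0
print(confidence(records, ["low_income"], "entitled"))  # ~0.667: two of three low-income claimants
```

In a dialogue, an agent could cite the first rule as an argument for entitlement, while its opponent counters by mining a rule with a conflicting conclusion or by distinguishing the case on a further attribute.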
The paper introduces the concept of polar decision rules and establishes that majority rules are polar rules. We identify second-best rules and penultimate rules in cases where majority rules are optimal or the most inferior, respectively. In particular, we specify the almost expert rule and the almost majority rule as the secondary rules of the expert and majority rules, respectively.
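One standard point of contact between expert and majority rules can be reproduced numerically. The following is a sketch in the familiar Condorcet style, with an invented competence value; the paper's own framework of polar rules is more general than this:

```python
from math import comb

def majority_correct(n, p):
    """Probability that a majority of n independent voters, each correct
    with probability p, reaches the correct decision (n odd)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

p = 0.6  # each voter's competence (invented value for illustration)
print(round(majority_correct(5, p), 3))  # 0.683: five equally competent voters
# outperform any single one of them, so here the majority rule beats the expert rule
```

When competences are unequal and one voter is far more reliable than the rest, the comparison can flip in favour of the expert rule, which is the kind of trade-off the second-best and penultimate rules are meant to track.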
Summary: A small step is made in the direction of defining some general basic rules which can serve as a framework for research in several fields of the social sciences. The method of working with analogies calls for a more accurate approach. Starting from the concept of evolution in the form of a basic rule, another basic rule is formulated. This rule shows which factors are most important in long-term developments and what types of development one can expect.
We consider the problem of choosing the location of a public facility either (a) on a tree network or (b) in a Euclidean space. (a) (1996) characterize the class of target rules on a tree network by Pareto efficiency and population-monotonicity. Using Vohra's (1999) characterization of rules that satisfy Pareto efficiency and replacement-domination, we give a short proof of the previous characterization and show that it also holds on the domain of symmetric preferences. (b) The result obtained for model (a) proves to be crucial for the analysis of the problem of choosing the location of a public facility in a Euclidean space. Our main result is the characterization of the class of coordinatewise target rules by unanimity, strategy-proofness, and either replacement-domination or population-monotonicity.
Towards the end of my paper, I briefly consider why Descartes stopped work on the _Rules_. My main concern is to accurately characterize the project represented in the _Rules_, especially in its relation to early-modern logic.
Visser's rules form a basis for the admissible rules of . Here we show that this result can be generalized to arbitrary intermediate logics: Visser's rules form a basis for the admissible rules of any intermediate logic for which they are admissible. This implies that if Visser's rules are derivable for then has no nonderivable admissible rules. We also provide a necessary and sufficient condition for the admissibility of Visser's rules. We apply these results to some specific intermediate logics and obtain that Visser's rules form a basis for the admissible rules of, for example, De Morgan logic, and that Dummett's logic and the propositional Gödel logics do not have nonderivable admissible rules.
We discuss a `negative' way of defining frame classes in (multi)modal logic, and address the question of whether these classes can be axiomatized by derivation rules, the `non-ξ rules', styled after Gabbay's Irreflexivity Rule. The main result of this paper is a metatheorem on completeness, of the following kind: If Λ is a derivation system having a set of axioms that are special Sahlqvist formulas and Λ+ is the extension of Λ with a set of non-ξ rules, then Λ+ is strongly sound and complete with respect to the class of frames determined by the axioms and the rules.
The purpose of this research review is to examine the usefulness of reconstructing problematic interpersonal conflict behavior as violations of rules for critical discussions. Dialectical reconstruction of interpersonal conflict behavior sheds light on the ways in which dialectical fallacies influence not only the course of a critical discussion, but also the personal and relationship outcomes experienced by arguers. Conflict sequences such as cross-complaining and demand/withdraw are shown to be problematic, in part, because they prevent parties from resolving their differences through rational dialogue. The paper concludes by presenting some implications of the pragma-dialectical reconstruction of interpersonal conflict behavior.