This is a report on the three-day workshop “The Neuroscience of Responsibility” that was held in the Philosophy Department at Delft University of Technology in the Netherlands on February 11–13, 2010. The workshop had 25 participants from the Netherlands, Germany, Italy, the UK, the USA, Canada, and Australia, with expertise in philosophy, neuroscience, psychology, psychiatry, and law. Its aim was to identify current trends in neurolaw research related specifically to the topic of responsibility, and to foster international collaborative research on this topic. The workshop agenda was constructed by the participants at the start of each day by surveying the topics of greatest interest and relevance to them. In what follows, we summarize (1) the questions that participants identified as most important for future research in this field, (2) the most prominent themes that emerged from the discussions, and (3) the two main international collaborative research project plans that came out of this meeting.
It has been argued that ethically correct robots should be able to reason about right and wrong. In order to do so, they must have a set of do’s and don’ts at their disposal. However, such a list may be inconsistent, incomplete, or otherwise unsatisfactory, depending on the reasoning principles that one employs. For this reason, it might be desirable if robots were to some extent able to reason about their own reasoning—in other words, if they had some meta-ethical capacities. In this paper, we sketch how one might go about designing robots that have such capacities. We show that the field of computational meta-ethics can profit from the same tools as have been used in computational metaphysics.
The paper presents a formal explication of the early Wittgenstein's views on ontology, the syntax and semantics of an ideal logical language, and the propositional attitudes. It will be shown that Wittgenstein gave a language of thought analysis of propositional attitude ascriptions, and that his ontological views imply that such ascriptions are truth-functions of (and supervenient upon) elementary sentences. Finally, an axiomatization of a quantified doxastic modal logic corresponding to Tractarian semantics will be given.
In 1926, Ernst Mally proposed a number of deontic postulates, which he added as axioms to classical propositional logic. The resulting system was unsatisfactory because it had the consequence that A is the case if and only if it is obligatory that A. We present an intuitionistic reformulation of Mally’s deontic logic and show that this system does not yield the objectionable theorem just mentioned, while most of the theorems that Mally considered acceptable are still derivable. The resulting system is unacceptable as a deontic logic, but it does make sense as a lax logic in the modern sense of the word.
When thinking about ethics, technology is often only mentioned as the source of our problems, not as a potential solution to our moral dilemmas. When thinking about technology, ethics is often only mentioned as a constraint on developments, not as a source and spring of innovation. In this paper, we argue that ethics can be the source of technological development rather than just a constraint, and that technological progress can create moral progress rather than just moral problems. We show this by analyzing how technology can contribute to the solution of so-called moral overload or moral dilemmas. Such dilemmas typically create a moral residue that is the basis of a second-order principle telling us to reshape the world so that we can meet all our moral obligations. We can do so, among other things, through guided technological innovation.
In 1926, Mally proposed the first formal deontic system. As Mally and others soon realized, this system had some rather strange consequences. We show that the strangeness of Mally's system is not so much due to Mally's informal deontic principles as to the fact that he formalized those principles in terms of the propositional calculus. If they are formalized in terms of relevant logic rather than classical logic, one obtains a system which is related to Anderson's relevant deontic logic and not nearly as strange as Mally's own system.
Over the past three centuries, human beings have often been compared with all kinds of machines. In the eighteenth century the clock metaphor was rather popular; Dutch psychological terms such as ‘drijfveer’ (mainspring), ‘van slag raken’ (to be thrown off beat), and ‘opgewonden zijn’ (to be wound up) still recall it [Vroon and Draaisma, 1985]. In the nineteenth century the steam-engine metaphor dominated. Freud’s psychology has been regarded as an elaborated version of this metaphor [Russelmann, 1983]. Expressions such as ‘uitlaatkleppen’ (outlet valves), ‘stoom afblazen’ (letting off steam), and ‘iemand opstoken’ (stoking someone up) also derive from it. The steam-engine metaphor is still taken seriously: in the new ‘dynamical’ school in cognitive science, the human mind is preferably compared with James Watt’s centrifugal governor (1788), the device that keeps a steam engine running at a constant speed [van Gelder, 1995, 1998]. Over the last fifty years, one frequently encounters the metaphor of the serial digital computer. A PC is an example of such a computer: it is serial because its central processor can carry out only one computation at a time, and digital because it can handle only whole numbers. The paradigm example of a serial digital computer is the Turing machine, about which more shortly. The serial digital computer metaphor has been applied to human beings in various ways: some regard the human being as a whole as a computer of this type, while others hold that individual nerve cells can be described in this way. In what follows, I discuss to what extent human beings, or parts of human beings, may indeed be conceived of as serial digital computers. I will show that this comparison has only limited value. It cannot be ruled out that, in computational respects, the human being is a much stronger or weaker kind of machine than the Turing machine.
Basic concepts. A formal model for the development of art is a structure consisting of a set T of “points in time”; a relation < (“is earlier than”) on T; a set K of “possible artworks”; a relation (“comments on”) on K; functions d, p, q, and s from K to the set of all subsets of K; and a function B from T to the set of all subsets of K. Here d(x) is the discipline to which artwork x belongs, p(x) is the technique (procédé) by which x was produced, q(x) is the quality of x, s(x) is the style of x, and B(t) is the set of artworks that exist at time t.
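As a sketch, the model just described can be typeset as the tuple below. The name 𝔐 and the symbol ◁ for the “comments on” relation are assumptions introduced here for readability, not the author's own notation:

```latex
\[
  \mathfrak{M} \;=\; \langle T,\, <,\, K,\, \triangleleft,\, d,\, p,\, q,\, s,\, B \rangle,
\]
\[
  {<} \subseteq T \times T, \qquad
  {\triangleleft} \subseteq K \times K, \qquad
  d, p, q, s : K \to \mathcal{P}(K), \qquad
  B : T \to \mathcal{P}(K).
\]
```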
We describe a new way in which theories about the deontic status of actions can be represented in terms of the standard two-sorted first-order extensional predicate calculus. Some of the resulting formal theories are easy to implement in Prolog; one prototype implementation—R. M. Lee's deontic expert shell DX—is briefly described.
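The paper's implementations are in Prolog, to which we have no access here; purely as an illustrative analogue, a tiny "deontic theory" over actions can be sketched as a fact base with a classification rule. All predicate names and the sample theory below are hypothetical, and are not taken from Lee's DX shell; the closure rule that every action neither obligatory nor forbidden counts as permitted is likewise an assumption of the sketch.

```python
# Hypothetical deontic fact base, in the spirit of a Prolog program's facts.
# These sets and action names are illustrative assumptions only.
obligatory = {"keep_promise", "pay_taxes"}
forbidden = {"steal"}
actions = {"keep_promise", "pay_taxes", "steal", "sing"}

def status(action: str) -> str:
    """Classify an action's deontic status.

    Assumed closure rule: anything neither obligatory nor
    forbidden is simply 'permitted'.
    """
    if action in obligatory:
        return "obligatory"
    if action in forbidden:
        return "forbidden"
    return "permitted"

for a in sorted(actions):
    print(a, "->", status(a))
```

A Prolog shell such as DX would instead answer queries against such facts directly; the Python loop above merely enumerates the same classifications.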
We present axiomatizations of the deontic fragment of Anderson's relevant deontic logic (the logic of obligation and related concepts) and the eubouliatic fragment of Anderson's eubouliatic logic (the logic of prudence, safety, risk, and related concepts).
The Dutch health care system is developing into a two-tier, or multiple-tier, system. How can moral principles be of help in assessing whether this is the right track? Instead of dismissing as unhelpful the principles that have been suggested so far and exchanging them for other, usually more complex, principles, it is suggested that the methods of moral inquiry be reconsidered. Keywords: diversification in health care, health care financing, public and private responsibility in health care.
In his article 'Specifying, balancing and interpreting bioethical principles' (Richardson, 2000), Henry Richardson claims that the two dominant theories in bioethics - principlism, put forward by Beauchamp and Childress in Principles of Biomedical Ethics, and common morality, put forward by Gert, Culver and Clouser in Bioethics: A Return to Fundamentals - are deficient because they employ balancing rather than specification to resolve disputes between principles or rules. We show that, contrary to Richardson's claim, the major problem with principlism, in either its original version or Richardson's specified version, is that it conceives of morality as being composed of free-standing principles, rather than, as common morality conceives it, as a complete public system composed of rules, ideals, morally relevant features, and a procedure for determining when a rule can be justifiably violated.
Moral problems do not always come in the form of great social controversies. More often, the moral decisions we make are made quietly, constantly, and within the context of everyday activities and quotidian dilemmas. Indeed, these smaller decisions are based on a moral foundation that few of us ever stop to think about but which guides our every action. Here distinguished philosopher Bernard Gert presents a clear and concise introduction to what he calls "common morality" -- the moral system that most thoughtful people implicitly use when making everyday, common sense moral decisions and judgments. Common Morality is useful in that -- while not resolving every disagreement on controversial issues -- it is able to distinguish between acceptable and unacceptable answers to moral problems. In the first part of the book Gert lays out the fundamental features of common morality: moral rules, moral ideals, and a two-step procedure for determining when a violation of a moral rule is justified. Written in a non-technical style, the ten general moral rules include rules on which everyone can agree, such as "do not kill," "do not deceive," and "keep your promises." The moral ideals include similarly uncontroversial precepts such as "relieve pain" and "aid the needy." In the second part of the book Gert examines the underlying concepts that justify common morality, such as the notions of rationality and impartiality. The distillation of over 40 years of scholarship, this book is the most accessible version of Gert's influential theory of morality as well as an eye-opening look at the moral foundations of our everyday actions. Throughout, the discussion is clear enough for a reader with little or no philosophy background.
This book offers the fullest and most sophisticated account of Gert's influential moral theory, a model first articulated in the classic work The Moral Rules: A New Rational Foundation for Morality, published in 1970. In this final revision, Gert makes clear that the moral rules are only one part of an informal system that does not provide unique answers to every moral question but does always provide a range of morally acceptable options. A new chapter on reasons includes an account of what makes one reason better than another, and a second new chapter is devoted to the question of justifying violations of the rules. Moral impartiality, the moral ideals, and virtue and vice are all treated in greater detail. Throughout, Gert attempts to answer all of the challenges that his work has provoked.
Joshua Gert presents a new account of normative practical reasons and the way in which they contribute to the rationality of action. He argues that, rather than simply "counting in favor of" action, normative reasons play two logically distinct roles--that of requiring action and that of justifying action. Gert's book will appeal to a range of readers interested in practical reasoning in particular, and moral theory more generally.
An updated and expanded successor to Culver and Gert's Philosophy in Medicine, this book integrates moral philosophy with clinical medicine to present a comprehensive summary of the theory, concepts, and lines of reasoning underlying the ...
This volume is a revised, enlarged, and broadened version of Gert's classic 1970 book, The Moral Rules. Advocating an approach he terms "morality as impartial rationality," Gert here presents a full discussion of his moral theory, adding a wealth of new illuminating detail to his analysis of the concepts--rationality/irrationality, good/evil, and impartiality--by which he defines morality. He constructs a "moral system" that includes rules prohibiting the kinds of actions that cause evil, procedures for determining when violation of the rules is permitted, and ideals which encourage actions that prevent or relieve suffering. To be valid, Gert argues, any such system must be "a public system that applies to all rational persons." The book concludes with a discussion of medical ethics, demonstrating the link between moral theory and its application to real moral problems.
Although it goes against a widespread and significant misunderstanding of his view, Michael Smith is one of the very few moral philosophers who explicitly wants to allow for the commonsense claim that, while morally required action is always favored by some reason, selfish and immoral action can also be rationally permissible. One point of this paper is to make it clear that this is indeed Smith’s view. A further point is to show that his way of accommodating this claim is inconsistent with his well-known “practicality requirement” on moral judgments: the thesis that any rational person will always have at least some motivation to do what she judges to be right. The general conclusion is that no view that, like Smith’s, associates the normative strength of a reason with the motivational strength of an ideal desire will allow for the wide range of rational permissibility that Smith wants to capture.
Terry Horgan and Mark Timmons have recently presented a series of papers in which they argue against what has come to be called the ‘new wave’ moral realism and moral semantics of David Brink, Richard Boyd, Peter Railton, and a number of other philosophers. The central idea behind Horgan and Timmons’s criticism of these ‘new wave’ theories has been extended by Sean Holland to include the sort of realism that drops out of response-dependent accounts that make use of an analogy between moral properties and secondary qualities. This paper argues that Holland’s extension depends crucially on the fact that his target is a direct response-dependent account of moral value. His argument does not work against such accounts of more basic normative notions such as ‘harm’ or ‘benefit’. And these more basic notions may then serve as the basic normative building blocks for an indirectly response-dependent moral theory.
Jason Stanley has criticized a contextualist solution to the sorites paradox that treats vagueness as a kind of indexicality. His objection rests on a feature of indexicals that seems plausible: that their reference remains fixed in verb phrase ellipsis. But the force of Stanley’s criticism depends on the undefended assumption that vague terms, if they are a special sort of indexical, must function in the same way that more paradigmatic indexicals do. This paper argues that there can be more than one sort of indexicality, that one term might easily have both sorts, and that therefore, despite Stanley’s worries, vagueness might easily be assimilated to one form.
Whether or not one endorses realism about colour, it is very tempting to regard realism about determinable colours such as green and yellow as standing or falling together with realism about determinate colours such as unique green or green31. Indeed some of the most prominent representatives of both sides of the colour realism debate explicitly endorse the idea that these two kinds of realism are so linked. Against such theorists, the present paper argues that one can be a realist about the determinable colours of objects, and thus hold that most of the colour ascriptions made by competent speakers are literally true, while denying that there are any positive facts of the matter as to the determinate colours of objects. The result is a realistic colour realism that can certify most of our everyday colour ascriptions as literally correct, while acknowledging the data regarding individual variation.
In The Sources of Normativity, Christine Korsgaard presents and defends a neo-Kantian theory of normativity. Her initial account of reasons seems to make them dependent upon the practical identity of the agent, and upon the value the agent must place on her own humanity. This seems to make all reasons agent-relative. But Korsgaard claims that arguments similar to Wittgenstein's private-language argument can show that reasons are in fact essentially agent-neutral. This paper explains both of Korsgaard's Wittgensteinian arguments and shows why neither of them works. The paper also provides a brief sketch of a different Wittgensteinian account of reasons that distinguishes the normative role of justification from that of requirement. On this account, the real agent-neutrality of reasons applies to their justificatory role, but not to their requiring role.
Many contemporary accounts of normative reasons for action accord a single strength value to normative reasons. This paper first uses some examples to argue against such views by showing that they seem to commit us to intransitive or counterintuitive claims about the rough equivalence of the strengths of certain reasons. The paper then explains and defends an alternate account according to which normative reasons for action have two separable dimensions of strength: requiring strength, and justifying strength. Such an account explains our intuitions in the cases that make trouble for single-value views. The justifying/requiring account is compared with two other solutions that have been offered to justify and explain our intuitions about these sorts of cases. These other solutions appeal to the notions of incommensurability of reasons, and to second-order normative entities called `exclusionary permissions'. It is argued that the justifying/requiring distinction provides a superior solution.
One strategy for providing an analysis of practical rationality is to start with the notion of a practical reason as primitive. Then it will be quite tempting to think that the rationality of an action can be defined rather simply in terms of ‘the balance of reasons’. But just as, for many philosophical purposes, it is extremely useful to identify the meaning of a word in terms of the systematic contribution the word makes to the meanings of whole sentences, this paper argues that it is extremely useful to explain the nature of practical reasons in terms of the systematic contributions that such reasons make to the wholesale rational statuses of actions. This strategy gives us a clear view of two logically distinct normative roles for practical reasons – justifying and requiring – that are often conflated, and it allows us to give clear definitions of what ‘the strength of a reason’ means within each of these roles. The final section of the paper explores some implications of the resulting view for the internalism/externalism debate about practical reasons, and for the practical significance of moral theory.