The Politics of Human Rights provides a systematic introductory overview of the nature and development of human rights. At the same time it offers an engaging argument about human rights and their relationship with politics. The author argues that human rights bear only a slight relation to natural rights and are historically novel: in large part they are a post-1945 reaction to genocide, which is, in turn, linked directly to the lethal potentialities of the nation-state. He suggests that an understanding of human rights should nonetheless focus primarily on politics, and that there are no universally agreed moral or religious standards to uphold them; rather, they exist in the context of social recognition within a political association. A consequence of this is that the 1948 Universal Declaration is a political, not a legal or moral, document. Vincent goes on to show that human rights are essentially reliant upon the self-limitation capacity of the civil state. With the development of this state, certain standards of civil behaviour have become, for a sector of humanity, slowly and painfully more customary. He shows that these standards of civility have extended to a broader society of states. At their best, human rights are an ideal civil state vocabulary. The author explains that we comprehend both our own humanity and human rights through our recognition relations with other humans, principally via citizenship of a civil state. Vincent concludes that the paradox of human rights is that they are upheld, to a degree, by the civil state, yet the point of such rights is to protect against another dimension of this same tradition (the nation-state). Human rights are essentially part of a struggle at the core of the state tradition.
Various authors debate the question of whether neuroscience is relevant to criminal responsibility. However, a plethora of different techniques and technologies, each with its own abilities and drawbacks, lurks beneath the label “neuroscience”; and in criminal law responsibility is not a single, unitary and generic concept, but rather a syndrome of at least six different concepts. Consequently, there are at least six different responsibility questions that the criminal law asks – at least one for each responsibility concept – and, I will suggest, a multitude of ways in which the techniques and technologies that comprise neuroscience might help us to address those diverse questions. In a sense, on my account neuroscience is relevant to criminal responsibility in many ways, but I hesitate to state my position like this because doing so obscures two points which I would rather highlight: first, neither neuroscience nor criminal responsibility is as unified as that; and second, the criminal law asks many different responsibility questions, not just one generic question.
Luck egalitarians think that considerations of responsibility can excuse departures from strict equality. However, critics argue that allowing responsibility to play this role has objectionably harsh consequences. Luck egalitarians usually respond either by explaining why that harshness is not excessive, or by identifying allegedly legitimate exclusions from the default responsibility-tracking rule to tone down that harshness. In response, critics respectively deny that this harshness is not excessive, or they argue that those exclusions would be ineffective or lacking in justification. Rather than taking sides, after criticizing both positions I argue that this way of carrying on the debate – i.e. as a debate about whether the harsh demands of responsibility outweigh other considerations, and about whether exclusions to responsibility-tracking would be effective and/or justified – is deeply problematic. On my account, the demands of responsibility do not – indeed, cannot – conflict with the demands of other normative considerations, because responsibility only provides a formal structure within which those other considerations determine how people may be treated; it does not generate its own practical demands.
This paper centres on the question of whether human rights can be reconciled with patriotism. It lays out the more conventional arguments which perceive them as incommensurable concepts. A central aspect of this incommensurability relates to the close historical tie between patriotism and the state. One further dimension of this argument is then articulated, namely, the contention that patriotism is an explicitly political concept. The implicit antagonism between, on the one hand, the state, politics and patriotism, and, on the other hand, human rights, is illustrated via the work of Carl Schmitt. However, in the last few decades there has been a resurgence of interest in patriotism and an attempt to formulate a more moderate form, which tries to reconcile itself with universal ethical themes. Some of these arguments are briefly summarised; the discussion then focuses on Jürgen Habermas’s understanding of constitutional patriotism. This is seen to provide an effective response to Schmitt’s arguments. However, there are weaknesses in the constitutional patriotic argument which relate to its limited understanding of both the state and politics. This leads me to formulate my own argument for “unpatriotic patriotism.” The discussion then examines and responds to certain potential criticisms of this argument.
This thesis considers two allegations which conservatives often level at no-fault systems — namely, that responsibility is abnegated under no-fault systems, and that no-fault systems under- and over-compensate. I argue that each of these allegations can be satisfactorily met — the responsibility allegation rests on the mistaken assumption that to properly take responsibility for our actions we must accept liability for those losses for which we are causally responsible; and the compensation allegation rests on the mistaken assumption that tort law’s compensatory decisions provide a legitimate norm against which no-fault’s decisions can be compared and criticized — but that doing so leads in a direction which is at odds with accident law reform advocates’ typical recommendations. On my account, accident law should not just be reformed in line with no-fault’s principles; rather, it should be completely abandoned. The principles that protect no-fault systems from the conservatives’ two allegations are incompatible with retaining the category of accident law: they entail that no-fault systems are a form of social welfare rather than accident law systems, and that under these systems serious deprivation — and to a lesser extent causal responsibility — should be conditions of eligibility to claim benefits.
Egalitarians must address two questions: i. What should there be an equality of? This concerns the currency of the ‘equalisandum’; and ii. How should this thing be allocated to achieve the so-called equal distribution? A plausible initial composite answer to these two questions is that resources should be allocated in accordance with choice, because this way the resulting distribution of the said equalisandum will ‘track responsibility’ — responsibility will be tracked in the sense that only we will be responsible for the resources that are available to us, since our allocation of resources will be a consequence of our own choices. But the effects of actual choices should not be preserved unless the prior effects of luck in constitution and circumstance have first been eliminated. For instance, people can choose badly because their choice-making capacity was compromised due to a lack of intelligence (i.e. due to constitutional bad luck), or because only bad options were open to them (i.e. due to circumstantial bad luck), and under such conditions we are not responsible for our choices. So perhaps a better composite answer to our two questions (from the perspective of tracking responsibility) might be that resources should be allocated so as to reflect people’s choices, but only once those choices have been corrected for the distorting effects of constitutional and circumstantial luck; on this account, choice preservation and luck elimination are two complementary aims of the egalitarian ideal.
Nevertheless, it is one thing to say that luck’s effects should be eliminated, but quite another to figure out just how much resource redistribution would be required to achieve this outcome. It was precisely for this purpose that in 1981 Ronald Dworkin developed the ingenious hypothetical insurance market argumentative device (HIMAD), which he then used in conjunction with the talent slavery (TS) argument to arrive at an estimate of the amount of redistribution that would be required to reduce the extent of luck’s effects. However, recently Daniel Markovits has cast doubt over Dworkin’s estimates of the amount of redistribution that would be required, by pointing out flaws in his understanding of how the hypothetical insurance market would function. Nevertheless, Markovits patched it up, and he used this patched-up version of Dworkin’s HIMAD, together with his own version of the TS argument, to reach his own conservative estimate of how much redistribution there ought to be in an egalitarian society. Notably though, on Markovits’ account, once the HIMAD is patched up and properly understood, the TS argument will also allegedly show that the two aims of egalitarianism are not necessarily complementary, but rather that they can actually compete with one another. According to his own ‘equal-agent’ egalitarian theory, the aim of choice preservation is more important than the aim of luck elimination, and so he alleges that when the latter aim comes into conflict with the former, the latter will need to be sacrificed to ensure that people are not subordinated to one another as agents. I believe that Markovits’ critique of Dworkin is spot on, but I also think that his own positive thesis — and hence his conclusion about how much redistribution there ought to be in an egalitarian society — is flawed.
Hence, this paper will begin in Section I by explaining how Dworkin uses the HIMAD and his TS argument to estimate the amount of redistribution that there ought to be in an egalitarian society — this section will be largely expository in content. Markovits’ critique of Dworkin will then be outlined in Section II, as will his own positive thesis. My critique of Markovits, and my own positive thesis, will then make a fleeting appearance in Section III. Finally, I will conclude by rejecting both Dworkin’s and Markovits’ estimates of the amount of redistribution that there ought to be in an egalitarian society, and by reaffirming the responsibility-tracking egalitarian claim that choice preservation and luck elimination are complementary and not competing egalitarian aims.
It could be argued that tort law is failing, and an example of this failure is arguably the recent public liability and insurance (‘PL&I’) crisis. A number of solutions have been proposed, but ultimately the chosen solution should address whatever we take to be the cause of this failure. On one account, the PL&I crisis is a result of an unwarranted expansion of the scope of tort law. Proponents of this position sometimes argue that the duty of care owed by defendants to plaintiffs has expanded beyond reasonable levels, such that parties who were not really responsible for another’s misfortune are successfully sued, while those who really were to blame get away without taking any responsibility. However, people should take responsibility for their actions, and the only likely consequence of allowing them to shirk it is that they and others will be less likely to exercise due care in the future, since the deterrents of liability and of no compensation for accidentally self-imposed losses will not be there. Others also argue that this expansion is not warranted because it is inappropriately fueled by ‘deep pocket’ considerations rather than by considerations of fault. They argue that the presence of liability insurance sways the judiciary to award damages against defendants, since judges know that insurers, and not the defendant personally, will pay for it in the end anyway. But although it may seem that no real person has to bear these burdens when they are imposed onto insurers, in reality all of society bears them collectively when insurers are forced to hike their premiums to cover these increasing damages payments. In any case, it seems unfair to force insurers to cover these costs simply because they can afford to do so. If such an expansion is indeed the cause of the PL&I crisis, then a contraction of the scope of tort liability, and a pious return to the fault principle, might remedy the situation.
However, it could also be argued that inadequate deterrence is the cause of this crisis. On this account the problem would lie not with the tort system’s continued unwarranted expansion, but in the fact that defendants really have been too careless. If prospective injurers were appropriately deterred from engaging in unnecessarily risky activities, then fewer accidents would ever occur in the first place, and this would reduce the need for litigation at its very source. If we take this to be the cause of tort law’s failure, then our solution should aim to improve deterrence. Glen Robinson has argued that improved deterrence could be achieved if plaintiffs were allowed to sue defendants for wrongful exposure to ongoing risks of future harm, even in the absence of currently materialized losses. He argues that at least in toxic injury type cases the tortious creation of risk [should be seen as] an appropriate basis of liability, with damages being assessed according to the value of the risk, as an alternative to forcing risk victims to abide the outcome of the event and seek damages only if and when harm materializes. In a sense, Robinson wishes to treat newly-acquired wrongful risks as de facto wrongful losses, and these are what would be compensated in liability for risk creation (‘LFRC’) cases. Robinson argues that if the extent of damages were fixed to the extent of risk exposure, all detected unreasonable risk creators would be forced to bear the costs of their activities, rather than only those who could be found responsible for another’s injuries ‘on the balance of probabilities’. The incidence of accidents should decrease as a result of improved deterrence, which would in turn reduce the ‘suing fest’ and so resolve the PL&I crisis. So whilst the first solution involves contracting the scope of tort liability, Robinson’s solution involves an expansion of its scope.
However, Robinson acknowledges that LFRC seems prima facie incompatible with current tort principles, which at the least require the presence of plaintiff losses, defendant fault, and causation to be established before making defendants liable for plaintiffs’ compensation. Since losses would be absent in LFRC cases by definition, the first evidentiary requirement would always be frustrated, and in its absence proof of defendant fault and causation would also seem scant. If such an expansion of tort liability were not supported by current tort principles, then it would be no better than proposals to switch accident law across to no-fault, since both solutions would require comprehensive legal reform. However, Robinson argues that the above three evidentiary requirements could be met in LFRC cases to the same extent that they are met in other currently accepted cases, and hence that his solution would be preferable to no-fault solutions as it would require only incremental, not comprehensive, legal reform. Although I believe that actual losses should be present before allowing plaintiffs to seek compensation, I will not present a positive argument for this conclusion. My aim in this paper is not to debate the relative merits of Robinson’s solution as compared to no-fault solutions, nor to determine which account of the cause of the PL&I crisis is closer to the truth, but rather to find out whether Robinson’s solution would indeed require less radical legal reform than, for example, proposed no-fault solutions. I will argue that Robinson fails to show that current tort principles would support his proposed solution, and hence that his solution is at best on an even footing with no-fault solutions, since both would require comprehensive legal reform.
Could neuroimaging evidence help us to assess the degree of a person’s responsibility for a crime that we know they committed? This essay defends an affirmative answer to this question. A range of standard objections to this high-tech approach to assessing people’s responsibility is considered and then set aside, but I also bring to light and then reject a novel objection — an objection which is only encountered when functional (rather than structural) neuroimaging is used to assess people’s responsibility.
In "Torts, Egalitarianism and Distributive Justice" (Ashgate, 2007), Tsachi Keren-Paz presents an impressively detailed analysis that bolsters the case in favour of incremental tort law reform. However, although this book's greatest strength is the depth of analysis offered, supporters of radical law reform proposals may interpret the complexity of the solution it offers (and its respective cost) as conclusive proof that tort law can only take adequate account of egalitarian aims at an unacceptably high cost.
This is a report on the three-day workshop The Neuroscience of Responsibility, held in the Philosophy Department at Delft University of Technology in The Netherlands on February 11–13, 2010. The workshop had 25 participants from The Netherlands, Germany, Italy, the UK, the USA, Canada and Australia, with expertise in philosophy, neuroscience, psychology, psychiatry and law. Its aim was to identify current trends in neurolaw research related specifically to the topic of responsibility, and to foster international collaborative research on this topic. The workshop agenda was constructed by the participants at the start of each day by surveying the topics of greatest interest and relevance to participants. In what follows, we summarize (1) the questions which participants identified as most important for future research in this field, (2) the most prominent themes that emerged from the discussions, and (3) the two main international collaborative research project plans that came out of this meeting.
Garrath Williams claims that truly responsible people must possess a “capacity … to respond [appropriately] to normative demands” (2008: 462). However, there are people whom we would normally praise for their responsibility despite the fact that they do not yet possess such a capacity (e.g. consistently well-behaved young children), and others who have such a capacity but who are still patently irresponsible (e.g. some badly-behaved adults). Thus, I argue that to qualify for the accolade “a responsible person” one need not possess such a capacity, but need only be earnestly willing to do the right thing and have a history that testifies to this willingness. Although we may have good reasons to prefer to have such a capacity ourselves, and to associate ourselves with others who have it, at a conceptual level I do not think that such considerations support the claim that having this capacity is a necessary condition of being a responsible person in the virtue sense.
The way in which we characterize the structural and functional differences between psychopath and normal brains – either as biological disorders or as mere biological differences – can influence our judgments about psychopaths’ responsibility for criminal misconduct. However, Marga Reimer (Neuroethics 1(2):14, 2008) points out that whether our characterization of these differences should be allowed to affect our judgments in this manner “is a difficult and important question that really needs to be addressed before policies regarding responsibility... can be implemented with any confidence”. This paper is an attempt to address Reimer’s difficult and important question; I argue that irrespective of which of these two characterizations is chosen, our judgments about psychopaths’ responsibility should not be affected, because responsibility hinges not on whether a particular difference is (referred to as) a disorder, but on how that difference affects the mental capacities required for moral agency.
Nationalism has had a complex relation with the discipline of political theory during the 20th century. Political theory has often been deeply uneasy with nationalism in relation to its role in the events leading up to and during the Second World War. Many theorists saw nationalism as an overly narrow and potentially irrationalist doctrine. In essence it embodied a closed vision of the world. This article focuses on one key contributor to the immediate post-war debate — Karl Popper — who retained deep misgivings about nationalism until the end of his life, and indeed saw the events of the early 1990s (shortly before his death) as a confirmation of this distrust. Popper was one of a number of immediate post-war writers, such as Friedrich Hayek and Ludwig von Mises, who shared this unease with nationalism. They all had a powerful effect on social and political thought in the English-speaking world. Popper particularly articulated a deeply influential perspective that fortuitously encapsulated a cold war mentality in the 1950s. In 2005 Popper's critical views are doubly interesting, since the last decade has seen a renaissance of nationalist interests. The collapse of the Berlin Wall in 1989, and the changing landscape of international and domestic politics, have once again seen a massive growth of interest in nationalism, particularly from liberal political theorists, and a growing and at times immensely enthusiastic academic literature trying to provide a distinctively benign benediction to nationalism.
New concepts may prove necessary to profit from the avalanche of sequence data on the genome, transcriptome, proteome and interactome, and to relate this information to cell physiology. Here, we focus on the concept of large activity-based structures, or hyperstructures, in which a variety of types of molecules are brought together to perform a function. We review the evidence for the existence of hyperstructures responsible for the initiation of DNA replication, the sequestration of newly replicated origins of replication, cell division, and metabolism. The processes responsible for hyperstructure formation include changes in enzyme affinities due to metabolite induction, lipid–protein affinities, elevated local concentrations of proteins and their binding sites on DNA and RNA, and transertion. Experimental techniques exist that can be used to study hyperstructures, and we review some of those less familiar to biologists. Finally, we speculate on how a variety of in silico approaches involving cellular automata and multi-agent systems could be combined to develop new concepts in the form of an Integrated cell (I-cell), which would undergo selection for growth and survival in a world of artificial microbiology.
Third-party property insurance (TPPI) protects insured drivers who accidentally damage an expensive car from the threat of financial ruin. Perhaps more importantly though, TPPI also protects the victims whose losses might otherwise go uncompensated. Ought responsible drivers therefore to take out TPPI? This paper begins by enumerating some reasons why a rational person might believe that they have a moral obligation to take out TPPI. It will be argued that if what is at stake in taking responsibility is the ability to compensate our possible future victims for their losses, then it might initially seem that most people should be thankful for the availability of relatively inexpensive TPPI, because without it they may not have sufficient funds to do the right thing and compensate their victims in the event of an accident. But is the ability to compensate one's victims really what is at stake in taking responsibility? The second part of this paper will critically examine the arguments for the above position, and it will argue that these arguments do not support the conclusion that injurers should compensate their victims for their losses, and hence that drivers need not take out TPPI in order to be responsible. Further still, even if these arguments did support the conclusion that injurers should compensate their victims for their losses, then (perhaps surprisingly) nobody should be allowed to take out TPPI, because doing so would frustrate justice.
This exploratory ethics study of a publication and presentation practice, herein defined as streaming, investigates the attitudes of deans of schools of business and business professors regarding such behavior. Streaming is the practice of presenting or publishing an article at one outlet and then taking the same article, with perhaps minor revisions, and presenting or publishing it at another outlet. The results of the survey suggest that the most important ethical behavior regarding streaming practices is disclosure. If authors fully disclose the intellectual history of a paper's developmental process, allegations of possible professional misconduct will be minimized if not eliminated.
If code is law, then standards bodies are governments. This flawed but powerful metaphor suggests the need to examine more closely those standards bodies that are defining standards for the Internet. In this paper we examine the International Telecommunication Union, the Institute of Electrical and Electronics Engineers Standards Association, the Internet Engineering Task Force, and the World Wide Web Consortium. We compare the organizations on the basis of participation, transparency, authority, openness, security and interoperability. We conclude that the IETF and the W3C are becoming increasingly similar. We also conclude that the classical distinction between standards and implementations is decreasingly useful as standards are embodied in code – itself a form of speech or documentation. Recent Internet standards bodies have flourished in part by discarding or modifying the implementation/standards distinction. We illustrate that no single model is superior on all dimensions. The IETF is not scaling effectively, struggling with its explosive growth and the creation of thousands of working groups. The IETF's coordinating body, the Internet Society, addressed growth through a reorganization that removed democratic oversight. The W3C, initially the most closed, is becoming responsive to criticism and now includes open code participants. The IEEE SA and ITU have institutional controls appropriate for hardware but too constraining for code. Each organization has much to learn from the others.
It is conventional to think of modernity as being characterised by the irremediable separation of philosophy and theology, of reason and faith. Failing to reconsider the idea of such a divorce, post-modernity has pushed this postulate to its very limits by attempting to abolish all types of normativity, whether on the grounds of reason or any other basis. Against these prevailing conceptions, we argue that there exist, within philosophy and theology, processes of differentiation as well as original combinations. To illustrate the possibility of mutually enriching exchanges between the philosophical and the theological ethical traditions, we will call upon the historical example of solidarism. This will enable us to show that the two traditions are not so heterogeneous as may at first be thought by those who underestimate the importance of identifying the conditions, both pragmatic and ideological, that govern situated practical ethical judgements.
A new approach to information is proposed with the intention of providing a conceptual tool adapted to biology, one that includes a semantic value. Information involves a material support as well as a significance, adapted to the cognitive domain of the receiver and/or the transmitter. A message does not carry any information, only data. The receiver makes an identification by a procedure of form recognition, which activates previously learned significance. This treatment leads to a new significance (or new knowledge).
For living beings, information is as fundamental as matter or energy. In this paper we show: a) the inadequacies of quantitative theories of information, and b) how a qualitative analysis leads to a classification of information systems and to a modelling of intercellular communication. From a quantitative point of view, the application in biology of information theories borrowed from communication techniques has proved to be disappointing. These theories deliberately ignore the significance of messages, and do not give any definition of information. They refer to quantities based upon arbitrarily defined probabilistic events. Probability is subjective. The receiver of the message needs to have meta-knowledge of the events. The quantity of information depends on language, coding, and an arbitrary definition of disorder. The suggested objectivity is fallacious.
In his new book on Pascal's Wager, Jeff Jordan argues that only the ‘Jamesian’ version of the wager argument, as he sees it presented in William James' essay The Will to Believe, constitutes a sound pragmatic argument in favour of theism, whereas Pascal's original wager argument is doomed to fail on various grounds. This article argues that Jordan's theory is untenable. The many-gods objection is used as an example: it is demonstrated that the Jamesian wager argument, too, is powerless to rebut this objection.
The main aim of Jeff McMahan's manuscript on the morality of war is to answer the question: why, and accordingly when, is it justified or permissible to kill people in war? However, McMahan argues that the same principles apply to individual actions and to war. McMahan rejects all doctrines of collective responsibility and liability. His claim is that every individual is liable for what he has done and not for the actions of others - even if both are part of the same collective. Accordingly, McMahan challenges the common view that it is much easier to justify killing in war compared to killing in other contexts. Therefore, the scope of his project exceeds the context of war and extends to interpersonal conflicts between individuals that do not qualify as war. Many of McMahan's main claims are appealing. Particularly appealing is his rejection of the collectivist account of war. Indeed, it seems that the simple story according to which people are responsible solely for their own actions - rather than (also) for the actions of others - should be held onto until a different, more complex account of collective responsibility is put forward and its plausibility explained. Therefore, the article focuses on the general principles advocated by McMahan with regard to the resolution of all interpersonal conflicts: whether these conflicts are small scale or large scale (that is, whether few or many people are involved in the conflict), and, within the latter category of conflicts involving many people, whether these conflicts qualify as war (according to some standard) or not.
As Vincent Hendricks remarks early on in this book, the formal and mainstream traditions of epistemic theorising have mostly evolved independently of each other. This initial impression is confirmed by a comparison of the main problems and methods practitioners in each tradition are concerned with. Mainstream epistemology engages in a dialectical game of proposing and challenging definitions of knowledge. Formal epistemologists proceed differently, as they design a wide variety of axiomatic and model-theoretic methods whose consequences they investigate independently of the need of giving counterexample-free definitions of knowledge. Or at least, this is a common way to explain where both disciplines stand in the larger landscape of epistemic theorising, and why interactions between them remain scarce. The main ambition of this book is to show that the distinction between formal and mainstream approaches should not preclude a fruitful interaction, and that it only takes the right outlook on their respective practices to disclose plenty of room for interaction.
This essay responds to Jeff Malpas's foregoing article, itself written in response to my various publications over the past two decades concerning Donald Davidson's ideas about truth, meaning, and interpretation. It has to do mainly with our disagreement as regards the substantive content of Davidson's truth-based semantic approach in relation to the problematic legacy of logical empiricism, including Quine's incisive but no less problematic critique of that legacy. I also raise questions with respect to Malpas's coupling of Davidson with Heidegger, intended to provide a more adequate depth-ontological grounding for the formalized (logico-semantic) conception of truth that Davidson adopts from Tarski. My essay then argues the case for an outlook of objectivist causal realism joined with a theory of inference to the best, most rational explanation that would satisfy this need in more philosophically (as well as scientifically) accountable terms.
Vincent Brümmer has recently, taking his starting point in the writings of Wittgenstein, defended the idea that the debate about the truth or falsehood of the claim that God exists has no future. I suggest that the arguments Brümmer develops to support this claim fail. This is so because he does not show why any attempt to prove or disprove the truth or falsehood of the belief in the existence of God is circular, or how the purported non-provability of the belief that God exists entails that the theism-atheism debate about the truth or falsehood of this belief has no future. In addition, Brümmer does not acknowledge that there are many different religious language-games, that within the theistic language-games the claim that God exists is used in many different ways, and that, as a result, it is not true that, within the religious language-game, the belief in the existence of God cannot be doubted, denied, or treated as a hypothesis.
St. Vincent de Paul (1581–1660) is well known for his contribution to charitable and social works. Even though he left no detailed examination of his business practices, by examining his life and his commitment to the poor, it is possible to frame a Vincentian theology of business ethics. Such an understanding would include educating students in the social teaching of the Catholic Church, a preferential option for the poor, good organization, sound business theory, economizing, and a foundation in the liberal arts.
Bernadette Bensaude-Vincent and Jonathan Simon: Chemistry, the Impure Science. Book review by George B. Kauffman, Department of Chemistry, California State University, Fresno, Fresno, CA 93740-8034, USA. Foundations of Chemistry, pp. 1–2, DOI 10.1007/s10698-011-9132-y.
From the point of view of a saint's life, the article addresses the question of integrating holiness and business dealings. By analyzing the heavy involvement of Vincent de Paul, a seventeenth-century French saint, in the world of finance and politics as he ministered to the poor of his day, the study attempts to show that it is both possible and beneficial to join together the world of business with that of a religiously inspired ethic. The spiritually grounded manner in which Vincent de Paul approached his institutional tasks, and the ways in which those endeavors gave body to his spirituality, present a unitary, non-dualistic instance of how business and morality can interact.
Can we interpret human reason simultaneously as a product of neurochemistry and natural selection and as a transcendental standard? Jeff Mason asks the analogous question of philosophical writing. Can we interpret philosophical discourse as "rhetorical," embodied in language, and designed to persuade historical audiences, and at the same time preserve its traditional intention to disclose truths that transcend language, history, and audiences? Mason argues that these polar attitudes toward philosophical writing are untenable precisely when they exclude each other. This is a significant project with important literary and metaphilosophical consequences.
In this volume, a distinguished collection of historians and political scientists reflect on France's evolution as a political community from the nineteenth century to the present. France is often seen as a 'Jacobin' polity, committed to the principles of national unity and state centralization, a robust conception of patriotism, the imposition of a uniform and homogeneous culture on its society, and the defence of the general interest against sectional concerns. The overall aims of the book are threefold: firstly, to map out the key features of this 'Jacobin' model as it emerged in nineteenth-century France; secondly, to explore the institutional, political, and social realities which lay behind its rhetoric, and often subverted its grand objectives; and thirdly, to offer an overview of the transformation of this French Jacobinism as it has sought to adapt itself to such significant changes as the impact of successive wars, the establishment of republican government, the emergence of the welfare state, the drive towards European integration, and the development of regionalism and multiculturalism.

Among the principal themes of the book are: the place of war in shaping republican political culture, the role of elites, the administrative structure of the French state, the definition of the principles of good citizenship, and the question of territoriality. French specialists from Britain, Europe, and the United States come together to offer an original and timely evaluation of the 'French model' of state building, associational activity, and civic integration. Shedding new light on the specificities of modern French political culture, this collection of essays will appeal to historians and political scientists interested in the transformation of French public institutions and society, as well as comparativists seeking a deeper understanding of the French political system.

This volume is a tribute to the scholarship of the late Vincent Wright, former Official Fellow, Nuffield College, University of Oxford.
According to the dominant position in the just war tradition from Augustine to Anscombe and beyond, there is no "moral equality of combatants." That is, on the traditional view the combatants participating in a justified war may kill their enemy combatants participating in an unjustified war, but not vice versa (barring certain qualifications). I shall argue here, however, that in the large number of wars (and in practically all modern wars) where the combatants on the justified side violate the rights of innocent people ("collateral damage"), these combatants are in fact liable to attack by the combatants on the unjustified side. I will support this view with a rights-based account of liability to attack and then defend it against a number of objections raised in particular by Jeff McMahan. The result is that the thesis of the moral equality of combatants holds good for a large range of armed conflicts, while the opposing thesis is of very limited practical relevance.
I will focus on the topic announced in the subtitle of Professor Descombes' profound and provocative work: The Mind's Provisions: A Critique of Cognitivism. In the end, I will agree with practically everything in his incisive 'critique' except its conclusion: that cognitivism is incoherent. What he shows instead, I think, is that cognitivism, as an account of human thought and understanding, is deeply false. The difference matters because incoherence is harder to prove and, prima facie, less plausible. But if the same argument, slightly recast, shows falsehood with even more conviction, then the essential point is saved after all. So, following a quick characterization of cognitivism, I will attempt to distill what I take to be the main grounds and themes of Descombes' critique, explain why I don't think they expose an incoherence, and then show how they might be recast in a way that is devastating all the same.
Philosophical logicians proposing theories of rational belief revision have had little to say about whether their proposals assist or impede the agent's ability to reliably arrive at the truth as his beliefs change through time. On the other hand, reliability is the central concern of formal learning theory. In this paper we investigate the belief revision theory of Alchourrón, Gärdenfors and Makinson from a learning-theoretic point of view.