The author uses a series of examples to illustrate two versions of a new, nonprobabilist principle of epistemic rationality: the special and general versions of the metacognitive expected relative frequency (MERF) principle. These are used to explain the rationality of revisions to an agent’s degrees of confidence in propositions based on evidence of the reliability or unreliability of the cognitive processes responsible for them—especially reductions in confidence assignments to propositions antecedently regarded as certain—including certainty-reductions to instances of the law of excluded middle or the law of noncontradiction in logic, or certainty-reductions to the certainties of probabilist epistemology. The author proposes special and general versions of the MERF principle and uses them to explain the examples, including the reasoning that would lead to thoroughgoing fallibilism—that is, to a state of being certain of nothing. The author responds to the main defenses of probabilism: Dutch Book arguments, Joyce’s potential accuracy defense, and the potential calibration defenses of Shimony and van Fraassen. The author shows that, even though they do not satisfy the probability axioms, degrees of belief that satisfy the MERF principle minimize expected inaccuracy in Joyce’s sense; that they can be externally calibrated in Shimony and van Fraassen’s sense; and that they can serve as a basis for rational betting, unlike probabilist degrees of belief, which, in many cases, human beings have no rational way of ascertaining. The author also uses the MERF principle to subsume the various epistemic akrasia principles in the literature. Finally, the author responds to Titelbaum’s argument that epistemic akrasia principles require that, if we are rational, we be certain of some epistemological beliefs.
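The Dutch Book arguments referred to above rest on a standard construction, which can be illustrated with numbers of our own choosing rather than the paper's: suppose an agent's degrees of belief violate the probability axioms, say with degree of belief $0.6$ in $A$ and $0.6$ in $\neg A$. Taking degrees of belief to fix fair betting prices, the agent regards as fair a bet costing \$0.60 that pays \$1 if $A$, and likewise for $\neg A$. Buying both bets costs

\[
0.60 + 0.60 = 1.20,
\]

while the guaranteed payout is exactly \$1.00, so the agent is certain to lose \$0.20 however the world turns out.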
The evolutionist challenge to moral realism is the skeptical challenge that, if evolution is true, it would only be by chance, a “happy coincidence” as Sharon Street puts it, if human moral beliefs were true. The author formulates Street’s “happy coincidence” argument more precisely using a distinction between probabilistic sensitivity and insensitivity introduced by Elliott Sober. The author then considers whether it could be rational for us to believe that human moral judgments about particular cases are probabilistically sensitive to strongly universal fundamental moral standards of cooperation and fair division. The author provides an explanation of why there would be a benign correlation between human moral judgments in particular cases and the requirements of strongly universal fundamental moral standards. The explanation of the benign correlation is based on group selection for groups of individuals with an egalitarian satisficing psychology and egalitarian norms, because of the ability of such groups to more efficiently solve gene-propagation collective action problems.
Originally published in 1990. Examining epistemic justification, truth, and logic, this book works towards a holistic theory of knowledge. It discusses evidence, belief, reliability, and many philosophical theories surrounding the nature of true knowledge. A thorough preface updates the main work from when it was written in 1976 to include theories ascendant in the 1980s.
The article begins with a review of the structural differences between act consequentialist theories and human rights theories, as illustrated by Amartya Sen's paradox of the Paretian liberal and Robert Nozick's utilitarianism of rights. It discusses attempts to resolve those structural differences by moving to a second-order or indirect consequentialism, illustrated by J.S. Mill and Derek Parfit. It presents consequentialist (though not utilitarian) interpretations of the contractualist theories of Jürgen Habermas and the early John Rawls (Theory of Justice) and of the capability theories of Sen and Martha Nussbaum. It also discusses two roles that well-being or a surrogate for well-being typically plays in theories of human rights: (a) well-being plays a role in the justification of at least some exceptions to human rights principles; and (b) some human rights seem to be best understood as rights to some level of well-being or expected well-being (or of a surrogate for one of them). It reviews two consequentialist challenges to the moral adequacy of non-consequentialist accounts of human rights, one based on a duty to relieve suffering and the other generated by Parfit's Non-Identity Problem, and concludes with a contrast between two ways of looking at the history of the development and implementation of human rights conventions and laws.
In this book, William J. Talbott examines the meaning of moral progress, claiming that improvements to our moral or legal practices are changes that, when evaluated as a practice, contribute to equitably promoting well-being. Talbott completes the project, begun in his 2005 book, of identifying the human rights that should be universal.
The consequentialist project for human rights -- Exceptions to libertarian natural rights -- The main principle -- What is well-being? What is equity? -- The two deepest mysteries in moral philosophy -- Security rights -- Epistemological foundations for the priority of autonomy rights -- The millian epistemological argument for autonomy rights -- Property rights, contract rights, and other economic rights -- Democratic rights -- Equity rights -- The most reliable judgment standard for weak paternalism -- Liberty rights and privacy rights -- Clarifications and responses to objections -- Conclusion.
‘Bayesian epistemology’ became an epistemological movement in the 20th century, though its two main features can be traced back to the eponymous Reverend Thomas Bayes (c. 1701-61). Those two features are: (1) the introduction of a formal apparatus for inductive logic; (2) the introduction of a pragmatic self-defeat test (as illustrated by Dutch Book Arguments) for epistemic rationality as a way of extending the justification of the laws of deductive logic to include a justification for the laws of inductive logic. The formal apparatus itself has two main elements: the use of the laws of probability as coherence constraints on rational degrees of belief (or degrees of confidence) and the introduction of a rule of probabilistic inference, a rule or principle of conditionalization.
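For readers unfamiliar with the apparatus, those two elements can be stated compactly in a standard textbook formulation (the notation here is illustrative, not quoted from the entry):

\[
\begin{aligned}
&\text{Coherence: } && P(A) \ge 0, \qquad P(\top) = 1, \qquad P(A \lor B) = P(A) + P(B) \ \text{ for mutually exclusive } A, B;\\
&\text{Conditionalization: } && \text{on learning evidence } E, \quad P_{\text{new}}(A) = P_{\text{old}}(A \mid E) = \frac{P_{\text{old}}(A \land E)}{P_{\text{old}}(E)}, \quad \text{provided } P_{\text{old}}(E) > 0.
\end{aligned}
\]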
In this reply to his three critics, Talbott develops several important themes from his book, Which Rights Should Be Universal?, in ways that go beyond the discussion in the book. Among them are the following: the prescriptive role of human rights theory; the need to guarantee an expansive list of basic rights as a basis for a government to be able to claim recognitional legitimacy; the futility of trying to define human rights in terms of what there can be reasonable disagreement about; and the problems for any proceduralist account of human rights. Talbott also further elaborates his consequentialist defense of basic human rights and his arguments against cultural relativism about human rights.
Although the focus of "Globalizing Democracy and Human Rights" is practical, Gould does not shy away from hard theoretical questions, such as the relentless debate over cultural relativism and the relationship between terrorism and democracy.
"We hold these truths to be self-evident..." So begins the U.S. Declaration of Independence. What follows those words is a ringing endorsement of universal rights, but it is far from self-evident. Why did the authors claim that it was? William Talbott suggests that they were trapped by a presupposition of Enlightenment philosophy: That there was only one way to rationally justify universal truths, by proving them from self-evident premises. With the benefit of hindsight, it is clear that the authors of (...) the U.S. Declaration had no infallible source of moral truth. For example, many of the authors of the Declaration of Independence endorsed slavery. The wrongness of slavery was not self-evident; it was a moral discovery. In this book, William Talbott builds on the work of John Rawls, Jurgen Habermas, J.S. Mill, Amartya Sen, and Henry Shue to explain how, over the course of history, human beings have learned how to adopt a distinctively moral point of view from which it is possible to make universal, though not infallible, judgments of right and wrong. He explains how this distinctively moral point of view has led to the discovery of the moral importance of nine basic rights. Undoubtedly, the most controversial issue raised by the claim of universal rights is the issue of moral relativism. How can the advocate of universal rights avoid being a moral imperialist? In this book, Talbott shows how to defend basic individual rights from a universal moral point of view that is neither imperialistic nor relativistic. Talbott avoids moral imperialism by insisting that all of us, himself included, have moral blindspots and that we usually depend on others to help us to identify those blindspots. Talbott's book speaks to not only debates on human rights but to broader issues of moral and cultural relativism, and will interest a broad range of readers. (shrink)
In the movie Regarding Henry, the main character, Henry Turner, is a lawyer who suffers brain damage as a result of being shot during a robbery. Before being wounded, the Old Henry Turner had been a successful lawyer, admired as a fierce competitor and well known for his killer instinct. As a result of the injury to his brain, the New Henry Turner loses the personality traits that had made the Old Henry such a formidable adversary.
I agree with Mele that self-deception is not intentional deception, but I do believe that self-deception involves intentional biasing, primarily for two reasons: (1) There is a Bayesian model of self-deception that explains why the biasing is rational. (2) It is implausible that the observed behavior of self-deceivers could be generated by Mele's “blind” mechanisms.
Examples involving common causes — most prominently, examples involving genetically influenced choices — are analytically equivalent not to standard Newcomb Problems — in which the Predictor genuinely predicts the agent's decision — but to non-standard Newcomb Problems — in which the Predictor guarantees the truth of her predictions by interfering with the agent's decision to make the agent choose as it was predicted she would. When properly qualified, causal and epistemic decision theories diverge only on standard — not on non-standard — Newcomb Problems, and thus not on examples involving common causes.
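For concreteness, the standard Newcomb Problem can be given with illustrative figures (not drawn from the article): the Predictor puts \$1,000,000 in an opaque box just in case she predicts that the agent will take only that box, while a transparent box always contains \$1,000, and the agent chooses between taking only the opaque box and taking both. If the Predictor's accuracy is $0.99$, epistemic (evidential) decision theory computes

\[
V(\text{one-box}) = 0.99 \times 1{,}000{,}000 = 990{,}000, \qquad V(\text{two-box}) = 0.01 \times 1{,}000{,}000 + 1{,}000 = 11{,}000,
\]

and recommends one-boxing, whereas causal decision theory observes that, however the boxes are already filled, taking both yields exactly \$1,000 more, and recommends two-boxing. It is this divergence on standard Newcomb Problems that the article's qualification isolates.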