Fixing Frege is one of the most important investigations to date of Fregean approaches to the foundations of mathematics. In addition to providing an unrivalled survey of the technical program to which Frege’s writings have given rise, the book makes a large number of improvements and clarifications. Anyone with an interest in the philosophy of mathematics will enjoy and benefit from the careful and well-informed overview provided by the first of its three chapters. Specialists will find the book an indispensable reference and an invaluable source of insights and new results. Although Frege is widely regarded as the father of analytic philosophy, his work on the foundations of mathematics was for a long time rather peripheral to ongoing research. The main reason for this is no doubt Russell’s discovery in 1901 that the paradox now bearing his name can be derived in Frege’s logical system. But recent decades have seen a huge surge of interest in Fregean approaches to the foundations of mathematics. (The work of George Boolos, Kit Fine, Bob Hale, Richard Heck, Stewart Shapiro, and Crispin Wright is singled out for particular attention in the present monograph.) A variety of consistent theories have been discovered that can be salvaged from Frege’s inconsistent system, and foundational and philosophical claims have been made on behalf of many of these theories. Burgess claims quite plausibly that the significance of any such modified Fregean theory will in large part depend on how much of ordinary mathematics it enables us to develop.
For many commentators, September 11 inaugurated a new era of fear. But as Corey Robin shows in his unsettling tour of the Western imagination--the first intellectual history of its kind--fear has shaped our politics and culture since time immemorial. From the Garden of Eden to the Gulag Archipelago to today's headlines, Robin traces our growing fascination with political danger and disaster. As our faith in positive political principles recedes, he argues, we turn to fear as the justifying language of public life. We may not know the good, but we do know the bad. So we cling to fear, abandoning the quest for justice, equality, and freedom. But as fear becomes our intimate, we understand it less. In a startling reexamination of fear's greatest modern interpreters--Hobbes, Montesquieu, Tocqueville, and Arendt--Robin finds that writers since the eighteenth century have systematically obscured fear's political dimensions, diverting attention from the public and private authorities who sponsor and benefit from it. For fear, Robin insists, is an exemplary instrument of repression--in the public and private sector. Nowhere is this politically repressive fear--and its evasion--more evident than in contemporary America. In his final chapters, Robin accuses our leading scholars and critics of ignoring "Fear, American Style," which, as he shows, is the fruit of our most prized inheritances--the Constitution and the free market. With danger playing an increasing role in our daily lives and justifying a growing number of government policies, Robin's Fear offers a bracing, and necessary, antidote to our contemporary culture of fear.
The conceptual model presented in this article argues that corporations exhibit specific behaviors that signal their true level of moral development. Accordingly, the authors identify five levels of moral development and discuss the dynamics that move corporations from one level to another. Examples of corporate behavior which are indicative of specific stages of moral development are offered.
Numbers and other mathematical objects are exceptional in having no locations in space or time or relations of cause and effect. This makes it difficult to account for the possibility of the knowledge of such objects, leading many philosophers to embrace nominalism, the doctrine that there are no such objects, and to embark on ambitious projects for interpreting mathematics so as to preserve the subject while eliminating its objects. This book cuts through a host of technicalities that have obscured previous discussions of these projects, and presents clear, concise accounts of a dozen strategies for nominalistic interpretation of mathematics, thus equipping the reader to evaluate each and to compare different ones. The authors also offer critical discussion, rare in the literature, of the aims and claims of nominalistic interpretation, suggesting that it is significant in a very different way from that usually assumed.
The form of nominalism known as 'mathematical fictionalism' is examined and found wanting, mainly on grounds that go back to an early antinominalist work of Rudolf Carnap that has unfortunately not been paid sufficient attention by more recent writers.
What is the simplest and most natural axiomatic replacement for the set-theoretic definition of the minimal fixed point on the Kleene scheme in Kripke’s theory of truth? What is the simplest and most natural set of axioms and rules for truth whose adoption by a subject who had never heard the word "true" before would give that subject an understanding of truth for which the minimal fixed point on the Kleene scheme would be a good model? Several axiomatic systems, old and new, are examined and evaluated as candidate answers to these questions, with results of Harvey Friedman playing a significant role in the examination.
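The minimal fixed point referred to in the abstract above can be illustrated with a small computation. The following is a minimal sketch, not from the paper: a toy language with one base fact p, the sentences T(p) and T(T(p)), and a liar sentence L (which says "not T(L)"), with Kripke's jump operator iterated under the strong Kleene scheme until it stabilizes.

```python
# Truth values: True, False, or None (undefined, the third Kleene value).

def kleene_not(v):
    return None if v is None else (not v)

def jump(val):
    """One application of Kripke's jump: T(s) takes the current value of s."""
    return {
        "p": True,                  # base-language fact, fixed throughout
        "T(p)": val["p"],
        "T(T(p))": val["T(p)"],
        "L": kleene_not(val["L"]),  # L says: not T(L)
    }

# Start from the everywhere-undefined valuation and iterate to a fixed point.
val = {"p": True, "T(p)": None, "T(T(p))": None, "L": None}
while True:
    nxt = jump(val)
    if nxt == val:
        break
    val = nxt

print(val)  # T(p) and T(T(p)) become true; L stays undefined
```

Grounded sentences such as T(p) settle to classical values, while the liar remains undefined at the fixed point; this is the behaviour the axiomatic systems discussed in the paper would need to model.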
Quine correctly argues that Carnap's distinction between internal and external questions rests on a distinction between analytic and synthetic, which Quine rejects. I argue that Quine needs something like Carnap's distinction to enable him to explain the obviousness of elementary mathematics, while at the same time continuing to maintain as he does that the ultimate ground for holding mathematics to be a body of truths lies in the contribution that mathematics makes to our overall scientific theory of the world. Quine's arguments against the analytic/synthetic distinction, even if fully accepted, still leave room for a notion of pragmatic analyticity sufficient for the indicated purpose.
This paper re-examines the question of whether quirks of early human foetal development tell against the view (conceptionism) that we are human beings at conception. A zygote is capable of splitting to give rise to identical twins. Since the zygote cannot be identical with either human being it will become, it cannot already be a human being. Parallel concerns can be raised about chimeras in which two embryos fuse. I argue first that there are just two ways of dealing with cases of fission and fusion and both seem to be available to the conceptionist. One is the Replacement View according to which objects cease to exist when they fission or fuse. The other is the Multiple Occupancy View – both twins may be present already in the zygote and both persist in a chimera. So, is the conceptionist position tenable after all? I argue that it is not. A zygote gives rise not only to a human being but also to a placenta – it cannot already be both a human being and a placenta. Neither approach to fission and fusion can help the conceptionist with this problem. But worse is in store. Both fission and fusion can occur before and after the development of the inner cell mass of the blastocyst – the entity which becomes the embryo proper. The idea that we become human beings with the arrival of the inner cell mass leads to bizarre results however we choose to accommodate fission and fusion.
One textbook may introduce the real numbers in Cantor’s way, and another in Dedekind’s, and the mathematical community as a whole will be completely indifferent to the choice between the two. This sort of phenomenon was famously called to the attention of philosophers by Paul Benacerraf. It will be argued that structuralism in philosophy of mathematics is a mistake, a generalization of Benacerraf’s observation in the wrong direction, resulting from philosophers’ preoccupation with ontology.
I aim to show how and why some definitions can be benignly circular. According to Lloyd Humberstone, a definition that is analytically circular need not be inferentially circular and so might serve to illuminate the application-conditions for a concept. I begin by tidying up some problems with Humberstone's account. I then show that circular definitions of a kind commonly thought to be benign have inferentially circular truth-conditions and so are malign by Humberstone's test. But his test is too demanding. The inferences we actually use to establish the applicability of, e.g., colour concepts are designed to establish warranted assertability and not truth. Understood thus, dispositional analyses are not inferentially circular.
A new axiomatization of set theory, to be called Bernays-Boolos set theory, is introduced. Its background logic is the plural logic of Boolos, and its only positive set-theoretic existence axiom is a reflection principle of Bernays. It is a very simple system of axioms sufficient to obtain the usual axioms of ZFC, plus some large cardinals, and to reduce every question of plural logic to a question of set theory.
Appointment as a director of a company board often represents the pinnacle of a management career. Worldwide, it has been noted that very few women are appointed to the boards of directors of companies. Blame for the low numbers of women on company boards can be partly attributed to the widely publicized "glass ceiling". However, the very low representation of women on company boards requires further examination. This article reviews the current state of women's representation on boards of directors and summarizes the reasons as to why women are needed on company boards. Given that more women on boards are desirable, the article then describes how more women could be appointed to boards, and the actions that organizations and women could take to help increase the representation of women. Finally, the characteristics of those women who have succeeded in becoming members of company boards are described from an international perspective. Unfortunately, answers to the vexing question of whether these women have gained board directorships in their own right as extremely competent managers, or whether they are mere token female appointments in a traditional male-dominated culture, remain elusive.
This study represents an improvement in the ethics scales inventory published in a 1988 Journal of Business Ethics article. The article presents the distillation and validation process whereby the original 33 item inventory was reduced to eight items. These eight items comprise the following ethical dimensions: a moral equity dimension, a relativism dimension, and a contractualism dimension. The multidimensional ethics scale demonstrates significant predictive ability.
The argument from potential has been hard to assess because the versions presented by friends and those presented by enemies have borne very little resemblance to each other. I here try to improve this situation by attempting to bring both versions into enforced contact. To this end, I sketch a more detailed analysis of the modern concept of potential than any hitherto attempted. As one would expect, arguments from potential couched in terms of that notion are evident non-starters. I then ask how the modern notion of potential needs to be supplemented in order to produce a more convincing argument. I then enquire whether the supplementations utilised in the most distinguished recent presentations of the argument have anything better than an ad hoc role to play in contemporary metaphysics. I conclude that the rehabilitation of the argument is unlikely; in any event, the onus of proof seems to be on the friend of that argument to show that it is uncontrived. Finally, I argue that the (modern) notion of potential has an important role to play in any plausible account of foetal value.
The field of business ethics has been active for several decades, but it has yet to develop a generally agreed upon applied ethical perspective for the discipline. Academics in business disciplines have developed useful science-based models explaining why business people behave ethically but without a generally accepted definition of ethical behavior. Academics in moral philosophy have attempted to formulate what they believe ethical behavior is, but many seem to ignore or reject the basic mission of business. The purpose of this article is to offer one view of ethics in business that accommodates the mission of business. This purpose is achieved by reviewing the mission of ethics in applied disciplines like business and melding it into the mission of business in capitalistic societies.
A revision of a sermon on the evils of calling model theory “semantics”, preached at Notre Dame on Saint Patrick’s Day, 2005. Provisional version: references remain to be added. To appear in Mathematics, Modality, and Models: Selected Philosophical Papers, coming from Cambridge University Press.
Dummett's case against platonism rests on arguments concerning the acquisition and manifestation of knowledge of meaning. Dummett's arguments are here criticized from a viewpoint less Davidsonian than Chomskian. Dummett's case against formalism is obscure because in it prescriptive considerations are not clearly separated from descriptive ones. Dummett's implicit value judgments are here made explicit and questioned. "Combat Revisionism!" (Chairman Mao).
The discovery of the note cards for Quine’s previously unpublished 1946 lecture on nominalism provides an obvious occasion for commenting on the differences between the issue of nominalism as Quine first publicized it to a wide philosophical audience and the issue of nominalism as debated among Quine’s successors today. Yet as I read and reread the text of Quine’s lecture, I found myself struck less by the differences between Quine’s position there and the positions of present-day writers than by differences between Quine’s position there and the positions of Quine himself in later writings — and not his writings from many years later but his writings from the next few years, and especially one of his writings from the very next year, his notorious joint paper with Goodman.
In this era when results of empirical scientific research are being appealed to all across philosophy, when we even find moral philosophers invoking the results of brain scans, many profess to practice "naturalized epistemology," or to be "epistemological naturalists." Such phrases derive from the title of a well-known essay by Quine, but Paul Gregory's thesis in the work under review is that there is less connection than is usually assumed between Quine's variety of naturalized epistemology and what is today taken, by opponents and proponents alike, to constitute epistemological naturalism. To put it bluntly, as Gregory does in the opening sentence of his introduction, Quine "has not been well understood." If there is less connection between the Quinian and other epistemological naturalisms than there has often been taken to be, on Gregory's account there is also much more connection between Quine's position on epistemology and his positions on other contentious issues.
Decisions about funding health services are crucial to controlling costs in health care insurance plans, yet they encounter serious challenges from intellectual property protection—e.g., patents—of health care services. Using Myriad Genetics' commercial genetic susceptibility test for hereditary breast cancer (BRCA testing) in the context of the Canadian health insurance system as a case study, this paper applies concepts from social contract theory to help develop more just and rational approaches to health care decision making. Specifically, Daniels's and Sabin's "accountability for reasonableness" is compared to broader notions of public consultation, demonstrating that expert assessments in specific decisions must be transparent and accountable and supplemented by public consultation.
The question, "Which modal logic is the right one for logical necessity?," divides into two questions, one about model-theoretic validity, the other about proof-theoretic demonstrability. The arguments of Halldén and others that the right validity logic is S5, and that the right demonstrability logic includes S4, are reviewed, and certain common objections are argued to be fallacious. A new argument, based on work of Słupecki and Bryll, is presented for the claim that the right demonstrability logic must be contained in S5, and a more speculative argument for the claim that it does not include S4.2 is also presented.
This article discusses the major criticisms posed in On Measuring Ethical Judgments concerning our ethics scale development work. We agree that the authors of the criticism do engage in what they accurately refer to as armchair theorizing. We point out the errors in their comments.
Blair proposes that fluid intelligence, working memory, and executive function form a unitary construct: fluid cognition. Recently, our group has utilized a combined correlational–experimental cognitive neuroscience approach, which we argue is beneficial for investigating relationships among these individual differences in terms of neural mechanisms underlying them. Our data do not completely support Blair's strong position. (Published Online April 5 2006).
Adapted from talks at the UCLA Logic Center and the Pitt Philosophy of Science Series. Exposition of material from Fixing Frege, Chapter 2 (on predicative versions of Frege’s system) and from “Protocol Sentences for Lite Logicism” (on a form of mathematical instrumentalism), suggesting a connection. Provisional version: references remain to be added. To appear in Mathematics, Modality, and Models: Selected Philosophical Papers, coming from Cambridge University Press.
My contribution to the symposium on Goedel’s philosophy of mathematics at the spring 2006 Association for Symbolic Logic meeting in Montreal. Provisional version: references remain to be added. To appear in an ASL volume of proceedings of the Goedel sessions at that meeting.
The Newcomb problem is analysed here as a type of common cause problem. In relation to such problems, if you take the dominated option your expected outcome will be good and if you take the dominant option your expected outcome will be not so good. As is explained, however, these are not conventional conditional expected outcomes but 'conditional evidence expected outcomes', and while in the deliberation process, the evidence on which they are based is only hypothetical evidence. Conventional conditional expected outcomes are more sensitive to your current epistemic state in that they are based purely on actual evidence which is available to you during the deliberation process. So although they are conditional on a certain act being performed, they are not based on evidence that you would have only if that act is performed. Moreover, for any given epistemic state during the deliberation process, your conventional conditional expected outcome for the dominant option will be better than that for the dominated option. The principle of dominance is thus in perfect harmony with the conventional conditional expected outcomes. In relation to the Newcomb problem, then, the evidence unequivocally supports two-boxing as the rational option. Yet what is advanced here is not simply a two-boxing strategy. To see why, two stages to the problem need to be recognised. The first stage is that which occurs before the information used by the predictor in making his predictions has been gained. The second stage is after this point. Provided that you are still in the first stage, you have an opportunity to influence whether or not the predictor places the $1m in the opaque box. To maximise the probability that it is placed there, you need to commit yourself to one-boxing.
1 Choice conjecture. In axiomatizing nonclassical extensions of classical sentential logic one tries to make do, if one can, with adding to classical sentential logic a finite number of axiom schemes of the simplest kind and a finite number of inference rules of the simplest kind. The simplest kind of axiom scheme in effect states of a particular formula P that for any substitution of formulas for atoms the result of its application to P is to count as an axiom. The simplest kind of one-premise inference rule in effect states of a particular pair of formulas P and Q that for any substitution of formulas for atoms, if the result of its application to P is a theorem, then the result of its application to Q is to count as a theorem; similarly for many-premise rules. Such are the schemes and rules of all the best-known modal and tense logics, for instance. Sometimes it is difficult to find such simple schemes and rules (though it is usually even more difficult to prove that none exist). In that case one may resort to less simple schemes or less simple rules. There is no generally recognized rigorous definition of "next simplest kind" of scheme. (In the case of schemes, one fact that makes a rigorous definition difficult is that, if the logic in question is axiomatizable at all, which is to say, if the set of formulas wanted as theorems is recursively enumerable, then by Craig’s trick one can always get a primitive recursive set of schemes of the simplest kind, even if one cannot get a finite set. Intuitively, some primitive recursive sets are much simpler than others, but it is difficult to reduce this intuition to a rigorous definition.) Neither is there any generally recognized definition of "next simplest kind" of rule, and hence there is no fully rigorous enunciation of the choice conjecture, the conjecture that schemes of the next simplest kind can always be avoided in favor of rules of the next simplest kind and vice versa.
Nonetheless, there are cases where intuitively one does recognize that the schemes or rules in a given axiomatization are only slightly more complex than the simplest kind, including cases where one does have a choice between adopting slightly-more-complex-than-simplest schemes and adopting slightly-more-complex-than-simplest rules. In tense logic early examples of slightly more complex rules are found in  and : there is one example of the embarrassed use of such rules in the former, and many examples of the enthusiastic use of such rules in the latter and its sequels. Accordingly the rules in question have come to be called "Gabbay-style" rules.
Hintikka and Sandu have recently claimed that Frege's notion of function was substantially narrower than that prevailing in real analysis today. In the present note, their textual evidence for this claim is examined in the light of relevant historical and biographical background and judged insufficient.
Philosophical Analysis in the Twentieth Century by Scott Soames reminds me of nothing so much as Lectures on Literature by Vladimir Nabokov. Both are works that arose immediately out of the needs of undergraduate teaching, yet each manages to say much of significance to knowledgeable professionals. Each indirectly provides an outline of the history of its field, through a presentation of selected major works, taken in chronological order and including items that are generally recognized as marking decisive turning points. Yet neither Soames’s work nor Nabokov’s is a history in any conventional sense, both being immediately disqualified from that category by the general absence of coverage of minor and middling works and writers. The emphasis is pedagogical rather than historiographical: the emphasis is on introducing the student to the field through very close examination of the limited number of key texts selected for inclusion. The author’s distinctive personality is also apparent in both works. Each writer has a favorite theme he repeatedly sounds: for Soames, the danger of conflating the analytic, the a priori, and the necessary; for Nabokov, the philistinism of expecting an uplifting “message” from works of literary art. Each also includes some quirky, individual selections: The Right and the Good, The Strange Case of Dr. Jekyll and Mr. Hyde. Few others would have taken R. L. Stevenson to be up there with Dickens, Flaubert, and Proust, or W. D. Ross with Russell, Wittgenstein, and Quine. Each also sets aside for separate treatment elsewhere a major body of work one might have expected to be covered.
Nabokov reserves Russian literature for a companion volume, while Soames gives only slight coverage to what he describes as “work in logic, the foundations of logic, and the application of logical techniques to the study of language” — a category that in practice turns out to include the bulk of the relevant material (by such writers as Frege, Carnap, and Tarski) that was published originally in German without simultaneous English translation.
This comment is offered in response to Hansen's A Multidimensional Scale for Measuring Business Ethics: A Purification and Refinement. Five issues arising from Hansen's purification and refinement efforts are addressed. These include the issues of parsimony, predictive validity, collinearity, reliability, and what we see as a confusion between normative and positive theory.
This article provides a summary of current knowledge about memory illusions. The memory illusions described here focus on the recall of imagined events that have never actually occurred. The purpose is to review theoretical ideas and empirical evidence about the reality-monitoring processes involved in memory illusions. Reality monitoring means deciding whether the memory has been perceptually derived or been self-generated (thought or imagined). A few key findings from the literature have been reported in this paper and these focus on internal source-monitoring judgments which distinguish perceptual events from imagined events. Finally, the experimental paradigms used to shed light on processes occurring in the failure of reality monitoring in healthy subjects may be extended to an examination of the causes and the prevention of hallucinations in patients.
This long-awaited volume is a must-read for anyone with a serious interest in philosophy of mathematics. The book falls into two parts, with the primary focus of the first on ontology and structuralism, and the second on intuition and epistemology, though with many links between them. The style throughout involves unhurried examination from several points of view of each issue addressed, before reaching a guarded conclusion. A wealth of material is set before the reader along the way, but a reviewer wishing to summarize the author’s views crisply will be frustrated. The chapter-by-chapter survey below conveys at best a very incomplete and imperfect impression of the work’s virtues, and even of its contents, falling short even of supplying a full menu for the banquet of food for thought that Parsons serves up to his readers.
Physician assisted suicide or active euthanasia is analyzed as a medicalization of the needs of persons who are suffering interminably. As with other medicalized responses to personal needs, the availability of active euthanasia will likely divert attention and resources from difficult social and personal aspects of the needs of dying and suffering persons, continuing the pattern of privatization of the costs of caregiving for persons who are candidates for active euthanasia, limiting the ability of caregivers to assist suffering persons to make their continued suffering tolerable, and casting doubt on the voluntariness of the choice of active euthanasia. Keywords: caregivers, euthanasia, family, medicalization, voluntariness
This study reports on the development of scale items derived from the pluralistic moral philosophy literature. In addition, the manner in which individuals combine aspects of the different philosophies in making ethical evaluations was explored.
It is shown that for invariance under the action of special groups the statements "Every invariant PCA set is decomposable into ℵ1 invariant Borel sets" and "Every pair of invariant PCA sets is reducible by a pair of invariant PCA sets" are independent of the axioms of set theory.
Recently it has become almost the received wisdom in certain quarters that Kripke models are appropriate only for something like metaphysical modalities, and not for logical modalities. Here the line of thought leading to Kripke models, and reasons why they are no less appropriate for logical than for other modalities, are explained. It is also indicated where the fallacy in the argument leading to the contrary conclusion lies. The lessons learned are then applied to the question of the status of the formula.
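For readers who want the apparatus in front of them: a Kripke model is just a set of worlds, an accessibility relation between them, and a valuation saying at which worlds each atom holds, with necessity read as truth at all accessible worlds. A minimal sketch (the toy model and the tuple representation of formulas are assumptions for illustration, not from the paper):

```python
def holds(world, formula, R, V):
    """Evaluate a formula at a world of a Kripke model (R, V)."""
    op = formula[0]
    if op == "atom":
        return world in V[formula[1]]
    if op == "not":
        return not holds(world, formula[1], R, V)
    if op == "and":
        return holds(world, formula[1], R, V) and holds(world, formula[2], R, V)
    if op == "box":  # necessity: true at every accessible world
        return all(holds(w, formula[1], R, V) for w in R[world])
    raise ValueError(f"unknown operator: {op}")

# Two worlds; p holds only at w1; w0 sees both worlds, w1 sees only itself.
R = {"w0": {"w0", "w1"}, "w1": {"w1"}}
V = {"p": {"w1"}}

print(holds("w0", ("box", ("atom", "p")), R, V))  # False: w0 sees a p-world and a non-p-world
print(holds("w1", ("box", ("atom", "p")), R, V))  # True: every world w1 sees satisfies p
```

Nothing in this machinery mentions metaphysics; which modality the models are good for depends on how the worlds and the accessibility relation are interpreted, which is the point at issue in the paper.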
This article examines perceptions of tax partners and non-partner tax practitioners regarding their CPA firms’ ethical environment, as well as experiences with ethical dilemmas. Prior research emphasizes the importance of executive leadership in creating an ethical climate (e.g., Weaver et al., Acad Manage Rev 42(1):41–57, 1999; Trevino et al., Hum Relat 56(1):5–37, 2003; Schminke et al., Organ Dyn 36(2):171–186, 2007). Thus, it is important to consider whether firm partners and other employees have congruent perceptions and experiences. Based on the responses of 144 tax practitioners employed at CPA firms, the results show that tax partners rate the ethical environment of their firms as stronger than non-partner tax practitioners, particularly among those who describe a self-identified ethical dilemma. Tax partners also report having encountered more of the common examples of researcher-provided ethical dilemmas than non-partner tax practitioners, although non-partners perceive that certain ethical dilemmas occur at a higher rate than partners do. Overall, this study provides evidence of a disconnect between tax partners and non-partner tax practitioners with respect to perceptions of organizational ethics. Suggestions for potential remedies are offered.
This study examines the relation between firms’ corporate philanthropic giving and their performance in three other social domains – employee relations, environmental issues, and product safety. Based on a sample of 384 U.S. companies and using data pooled from 1998 through 2000, we find that worse performers in the other social areas are more likely to make charitable contributions, and that the extent of their giving is larger than for better performers. Analyses of each separate area of social performance, however, indicate that the relation between giving and negative social performance (cited concerns) holds only for the environmental issues and product safety areas. We find no significant association between corporate philanthropy and employee relations concerns. In general, these findings suggest that corporate philanthropy may be more a tool of legitimization than a measure of corporate social responsibility.
For the sentences of languages that contain operators that express the concepts of definiteness and indefiniteness, there is an unavoidable tension between a truth-theoretic semantics that delivers truth conditions for those sentences that capture their propositional contents and any model-theoretic semantics that has a story to tell about how indefiniteness in a constituent affects the semantic value of sentences which embed it. But semantic theories of both kinds play essential roles, so the tension needs to be resolved. I argue that it is the truth theory which correctly characterises the notion of truth, per se. When we take into account the considerations required to bring model theory into harmony with truth theory, those considerations undermine the arguments standardly used to motivate supervaluational model theories designed to validate classical logic. But those considerations also show that celebration would be premature for advocates of the most frequently encountered rival approach – many-valued model theory.
Computability and Logic has become a classic because of its accessibility to students without a mathematical background and because it covers not simply the staple topics of an intermediate logic course, such as Gödel’s incompleteness theorems, but also a large number of optional topics, from Turing’s theory of computability to Ramsey’s theorem. Including a selection of exercises, adjusted for this edition, at the end of each chapter, it offers a new and simpler treatment of the representability of recursive functions, a traditional stumbling block for students on the way to the Gödel incompleteness theorems.
Genome Canada has funded a research project to evaluate the usefulness of different forms of ethical analysis for assessing the moral weight of public opinion in the governance of genomics. This paper will describe a role of public consultation for ethical analysis and a contribution of ethical analysis to public consultation and the governance of genomics/biotechnology. Public consultation increases the robustness of ethical analysis with more diverse and rich accounts of experiences. Consultation must be carefully and respectfully designed to generate sufficiently diverse and rich accounts of moral experiences. Since dominant groups tend to define ethical or policy issues in a manner that excludes some interests or perspectives, it is important to identify the range of interests that diverse publics hold before defining the issue and scope of a consultation. Similarly, a heavy policy focus and pressures to commercialize products risk oversimplification of the discussion and the premature foreclosure of ethical dialogue. Consequently, a significant contribution of ethical dialogue strengthened by social analysis is to consider the context and non-policy use of power to govern genomics and to sustain social debate on enduring ethical issues.
Two experiments using a realistic version of the selection task examined the relationship between participants' probability estimates of finding a counterexample and their selections. Experiment 1 used everyday categories in the context of a scenario to determine whether or not the number of instances in a category affected the estimated probability of a counterexample. Experiment 2 modified the scenario in order to alter participants' estimates of finding a specific counterexample. Unlike Kirby (1994a), but consistent with his proposals, both studies showed that probability estimates significantly predicted selection. Overall, results point to the value of understanding selections in terms of their subjective expected utility.