In these essays, we are concerned with virtue in journalism and the media but are mindful of the tension between the commercial foundations of publishing and broadcasting, on the one hand, and journalism's democratic obligations on the other. Adam outlines, first, a moral vision of journalism focusing on individualistic concepts of authorship and craft. Next, Craft attempts to bridge individual and organizational concerns by examining the obligations of organizations to the individuals working within them. Finally, Cohen discusses the importance of resisting the powerful corporate logic that pervades the news media in the United States and calls on journalists to be courageous.
Fixing Frege is one of the most important investigations to date of Fregean approaches to the foundations of mathematics. In addition to providing an unrivalled survey of the technical program to which Frege’s writings have given rise, the book makes a large number of improvements and clarifications. Anyone with an interest in the philosophy of mathematics will enjoy and benefit from the careful and well-informed overview provided by the first of its three chapters. Specialists will find the book an indispensable reference and an invaluable source of insights and new results. Although Frege is widely regarded as the father of analytic philosophy, his work on the foundations of mathematics was for a long time rather peripheral to ongoing research. The main reason for this is no doubt Russell’s discovery in 1901 that the paradox now bearing his name can be derived in Frege’s logical system. But recent decades have seen a huge surge of interest in Fregean approaches to the foundations of mathematics. (The work of George Boolos, Kit Fine, Bob Hale, Richard Heck, Stewart Shapiro, and Crispin Wright is singled out for particular attention in the present monograph.) A variety of consistent theories have been discovered that can be salvaged from Frege’s inconsistent system, and foundational and philosophical claims have been made on behalf of many of these theories. Burgess claims quite plausibly that the significance of any such modified Fregean theory will in large part depend on how much of ordinary mathematics it enables us to develop.
Third World Citizens and the Information Technology Revolution. Review article, Journal of Critical Realism, Volume 11, Number 4 (2012), pp. 515-522. DOI 10.1558/jcr.v11i4.515. Author: Nicolas Adam, Centre d’études sur l’intégration et la mondialisation (CEIM), Université du Québec à Montréal, 400, rue Sainte-Catherine Est, Pavillon Hubert-Aquin, 1er étage, bureau A-1560, Montréal (Québec) H2L 2C5, Canada. Online ISSN 1572-5138; Print ISSN 1476-7430.
The form of nominalism known as 'mathematical fictionalism' is examined and found wanting, mainly on grounds that go back to an early antinominalist work of Rudolf Carnap to which more recent writers have unfortunately not paid sufficient attention.
This paper considers the ways that Information Ethics (IE) treats things. A number of critics have focused on IE’s move away from anthropocentrism to include non-humans on an equal basis in moral thinking. I enlist Actor Network Theory, Dennett’s views on ‘as if’ intentionality and Magnani’s characterization of ‘moral mediators’. Although they demonstrate different philosophical pedigrees, I argue that these three theories can be pressed into service in defence of IE’s treatment of things. Indeed the support they lend to the extension of moral status to non-human objects can be seen as part of a trend towards the accommodation of non-humans into our moral and social networks. A number of parallels are drawn between philosophical arguments over artificial intelligence and information ethics.
This paper is based on the premise that the analysis of some cyberethics problems would benefit from a feminist treatment. It is argued that both cyberstalking and Internet child pornography are two such areas which have a `gendered' aspect which has rarely been explored in the literature. Against a wide ranging feminist literature of potential relevance, the paper explores a number of cases through a focused approach which weaves together feminist concepts of privacy and the gaze.
I aim to show how and why some definitions can be benignly circular. According to Lloyd Humberstone, a definition that is analytically circular need not be inferentially circular and so might serve to illuminate the application-conditions for a concept. I begin by tidying up some problems with Humberstone's account. I then show that circular definitions of a kind commonly thought to be benign have inferentially circular truth-conditions and so are malign by Humberstone's test. But his test is too demanding. The inferences we actually use to establish the applicability of, e.g., colour concepts are designed to establish warranted assertability and not truth. Understood thus, dispositional analyses are not inferentially circular.
A new axiomatization of set theory, to be called Bernays-Boolos set theory, is introduced. Its background logic is the plural logic of Boolos, and its only positive set-theoretic existence axiom is a reflection principle of Bernays. It is a very simple system of axioms sufficient to obtain the usual axioms of ZFC, plus some large cardinals, and to reduce every question of plural logic to a question of set theory.
Computer ethics is a relatively young discipline, hence it needs time both for reflection and for exploring alternative ethical standpoints in building up its own theoretical framework. Feminist ethics is offered as one such alternative, particularly to inform issues of equality and power. We argue that feminist ethics is not narrowly confined to women's issues but is an approach with wider egalitarian applications. The rise of feminist ethics in relation to feminist theory in general is described and within that the work of Gilligan and others on an ethic of care. We argue for the need to connect theory to empirical evidence. Empirical studies of gender and business and computer ethics are reviewed. We note concerns with surveying a student audience, the issue of how far questionnaires and interviews can get to the heart of ethical beliefs, and problems of performing statistical analyses of quantitative data. Although we recognize them, our own small survey cannot avoid all these problems. Nevertheless by refining our scenarios we are able to offer an alternative reading of a hacking problem in terms of an ethic of care, thereby pointing a way forward for future research in computer ethics inspired by feminist theory.
This paper argues that AI follows classical versions of epistemology in assuming that the identity of the knowing subject is not important. In other words this serves to 'delete the subject'. This disguises an implicit hierarchy of knowers involved in the representation of knowledge in AI which privileges the perspective of those who design and build the systems over alternative perspectives. The privileged position reflects Western, professional masculinity. Alternative perspectives, denied a voice, belong to less powerful groups including women. Feminist epistemology can be used to approach this from new directions, in particular, to show how women's knowledge may be left out of consideration by AI's focus on masculine subjects. The paper uncovers the tacitly assumed Western professional male subjects in two flagship AI systems, Cyc and Soar.
Appointment as a director of a company board often represents the pinnacle of a management career. Worldwide, it has been noted that very few women are appointed to the boards of directors of companies. Blame for the low numbers of women on company boards can be partly attributed to the widely publicized "glass ceiling". However, the very low representation of women on company boards requires further examination. This article reviews the current state of women's representation on boards of directors and summarizes the reasons as to why women are needed on company boards. Given that having more women on boards is desirable, the article then describes how more women could be appointed to boards, and the actions that organizations and women could take to help increase the representation of women. Finally, the characteristics of those women who have succeeded in becoming members of company boards are described from an international perspective. Unfortunately, answers to the vexing question of whether these women have gained board directorships in their own right as extremely competent managers, or whether they are mere token female appointments in a traditionally male-dominated culture, remain elusive.
Backdating of stock options is an example of an agency problem. It has emerged despite all the measures (i.e., new regulations and additional corporate governance mechanisms) aimed at addressing such problems. Beyond such negative controlling measures, a more positive empowering approach based on ethics may also be necessary. What ethical measures need to be taken to address the agency problem? What values and norms should guide the board of directors in protecting the shareholders' interests? To examine these issues, we first discuss the role values and norms can play with respect to underlying corporate governance and the proper role of directors, such as transparency, accountability, integrity (which is reflected in proper mechanisms of checks and balances), and public responsibility. Second, we discuss various stakeholder approaches (e.g., government, directors, managers, and shareholders) by which conflicts of interest (i.e., the agency problem) can be addressed. Third, we assess the practice of backdating stock options, as an illustration of the agency problem, in terms of whether the practice is legally acceptable or ethically justifiable. Fourth, we proceed to an analysis of good corporate governance practice involving backdating options based on a series of ethical standards including: (1) trustworthiness; (2) utilitarianism; (3) justice; and (4) Kantianism. We conclude that while executive compensation schemes (e.g., stock options) were originally intended to help remedy the agency problem by tying together the interests of the executives and shareholders, these schemes may have actually become "part of the problem," and that the solution ultimately depends upon whether directors and executives accept that all of their actions must be based on a set of core ethical values.
This paper addresses the question of delegation of morality to a machine, through a consideration of whether or not non-humans can be considered to be moral. The aspect of morality under consideration here is protection of privacy. The topic is introduced through two cases where there was a failure in sharing and retaining personal data protected by UK data protection law, with tragic consequences. In some sense this can be regarded as a failure in the process of delegating morality to a computer database. In the UK, the issues that these cases raise have resulted in legislation designed to protect children which allows for the creation of a huge database for children. Paradoxically, we have the situation where we failed to use digital data in enforcing the law to protect children, yet we may now rely heavily on digital technologies to care for children. I draw on the work of Floridi, Sanders, Collins, Kusch, Latour and Akrich, a spectrum of work stretching from philosophy to sociology of technology and the “seamless web” or “actor–network” approach to studies of technology. Intentionality is considered, but not deemed necessary for meaningful moral behaviour. Floridi’s and Sanders’ concept of “distributed morality” accords with the network of agency characterized by actor–network approaches. The paper concludes that enfranchising non-humans, in the shape of computer databases of personal data, as moral agents is not necessarily problematic, but a balance of delegation of morality must be made between human and non-human actors.
Decisions about funding health services are crucial to controlling costs in health care insurance plans, yet they encounter serious challenges from intellectual property protection—e.g., patents—of health care services. Using Myriad Genetics' commercial genetic susceptibility test for hereditary breast cancer (BRCA testing) in the context of the Canadian health insurance system as a case study, this paper applies concepts from social contract theory to help develop more just and rational approaches to health care decision making. Specifically, Daniels's and Sabin's "accountability for reasonableness" is compared to broader notions of public consultation, demonstrating that expert assessments in specific decisions must be transparent and accountable and supplemented by public consultation.
It is a widely shared view among philosophers of science that the theory-dependence (or theory-ladenness) of observations is worrying, because it can bias empirical tests in favour of the tested theories. These doubts are taken to be dispelled if an observation is influenced by a theory independent of the tested theory and thus circularity is avoided, while (partially) circular tests are taken to require special attention. Contrary to this consensus, it is argued that the epistemic value of theory-dependent tests has nothing to do with the circularity or non-circularity of the test, but is instead based on the minimal empiricality and reliability of observations. Since theory-dependence does not in general prevent observations fulfilling these requirements, it should not be regarded as a phenomenon that is basically detrimental, but as neutral with respect to successful scientific knowledge gathering.
This article considers the question of embodiment in relation to gender and whether there are models of artificial intelligence (AI) which can enrol a concept of gender in their design. A central concern for feminist epistemology is the role of the body in the making of knowledge. I consider how this may inform a critique of the AI project and the related area of artificial life (A-Life), the latter area being of most interest in this paper. I explore briefly the tensions between the treatment of the body in different branches of feminist theory, especially the tensions between the approaches of feminist sociology and feminist philosophy. I explore the ways in which writing from category theory and anthropological phenomenology offers rich suggestions as to how the body has been left out of objectivist accounts of epistemology, but struggles to offer an account of why. In its analysis of the links between women, knowledge and the body, feminist revisions of epistemology offer a more convincing why. This is explored briefly through a critique of symbolic AI, and more substantially through the problem of embodiment in artificial life.
The Newcomb problem is analysed here as a type of common cause problem. In relation to such problems, if you take the dominated option your expected outcome will be good and if you take the dominant option your expected outcome will be not so good. As is explained, however, these are not conventional conditional expected outcomes but `conditional evidence expected outcomes' and, while in the deliberation process, the evidence on which they are based is only hypothetical evidence. Conventional conditional expected outcomes are more sensitive to your current epistemic state in that they are based purely on actual evidence which is available to you during the deliberation process. So although they are conditional on a certain act being performed, they are not based on evidence that you would have only if that act is performed. Moreover, for any given epistemic state during the deliberation process, your conventional conditional expected outcome for the dominant option will be better than that for the dominated option. The principle of dominance is thus in perfect harmony with the conventional conditional expected outcomes. In relation to the Newcomb problem then, the evidence unequivocally supports two-boxing as the rational option. Yet what is advanced here is not simply a two-boxing strategy. To see why, two stages to the problem need to be recognised. The first stage is that which occurs before the information used by the predictor in making his predictions has been gained. The second stage is after this point. Provided that you are still in the first stage, you have an opportunity to influence whether or not the predictor places the $1m in the opaque box. To maximise the probability that it is, you need to commit yourself to one-boxing.
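The contrast between the two kinds of expected outcome can be made concrete with a standard illustrative payoff calculation (the 99% predictor accuracy and the dollar amounts below are conventional in the Newcomb literature, not figures taken from the paper itself):

```latex
% "Conditional evidence" expected outcomes, conditioning on the act
% as evidence about the prediction (predictor accuracy 0.99):
\begin{align*}
E_{\mathrm{ev}}[\text{one-box}] &= 0.99 \times \$1{,}000{,}000 = \$990{,}000\\
E_{\mathrm{ev}}[\text{two-box}] &= 0.01 \times \$1{,}000{,}000 + \$1{,}000 = \$11{,}000
\end{align*}
% Conventional conditional expected outcomes, where p is the probability,
% fixed by your actual current evidence, that the $1m is already in the
% opaque box (choosing cannot change p):
\begin{align*}
E[\text{one-box}] &= p \times \$1{,}000{,}000\\
E[\text{two-box}] &= p \times \$1{,}000{,}000 + \$1{,}000
\end{align*}
```

On the hypothetical-evidence figures one-boxing looks far better, but for any fixed p two-boxing comes out exactly $1,000 ahead, which is how the conventional conditional expected outcomes end up in harmony with the principle of dominance.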
Socially Responsible Investment (SRI) indices play a major role in the stock markets. A connection between doing good and doing well in business is implied. Leading indices, such as the Domini Social Index and others, exemplify the movement toward investing in socially responsible corporations. However, the question remains: Does the ratings-based methodology for assessing corporate social responsibility (CSR) provide an incentive to firms excluded from SRI indices to invest in CSR? Not in its current format. The ratings-based methodology employed by SRI indices in their selection processes excludes many corporations by creating limited-membership lists. This received ratings-based structure is yet to offer an incentive for most of the excluded corporations to invest in improving their levels of CSR. We, therefore, ask under what circumstances a ratings-based method for assessing CSR could provide an incentive to firms excluded from SRI indices to invest in CSR. In this article, we attempt to offer a theoretical reply to this question. We show that when all firms are publicly ranked according to SRI index parameters, such indices can indeed create a market incentive for increased investment by firms in improving their performance in the area of social responsibility. We further show that this incentive tapers off as the amount of investment required exceeds a certain point or if the amount of payback on that investment fails to reach a certain threshold.
Hintikka and Sandu have recently claimed that Frege's notion of function was substantially narrower than that prevailing in real analysis today. In the present note, their textual evidence for this claim is examined in the light of relevant historical and biographical background and judged insufficient.
In the late 19th century great changes in theories of light and electricity were in direct conflict with certitude, the view that scientific knowledge is infallible. What, then, is the epistemic status of scientific theory? To resolve this issue Duhem and Poincaré proposed images of fallible knowledge, Instrumentalism and Conventionalism, respectively. Only in 1919–1922, after Einstein's relativity was published, did he offer arguments to support Fallibilism, the view that certainty cannot be achieved in science. Though Einstein did not consider Duhem's Instrumentalism, he argued against Poincaré's Conventionalism. Hitherto, Einstein's Fallibilism, as presented at first in a little-known essay of 1919, was left in the dark. Recently, Howard obscured its meaning. Einstein's essay was never translated into English. In my paper I provide a translation and attempt to shed light on Einstein's view and its context; I also direct attention to Einstein's images of philosophical opportunism in scientific practice.
Philosophical Analysis in the Twentieth Century by Scott Soames reminds me of nothing so much as Lectures on Literature by Vladimir Nabokov. Both are works that arose immediately out of the needs of undergraduate teaching, yet each manages to say much of significance to knowledgeable professionals. Each indirectly provides an outline of the history of its field, through a presentation of selected major works, taken in chronological order and including items that are generally recognized as marking decisive turning points. Yet neither Soames’s work nor Nabokov’s is a history in any conventional sense, both being immediately disqualified from that category by the general absence of coverage of minor and middling works and writers. The emphasis is pedagogical rather than historiographical: the emphasis is on introducing the student to the field through very close examination of the limited number of key texts selected for inclusion. The author’s distinctive personality is also apparent in both works. Each writer has a favorite theme he repeatedly sounds: for Soames, the danger of conflating the analytic, the a priori, and the necessary; for Nabokov, the philistinism of expecting an uplifting “message” from works of literary art. Each also includes some quirky, individual selections: The Right and the Good, The Strange Case of Dr. Jekyll and Mr. Hyde. Few others would have taken R. L. Stevenson to be up there with Dickens, Flaubert, and Proust, or W. D. Ross with Russell, Wittgenstein, and Quine. Each also sets aside for separate treatment elsewhere a major body of work one might have expected to be covered.
Nabokov reserves Russian literature for a companion volume, while Soames gives only slight coverage to what he describes as “work in logic, the foundations of logic, and the application of logical techniques to the study of language” — a category that in practice turns out to include the bulk of the relevant material (by such writers as Frege, Carnap, and Tarski) that was published originally in German without simultaneous English translation.
In the process of implementing an ethical code of conduct, a business organization uses formal methods. Of these, training, courses and means of enforcement are common and are also suitable for self-regulation. The USA is encouraging business corporations to self-regulate with the Federal Sentencing Guidelines (FSG). The Guidelines prescribe similar formal methods and specify that, unless such methods are used, the process of implementation will be considered ineffective, and the business will therefore not be considered to have complied with the guidelines. Business organizations invest enormous funds in formal methods. However, recent events indicate that these are not, by themselves, yielding the desired results. Our study, based on a sample of 812 employees and conducted in an Israeli subsidiary of a leading multinational high-tech corporation headquartered in the US, indicates that, of the methods used in the process of implementation, one of the informal methods (namely, the social norms of the organization) is perceived by employees to have the most influence on their conduct. This result, when examined against employee tenure, remains relatively stable over the years, and stands in contradistinction to the formalistic approach embedded in the FSG. We indirectly measure the effectiveness of the perceived most influential implementation process methods by analyzing their impact on employee attitudes (namely, personal ethical commitment and employees' commitment to organizational values). Our results indicate that the informal methods (manager sets an example or social norms of the organization) are likely to yield greater commitment with respect to both employee attitudes than the formal method (training and courses on the subject of ethics). The personal control method (my own personal values) differs significantly from all the other methods in that it yields the highest degree of personal ethical commitment and the lowest degree of employees' commitment to organizational values.
For the sentences of languages that contain operators that express the concepts of definiteness and indefiniteness, there is an unavoidable tension between a truth-theoretic semantics that delivers truth conditions for those sentences that capture their propositional contents and any model-theoretic semantics that has a story to tell about how indefiniteness in a constituent affects the semantic value of sentences which embed it. But semantic theories of both kinds play essential roles, so the tension needs to be resolved. I argue that it is the truth theory which correctly characterises the notion of truth, per se. When we take into account the considerations required to bring model theory into harmony with truth theory, those considerations undermine the arguments standardly used to motivate supervaluational model theories designed to validate classical logic. But those considerations also show that celebration would be premature for advocates of the most frequently encountered rival approach – many-valued model theory.
We analyse the reception of Niklas Luhmann's social metatheory in Slovenian social thought. The first part outlines the intellectual climate that prevailed in the decade before the post-socialist transition. The decline of the previously dominant Marxist ideology created space for other social theories. Luhmann's ideas were the most prominent among social macro theories in the initial phase. The second part describes variations in the reception of his ideas. The initial affirmative approach was upgraded by a number of more selective and critical approaches. The third part shows that, although his ideas are no longer quite so prominent, his work is both well recognized and firmly embedded in Slovenian social thought.
Genome Canada has funded a research project to evaluate the usefulness of different forms of ethical analysis for assessing the moral weight of public opinion in the governance of genomics. This paper will describe a role of public consultation for ethical analysis and a contribution of ethical analysis to public consultation and the governance of genomics/biotechnology. Public consultation increases the robustness of ethical analysis with more diverse and rich accounts of moral experiences. Consultation must be carefully and respectfully designed to generate sufficiently diverse and rich accounts of moral experiences. Since dominant groups tend to define ethical or policy issues in a manner that excludes some interests or perspectives, it is important to identify the range of interests that diverse publics hold before defining the issue and scope of a consultation. Similarly, a heavy policy focus and pressures to commercialize products risk oversimplification of the discussion and the premature foreclosure of ethical dialogue. Consequently, a significant contribution of ethical dialogue strengthened by social analysis is to consider the context and non-policy use of power to govern genomics and to sustain social debate on enduring ethical issues.
Adam Smith’s account of sympathy or ‘fellow feeling’ has recently become exceedingly popular. It has been used as an antecedent of the concept of simulation: understanding, or attributing mental states to, other people by means of simulating them. It has also been singled out as the first correct account of empathy. Finally, to make things even more complicated, some of Smith’s examples for sympathy or ‘fellow feeling’ have been used as the earliest expression of emotional contagion. The aim of the paper is to suggest a new interpretation of Smith’s concept of sympathy and point out that on this interpretation some of the contemporary uses of this concept, as a precursor of simulation and empathy, are misleading. My main claim is that Smith's concept of sympathy, unlike simulation and empathy, does not imply any correspondence between the mental states of the sympathizer and of the person she is sympathizing with.
Quine correctly argues that Carnap's distinction between internal and external questions rests on a distinction between analytic and synthetic, which Quine rejects. I argue that Quine needs something like Carnap's distinction to enable him to explain the obviousness of elementary mathematics, while at the same time continuing to maintain as he does that the ultimate ground for holding mathematics to be a body of truths lies in the contribution that mathematics makes to our overall scientific theory of the world. Quine's arguments against the analytic/synthetic distinction, even if fully accepted, still leave room for a notion of pragmatic analyticity sufficient for the indicated purpose.
A revision of a sermon on the evils of calling model theory “semantics”, preached at Notre Dame on Saint Patrick’s Day, 2005. Provisional version: references remain to be added. To appear in Mathematics, Modality, and Models: Selected Philosophical Papers, coming from Cambridge University Press.
The discovery of the note cards for Quine’s previously unpublished 1946 lecture on nominalism provides an obvious occasion for commenting on the differences between the issue of nominalism as Quine first publicized it to a wide philosophical audience and the issue of nominalism as debated among Quine’s successors today. Yet as I read and reread the text of Quine’s lecture, I found myself struck less by the differences between Quine’s position there and the positions of present-day writers than by differences between Quine’s position there and the positions of Quine himself in later writings — and not his writings from many years later but his writings from the next few years, and especially one of his writings from the very next year, his notorious joint paper with Goodman.
Ever since the work of Paul Feyerabend, Russell Hanson and Thomas Kuhn in the 1960s, the thesis of the theory-ladenness of scientific observation has attracted much attention both in the philosophy and the sociology of science. The main concern has always been epistemic. It was argued – or feared – that if scientific observations depend on prevalent theories, an objective empirical test of theories and hypotheses by independent observation and experience is impossible. This suggests that theories might appear to be well confirmed by observation without it being likely that they are largely true or empirically adequate. While some philosophers like Ian Hacking have argued that serious theory-dependence is less common than often assumed, sociologists such as David Bloor, Steven Shapin, Karin Knorr-Cetina and Harry Collins have based their constructivist programs for the sociology of science on strong claims of theory-ladenness.
Adapted from talks at the UCLA Logic Center and the Pitt Philosophy of Science Series. Exposition of material from Fixing Frege, Chapter 2 (on predicative versions of Frege’s system) and from “Protocol Sentences for Lite Logicism” (on a form of mathematical instrumentalism), suggesting a connection. Provisional version: references remain to be added. To appear in Mathematics, Modality, and Models: Selected Philosophical Papers, coming from Cambridge University Press.
My contribution to the symposium on Goedel’s philosophy of mathematics at the spring 2006 Association for Symbolic Logic meeting in Montreal. Provisional version: references remain to be added. To appear in an ASL volume of proceedings of the Goedel sessions at that meeting.
Rational drug design is a method for developing new pharmaceuticals that typically involves the elucidation of fundamental physiological mechanisms. It thus combines the quest for a scientific understanding of natural phenomena with the design of useful technology and hence integrates epistemic and practical aims of research and development. Case studies of the rational design of the cardiovascular drugs propranolol, captopril and losartan provide insights into characteristics and conditions of this integration. Rational drug design became possible in the 1950s when theoretical knowledge of drug-target interaction and experimental drug testing could interlock in cycles of mutual advancement. The integration does not, however, diminish the importance of basic research for pharmaceutical development. Rather, it can be shown that still in the 1990s, linear processes of innovation and the close combination of practical and epistemic work were interdependent.
This long-awaited volume is a must-read for anyone with a serious interest in philosophy of mathematics. The book falls into two parts, with the primary focus of the first on ontology and structuralism, and the second on intuition and epistemology, though with many links between them. The style throughout involves unhurried examination from several points of view of each issue addressed, before reaching a guarded conclusion. A wealth of material is set before the reader along the way, but a reviewer wishing to summarize the author’s views crisply will be frustrated. The chapter-by-chapter survey below conveys at best a very incomplete and imperfect impression of the work’s virtues, and even of its contents, falling short even of supplying a full menu for the banquet of food for thought that Parsons serves up to his readers.
Scientific claims can be assessed epistemically in either of two ways: according to scientific standards, or by means of philosophical arguments such as the no-miracle argument in favor of scientific realism. This paper investigates the basis of this duality of epistemic assessments. It is claimed that the duality rests on two different notions of epistemic justification that are well-known from the debate on internalism and externalism in general epistemology: a deontological and an alethic notion. By discussing the conditions for the scientific acceptability of empirical results, it is argued that intrascientific justification employs the deontological notion. Philosophical disputes such as those on scientific realism can by contrast be shown to rest on the alethic notion. The implications of these findings both for the nature of the respective epistemic projects and for their interrelation are explored.
Industrial drug design methodology has undergone remarkable changes in recent history. Up to the 1970s, the screening of large numbers of randomly selected substances in biological test systems was often a crucial step in the development of novel drugs. From the early 1980s, such ‘blind’ screening was increasingly rejected by many pharmaceutical researchers and gave way to ‘rational drug design’, a method that grounds the design of new drugs on a detailed mechanistic understanding of drug action. Surprisingly, however, the chance-based method of random screening returned to center stage of industrial drug development in the 1990s in the form of ‘high-throughput screening’ (HTS). I will argue in this paper that this to-and-fro in the prominence of random screening comes with fundamental changes in the epistemic significance of chance experiments in pharmaceutical development. While up to the 1970s random screening used to be chosen as an empirical search strategy primarily because sufficient knowledge of the mechanistic basis of drug action was lacking, with high-throughput screening it has turned into an experimental method that employs chance variation and testing to illuminate this mechanistic basis. As a consequence, research into the underlying mechanisms of drug action and the development of new drugs have become closely integrated. The rise of HTS therefore not only shows how chance experiments have assumed a new epistemic role in drug development. It also allows for a detailed study of the much-debated emergence of a new relationship between scientific understanding and the development of technological artifacts.
Adam Smith’s lasting fame certainly does not come from his work on language. He published very little on this topic and he is not usually mentioned in standard histories of linguistics or the philosophy of language. His most elaborate publication on the subject is a 1761 monograph on the origin and development of languages (FoL). Smith’s monograph joins a long list of speculative work on this then fashionable topic (cf. Hewes 1975, 1996). The fact that he later included it as an appendix to his successful..
This paper foregrounds one argument in Rawls’s work that is crucial to his case for one, determinate, form of political economy: a property-owning democracy. Section one traces the evolution of this idea from the seminal work of Cambridge economist James Meade; section two demonstrates how a commitment to a property-owning democracy flows from Rawls’s own principles; section three focuses on Rawls’s striking critique of orthodox welfare state capitalism. This all sets the stage for an argument, presented in section four, from the complexity of economic interactions to the strategy of making markets fair in the only feasible way that they can be made fair, namely, by “patterning” their effects. Section five concludes by asking whether any scheme of this general type is a realistic form of utopianism for a society such as ours.
In this paper I call attention to Adam Smith’s 'Considerations Concerning the First Formation of Languages' in order to facilitate understanding Adam Smith from a Darwinian perspective. By ‘Darwinian’ I mean a position that explains differential selection over time through natural mechanisms. First, I argue that right near the start of Wealth of Nations Smith signals that human nature has probably evolved over a very long period of time. Second, I connect this evidence with an infamous passage on infanticide in The Theory of Moral Sentiments in order to argue that Smith is committed to group selection. Third, I argue that in Dissertation on Languages one can find building blocks for the claim that mind and language co-develop over time. More controversially, I claim that in TMS there is a distinction between natural sentiments and moral sentiments. Natural sentiments are evolved (presumably through cultural selection) and moral sentiments are developed (through acculturation within society). Along the way, I argue that this distinction would have improved Darwin’s Descent of Man by blocking a move toward eugenics.
Adam Smith is usually thought to argue that the result of everyone pursuing their own interests will be the maximization of the interests of society. The invisible hand of the free market will transform the individual's pursuit of gain into the general utility of society. This is the invisible hand argument. Many people, although Smith did not, draw a moral corollary from this argument, and use it to defend the moral acceptability of pursuing one's own self-interest.
This paper presents a theoretical elaboration of the ethical framework of classical capitalism as formulated by Adam Smith in reaction to the dominant mercantilism of his day. It is seen that Smith's project was profoundly ethical and designed to emancipate the consumer from a producer and state dominated economy. Over time, however, the various dysfunctions of a capitalist economy — e.g., concentration of wealth, market power — became manifest and the utilitarian ethical basis of the system eroded. Contemporary capitalism, dominated as it is by large corporations, entrenched political interests and persistent social pathologies, bears little resemblance to the system which Smith envisioned would serve the common man. Most critiques of capitalism are launched from a Marxian-based perspective. We find, however, that by illustrating the wide gap between the reality of contemporary capitalism and the model of amoral political economy developed by Smith, the father of capitalism proves to be the most trenchant critic of the current order.
John Burgess in a 2004 paper combined plural logic and a new version of the idea of limitation of size to give an elegant motivation of the axioms of ZFC set theory. His proposal is meant to improve on earlier work by Paul Bernays in two ways. I argue that both attempted improvements fail. I am grateful to Philip Welch, two anonymous referees, and especially Ignacio Jané for written comments on earlier versions of this paper, which have led to substantial improvements. Thanks also to the participants in a discussion group at the University of Bristol, where an earlier version was presented.
The essay is framed by conflict between Christianity and Darwinian science over the history of the world and the nature of human personhood. Evolutionary science narrates a long prehuman geological and biological history filled with vast amounts, kinds, and distributions of apparently random brutal and pointless suffering. It also strongly suggests that the first modern humans were morally primitive. This science seems to discredit Christianity's common meta-narrative of the Fall, understood as a story of Paradise Lost. The author contends that this Augustinian story, with its character of Adam as endowed with superhuman gifts and yet so fragile as to fall, is implausible even apart from science. He proposes that Christians consider adopting a Supralapsarian metaphysics of divine purpose supported by the intuitions of Irenaeus, who depicted the first human beings as comparable to innocent, but morally undeveloped, children. In this approach the existence of evils is part of the divine plan to "defeat" them in and through the Incarnation, Atonement, and Resurrection of Christ. Putting an "Irenaean Adam" in place of the "Augustinian" counterpart may not remove conflict with science completely, but it at least reduces it, and leads to a Christian narrative that is more plausible in the light of science.
Both Adam Smith and Herbert Spencer, albeit in quite different ways, have been enormously influential in what we today take to be philosophies of modern capitalism. Surprisingly, it is Spencer, not Smith, who is the individualist, perhaps an egoist, and supports a "night watchman" theory of the state. Smith's concept of political economy is a notion that needs to be revisited, and Spencer's theory of democratic workplace management offers a refreshing twist on contemporary libertarianism.
As J. Baird Callicott has argued, Adam Smith's moral theory is a philosophical ancestor of recent work in environmental ethics. However, Smith's "all important emotion of sympathy" (Callicott, 2001, p. 209) seems incapable of extension to entities that lack emotions with which one can sympathize. Drawing on the distinctive account of sympathy developed in Smith's Theory of Moral Sentiments, as well as his account of anthropomorphizing nature in "History of Astronomy and Physics," I show that sympathy with non-sentient nature is possible within a Smithian ethics. This provides the possibility of extending sympathy, and thereby benevolence and justice, to nature.
D. D. Raphael examines the moral philosophy of Adam Smith (1723-90), best known for his famous work on economics, The Wealth of Nations, and shows that his thought still has much to offer philosophers today. Raphael gives particular attention to Smith's original theory of conscience, with its emphasis on the role of 'sympathy' (shared feelings).
In this paper I revisit Adam Smith’s treatment of Copernicanism and Newtonianism in his essay, “The History of Astronomy” (hereafter: “Astronomy”), in light of a surprisingly ignored context: David Hume. This remark will strike most scholars of Adam Smith as unfounded—David Hume’s philosophy is often invoked as a source of Smith’s approach in the “Astronomy” or as its target. Yet neither Hume’s occasional remarks on Copernicanism nor his treatment of the history of science in the History of England (1754-62, but revised throughout Hume’s life) have been carefully analyzed in light of the “Astronomy.” In the first five sections of this paper I offer a detailed analysis of all of Hume’s remarks on the Copernican system in his oeuvre. I show that David Hume believed that Copernicus achieved a “revolution” in philosophy. Moreover, I argue that Hume increasingly treats Galileo as the hero of the Copernican revolution. In doing so, Hume appears surprisingly blind to the importance of post-Galilean natural philosophy, especially the (dynamical) arguments that Huygens and Newton provided for the rotation of the Earth. In the last section of the paper, I argue that Adam Smith does show appreciation of dynamic views. I show that Smith and the mature Hume agree on the importance of Galileo, even describing his method in strikingly similar language, but that they evaluate the evidence differently in light of two conflicting commitments: i) Hume is committed to the “true philosophy”—a certain kind of scepticism which Smith does not share; ii) Hume never seems to have assimilated the way Newton changed the evidential standards within science.
Adam Smith was a philosopher before he ever wrote about economics, yet until now there has never been a philosophical commentary on the Wealth of Nations. Samuel Fleischacker suggests that Smith's vastly influential treatise on economics can be better understood if placed in the light of his epistemology, philosophy of science, and moral theory. He lays out the relevance of these aspects of Smith's thought to specific themes in the Wealth of Nations, arguing, among other things, that Smith regards social science as an extension of common sense rather than as a discipline to be approached mathematically, that he has moral as well as pragmatic reasons for approving of capitalism, and that he has an unusually strong belief in human equality that leads him to anticipate, if not quite endorse, the modern doctrine of distributive justice. Fleischacker also places Smith's views in relation to the work of his contemporaries, especially his teacher Francis Hutcheson and friend David Hume, and draws out consequences of Smith's thought for present-day political and philosophical debates. The Companion is divided into five general sections, which can be read independently of one another. It contains an index that points to commentary on specific passages in the Wealth of Nations. Written in an approachable style befitting Smith's own clear yet finely honed rhetoric, it is intended for professional philosophers and political economists as well as those coming to Smith for the first time.
When Adam Smith published his celebrated writings on economics and moral philosophy he famously referred to the operation of an invisible hand. Adam Smith's Political Philosophy makes visible the invisible hand by examining its significance in Smith's political philosophy and relating it to similar concepts used by other philosophers, revealing a distinctive approach to social theory that stresses the significance of the unintended consequences of human action. This book introduces greater conceptual clarity to the discussion of the invisible hand and the related concept of unintended order in the work of Smith and in political theory more generally. By examining the application of spontaneous order ideas in the work of Smith, Hume, Hayek and Popper, Adam Smith's Political Philosophy traces similarities in approach and from these builds a conceptual, composite model of an invisible hand argument. While setting out a clear model of the idea of spontaneous order the book also builds the case for using the idea of spontaneous order as an explanatory social theory, with chapters on its application in the fields of science, moral philosophy, law and government.
Frans de Waal’s view that empathy is at the basis of morality seems to build directly on Darwin, who considered sympathy the crucial instinct. Yet when we look closer, their understandings of the central social instinct differ considerably. De Waal sees our deeply ingrained tendency to sympathize (or rather: empathize) with others as the good side of our morally dualistic nature. For Darwin, sympathizing was not the whole story of the workings of sympathy; the (selfish) need to receive sympathy played just as central a role in the complex roads from sympathy to morality. Darwin’s understanding of sympathy stems from Adam Smith, who argued that the presence of morally impure motives should not be a reason for cynicism about morality. I suggest that De Waal’s approach could benefit from a more thorough alignment with the analysis of the workings of sympathy in the work of Darwin and Adam Smith.