This collection of essays, offered in honor of the distinguished career of prominent political philosophy professor Clifford Orwin, brings together internationally renowned scholars to provide a wide context and discuss various aspects of the virtue of “humanity” through the history of political philosophy.
Perspectives on the relationship between psychology and religion have run the gamut from integration to mutual suspicion to open hostility. Despite increasing calls for greater sensitivity to the issues surrounding the psychological study of religion, significant conceptual and methodological problems remain. We propose that the pluralistic philosophy of William James provides not only an example of how a radically empirical psychology might be formulated, but also how such an approach allows for a serious psychological investigation of religion and religious experience. We argue that James offers an important corrective to the reductive approaches all too common in the study of religion and religious experience by allowing for the possibility that theistic understandings may be taken more seriously in psychological research and theorizing.
The work reported in this monograph was begun in the winter of 1967 in a graduate seminar at Berkeley. Many of the basic data were gathered by members of the seminar and the theoretical framework presented here was initially developed in the context of the seminar discussions. Much has been discovered since 1969, the date of original publication, regarding the psychophysical and neurophysical determinants of universal, cross-linguistic constraints on the shape of basic color lexicons, and something, albeit less, can now also be said with some confidence regarding the constraining effects of these language-independent processes of color perception and conceptualization on the direction of evolution of basic color term lexicons.
For much of the past two centuries, religion has been understood as a universal phenomenon, a part of the “natural” human experience that is essentially the same across cultures and throughout history. Individual religions may vary through time and geographically, but there is an element, religion, that is to be found in all cultures during all time periods. Taking apart this assumption, Brent Nongbri shows that the idea of religion as a sphere of life distinct from politics, economics, or science is a recent development in European history—a development that has been projected outward in space and backward in time with the result that religion now appears to be a natural and necessary part of our world. Examining a wide array of ancient writings, Nongbri demonstrates that in antiquity, there was no conceptual arena that could be designated as “religious” as opposed to “secular.” Surveying representative episodes from a two-thousand-year period, while constantly attending to the concrete social, political, and colonial contexts that shaped relevant works of philosophers, legal theorists, missionaries, and others, Nongbri offers a concise and readable account of the emergence of the concept of religion.
In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences affecting individuals as well as groups and whole societies. This paper makes three contributions to clarify the ethical importance of algorithmic mediation. It provides a prescriptive map to organise the debate. It reviews the current discussion of ethical aspects of algorithms. And it assesses the available literature in order to identify areas requiring further work to develop the ethics of algorithms.
The capacity to collect and analyse data is growing exponentially. Referred to as ‘Big Data’, this scientific, social and technological trend has helped create destabilising amounts of information, which can challenge accepted social and ethical norms. Big Data remains a fuzzy idea, emerging across social, scientific, and business contexts sometimes seemingly related only by the gigantic size of the datasets being considered. As is often the case with the cutting edge of scientific and technological progress, understanding of the ethical implications of Big Data lags behind. In order to bridge such a gap, this article systematically and comprehensively analyses academic literature concerning the ethical implications of Big Data, providing a watershed for future ethical investigations and regulations. Particular attention is paid to biomedical Big Data due to the inherent sensitivity of medical information. By means of a meta-analysis of the literature, a thematic narrative is provided to guide ethicists, data scientists, regulators and other stakeholders through what is already known or hypothesised about the ethical risks of this emerging and innovative phenomenon. Five key areas of concern are identified: informed consent, privacy, ownership, epistemology and objectivity, and ‘Big Data Divides’ created between those who have or lack the necessary resources to analyse increasingly large datasets. Critical gaps in the treatment of these themes are identified with suggestions for future research. Six additional areas of concern are then suggested which, although related, have not yet attracted extensive debate in the existing literature. It is argued that they will require much closer scrutiny in the immediate future: the dangers of ignoring group-level ethical harms; the importance of epistemology in assessing the ethics of Big Data; the changing nature of fiduciary relationships that become increasingly data saturated; the need to distinguish between ‘academic’ and ‘commercial’ Big Data practices in terms of potential harm to data subjects; future problems with ownership of intellectual property generated from analysis of aggregated datasets; and the difficulty of providing meaningful access rights to individual data subjects that lack necessary resources. Considered together, these eleven themes provide a thorough critical framework to guide ethical assessment and governance of emerging Big Data practices.
My primary aim is to defend a nonreductive solution to the problem of action. I argue that when you are performing an overt bodily action, you are playing an irreducible causal role in bringing about, sustaining, and controlling the movements of your body, a causal role best understood as an instance of agent causation. Thus, the solution that I defend employs a notion of agent causation, though emphatically not in defence of an account of free will, as most theories of agent causation are. Rather, I argue that the notion of agent causation introduced here best explains how it is that you are making your body move during an action, thereby providing a satisfactory solution to the problem of action.
Recent work on interpretability in machine learning and AI has focused on the building of simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system might break. However, when considering any such model it’s important to remember Box’s maxim that "All models are wrong but some are useful." We focus on the distinction between these models and explanations in philosophy and sociology. These models can be understood as a "do it yourself kit" for explanations, allowing a practitioner to directly answer "what if questions" or generate contrastive explanations without external assistance. Although a valuable ability, giving these models as explanations appears more difficult than necessary, and other forms of explanation may not have the same trade-offs. We contrast the different schools of thought on what makes an explanation, and suggest that machine learning might benefit from viewing the problem more broadly.
Two studies tested the influence of three facets of personality—conscientiousness, agreeableness, and openness to experience—as well as moral identity, on individuals’ ethical ideology. Study 1 showed that moral personality and the centrality of moral identity to the self were associated with a more principled ethical ideology in a sample of female speech therapists. Study 2 replicated these findings in a sample of male and female college students, and showed that ideology mediated the relationship between personality, moral identity, and two organizationally relevant outcomes: organizational citizenship behavior and the propensity to morally disengage. Implications for business ethics are discussed.
This paper proposes a new Separabilist account of thick concepts, called the Expansion View (or EV). According to EV, thick concepts are expanded contents of thin terms. An expanded content is, roughly, the semantic content of a predicate along with modifiers. Although EV is a form of Separabilism, it is distinct from the only kind of Separabilism discussed in the literature, and it has many features that Inseparabilists want from an account of thick concepts. EV can also give non-cognitivists a novel escape from the Anti-Disentangling Argument. §I explains the approach of all previous Separabilists, and argues that there’s no reason for Separabilists to take this approach. §II explains EV. §III fends off objections. And §IV explains how non-cognitivist proponents of EV can escape the Anti-Disentangling Argument.
Ethicists are typically willing to grant that thick terms (e.g. ‘courageous’ and ‘murder’) are somehow associated with evaluations. But they tend to disagree about what exactly this relationship is. Does a thick term’s evaluation come by way of its semantic content? Or is the evaluation pragmatically associated with the thick term (e.g. via conversational implicature)? In this paper, I argue that thick terms are semantically associated with evaluations. In particular, I argue that many thick concepts (if not all) conceptually entail evaluative contents. The Semantic View has a number of outspoken critics, but I shall limit discussion to the most recent--Pekka Väyrynen--who believes that objectionable thick concepts present a problem for the Semantic View. After advancing my positive argument in favor of the Semantic View (section II), I argue that Väyrynen’s attack is unsuccessful (section III). One reason ethicists cite for not focusing on thick concepts is that such concepts are supposedly not semantically evaluative whereas traditional thin concepts (e.g. good and wrong) are. But if my view is correct, then this reason must be rejected.
Income inequality in the US has now reached levels not seen since the 1920s. Management, as a field of scholarly inquiry, has the potential to contribute in significant ways to our understanding of recent inequality trends. We review and assess recent research, both in the management literature and in other fields. We then delineate a conceptual framework that highlights the mechanisms through which business practice may be linked to income inequality. We then outline four general areas in which management scholars are uniquely positioned to contribute to ongoing research: data and description, organizational dynamics, collective action, and value flows and tradeoffs. To stimulate future research, we highlight a number of relevant research questions and link these questions to existing management research streams that could be leveraged to address them.
Since approval of the EU General Data Protection Regulation (GDPR) in 2016, it has been widely and repeatedly claimed that the GDPR will legally mandate a ‘right to explanation’ of all decisions made by automated or artificially intelligent algorithmic systems. This right to explanation is viewed as an ideal mechanism to enhance the accountability and transparency of automated decision-making. However, there are several reasons to doubt both the legal existence and the feasibility of such a right. In contrast to the right to explanation of specific automated decisions claimed elsewhere, the GDPR only mandates that data subjects receive meaningful, but properly limited, information (Articles 13-15) about the logic involved, as well as the significance and the envisaged consequences of automated decision-making systems, what we term a ‘right to be informed’. Further, the ambiguity and limited scope of the ‘right not to be subject to automated decision-making’ contained in Article 22 (from which the alleged ‘right to explanation’ stems) raises questions over the protection actually afforded to data subjects. These problems show that the GDPR lacks precise language as well as explicit and well-defined rights and safeguards against automated decision-making, and therefore runs the risk of being toothless. We propose a number of legislative and policy steps that, if taken, may improve the transparency and accountability of automated decision-making when the GDPR comes into force in 2018.
On the Use and Abuse of Foucault for Politics provides an accessible interpretation of Foucault's political philosophy, demonstrating how Foucault is relevant for contemporary democratic theory. Brent Pickett lays out an overview of Foucault's politics, including a comprehensive account of the reasons for the various conflicting interpretations, and then explores how well the different "Foucaults" can be used in progressive politics and democratic theory.
Edwin Hartman argues that ethical principles should not derive from abstract theory, but from the real world of experience in organizations. He explains how ethical principles derive from what workers learn in their communities (firms), and that an ethical firm is one that creates the good life for the workers who contribute to its mission. His approach draws on the Aristotelian tradition of refined common sense, on recent work on collective action problems in organizations, and on social contract theory.
The concept of individuality as applied to species, an important advance in the philosophy of evolutionary biology, is nevertheless in need of refinement. Four important subparts of this concept must be recognized: spatial boundaries, temporal boundaries, integration, and cohesion. Not all species necessarily meet all of these. Two very different types of pluralism have been advocated with respect to species, only one of which is satisfactory. An often unrecognized distinction between grouping and ranking components of any species concept is necessary. A phylogenetic species concept is advocated that uses a grouping criterion of monophyly in a cladistic sense, and a ranking criterion based on those causal processes that are most important in producing and maintaining lineages in a particular case. Such causal processes can include actual interbreeding, selective constraints, and developmental canalization. The widespread use of the biological species concept is flawed for two reasons: because of a failure to distinguish grouping from ranking criteria and because of an unwarranted emphasis on the importance of interbreeding as a universal causal factor controlling evolutionary diversification. The potential to interbreed is not in itself a process; it is instead a result of a diversity of processes which result in shared selective environments and common developmental programs. These types of processes act in both sexual and asexual organisms; thus the phylogenetic species concept can reflect an underlying unity that the biological species concept cannot.
Mature information societies are characterised by mass production of data that provide insight into human behaviour. Analytics has arisen as a practice to make sense of the data trails generated through interactions with networked devices, platforms and organisations. Persistent knowledge describing the behaviours and characteristics of people can be constructed over time, linking individuals into groups or classes of interest to the platform. Analytics allows for a new type of algorithmically assembled group to be formed that does not necessarily align with classes or attributes already protected by privacy and anti-discrimination law or addressed in fairness- and discrimination-aware analytics. Individuals are linked according to offline identifiers and shared behavioural identity tokens, allowing for predictions and decisions to be taken at a group rather than individual level. This article examines the ethical significance of such ad hoc groups in analytics and argues that the privacy interests of algorithmically assembled groups in inviolate personality must be recognised alongside individual privacy rights. Algorithmically grouped individuals have a collective interest in the creation of information about the group, and actions taken on its behalf. Group privacy is proposed as a third interest to balance alongside individual privacy and social, commercial and epistemic benefits when assessing the ethical acceptability of analytics platforms.
Charles Sanders Peirce was born in September 1839 and died five months before the guns of August 1914. He is perhaps the most important mind the United States has ever produced. He made significant contributions throughout his life as a mathematician, astronomer, chemist, geodesist, surveyor, cartographer, metrologist, engineer, and inventor. He was a psychologist, a philologist, a lexicographer, a historian of science, a lifelong student of medicine, and, above all, a philosopher, whose special fields were logic and semiotics. He is widely credited with being the founder of pragmatism. In terms of his importance as a philosopher and a scientist, he has been compared to Plato and Aristotle. He himself intended "to make a philosophy like that of Aristotle." Peirce was also a tormented and in many ways tragic figure. He suffered throughout his life from various ailments, including a painful facial neuralgia, and had wide swings of mood which frequently left him depressed to the state of inertia, and at other times found him explosively violent. Despite his consistent belief that ideas could find meaning only if they "worked" in the world, he himself found it almost impossible to make satisfactory economic and social arrangements for himself. This brilliant scientist, this great philosopher, this astounding polymath was never able, throughout his long life, to find an academic post that would allow him to pursue his major interest, the study of logic, and thus also fulfill his destiny as America's greatest philosopher. Much of his work remained unpublished in his own time, and is only now finding publication in a coherent, chronologically organized edition. Even more astounding is that, despite many monographic studies, there has been no biography until now, almost eighty years after his death. Brent has studied the Peirce papers in detail and enriches his account with numerous quotations from letters by Peirce and by his friends. This is a fascinating account of a p.
A formal theory of quantity T_Q is presented which is realist, Platonist, and syntactically second-order (while logically elementary), in contrast with the existing formal theories of quantity developed within the theory of measurement, which are empiricist, nominalist, and syntactically first-order (while logically non-elementary). T_Q is shown to be formally and empirically adequate as a theory of quantity, and is argued to be scientifically superior to the existing first-order theories of quantity in that it does not depend upon empirically unsupported assumptions concerning existence of physical objects (e.g. that any two actual objects have an actual sum). The theory T_Q supports and illustrates a form of naturalistic Platonism, for which claims concerning the existence and properties of universals form part of natural science, and the distinction between accidental generalizations and laws of nature has a basis in the second-order structure of the world.
The internet of things is increasingly spreading into the domain of medical and social care. Internet-enabled devices for monitoring and managing the health and well-being of users outside of traditional medical institutions have rapidly become common tools to support healthcare. Health-related internet of things (H-IoT) technologies increasingly play a key role in health management, for purposes including disease prevention, real-time tele-monitoring of patient’s functions, testing of treatments, fitness and well-being monitoring, medication dispensation, and health research data collection. H-IoT promises many benefits for health and healthcare. However, it also raises a host of ethical problems stemming from the inherent risks of Internet enabled devices, the sensitivity of health-related data, and their impact on the delivery of healthcare. This paper maps the main ethical problems that have been identified by the relevant literature and identifies key themes in the on-going debate on ethical problems concerning H-IoT.
The underlying structures that are common to the world's languages bear an intriguing connection with early emerging forms of “core knowledge”, which are frequently studied by infant researchers. In particular, grammatical systems often incorporate distinctions that reflect those made in core knowledge. Here, I argue that this connection occurs because non-verbal core knowledge systematically biases processes of language evolution. This account potentially explains a wide range of cross-linguistic grammatical phenomena that currently lack an adequate explanation. Second, I suggest that developmental researchers and cognitive scientists interested in knowledge representation can exploit this connection to language by using observations about cross-linguistic grammatical tendencies to inspire hypotheses about core knowledge.
A term expresses a thick concept if it expresses a specific evaluative concept that is also substantially descriptive. It is a matter of debate how this rough account should be unpacked, but examples can help to convey the basic idea. Thick concepts are often illustrated with virtue concepts like courageous and generous, action concepts like murder and betray, epistemic concepts like dogmatic and wise, and aesthetic concepts like gaudy and brilliant. These concepts seem to be evaluative, unlike purely descriptive concepts such as red and water. But they also seem different from general evaluative concepts. In particular, thick concepts are typically contrasted with thin concepts like good, wrong, permissible, and ought, which are general evaluative concepts that do not seem substantially descriptive. When Jane says that Max is good, she appears to be evaluating him without providing much description, if any. Thick concepts, on the other hand, are evaluative and substantially descriptive at the same time. For instance, when Max says that Jane is courageous, he seems to be doing two things: evaluating her positively and describing her as willing to face risk. Because of their descriptiveness, thick concepts are especially good candidates for evaluative concepts that pick out properties in the world. Thus they provide an avenue for thinking about ethical claims as being about the world in the same way as descriptive claims.
Thick concepts became a focal point in ethics during the second half of the twentieth century. At that time, discussions of thick concepts began to emerge in response to certain disagreements about thin concepts. For example, in twentieth-century ethics, consequentialists and deontologists hotly debated various accounts of good and right. It was also claimed by non-cognitivists and error-theorists that these thin concepts do not correspond to any properties in the world. Dissatisfaction with these viewpoints prompted many ethicists to consider the implications of thick concepts. The notion of a thick concept was thought to provide insight into meta-ethical questions such as whether there is a fact-value distinction, whether there are ethical truths, and, if there are such truths, whether these truths are objective. Some ethicists also theorized about the role that thick concepts can play in normative ethics, such as in virtue theory. By the beginning of the twenty-first century, the interest in thick concepts had spread to other philosophical disciplines such as epistemology, aesthetics, metaphysics, moral psychology, and the philosophy of law.
Nevertheless, the emerging interest in thick concepts has sparked debates over many questions: How exactly are thick concepts evaluative? How do they combine evaluation and description? How are thick concepts related to thin concepts? And do thick concepts have the sort of significance commonly attributed to them? This article surveys various attempts at answering these questions.
Many statistical problems, including some of the most important for physical applications, have long been regarded as underdetermined from the standpoint of a strict frequency definition of probability; yet they may appear well-posed or even overdetermined by the principles of maximum entropy and transformation groups. Furthermore, the distributions found by these methods turn out to have a definite frequency correspondence; the distribution obtained by invariance under a transformation group is by far the most likely to be observed experimentally, in the sense that it requires by far the least “skill.” These properties are illustrated by analyzing the famous Bertrand paradox. On the viewpoint advocated here, Bertrand's problem turns out to be well-posed after all, and the unique solution has been verified experimentally. We conclude that probability theory has a wider range of useful applications than would be supposed from the standpoint of the usual frequency definitions.
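The Bertrand paradox mentioned in this abstract is easy to make concrete. Below is a minimal Monte Carlo sketch (not from the paper itself) of the three classical ways of drawing a "random chord" of a unit circle, each of which yields a different probability that the chord is longer than the side of the inscribed equilateral triangle; the transformation-group (invariance) argument the abstract alludes to singles out the "random radius" answer of 1/2.

```python
import math
import random

random.seed(0)
N = 100_000
R = 1.0
side = math.sqrt(3) * R  # side of the inscribed equilateral triangle

def endpoints():
    # Method 1: chord between two uniform points on the circle
    a = random.uniform(0, 2 * math.pi)
    b = random.uniform(0, 2 * math.pi)
    return 2 * R * abs(math.sin((a - b) / 2))

def radial():
    # Method 2: chord midpoint uniform along a random radius
    d = random.uniform(0, R)
    return 2 * math.sqrt(R * R - d * d)

def midpoint():
    # Method 3: chord midpoint uniform over the disk
    d = R * math.sqrt(random.random())  # distance of a uniform disk point from centre
    return 2 * math.sqrt(R * R - d * d)

for name, draw, theory in [("endpoints", endpoints, 1 / 3),
                           ("radial", radial, 1 / 2),
                           ("midpoint", midpoint, 1 / 4)]:
    p = sum(draw() > side for _ in range(N)) / N
    print(f"{name}: estimate {p:.3f}, theory {theory:.3f}")
```

The three estimates converge to 1/3, 1/2, and 1/4 respectively, which is the paradox: "choose a chord at random" underdetermines the answer until the sampling procedure is fixed. The invariance argument picks out the second distribution as the only one consistent with rotational, scale, and translation invariance of the problem statement.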
It has long been known that scientists have a tendency to conduct experiments in a way that brings about the expected outcome. Here, we provide the first direct demonstration of this type of experimenter bias in experimental philosophy. In contrast to previously discovered types of experimenter bias mediated by face-to-face interactions between experimenters and participants, here we show that experimenters also have a tendency to create stimuli in a way that brings about expected outcomes. We randomly assigned undergraduate experimenters to receive two different hypotheses about folk intuitions of consciousness, and then asked them to design experiments based on their hypothesis. Specifically, experimenters generated sentences ascribing intentional and phenomenal mental states to groups, which were later rated by online participants for naturalness. We found a significant interaction between experimenter hypothesis and participant ratings indicating a general tendency for experimenters to obtain the result that they expected. These results indicate that experimenter bias is a real problem in experimental philosophy since the methods and design employed here mirror the predominant survey methods of the field as a whole. The bearing of the current results on Knobe and Prinz’s (2008: 67–83) group mind hypothesis is discussed, and new methods for avoiding experimenter bias are proposed.
I argue that the Standard View of ignorance is at odds with the claim that knowledge entails truth. In particular, if knowledge entails truth then we cannot explain away some apparent absurdities that arise from the Standard View of ignorance. I then discuss a modified version of the Standard View, which simply adds a truth requirement to the original Standard View. I show that the two main arguments for the original Standard View fail to support this modified view.
Personal Health Monitoring (PHM) uses electronic devices which monitor and record health-related data outside a hospital, usually within the home. This paper examines the ethical issues raised by PHM. Eight themes describing the ethical implications of PHM are identified through a review of 68 academic articles concerning PHM. The identified themes include privacy, autonomy, obtrusiveness and visibility, stigma and identity, medicalisation, social isolation, delivery of care, and safety and technological need. The issues around each of these are discussed. The system/lifeworld perspective of Habermas is applied to develop an understanding of the role of PHMs as mediators of communication between the institutional and the domestic environment. Furthermore, links are established between the ethical issues to demonstrate that the ethics of PHM involves a complex network of ethical interactions. The paper extends the discussion of the critical effect PHMs have on the patient’s identity and concludes that a holistic understanding of the ethical issues surrounding PHMs will help both researchers and practitioners in developing effective PHM implementations.
Inventions of Teaching: A Genealogy is a powerful examination of current metaphors for and synonyms of teaching. It offers an account of the varied and conflicting influences and conceptual commitments that have contributed to contemporary vocabularies--and that are in some ways maintained by those vocabularies, in spite of inconsistencies and incompatibilities among popular terms. The concern that frames the book is how speakers of English invented (in the original sense of the word, "came upon") our current vocabularies for teaching. Conceptually, this book is unique in the educational literature. As a whole, it presents an overview of the major underlying philosophical and ideological concepts and traditions related to knowledge, learning, and teaching in the Western world, concisely introducing readers to the central historical and contemporary discourses that shape current discussions and beliefs in the field. Because the organization of historical, philosophical, theoretical, and etymological information is around key conceptual divergences in Western thought rather than any sort of chronology, this text is not a linear history, but several histories--or, more precisely, it is a genealogy. Specifically, it is developed around breaks in opinion that gave or are giving rise to diverse interpretations of knowledge, learning, and teaching--highlighting historical moments in which vibrant new figurative understandings of teaching emerged and moments at which they froze into literalness. The book is composed of two sorts of chapters, "branching" and "teaching." Branching chapters include an opening treatment of the break in opinion, separate discussions of each branch, and a summary of the common assumptions and shared histories of the two branches. Teaching chapters offer brief etymological histories and some of the practical implications of the terms for teaching that were coined, co-opted, or redefined within the various traditions. Inventions of Teaching: A Genealogy is an essential text for senior undergraduate and graduate courses in curriculum studies and foundations of teaching and is highly relevant as well for students, faculty, and researchers across the field of education.
This paper responds to critics of Reinhold Niebuhr by exploring two themes important for locating his views on cruelty’s emergence in modern society. The first relates to his basic insight into the relationship between individual morality and group loyalty and solidarity. Niebuhr provides a sophisticated argument for such group dynamics in his work, notably in Moral Man and Immoral Society, as well as in his essays on race. These also form the basis for his second thematic argument regarding cruelty: the role of ‘righteousness’ as it relates to security and insecurity. Niebuhr’s views on race, I argue, need to be considered more broadly as an example of his views on groups, power, and cruelty. The paper concludes with some modest proposals for thinking about combatting cruelty via Niebuhr’s counsel.
Examining intrapersonal factors theorized to influence ethics-reporting decisions, this study investigates self-efficacy as a predictor of propensity for internal whistleblowing within a US and Canadian multi-regional context. Over 900 professionals from a total of nine regions in Canada and the US participated. Self-efficacy was found to influence participants' reported propensity for internal whistleblowing consistently in both the US and Canada. Seasoned participants with greater management and work experience demonstrated higher levels of self-efficacy, and gender was also found to influence self-efficacy. These individual traits, although related to self-efficacy, did not directly relate to propensities for internal whistleblowing. The findings demonstrate that self-efficacy could represent an important individual trait for examining whistleblowing issues. Internal whistleblowing is becoming an important organizational consideration in many areas of North America, yet there is relatively little research on the topic. Organizations seeking effective internal reporting systems should consider the influence of self-efficacy along with its potential reporting influence. By empirically testing an under-examined component of theory related to internal whistleblowing, this effort contributes to the management literature, extends the knowledge beyond a US context, and provides recommendations for managing individual bias in internal reporting systems.
This book is the fruit of twenty-five years of study of Spinoza by the editor and translator of a new and widely acclaimed edition of Spinoza's collected works. Based on three lectures delivered at the Hebrew University of Jerusalem in 1984, the work provides a useful focal point for continued discussion of the relationship between Descartes and Spinoza, while also serving as a readable and relatively brief but substantial introduction to the Ethics for students. Behind the Geometrical Method is actually two books in one. The first is Edwin Curley's text, which explains Spinoza's masterwork to readers who have little background in philosophy. This text will prove a boon to those who have tried to read the Ethics, but have been baffled by the geometrical style in which it is written. Here Professor Curley undertakes to show how the central claims of the Ethics arose out of critical reflection on the philosophies of Spinoza's two great predecessors, Descartes and Hobbes. The second book, whose argument is conducted in the notes to the text, attempts to support further the often controversial interpretations offered in the text and to carry on a dialogue with recent commentators on Spinoza. The author aligns himself with those who interpret Spinoza naturalistically and materialistically.
The numerical representations of measurement, geometry and kinematics are here subsumed under a general theory of representation. The standard theories of meaningfulness of representational propositions in these three areas are shown to be special cases of two theories of meaningfulness for arbitrary representational propositions: the theories based on unstructured and on structured representation respectively. The foundations of the standard theories of meaningfulness are critically analyzed and two basic assumptions are isolated which do not seem to have received adequate justification: the assumption that a proposition invariant under the appropriate group is therefore meaningful, and the assumption that representations should be unique up to a transformation of the appropriate group. A general theory of representational meaningfulness is offered, based on a semantic and syntactic analysis of representational propositions. Two neglected features of representational propositions are formalized and made use of: (a) that such propositions are induced by more general propositions defined for other structures than the one being represented, and (b) that the true purpose of representation is the application of the theory of the representing system to the represented system. On the basis of these developments, justifications are offered for the two problematic assumptions made by the existing theories.
Skeptical puzzles and arguments often employ knowledge-closure principles. Epistemologists widely believe that an adequate reply to the skeptic should explain why her reasoning is appealing albeit misleading; but it’s unclear what would explain the appeal of the skeptic’s closure principle, if not its truth. In this paper, I aim to challenge the widespread commitment to knowledge-closure. But I proceed by first examining a new puzzle about failing to know—what I call the New Ignorance Puzzle. This puzzle resembles the Old Ignorance Puzzle, although it does not involve a closure principle. It instead centers on the standard view of ignorance, a highly intuitive principle stating that ignorance is merely a failure to know. In Sects. 2 and 3, I argue that the best way to solve the New Ignorance Puzzle is to reject the standard view of ignorance and to explain away its appeal via conversational implicature. I then use this solution to the New Ignorance Puzzle as a way of explaining why knowledge-closure principles would seem true, and why abominable conjunctions would seem abominable, even if such principles were false. The upshot is a new way of explaining why the skeptic’s reasoning is appealing albeit misleading.
There is currently much concern over the use of pharmaceuticals and other biomedical techniques to enhance athletic performance—a practice we might refer to as doping. Many justifications of anti-doping efforts claim that doping involves a serious moral transgression. In this article, I review a number of arguments in support of that claim, but show that they are not conclusive, suggesting that we do not have good reasons for thinking that doping is wrong.
The conjunction of wireless computing, ubiquitous Internet access, and the miniaturisation of sensors has opened the door for technological applications that can monitor health and well-being outside of formal healthcare systems. The health-related Internet of Things (H-IoT) increasingly plays a key role in health management by providing real-time tele-monitoring of patients, testing of treatments, actuation of medical devices, and fitness and well-being monitoring. Given its numerous applications and proposed benefits, adoption by medical and social care institutions and consumers may be rapid. However, it also raises a host of ethical concerns that must be addressed. The inherent sensitivity of health-related data being generated and the latent risks of Internet-enabled devices pose serious challenges. Users, already in a vulnerable position as patients, face a seemingly impossible task to retain control over their data due to the scale, scope and complexity of systems that create, aggregate, and analyse personal health data. In response, the H-IoT must be designed to be technologically robust and scientifically reliable, while also remaining ethically responsible, trustworthy, and respectful of user rights and interests. To assist developers of the H-IoT, this paper describes nine principles and nine guidelines for ethical design of H-IoT devices and data protocols.
Could a Nazi soldier or terrorist be courageous? The Courage Problem asks us to answer this sort of question, and then to explain why people are reluctant to give this answer. The present paper sheds new light on the Courage Problem by examining a controversy sparked by Bill Maher, who claimed that the 9/11 terrorists’ acts were ‘not cowardly.’ It is shown that Maher's controversy is fundamentally related to the Courage Problem. Then, a unified solution to both problems is provided. This solution entails that gutsy people who lack good ends are not courageous.