The work reported in this monograph was begun in the winter of 1967 in a graduate seminar at Berkeley. Many of the basic data were gathered by members of the seminar, and the theoretical framework presented here was initially developed in the context of the seminar discussions. Much has been discovered since 1969, the date of original publication, regarding the psychophysical and neurophysiological determinants of universal, cross-linguistic constraints on the shape of basic color lexicons, and something, albeit less, can now also be said with some confidence regarding the constraining effects of these language-independent processes of color perception and conceptualization on the direction of evolution of basic color term lexicons.
In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences affecting individuals as well as groups and whole societies. This paper makes three contributions to clarify the ethical importance of algorithmic mediation. It provides a prescriptive map to organise the debate. It reviews the current discussion of ethical aspects of algorithms. And it assesses the available literature in order to identify areas requiring further work to develop the ethics of algorithms.
For much of the past two centuries, religion has been understood as a universal phenomenon, a part of the “natural” human experience that is essentially the same across cultures and throughout history. Individual religions may vary through time and geographically, but there is an element, religion, that is to be found in all cultures during all time periods. Taking apart this assumption, Brent Nongbri shows that the idea of religion as a sphere of life distinct from politics, economics, or science is a recent development in European history—a development that has been projected outward in space and backward in time with the result that religion now appears to be a natural and necessary part of our world. Examining a wide array of ancient writings, Nongbri demonstrates that in antiquity, there was no conceptual arena that could be designated as “religious” as opposed to “secular.” Surveying representative episodes from a two-thousand-year period, while constantly attending to the concrete social, political, and colonial contexts that shaped relevant works of philosophers, legal theorists, missionaries, and others, Nongbri offers a concise and readable account of the emergence of the concept of religion.
The capacity to collect and analyse data is growing exponentially. Referred to as ‘Big Data’, this scientific, social and technological trend has helped create destabilising amounts of information, which can challenge accepted social and ethical norms. Big Data remains a fuzzy idea, emerging across social, scientific, and business contexts sometimes seemingly related only by the gigantic size of the datasets being considered. As is often the case with the cutting edge of scientific and technological progress, understanding of the ethical implications of Big Data lags behind. In order to bridge such a gap, this article systematically and comprehensively analyses academic literature concerning the ethical implications of Big Data, providing a watershed for future ethical investigations and regulations. Particular attention is paid to biomedical Big Data due to the inherent sensitivity of medical information. By means of a meta-analysis of the literature, a thematic narrative is provided to guide ethicists, data scientists, regulators and other stakeholders through what is already known or hypothesised about the ethical risks of this emerging and innovative phenomenon. Five key areas of concern are identified: informed consent, privacy, ownership, epistemology and objectivity, and ‘Big Data Divides’ created between those who have or lack the necessary resources to analyse increasingly large datasets. Critical gaps in the treatment of these themes are identified with suggestions for future research. Six additional areas of concern are then suggested which, although related, have not yet attracted extensive debate in the existing literature.
It is argued that they will require much closer scrutiny in the immediate future: the dangers of ignoring group-level ethical harms; the importance of epistemology in assessing the ethics of Big Data; the changing nature of fiduciary relationships that become increasingly data saturated; the need to distinguish between ‘academic’ and ‘commercial’ Big Data practices in terms of potential harm to data subjects; future problems with ownership of intellectual property generated from analysis of aggregated datasets; and the difficulty of providing meaningful access rights to individual data subjects that lack necessary resources. Considered together, these eleven themes provide a thorough critical framework to guide ethical assessment and governance of emerging Big Data practices.
Lewis has missed an excellent opportunity to concisely demonstrate that a dynamical system can provide a bridge between emotion theory and neurobiology.
Recent work on interpretability in machine learning and AI has focused on the building of simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system might break. However, when considering any such model, it’s important to remember Box’s maxim that "All models are wrong but some are useful." We focus on the distinction between these models and explanations in philosophy and sociology. These models can be understood as a "do it yourself kit" for explanations, allowing a practitioner to directly answer "what if questions" or generate contrastive explanations without external assistance. Although a valuable ability, giving these models as explanations appears more difficult than necessary, and other forms of explanation may not have the same trade-offs. We contrast the different schools of thought on what makes an explanation, and suggest that machine learning might benefit from viewing the problem more broadly.
Three studies provided evidence that syntax influences intentionality judgments. In Experiment 1, participants made either speeded or unspeeded intentionality judgments about ambiguously intentional subjects or objects. Participants were more likely to judge grammatical subjects as acting intentionally in the speeded relative to the reflective condition (thus showing an intentionality bias), but grammatical objects revealed the opposite pattern of results (thus showing an unintentionality bias). In Experiment 2, participants made an intentionality judgment about one of the two actors in a partially symmetric sentence (e.g., “John exchanged products with Susan”). The results revealed a tendency to treat the grammatical subject as acting more intentionally than the grammatical object. In Experiment 3, participants were encouraged to think about the events that such sentences typically refer to, and the tendency was significantly reduced. These results suggest a privileged relationship between language and central theory-of-mind concepts. More specifically, there may be two ways of determining intentionality judgments: (1) an automatic verbal bias to treat grammatical subjects (but not objects) as intentional, and (2) a deeper, more careful consideration of the events typically described by a sentence.
Charles Sanders Peirce was born in September 1839 and died five months before the guns of August 1914. He is perhaps the most important mind the United States has ever produced. He made significant contributions throughout his life as a mathematician, astronomer, chemist, geodesist, surveyor, cartographer, metrologist, engineer, and inventor. He was a psychologist, a philologist, a lexicographer, a historian of science, a lifelong student of medicine, and, above all, a philosopher, whose special fields were logic and semiotics. He is widely credited with being the founder of pragmatism. In terms of his importance as a philosopher and a scientist, he has been compared to Plato and Aristotle. He himself intended "to make a philosophy like that of Aristotle." Peirce was also a tormented and in many ways tragic figure. He suffered throughout his life from various ailments, including a painful facial neuralgia, and had wide swings of mood which frequently left him depressed to the state of inertia, and other times found him explosively violent. Despite his consistent belief that ideas could find meaning only if they "worked" in the world, he himself found it almost impossible to make satisfactory economic and social arrangements for himself. This brilliant scientist, this great philosopher, this astounding polymath was never able, throughout his long life, to find an academic post that would allow him to pursue his major interest, the study of logic, and thus also fulfill his destiny as America's greatest philosopher. Much of his work remained unpublished in his own time, and is only now finding publication in a coherent, chronologically organized edition. Even more astounding is that, despite many monographic studies, there has been no biography until now, almost eighty years after his death. Brent has studied the Peirce papers in detail and enriches his account with numerous quotations from letters by Peirce and by his friends.
This is a fascinating account of a p.
We study large cardinal properties associated with Ramseyness in which homogeneous sets are demanded to satisfy various transfinite degrees of indescribability. Sharpe and Welch [25], and independently Bagaria [1], extended the notion of $\Pi ^1_n$ -indescribability where $n<\omega $ to that of $\Pi ^1_\xi $ -indescribability where $\xi \geq \omega $. By iterating Feng’s Ramsey operator [12] on the various $\Pi ^1_\xi $ -indescribability ideals, we obtain new large cardinal hierarchies and corresponding nonlinear increasing hierarchies of normal ideals. We provide a complete account of the containment relationships between the resulting ideals and show that the corresponding large cardinal properties yield a strict linear refinement of Feng’s original Ramsey hierarchy. We isolate Ramsey properties which provide strictly increasing hierarchies between Feng’s $\Pi _\alpha $ -Ramsey and $\Pi _{\alpha +1}$ -Ramsey cardinals for all odd $\alpha <\omega $ and for all $\omega \leq \alpha <\kappa $. We also show that, given any ordinals $\beta _0,\beta _1<\kappa $, the increasing chains of ideals obtained by iterating the Ramsey operator on the $\Pi ^1_{\beta _0}$ -indescribability ideal and the $\Pi ^1_{\beta _1}$ -indescribability ideal respectively, are eventually equal; moreover, we identify the least degree of Ramseyness at which this equality occurs. As an application of our results we show that one can characterize our new large cardinal notions and the corresponding ideals in terms of generic elementary embeddings; as a special case this yields generic embedding characterizations of $\Pi ^1_\xi $ -indescribability and Ramseyness.
The concept of individuality as applied to species, an important advance in the philosophy of evolutionary biology, is nevertheless in need of refinement. Four important subparts of this concept must be recognized: spatial boundaries, temporal boundaries, integration, and cohesion. Not all species necessarily meet all of these. Two very different types of pluralism have been advocated with respect to species, only one of which is satisfactory. An often unrecognized distinction between grouping and ranking components of any species concept is necessary. A phylogenetic species concept is advocated that uses a grouping criterion of monophyly in a cladistic sense, and a ranking criterion based on those causal processes that are most important in producing and maintaining lineages in a particular case. Such causal processes can include actual interbreeding, selective constraints, and developmental canalization. The widespread use of the biological species concept is flawed for two reasons: because of a failure to distinguish grouping from ranking criteria and because of an unwarranted emphasis on the importance of interbreeding as a universal causal factor controlling evolutionary diversification. The potential to interbreed is not in itself a process; it is instead a result of a diversity of processes which result in shared selective environments and common developmental programs. These types of processes act in both sexual and asexual organisms; thus the phylogenetic species concept can reflect an underlying unity that the biological species concept cannot.
Ethicists are typically willing to grant that thick terms (e.g. ‘courageous’ and ‘murder’) are somehow associated with evaluations. But they tend to disagree about what exactly this relationship is. Does a thick term’s evaluation come by way of its semantic content? Or is the evaluation pragmatically associated with the thick term (e.g. via conversational implicature)? In this paper, I argue that thick terms are semantically associated with evaluations. In particular, I argue that many thick concepts (if not all) conceptually entail evaluative contents. The Semantic View has a number of outspoken critics, but I shall limit discussion to the most recent--Pekka Väyrynen--who believes that objectionable thick concepts present a problem for the Semantic View. After advancing my positive argument in favor of the Semantic View (section II), I argue that Väyrynen’s attack is unsuccessful (section III). One reason ethicists cite for not focusing on thick concepts is that such concepts are supposedly not semantically evaluative whereas traditional thin concepts (e.g. good and wrong) are. But if my view is correct, then this reason must be rejected.
My primary aim is to defend a nonreductive solution to the problem of action. I argue that when you are performing an overt bodily action, you are playing an irreducible causal role in bringing about, sustaining, and controlling the movements of your body, a causal role best understood as an instance of agent causation. Thus, the solution that I defend employs a notion of agent causation, though emphatically not in defence of an account of free will, as most theories of agent causation are. Rather, I argue that the notion of agent causation introduced here best explains how it is that you are making your body move during an action, thereby providing a satisfactory solution to the problem of action.
A formal theory of quantity TQ is presented which is realist, Platonist, and syntactically second-order (while logically elementary), in contrast with the existing formal theories of quantity developed within the theory of measurement, which are empiricist, nominalist, and syntactically first-order (while logically non-elementary). TQ is shown to be formally and empirically adequate as a theory of quantity, and is argued to be scientifically superior to the existing first-order theories of quantity in that it does not depend upon empirically unsupported assumptions concerning existence of physical objects (e.g. that any two actual objects have an actual sum). The theory TQ supports and illustrates a form of naturalistic Platonism, for which claims concerning the existence and properties of universals form part of natural science, and the distinction between accidental generalizations and laws of nature has a basis in the second-order structure of the world.
Mature information societies are characterised by mass production of data that provide insight into human behaviour. Analytics has arisen as a practice to make sense of the data trails generated through interactions with networked devices, platforms and organisations. Persistent knowledge describing the behaviours and characteristics of people can be constructed over time, linking individuals into groups or classes of interest to the platform. Analytics allows for a new type of algorithmically assembled group to be formed that does not necessarily align with classes or attributes already protected by privacy and anti-discrimination law or addressed in fairness- and discrimination-aware analytics. Individuals are linked according to offline identifiers and shared behavioural identity tokens, allowing for predictions and decisions to be taken at a group rather than individual level. This article examines the ethical significance of such ad hoc groups in analytics and argues that the privacy interests of algorithmically assembled groups in inviolate personality must be recognised alongside individual privacy rights. Algorithmically grouped individuals have a collective interest in the creation of information about the group, and actions taken on its behalf. Group privacy is proposed as a third interest to balance alongside individual privacy and social, commercial and epistemic benefits when assessing the ethical acceptability of analytics platforms.
The internet of things is increasingly spreading into the domain of medical and social care. Internet-enabled devices for monitoring and managing the health and well-being of users outside of traditional medical institutions have rapidly become common tools to support healthcare. Health-related internet of things (H-IoT) technologies increasingly play a key role in health management, for purposes including disease prevention, real-time tele-monitoring of patients’ functions, testing of treatments, fitness and well-being monitoring, medication dispensation, and health research data collection. H-IoT promises many benefits for health and healthcare. However, it also raises a host of ethical problems stemming from the inherent risks of Internet-enabled devices, the sensitivity of health-related data, and their impact on the delivery of healthcare. This paper maps the main ethical problems that have been identified by the relevant literature and identifies key themes in the on-going debate on ethical problems concerning H-IoT.
Personal Health Monitoring (PHM) uses electronic devices which monitor and record health-related data outside a hospital, usually within the home. This paper examines the ethical issues raised by PHM. Eight themes describing the ethical implications of PHM are identified through a review of 68 academic articles concerning PHM. The identified themes include privacy, autonomy, obtrusiveness and visibility, stigma and identity, medicalisation, social isolation, delivery of care, and safety and technological need. The issues around each of these are discussed. The system/lifeworld perspective of Habermas is applied to develop an understanding of the role of PHMs as mediators of communication between the institutional and the domestic environment. Furthermore, links are established between the ethical issues to demonstrate that the ethics of PHM involves a complex network of ethical interactions. The paper extends the discussion of the critical effect PHMs have on the patient’s identity and concludes that a holistic understanding of the ethical issues surrounding PHMs will help both researchers and practitioners in developing effective PHM implementations.
This paper proposes a new Separabilist account of thick concepts, called the Expansion View (or EV). According to EV, thick concepts are expanded contents of thin terms. An expanded content is, roughly, the semantic content of a predicate along with modifiers. Although EV is a form of Separabilism, it is distinct from the only kind of Separabilism discussed in the literature, and it has many features that Inseparabilists want from an account of thick concepts. EV can also give non-cognitivists a novel escape from the Anti-Disentangling Argument. §I explains the approach of all previous Separabilists, and argues that there’s no reason for Separabilists to take this approach. §II explains EV. §III fends off objections. And §IV explains how non-cognitivist proponents of EV can escape the Anti-Disentangling Argument.
The present study examines whether deictic time and valence are mentally associated, with a link between future and positive valence and a link between past and negative valence. We employed a novel paradigm, the two-choice-sentence-completion paradigm, to address this issue. Participants were presented with an initial sentence fragment that referred to an event that was either located in time or of different valence. Participants chose between two completion phrases. When the given dimension in the initial fragment was time, the two completion phrase alternatives differed in valence. However, when the given dimension in the initial fragment was valence, the two completion phrase alternatives differed in time. As expected, participants chose completion phrases consistent with the proposed association between time and valence. Additional analyses involving individual differences concerning optimism/pessimism revealed that this association is particularly pronounced for people with an optimistic attitude.
The numerical representations of measurement, geometry and kinematics are here subsumed under a general theory of representation. The standard theories of meaningfulness of representational propositions in these three areas are shown to be special cases of two theories of meaningfulness for arbitrary representational propositions: the theories based on unstructured and on structured representation respectively. The foundations of the standard theories of meaningfulness are critically analyzed and two basic assumptions are isolated which do not seem to have received adequate justification: the assumption that a proposition invariant under the appropriate group is therefore meaningful, and the assumption that representations should be unique up to a transformation of the appropriate group. A general theory of representational meaningfulness is offered, based on a semantic and syntactic analysis of representational propositions. Two neglected features of representational propositions are formalized and made use of: (a) that such propositions are induced by more general propositions defined for other structures than the one being represented, and (b) that the true purpose of representation is the application of the theory of the representing system to the represented system. On the basis of these developments, justifications are offered for the two problematic assumptions made by the existing theories.
It has long been known that scientists have a tendency to conduct experiments in a way that brings about the expected outcome. Here, we provide the first direct demonstration of this type of experimenter bias in experimental philosophy. In contrast to previously discovered types of experimenter bias mediated by face-to-face interactions between experimenters and participants, here we show that experimenters also have a tendency to create stimuli in a way that brings about expected outcomes. We randomly assigned undergraduate experimenters to receive two different hypotheses about folk intuitions of consciousness, and then asked them to design experiments based on their hypothesis. Specifically, experimenters generated sentences ascribing intentional and phenomenal mental states to groups, which were later rated by online participants for naturalness. We found a significant interaction between experimenter hypothesis and participant ratings indicating a general tendency for experimenters to obtain the result that they expected. These results indicate that experimenter bias is a real problem in experimental philosophy since the methods and design employed here mirror the predominant survey methods of the field as a whole. The bearing of the current results on Knobe and Prinz’s (2008, pp. 67–83) group mind hypothesis is discussed, and new methods for avoiding experimenter bias are proposed.
Two studies tested the influence of three facets of personality—conscientiousness, agreeableness, and openness to experience—as well as moral identity, on individuals’ ethical ideology. Study 1 showed that moral personality and the centrality of moral identity to the self were associated with a more principled ethical ideology in a sample of female speech therapists. Study 2 replicated these findings in a sample of male and female college students, and showed that ideology mediated the relationship between personality, moral identity, and two organizationally relevant outcomes: organizational citizenship behavior and the propensity to morally disengage. Implications for business ethics are discussed.
Income inequality in the US has now reached levels not seen since the 1920s. Management, as a field of scholarly inquiry, has the potential to contribute in significant ways to our understanding of recent inequality trends. We review and assess recent research, both in the management literature and in other fields. We then delineate a conceptual framework that highlights the mechanisms through which business practice may be linked to income inequality. Next, we outline four general areas in which management scholars are uniquely positioned to contribute to ongoing research: data and description, organizational dynamics, collective action, and value flows and tradeoffs. To stimulate future research, we highlight a number of relevant research questions and link these questions to existing management research streams that could be leveraged to address them.
Preface -- How brave a new world? : God, technology, and medicine -- A theological reflection on reproductive medicine -- Are our genes our fate? : genomics and Christian theology -- Persons, neighbors, and embryos : some ethical reflections on human cloning and stem cell research -- Extending human life : to what end? -- What is Christian about Christian bioethics? -- Revitalizing medicine : empowering natality vs. fearing mortality -- The future of the human species -- Creation, creatures, and creativity : the Word and the final Word.
The conjunction of wireless computing, ubiquitous Internet access, and the miniaturisation of sensors has opened the door for technological applications that can monitor health and well-being outside of formal healthcare systems. The health-related Internet of Things (H-IoT) increasingly plays a key role in health management by providing real-time tele-monitoring of patients, testing of treatments, actuation of medical devices, and fitness and well-being monitoring. Given its numerous applications and proposed benefits, adoption by medical and social care institutions and consumers may be rapid. However, a host of ethical concerns are also raised that must be addressed. The inherent sensitivity of health-related data being generated and latent risks of Internet-enabled devices pose serious challenges. Users, already in a vulnerable position as patients, face a seemingly impossible task to retain control over their data due to the scale, scope and complexity of systems that create, aggregate, and analyse personal health data. In response, the H-IoT must be designed to be technologically robust and scientifically reliable, while also remaining ethically responsible, trustworthy, and respectful of user rights and interests. To assist developers of the H-IoT, this paper describes nine principles and nine guidelines for ethical design of H-IoT devices and data protocols.
We here present explicit relational theories of a class of geometrical systems (namely, inner product spaces) which includes Euclidean space and Minkowski spacetime. Using an embedding approach suggested by the theory of measurement, we prove formally that our theories express the entire empirical content of the corresponding geometric theory in terms of empirical relations among a finite set of elements (idealized point-particles or events) thought of as embedded in the space. This result is of interest within the general phenomenalist tradition as well as the theory of space and time, since it seems to be the first example of an explicit phenomenalist reconstruction of a realist theory which is provably equivalent to it in observational consequences. The interesting paper "On the Space-Time Ontology of Physical Theories" by Ken Manders, Philosophy of Science, vol. 49, number 4, December 1982, pp. 575–590, has significant affinities to this one. We both, in a sense, try to formally vindicate Leibniz's notion of a relational theory of space, by constructing theories of spatial relations among physical objects which are provably equivalent to the standard absolutist theories. The essential difference between our approaches is that Manders retains Leibniz's explicitly modal framework, whereas I do not. Manders constructs a spacetime theory which explicitly characterizes the totality of possible configurations of physical objects, using a modal language in which the notion of a possible configuration occurs as a primitive. There is no doubt that this is a more accurate realization of Leibniz's own conception of space than the embedding-based approach developed here. However, it also remains open to objections (such as those cited here from Sklar) on account of the special appeal to modal notions.
Our approach here, by contrast, aims to avoid the special appeal to modal notions by giving directly a set of laws which are satisfied by a configuration individually, if and only if it is one of the allowable ones. One thus avoids the need for reference to possible but not actual configurations or objects, in the statement of the spacetime laws. We may then take this alternative set of laws as the actual geometric theory, and do away with the hypothetical entity called 'space'. Yet at the same time there is no invocation of modality, except in the ordinary sense in which every physical theory constrains what is possible. Thus a relationalist is not forced to utilize a modal language (though Leibniz certainly does).
Examining intrapersonal factors theorized to influence ethics reporting decisions, the relation of self-efficacy as a predictor of propensity for internal whistleblowing is investigated within a US and Canadian multi-regional context. Over 900 professionals from a total of nine regions in Canada and the US participated. Self-efficacy was found to influence participant reported propensity for internal whistleblowing consistently in both the US and Canada. Seasoned participants with greater management and work experience demonstrated higher levels of self-efficacy, while gender was also found to be influential to self-efficacy. These individual traits, although related to self-efficacy, did not directly relate to propensities for internal whistleblowing. The findings demonstrate that self-efficacy could represent an important individual trait for examining whistleblowing issues. Internal whistleblowing is becoming an important organizational consideration in many areas of North America, yet there is relatively little research on the topic. Organizations seeking effective internal reporting systems should consider the influence of self-efficacy along with its potential reporting influence. By empirically testing an under-examined component of theory related to internal whistleblowing, this effort contributes to management literature, extending the knowledge beyond a US context, and provides recommendations for managing individual bias with internal reporting systems.
The underlying structures that are common to the world's languages bear an intriguing connection with early-emerging forms of "core knowledge", which are frequently studied by infant researchers. In particular, grammatical systems often incorporate distinctions that reflect those made in core knowledge. Here, I argue that this connection arises because non-verbal core knowledge systematically biases processes of language evolution. This account potentially explains a wide range of cross-linguistic grammatical phenomena that currently lack an adequate explanation. I further suggest that developmental researchers and cognitive scientists interested in knowledge representation can exploit this connection to language by using observations about cross-linguistic grammatical tendencies to inspire hypotheses about core knowledge.
Developing some suggestions of Ramsey (1925), elementary logic is formulated with respect to an arbitrary categorial system rather than the categorial system of Logical Atomism which is retained in standard elementary logic. Among the many types of non-standard categorial systems allowed by this formalism, it is argued that elementary logic with predicates of variable degree occupies a distinguished position, both for formal reasons and because of its potential value for application of formal logic to natural language and natural science. This is illustrated by use of such a logic to construct a theory of quantity which is argued to be scientifically superior to existing theories of quantity based on standard categorial systems, since it yields real-valued scales without the need for unrealistic existence assumptions. This provides empirical evidence for the hypothesis that the categorial structure of the physical world itself is non-standard in this sense.
On the Use and Abuse of Foucault for Politics provides an accessible interpretation of Foucault's political philosophy, demonstrating how Foucault is relevant for contemporary democratic theory. Brent Pickett offers an overview of Foucault's politics, including a comprehensive account of the reasons for various conflicting interpretations, and then explores how well the different "Foucaults" can be used in progressive politics and democratic theory.
This paper poses the question of whether people have a duty to participate in digital epidemiology. While an implied duty to participate has been argued for in relation to biomedical research in general, digital epidemiology involves processing of non-medical, granular and proprietary data types that pose different risks to participants. We first describe traditional justifications for epidemiology that imply a duty to participate for the general public, which take account of the immediacy and plausibility of threats, and the identifiability of data. We then consider how these justifications translate to digital epidemiology, understood as an evolution of traditional epidemiology that includes personal and proprietary digital data alongside formal medical datasets. We consider the risks imposed by re-purposing such data for digital epidemiology and propose eight justificatory conditions that should be met in justifying a duty to participate for specific digital epidemiological studies. The conditions are then applied to three hypothetical cases involving usage of social media data for epidemiological purposes. We conclude with a list of questions to be considered in public negotiations of digital epidemiology, including the application of a duty to participate to third-party data controllers, and the important distinction between moral and legal obligations to participate in research.
Most work on collective action assumes that group members are undifferentiated by status, or standing, in the group. Yet such undifferentiated groups are rare, if they exist at all. Here we extend an existing sociological research program to address how extant status hierarchies help organize collective actions by coordinating how much and when group members should contribute to group efforts. We outline three theoretically derived predictions of how status hierarchies organize patterns of behavior to produce larger public goods. We review existing evidence relevant to two of the three hypotheses and present results from a preliminary experimental test of the third. Findings are consistent with the model. The tendency of these dynamics to lead status-differentiated groups to produce larger public goods may help explain the ubiquity of hierarchy in groups, despite the often negative effects of status inequalities for many group members.
A term expresses a thick concept if it expresses a specific evaluative concept that is also substantially descriptive. It is a matter of debate how this rough account should be unpacked, but examples can help to convey the basic idea. Thick concepts are often illustrated with virtue concepts like courageous and generous, action concepts like murder and betray, epistemic concepts like dogmatic and wise, and aesthetic concepts like gaudy and brilliant. These concepts seem to be evaluative, unlike purely descriptive concepts such as red and water. But they also seem different from general evaluative concepts. In particular, thick concepts are typically contrasted with thin concepts like good, wrong, permissible, and ought, which are general evaluative concepts that do not seem substantially descriptive. When Jane says that Max is good, she appears to be evaluating him without providing much description, if any. Thick concepts, on the other hand, are evaluative and substantially descriptive at the same time. For instance, when Max says that Jane is courageous, he seems to be doing two things: evaluating her positively and describing her as willing to face risk. Because of their descriptiveness, thick concepts are especially good candidates for evaluative concepts that pick out properties in the world. Thus they provide an avenue for thinking about ethical claims as being about the world in the same way as descriptive claims.

Thick concepts became a focal point in ethics during the second half of the twentieth century. At that time, discussions of thick concepts began to emerge in response to certain disagreements about thin concepts. For example, in twentieth-century ethics, consequentialists and deontologists hotly debated various accounts of good and right. It was also claimed by non-cognitivists and error-theorists that these thin concepts do not correspond to any properties in the world.
Dissatisfaction with these viewpoints prompted many ethicists to consider the implications of thick concepts. The notion of a thick concept was thought to provide insight into meta-ethical questions such as whether there is a fact-value distinction, whether there are ethical truths, and, if there are such truths, whether these truths are objective. Some ethicists also theorized about the role that thick concepts can play in normative ethics, such as in virtue theory. By the beginning of the twenty-first century, the interest in thick concepts had spread to other philosophical disciplines such as epistemology, aesthetics, metaphysics, moral psychology, and the philosophy of law.

Nevertheless, the emerging interest in thick concepts has sparked debates over many questions: How exactly are thick concepts evaluative? How do they combine evaluation and description? How are thick concepts related to thin concepts? And do thick concepts have the sort of significance commonly attributed to them? This article surveys various attempts at answering these questions.
Inventions of Teaching: A Genealogy is a powerful examination of current metaphors for and synonyms of teaching. It offers an account of the varied and conflicting influences and conceptual commitments that have contributed to contemporary vocabularies--and that are in some ways maintained by those vocabularies, in spite of inconsistencies and incompatibilities among popular terms. The concern that frames the book is how speakers of English invented (in the original sense of the word, "came upon") our current vocabularies for teaching. Conceptually, this book is unique in the educational literature. As a whole, it presents an overview of the major underlying philosophical and ideological concepts and traditions related to knowledge, learning, and teaching in the Western world, concisely introducing readers to the central historical and contemporary discourses that shape current discussions and beliefs in the field. Because the historical, philosophical, theoretical, and etymological information is organized around key conceptual divergences in Western thought rather than chronologically, this text is not a linear history, but several histories--or, more precisely, it is a genealogy. Specifically, it is developed around breaks in opinion that gave or are giving rise to diverse interpretations of knowledge, learning, and teaching--highlighting historical moments in which vibrant new figurative understandings of teaching emerged and moments at which they froze into literalness. The book is composed of two sorts of chapters, "branching" and "teaching." Branching chapters include an opening treatment of the break in opinion, separate discussions of each branch, and a summary of the common assumptions and shared histories of the two branches. Teaching chapters offer brief etymological histories and some of the practical implications of the terms for teaching that were coined, co-opted, or redefined within the various traditions.
Inventions of Teaching: A Genealogy is an essential text for senior undergraduate and graduate courses in curriculum studies and foundations of teaching and is highly relevant as well for students, faculty, and researchers across the field of education.
Despite substantial research on overall decision-making capacity levels in schizophrenia, the factors that cause individuals to make errors when making decisions regarding research participation or treatment are relatively unknown. We examined the responses of 84 individuals, middle-aged or older, with schizophrenia or schizoaffective disorder. We used a structured decision-making capacity measure, the MacArthur Competence Assessment Tool for Clinical Research, to determine the frequency and apparent cause of participants’ errors. We found that most errors were due to difficulty recalling the disclosed information, particularly the study’s procedures, potential risks, and purpose. Errors attributable to concrete thinking, psychotic symptoms, or perceived coercion were rarer. These results suggest that informed consent procedures for this population might be improved by providing information in a way that facilitates learning and memory, such as iterative disclosure of the information, corrective feedback, and emphasis of key points of the study—for instance, its purpose, procedures, and potential risks.