Home, digital technologies and data are intersecting in new ways as responses to the COVID-19 pandemic emerge. We consider the data practices associated with COVID-19 responses and their implications for housing and home through two overarching themes: the notion of home as a private space, and digital technology and surveillance in the home. We show that although home has never been private, the rapid adoption and acceptance of technologies in the home for quarantine, work and study, enabled by the pandemic, is rescripting privacy. The acceleration of technology adoption and surveillance in the home has implications for privacy and potential discrimination, and should be approached with a critical lens.
Hegel's "highway of despair," introduced in his _Phenomenology of Spirit_, represents the tortured path traveled by "natural consciousness" on its way to freedom. Despair, the passionate residue of Hegelian critique, also indicates fugitive opportunities for freedom and preserves the principle of hope against all hope. Analyzing the works of an eclectic cast of thinkers, Robyn Marasco considers the dynamism of despair as a critical passion, reckoning with the forms of historical life forged along Hegel's highway. _The Highway of Despair_ follows Theodor Adorno, Georges Bataille, and Frantz Fanon as they each read, resist, and reconfigure a strand of thought in Hegel's _Phenomenology of Spirit_. Confronting the twentieth-century collapse of a certain revolutionary dialectic, these thinkers struggle to revalue critical philosophy and recast Left Hegelianism within the contexts of genocidal racism, world war, and colonial domination. Each thinker also re-centers the role of passion in critique. Arguing against more recent trends in critical theory that promise an escape from despair, Marasco shows how passion frustrates the resolutions of reason and faith. Embracing the extremism of what Marx, in the spirit of Hegel, called the "ruthless critique of everything existing," she affirms the contemporary purchase of radical critical theory, resulting in a passionate approach to political thought.
A review of the literature indicates that linear models are frequently used in situations in which decisions are made on the basis of multiple codable inputs. These models are sometimes used normatively to aid the decision maker, as a contrast with the decision maker in the clinical vs statistical controversy, to represent the decision maker "paramorphically" and to "bootstrap" the decision maker by replacing him with his representation. Examination of the contexts in which linear models have been successfully employed indicates that the contexts have the following structural characteristics in common: each input variable has a conditionally monotone relationship with the output; there is error of measurement; and deviations from optimal weighting do not make much practical difference. These characteristics ensure the success of linear models, which are so appropriate in such contexts that random linear models may perform quite well. 4 examples involving the prediction of such codable output variables as GPA and psychiatric diagnosis are analyzed in detail. In all 4 examples, random linear models yield predictions that are superior to those of human judges.
This paper proposes that Catherine Elgin’s and Nelson Goodman’s work on exemplification is relevant for discussions within moral philosophy and moral education. Generalizing Elgin’s and Goodman’s account of exemplification to also cover ethics, the paper develops a two-factor account of moral exemplarity. According to this account, instantiation and expressivity are individually necessary and jointly sufficient conditions for someone or something to function as a moral exemplar. Applying this two-factor account of exemplarity to discussions within the philosophy of moral education, the paper then argues that it is the expressive aspect of moral exemplars which explains and justifies the educational significance of such exemplars. The paper concludes by discussing the similarities and differences between the expressivity account and the transparency criterion formulated by Michel Croce and Maria Silvia Vaccarezza in a recent paper.
Proper linear models are those in which predictor variables are given weights such that the resulting linear composite optimally predicts some criterion of interest; examples of proper linear models are standard regression analysis, discriminant function analysis, and ridge regression analysis. Research summarized in P. Meehl's book on clinical vs statistical prediction and research stimulated in part by that book indicate that when a numerical criterion variable is to be predicted from numerical predictor variables, proper linear models outperform clinical intuition. Improper linear models are those in which the weights of the predictor variables are obtained by some nonoptimal method. The present article presents evidence that even such improper linear models are superior to clinical intuition when predicting a numerical criterion from numerical predictors. In fact, unit weighting is quite robust for making such predictions. The application of unit weights to decide what bullet the Denver Police Department should use is described; some technical, psychological, and ethical resistances to using linear models in making social decisions are considered; and arguments that could weaken these resistances are presented.
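The contrast drawn in the two abstracts above between a "proper" linear model (optimally fitted weights) and an "improper" one (unit weights on standardized predictors) can be illustrated with a small simulation. Everything below is invented for illustration: the sample size, the true weights, and the noise level are arbitrary choices, not data from the articles; the point is only that under the structural conditions the abstracts describe (conditionally monotone predictors, measurement error, insensitivity to weighting) unit weights tend to predict a holdout criterion nearly as well as least-squares weights.

```python
# Sketch: proper (OLS) vs improper (unit-weight) linear models on simulated data.
# All numbers here are illustrative assumptions, not values from the source articles.
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 4

# Predictors, each with a monotone relationship to the criterion,
# plus error of measurement on the criterion itself.
X = rng.normal(size=(n, k))
true_w = np.array([0.5, 0.4, 0.3, 0.2])          # unequal "true" weights
y = X @ true_w + rng.normal(scale=1.0, size=n)

train, test = slice(0, 100), slice(100, 200)

# Proper model: least-squares weights estimated on the training half.
w_ols, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
pred_ols = X[test] @ w_ols

# Improper model: standardize each predictor (using training statistics)
# and give every one a weight of exactly 1.
Z = (X - X[train].mean(axis=0)) / X[train].std(axis=0)
pred_unit = Z[test].sum(axis=1)

r_ols = np.corrcoef(pred_ols, y[test])[0, 1]
r_unit = np.corrcoef(pred_unit, y[test])[0, 1]
print(f"holdout correlation: OLS = {r_ols:.2f}, unit weights = {r_unit:.2f}")
```

On runs like this the two holdout correlations typically land within a few hundredths of each other, which is the robustness-of-unit-weighting point the abstract makes; the gap widens only when true weights differ drastically or samples are large enough to estimate them precisely.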
This intra-view explores a number of productive junctions between contemporary Deleuzoguattarian and new materialist praxes via a series of questions and provocations. Productive tensions are explored via questions of epistemological, ontological, ethical, and political intra-sections as well as notions of difference, transversal contamination, ecosophical practices, diffraction, and, lastly, schizoanalysis. Various irruptions around biophilosophy, transduction, becomology, cartography, power relations, hyperobjects as events, individuation, as well as dyschronia and disorientation, take the discussion further into the wild pedagogical spaces that both praxes have in common.
Studies in experimental philosophy claim to document intuition variation. Some studies focus on demographic group-variation; Colaço et al., for example, claim that age generates intuition variation regarding knowledge attribution in a fake-barn scenario. Other studies claim to show intuition variation when comparing the intuitions of philosophers to those of non-philosophers. The main focus has been on documenting intuition variation rather than uncovering what underlying factor may prompt such a phenomenon. We explore a number of suggested explanatory hypotheses put forth by Colaço et al., and we attempt to test Sosa's claim that intuition variance is a result of people ‘filling in the details’ of a thought experiment differently from one another. We show that people respond consistently across conditions aimed at ‘filling in the details’ of thought experiments, that risk attitude does not seem relevant to knowledge ascription, and that people's knowledge ascriptions do not vary due to views about the defeasibility of knowledge. Yet we find no grounds to reject that a large proportion of people appear to adhere to so-called subjectivism about knowledge, which may explain why they generally have intuitions about the fake-barn scenario that vary from those of philosophers.
I propose that an account of metaphor understanding which covers the full range of cases has to allow for two routes or modes of processing. One is a process of rapid, local, on-line concept construction that applies quite generally to the recovery of word meaning in utterance comprehension. The other requires a greater focus on the literal meaning of sentences or texts, which is metarepresented as a whole and subjected to more global, reflective pragmatic inference. The questions of whether metaphors convey a propositional content and of what role imagistic representation plays receive somewhat different answers depending on the processing route.
Most people working on linguistic meaning or communication assume that semantics and pragmatics are distinct domains, yet there is still little consensus on how the distinction is to be drawn. The position defended in this paper is that the semantics/pragmatics distinction holds between encoded linguistic meaning and speaker meaning. Two other ‘minimalist’ positions on semantics are explored and found wanting: Kent Bach’s view that there is a narrow semantic notion of context which is responsible for providing semantic values for a small number of indexicals, and Herman Cappelen and Ernie Lepore’s view that semantics includes the provision of values for all indexicals, even though these depend on the speaker’s communicative intentions. Finally, some implications for the favoured semantics/pragmatics distinction are considered, arising from the fact that there are linguistic elements which do not contribute to truth-conditional content but rather provide guidance on pragmatic inference.
The idea is that, in a wide range of contexts, utterances of the sentences in (a) in each case will communicate the assumption in (b) in each case (or something closely akin to it, there being a certain amount of contextually governed variation in the speaker's propositional attitude and so the scope of the negation). These scalar inferences are taken to be one kind of (generalized) conversational implicature. As is the case with pragmatic inference quite generally, these inferences are defeasible (cancellable), which distinguishes them from entailments, and they are nondetachable, which distinguishes them from conventional implicatures. The core idea is that the choice of a weaker element from a scale of elements ordered in terms of semantic strength (that is, numbers of entailments) tends to implicate that, as far as the speaker knows, none of the stronger elements in the scale holds in this instance. The pattern is quite clear in (1) and (2), where the weak/strong alternatives are some/all and five/six respectively. In the case of (3), the stronger expression is 'intelligent and good-hearted', which entails 'intelligent'; what Y's utterance implicates is that Mary does not have the two properties, intelligence and good-heartedness, so that, given the proposition expressed (Mary is intelligent), it follows, deductively, that she is not good-hearted, in Y's opinion. The example in (4) involves a scale inversion due to the negation, so that the weak/strong alternatives are not necessarily/not possibly; the negation which the scalar inference generates creates a double negation, which is eliminated, giving possibly.
Anthropocentrism is "the belief that there is a clear and morally relevant dividing line between humankind and the rest of nature, that humankind is the only principal source of value or meaning in the world" (p. 51).
In Free Will and Luck, Mele presents the case of an agent, Ernie, whose zygote was intentionally designed so that Ernie A-s in 30 years, bringing about a certain event E. Mele uses this case of original design to outline the zygote argument against compatibilism. In this paper I criticize the zygote argument. Unlike other compatibilists who have responded to the zygote argument, I contend that it is open to the compatibilist to accept premise one: that Ernie does not act freely and is not morally responsible for anything he does. I argue that compatibilists should instead deny premise two. Diana’s effective intention to create Ernie’s zygote such that Ernie A-s in 30 years, and her intervention to bring about his A-ing, mark a significant difference between Ernie and normal agents in a deterministic universe with regard to how their zygotes were created, a difference that affects whether those agents act freely and are morally responsible for so acting.
"At once rigorous, insightful, and accessible ... the most thorough study yet available on the phenomenological treatment of God as gift in Marion and Derrida. Invaluable reading for those concerned with the theological promise of contemporary Continental philosophy." -Thomas A. Carlson, University of California, Santa Barbara.
This empirical study concerns the authorship credit decision-making processes and outcomes that occur among coauthors in cases of multiauthored publications. The 2002 American Psychological Association (APA) Ethics Code offers standards for determining authorship order; however, little is known about how these decisions are made in actual practice. Results from a survey of 109 randomly selected authors indicated that most authors were satisfied with the decision-making process and outcome with few disagreements. Participants reported cases of both undeserved authorship being given and omission of deserving contributors' names as coauthors. Some factors associated with authorship decisions included "sense of loyalty or obligation," "publish or perish pressures," and "power differentials." Authors who used APA standards were significantly more satisfied with both the process and outcome of authorship credit decisions.
A standard view of the semantics of natural language sentences or utterances is that a sentence has a particular logical structure and is assigned truth-conditional content on the basis of that structure. Such a semantics is assumed to be able to capture the logical properties of sentences, including necessary truth, contradiction and valid inference; our knowledge of these properties is taken to be part of our semantic competence as native speakers of the language. The following examples pose a problem for this view of semantics.
Gilbert et al. argue that discussions of self-related changes in patients undergoing deep brain stimulation (DBS) are overblown. They show that there is little evidence that these changes occur frequently and make recommendations for further research. We point out that their framing of the issue, their methodology, and their recommendations do not attend to other important questions about these changes.
This article responds to Marzia Milazzo's article ‘On white ignorance, white shame, and other pitfalls in critical philosophy of race’, in which Milazzo argues that the concepts white shame, white guilt, white privilege, white habits, white invisibility and white ignorance are pitfalls in the process of decolonisation. Milazzo contends that the way these concepts are theorised in much critical philosophy of race minimises white people's active interest in reproducing the racial status quo. While I agree with Milazzo's critique of white shame and white guilt, I argue that these affective responses are fundamentally different to the remaining concepts. Drawing on critical whiteness studies and agnotology, I argue that white privilege, white invisibility and white ignorance are valuable conceptual tools for revealing white people's active investment in maintaining racial inequality. Whereas Milazzo sees a contradiction between white people's active interest in maintaining racial inequality and concepts like white invisibility and white ignorance, I argue that, correctly theorised, these concepts resolve this apparent contradiction. I contest Milazzo's call to reject white privilege, white invisibility and white ignorance, arguing that these concepts are useful tools in the project of decolonisation.
What I hope to achieve in this paper is some rather deeper understanding of the semantic and pragmatic properties of utterances which are said to involve the phenomenon of metalinguistic negation. According to Laurence Horn, who has been primarily responsible for drawing our attention to it, this is a special non-truth-functional use of the negation operator, which can be glossed as 'I object to U', where U is a linguistic utterance. This is to be distinguished from descriptive truth-functional negation, which operates over a proposition.
Within relevance theory the two local pragmatic processes of enrichment and loosening of linguistically encoded conceptual material have been given quite distinct treatments. Enrichments of various sorts, including those which involve a logical strengthening of a lexical concept, contribute to the proposition expressed by the utterance, hence to its truth-conditions. Loosenings, including metaphorical uses, do not enter into the proposition expressed by the utterance or affect its truth-conditions; they stand in a relation of 'interpretive resemblance' with the linguistically encoded concept used to represent them. This asymmetric treatment is questioned here: arguments are given for an account which reflects the complementarity of these processes, and several alternative symmetrical treatments are explored.
Evidence-based medicine (EBM) ranks different medical research methods on a hierarchy, at the top of which are randomized controlled trials (RCTs) and systematic reviews or meta-analyses of RCTs. Any study that does not randomly assign patients to a treatment or a control group is automatically placed at a lower level on the hierarchy. This article argues that what matters is whether the treatment and control groups are similar with respect to potential confounding factors, not whether they got that way through randomization. Moreover, nonrandomized studies tend to have other characteristics that make them useful sources of evidence, in that they tend to last longer and to enroll more patients than do randomized trials. Replacing the sharp dichotomy between randomized and nonrandomized studies with a continuum from "clean" studies (which have high internal validity but whose results do not readily generalize to clinical practice) to pragmatic studies (which are designed to more closely reflect clinical practice) would also make a place for outcomes research and research using clinical databases, which are not included in the current hierarchy of evidence but which can provide important information about the safety and efficacy of treatments.
Although researchers in psychiatry have been trying for decades to elucidate the pathophysiology underlying mental disorders, relatively little progress has been made. One explanation for this failure is that diagnostic categories in psychiatry are unlikely to track underlying neurological mechanisms. Because of this, the US National Institute of Mental Health has recently developed a novel ontology to guide research in biological psychiatry: the Research Domain Criteria (RDoC). In this paper, I argue that while RDoC may lead to better neuroscientific explanations for mental disorders, it is unlikely that this new knowledge will then lead to an improved diagnostic system. I therefore suggest that researchers in psychiatry should work toward the development of two new ontologies: one for research and one for clinical practice.
A number of reports have suggested that patients who undergo deep brain stimulation may experience changes to their personality or sense of self. These reports have attracted great philosophical interest. This paper surveys the philosophical literature on personal identity and DBS and draws on an emerging empirical literature on the experiences of patients who have undergone this therapy to argue that the existing philosophical discussion of DBS and personal identity frames the problem too narrowly. Much of the discussion by neuroethicists centers on the nature of the threat posed by DBS, asking whether it is best understood as a threat to personal identity, autonomy, agency, or authenticity, or as putting patients at risk of self-estrangement. Our aim in this paper is to use the empirical literature on patients’ experiences post-DBS to open up a broader range of questions, both philosophical and practical, and to suggest that attention to these questions will help to provide better support to patients, both before and after treatment.
Within the philosophy of language, pragmatics has tended to be seen as an adjunct to, and a means of solving problems in, semantics. A cognitive-scientific conception of pragmatics as a mental processing system responsible for interpreting ostensive communicative stimuli (specifically, verbal utterances) has effected a transformation in the pragmatic issues pursued and the kinds of explanation offered. Taking this latter perspective, I compare two distinct proposals on the kinds of processes, and the architecture of the system(s), responsible for the recovery of speaker meaning (both explicitly and implicitly communicated meaning).
The aim of this paper is to challenge libertarian accounts of free will. It is argued that there is an irreconcilable tension between the way in which philosophers motivate the incompatibilist ability to do otherwise and the way in which they formally express it. Potential incompatibilist responses in the face of this tension are canvassed, and it is argued that each response is problematic. It is not claimed that incompatibilist accounts in general are incoherent, but rather that any incompatibilist account that requires that an agent have alternative possibilities at the point of a free action fails.
Going beyond the hype of recent fMRI "findings," this interdisciplinary collection examines such questions as: Do women and men have significantly different brains? Do women empathize, while men systematize? Is there a "feminine" ethics? What does brain research on intersex conditions tell us about sex and gender?
Feminist scholars have shown that research on sex/gender differences in the brain is often used to support gender stereotypes. Scientists use a variety of methodological and interpretive strategies to make their results consistent with these stereotypes. In this paper, I analyze functional magnetic resonance imaging (fMRI) research that examines differences between women and men in brain activity associated with emotion and show that these researchers go to great lengths to make their results consistent with the view that women are more emotional than men.
It is widely accepted that there is a distinction to be made between the explicit content and the implicit import of an utterance. There is much less agreement about the precise nature of this distinction, how it is to be drawn, and whether any such two-way distinction can do justice to the levels and kinds of meaning involved in utterance interpretation. Grice’s distinction between what is said by an utterance and what is implicated is probably the best known instantiation of the explicit/implicit distinction. His distinction, along with many of its post-Gricean heirs, is closely entwined with another distinction: that between semantics and pragmatics. Indeed, on some construals they are seen as essentially one and the same; “what is said” is equated with the truth-conditional content of the utterance, which in turn is equated with (context-relative) sentence meaning, leaving implicatures (conventional and conversational) as the sole domain of pragmatics.
Neuroscience research examining sex/gender differences aims to explain behavioral differences between men and women in terms of differences in their brains. Historically, this research has used ad hoc methods and has been conducted explicitly in order to show that prevailing gender roles were dictated by biology. I examine contemporary fMRI research on sex/gender differences in emotion processing and argue that it, too, both uses problematic methods and, in doing so, reinforces gender stereotypes.
This essay argues for a transversal posthumanities-based pedagogy, rooted in an attentive ethico-onto-epistemology, by reading the schizoanalytical praxes of Deleuzoguattarian theory alongside the work of various feminist new materialist scholars.
Cognitive neuropsychology is that branch of cognitive psychology that investigates people with acquired or developmental disorders of cognition. The aim is to learn more about how cognitive systems normally operate or about how they are normally acquired by studying selective patterns of cognitive breakdown after brain damage or selective difficulties in acquiring particular cognitive abilities. In the early days of modern cognitive neuropsychology, research focused on rather basic cognitive abilities such as speech comprehension or production at the single-word level, reading and spelling, object and face recognition, and short-term memory. More recently the cognitive-neuropsychological approach has been applied to the study of rather more complex domains of cognition such as belief fixation (e.g. Coltheart and Davies, 2000; Langdon and Coltheart, 2000) and pragmatic aspects of communication (e.g. McDonald and Van Sommers, 1993). Our paper concerns the investigation of pragmatic disorders in one clinical group in which such disorders are common, patients with schizophrenia, and what the study of such people can tell us about the normal processes of communication.