A philosophical essay under this title faces severe rhetorical challenges. New accounts of the good life regularly and rapidly turn out to be variations of old ones, subject to a predictable range of decisive objections. Attempts to meet those objections with improved accounts regularly and rapidly lead to a familiar impasse — that while a life of contemplation, or epicurean contentment, or stoic indifference, or religious ecstasy, or creative rebellion, or self-actualization, or many another thing might count as a good life, none of them can plausibly be identified with the good life, or the best life. Given the long history of that impasse, it seems futile to offer yet another candidate for the genus “good life” as if that candidate might be new, or philosophically defensible. And given the weariness, irony, and self-deprecation expected of a philosopher in such an impasse, it is difficult for any substantive proposal on this topic to avoid seeming pretentious.
There has been a lot of interest over the last fifteen years or so in no-collapse interpretations of quantum mechanics. The Bohm interpretation of quantum mechanics has received several thorough accounts, perhaps most notably by Bohm himself.
This article examines Becker's thesis that the hypothesis that choices maximize expected utility relative to fixed and universal tastes provides a general framework for the explanation of behaviour. Three different models of preference revision are presented and their scope evaluated. The first, the classical conditioning model, explains all changes in preferences in terms of changes in the information held by the agent, holding fundamental beliefs and desires fixed. The second, the Jeffrey conditioning model, explains them in terms of changes in both the information held by the agent and changes in her prior beliefs, holding her fundamental desires fixed. The final model, that of generalized conditioning, allows for explanations in terms of changes in the values of all three variables. Key Words: preference change • decision theory • probability • desirability • attitude change.
Epistemic luck has been the focus of much discussion recently. Perhaps the most general knowledge-precluding type is veritic luck, where a belief is true but might easily have been false. Veritic luck has two sources, and so eliminating it requires two distinct conditions for a theory of knowledge. I argue that, when one sets out those conditions properly, a solution to the generality problem for reliabilism emerges.
In the psychological literature, love is often seen as a construct inseparable from that of close, interpersonal relationships. As a result, it has often been assumed that the same motivational factors underlie both phenomena. This often leads researchers to propose that love does not exist in itself—that it is an emotion which stems solely from a need for attachment, fulfillment of reproductive aims, or for social exchange. The popular cultural imagination, however, perceives love as a unique, mysterious, altruistic, everlasting bond between two people—a vision of love which is at odds with its supposed psychological origins. We propose that an ideal of love and its enactment in our culture is a result of two intertwining factors. Within the last few centuries, interpersonal relationships and love have replaced religion as islands of existential comfort. Toward this end, lovers project illusory meaning onto their partners. The laborious and turbulent process of withdrawing these projections can lead to what many thinkers think “love” is: bestowal of value on another, and consequent respect for, and care for, that person, unmotivated by one's own needs, within the context of a real relationship.
The ethical concerns regarding the successful development of an Artificial Intelligence have received a lot of attention lately. The idea is that even if we have good reason to believe that it is very unlikely, the mere possibility of an AI causing extreme human suffering is important enough to warrant serious consideration. Others look at this problem from the opposite perspective, namely that of the AI itself. Here the idea is that even if we have good reason to believe that it is very unlikely, the mere possibility of humanity causing extreme suffering to an AI is important enough to warrant serious consideration. This paper starts from the observation that both concerns rely on problematic philosophical assumptions. Rather than tackling these assumptions directly, it proceeds to present an argument that if one takes these assumptions seriously, then one has a moral obligation to advocate for a ban on the development of a conscious AI.
Timothy Williamson has provided damaging counterexamples to Robert Nozick’s sensitivity principle. The examples are based on Williamson’s anti-luminosity arguments, and they show how knowledge requires a margin for error that appears to be incompatible with sensitivity. I explain how Nozick can rescue sensitivity from Williamson’s counterexamples by appeal to a specific conception of the methods by which an agent forms a belief. I also defend the proposed conception of methods against Williamson’s criticisms.
What is the strength of anthropological fieldwork when we want to understand human technologies? In this article we argue that anthropological fieldwork can be understood as a process of gaining insight into different contextualisations in practiced places that will open up new understandings of technologies in use, e.g., technologies as multistable ontologies. The argument builds on an empirical study of robots at a Danish rehabilitation centre. Ethnographic methods combined with anthropological learning processes open up new ways of exploring how robots enter into professional practices and change values, social relations and materialities. Though substantial funding has been invested in developing health service robots, few studies have been undertaken that explore human-robot interactions as they play out in everyday practice. We argue that the complex learning processes involve not only so-called end-users but also staff, management, doings and discourse in a complex amalgamation of materials and values.
We examine two different descriptions of the behavioral functions of the hippocampal system. One emphasizes spatially organized behaviors, especially those using cognitive maps. The other emphasizes memory, particularly working memory, a short-term memory that requires flexible stimulus-response associations and is highly susceptible to interference. The predictive value of the spatial and memory descriptions was evaluated by testing rats with damage to the hippocampal system in a series of experiments, independently manipulating the spatial and memory characteristics of a behavioral task. No dissociations were found when the spatial characteristics of the stimuli to be remembered were changed; lesions produced a similar deficit in both spatial and nonspatial test procedures, indicating that the hippocampus was similarly involved regardless of the spatial nature of the task. In contrast, a marked dissociation was found when the memory requirements were altered. Rats with lesions were able to perform accurately in tasks that could be solved exclusively on the basis of reference memory. They performed at chance levels and showed no signs of recovery even with extensive postoperative training in tasks that required working memory. In one experiment all the characteristics of the reference memory and working memory procedures were identical except the type of memory required. Consequently, the behavioral dissociation cannot be explained by differences in attention, motivation, response inhibition, or the type of stimuli to be remembered. As a result of these experiments we propose that the hippocampus is selectively involved in behaviors that require working memory, irrespective of the type of material that is to be processed by that memory.
Kelly Becker has argued that in an externalist anti-luck epistemology, we must hold that knowledge requires the satisfaction of both a modalized tracking condition and a process reliability condition. We raise various problems for the examples that are supposed to establish this claim.
Viewed from a certain perspective, nothing can seem more secure than introspection. Consider an ordinary conscious episode—say, your current visual experience of the colour of this page. You can judge, when reflecting on this experience, that you have a visual experience as of something white with black marks before you. Does it seem reasonable to doubt this introspective judgement? Surely not—such doubt would seem utterly fanciful. The trustworthiness of introspection is not only assumed by commonsense, it is also taken for granted by many theorists of the mind. Within both philosophy and the science of consciousness it is widely held that introspection is generally reliable, at least with respect to the question of one’s current (or immediately prior) conscious states. Without this assumption, we could not make sense of theorists’ widespread use of introspection, both in support of their own position and to undermine that of their opponents.
In a recent study, Becker and Elliott [Becker, C., & Elliott, M. A. Flicker-induced color and form: Interdependencies and relation to stimulation frequency and phase. Consciousness & Cognition, 15, 175–196] described the appearance of subjective experiences of color and form induced by stimulation with intermittent light. While there have been electroencephalographic studies of similar hallucinatory forms, brain activity accompanying the appearance of hallucinatory colors was never measured. Using a priming procedure in which observers were required to indicate the presence of one of eight target colors, we compared electrophysiological correlates of hallucinatory color with brain states associated with other visual phenomena. Different target colors were accompanied by different patterns of EEG activation. However, in general, we found that the appearance of hallucinatory colors is preceded by a power decrease in the lower alpha band alongside an increase in gamma band frequencies. We argue that decreasing activity in the lower alpha band acts as a gating mechanism, inducing a switch in perception between different colors. The increasing gamma activation may correlate with the formation of a coherent conscious percept.
In 2007 a social scientist and a designer created a spatial installation to communicate social science research about the regulation of emerging science and technology. The rationale behind the experiment was to improve scientific knowledge production by making the researcher sensitive to new forms of reactions and objections. Based on an account of the conceptual background to the installation and the way it was designed, the paper discusses the nature of the engagement enacted through the experiment. It is argued that experimentation is a crucial way of making social science about science communication and engagement more robust.
Maya Zehfuss critiques constructivist theories of international relations (currently considered to be at the cutting edge of the discipline) and finds them wanting and even politically dangerous. Zehfuss uses Germany's first shift toward using its military abroad after the end of the Cold War to illustrate why constructivism does not work and how it leads to particular analytical outcomes and forecloses others. She argues that scholars are limiting their abilities to act responsibly in international relations by looking towards constructivism as the future.
In discussing rational choice theory (RCT) as an explanation of demand behavior, Becker (1962, Journal of Political Economy, 70, 1–13) proposed a model of random choice in which consumers pick a bundle on their budget line according to a uniform distribution. This model has then been used in various ways to assess the validity of RCT and to support as-if arguments in defense of it. This paper makes both historical and methodological contributions. Historically, it investigates how the interpretation of Becker's random behavior evolved between the original 1962 article and the modern experimental literature on individual demand, and surveys six experiments in which it has been used as an alternative hypothesis to RCT. Methodologically, this paper conducts an assessment of the as-if defense of RCT from the standpoint of Becker's model. It argues that this defense is ‘weak’ in a number of senses, and that it has negatively influenced the design of experiments about RCT.
Reliabilism furnishes an account of basic knowledge that circumvents the problem of the given. However, reliabilism and other epistemological theories that countenance basic knowledge have been criticized for permitting all-too-easy higher-level knowledge. In this paper, I describe the problem of easy knowledge, look briefly at proposed solutions, and then develop my own. I argue that the easy knowledge problem, as it applies to reliabilism, hinges on a false and too crude understanding of ‘reliable’. With a more plausible conception of ‘reliable’, a simple and elegant solution emerges.
While Hegel’s concept of second nature has now received substantial attention from commentators, relatively little has been said about the place of this concept in the Phenomenology of Spirit. This neglect is understandable, since Hegel does not explicitly use the phrase ‘second nature’ in this text. Nonetheless, several closely related phrases reveal the centrality of this concept to the Phenomenology’s structure. In this paper, I develop new interpretations of the figures ‘natural consciousness’, ‘natural notion’, and ‘inorganic nature’, in order to elucidate the distinctive concept of second nature at work in the Phenomenology. I will argue that this concept of second nature supplements the ‘official’ version, developed in the Encyclopedia, with an ‘unofficial’ version that prefigures its use in critical theory. At the same time, this reconstruction will allow us to see how the Phenomenology essentially documents spirit’s acquisition of a ‘second nature’.
The paper discusses two answers to the question, How to address the harmful effects of technology? The first response proposes a complete separation of science from culture, religion, and ethics. The second response finds harm in the logic and method of science itself. The paper deploys a feminist technoscience approach to overcome these accounts of neutral or deterministic technological agency. In this technoscience perspective, agency is not an attribute of autonomous human users alone but enacted and performed in socio-material configurations of people and technology and their ‘intra-actions’. This understanding of agency is proposed as an alternative that opens up possibilities for reconfiguring design and use for more ethical effects, such as the cultivation of cognitive justice, the equal treatment and representation of different ways of knowing the world. The implication of this approach is that design becomes an adaptive and ongoing intra-active process in which more desirable configurations of people and technology become possible.
The article combines a criticism of public understanding of science with the sociology of expectations to examine how particular expectations toward scientific progress have performative effects for the construction of publics as citizens of science. By analyzing a particular controversy about gene therapy in Denmark, the article demonstrates how different sets of expectations can be used to discriminate among three different assemblages: the assemblage of consumption, the assemblage of comportment, and the assemblage of heroic action. Each of these assemblages makes medical science, scientific citizenship, politics, patients, doctors, and expectations toward the future emerge in particular ways. By their radically different expectations toward science and their different constructions of what it means to be a scientific citizen, the assemblages construct the objectives of the governance of science in three very different ways.
This chapter discusses the main types of so-called ’subjective measures of consciousness’ used in current-day science of consciousness. After explaining the key worry about such measures, namely the problem of an ever-present response bias, I discuss the question of whether subjective measures of consciousness are introspective. I show that there is no clear answer to this question, as proponents of subjective measures do not employ a worked-out notion of subjective access. In turn, this makes the problem of response bias less tractable than it might otherwise be.
Assessments of quality in healthcare often focus on treatment outcome or patient safety, but rarely acknowledge the importance of patients’ encounters with healthcare personnel. The aim of this study was to gain an improved understanding of negative experiences of healthcare encounters by investigating experiences of the general population. A questionnaire was distributed to a randomly selected sample population of 1484 inhabitants in Stockholm County, Sweden. The material was subjected to conventional content analysis. Seventeen different types of complaint about negative encounters were identified, including unpleasant behavior, not being listened to, inadequate information, and discrimination. Two possible underlying explanations are discussed: structural factors relating to the organization and allocation of healthcare, and individual factors relating to the staff’s attitudes and professional practice. The results indicate that different strands of actions are needed to reduce patients’ negative experiences of encounters in healthcare, depending on the setting as well as on which of the two factors predominates.
When philosophers speak of the inconclusiveness of arguments for the existence of God, they often do so as if they were talking about a matter of principle—as if it were in principle impossible to prove God's existence, that every proof was in principle inconclusive. Of course, rebuttals of the cosmological, ontological, and teleological arguments are usually designed to show that these types of arguments are in principle inconclusive. But one supposes that religious experience arguments are not all in such difficulties. That is, one supposes, for example, that an encounter with the deity would provide a proof of his existence which is at least as conclusive as proofs for the existence of an ‘external world’. And thus it would be false to maintain in an unqualified way that ‘Reason cannot prove the existence of God’. The most one would be able to say would be that at present, or in terms of the currently available evidence, no one can prove God's existence. Further, whether or not sufficient evidence has ever been available in the past would be seen as an historical question—a matter of contingencies, not logical possibilities.
Duncan Pritchard has recently highlighted the problem of veritic epistemic luck and claimed that a safety‐based account of knowledge succeeds in eliminating veritic luck where virtue‐based accounts and process reliabilism fail. He then claims that if one accepts a safety‐based account, there is no longer a motivation for retaining a commitment to reliabilism. In this article, I delineate several distinct safety principles, and I argue that those that eliminate veritic luck do so only if at least implicitly committed to reliabilism.
I argue that Quine's famous claim, “any statement can be held true come what may,” demands an interpretation that implies that the meanings of the expressions in the held-true statement change. The intended interpretation of this claim is not clear from its context, and so it is often misunderstood by philosophers (and is misleadingly taught to their students). I explain Fodor and Lepore's (1992) view that the above interpretation would render Quine's assertion entirely trivial and reply, on both textual and philosophical grounds, that only this trivial reading is consistent with Quine's famous denial of analyticity. I also explain briefly how the trivial reading lends support to meaning holism, which, regardless of one's views of its consequences, is an important position in the philosophy of language and mind.
Although Foucault’s 1979 lectures on The Birth of Biopolitics promised to treat the theme of biopolitics, the course deals at length with neoliberalism while mentioning biopolitics hardly at all. Some scholars account for this elision by claiming that Foucault sympathized with neoliberalism; I argue on the contrary that Foucault develops a penetrating critique of the neoliberal claim to preserve individual liberty. Following Foucault, I show that the Chicago economist Gary Becker exemplifies what Foucault describes elsewhere as biopolitics: a form of power applied to the behavior of a population through the normalizing use of statistics. Although Becker’s preference for indirect intervention might seem to preserve the independence of individuals, under biopolitics individual liberty is itself the means by which populations are governed indirectly. In my view, by describing the history and ambivalence of neoliberal biopolitics, Foucault fosters a critical vigilance that is the precondition for creative political resistance.
Our understanding of human visual perception generally rests on the assumption that conscious visual states represent the interaction of spatial structures in the environment and our nervous system. This assumption is questioned by circumstances where conscious visual states can be triggered by external stimulation which is not primarily spatially defined. Here, subjective colors and forms are evoked by flickering light while the precise nature of those experiences varies over flicker frequency and phase. What’s more, the occurrence of one subjective experience appears to be associated with the occurrence of others. While these data indicate that conscious visual experience may be evoked directly by particular variations in the flow of spatially unstructured light over time, it must be assumed that the systems responsible are essentially temporal in character and capable of representing a variety of visual forms and colors, coded in different frequencies or at different phases of the same processing rhythm.
The ethical behavior of marketing managers was examined by analyzing their responses to a series of different types of ethical dilemmas presented in vignette form. The ethical dilemmas addressed dealt with the issues of (1) coercion and control, (2) conflict of interest, (3) the physical environment, (4) paternalism, and (5) personal integrity. Responses were analyzed to discover whether managers' behavior varied by type of issue faced or whether there is some continuity to ethical behavior which transcends the type of ethical problem addressed.
Ethical leadership has so far mainly been featured in the organizational behavior domain and, as such, treated as an intra-organizational phenomenon. The present study seeks to highlight the relevance of ethical leadership for extra-organizational phenomena by combining the organizational behavior perspective on ethical leadership with a classical marketing approach. In particular, we demonstrate that customers may use perceived ethical leadership cues as additional reference points when forming purchasing intentions. In two experimental studies, we find that ethical leadership positively affects purchasing intentions because of customers’ concerns for moral self-congruence. We show this by means of both mediation and moderation analyses. Interestingly, the effect of perceived ethical leadership on purchasing intentions holds over and above the ethical advertising claims that are commonly used in marketing. We conclude by discussing the possible ramifications of ethical leadership beyond its effects on immediate employees.
Reliabilism is a theory that countenances basic knowledge, that is, knowledge from a reliable source, without requiring that the agent knows the source is reliable. Critics (especially Cohen 2002) have argued that such theories generate all-too-easy, intuitively implausible cases of higher-order knowledge based on inference from basic knowledge. For present purposes, the criticism might be recast as claiming that reliabilism implausibly generates cases of understanding from brute, basic knowledge. I argue that the easy knowledge (or easy understanding) criticism rests on an implicit mischaracterization of the notion of a reliable process. Properly understood, reliable processes do not permit the transition from basic knowledge to understanding based on inference.