This book is an introduction to and interpretation of the philosophy of language devised by Donald Davidson over the past 25 years. The guiding intuition is that Davidson's work is best understood as an ongoing attempt to purge semantics of theoretical reifications. Seen in this light, the recent attack on the notion of language itself emerges as a natural development of his Quinean scepticism towards "meanings" and his rejection of reference-based semantic theories. Linguistic understanding is, for Davidson, essentially dynamic, arising only through a continuous process of theory construction and reconstruction. The result is a conception of semantics in which the notion of interpretation, and not the notion of knowing a language, is fundamental. In the course of his book, Bjorn Ramberg provides a critical discussion of reference-based semantic theories, challenging the standard accounts of the principle of charity and elucidating the notion of radical interpretation. The final chapter on incommensurability ties in with the discussions of Kuhn's work in the philosophy of science and suggests certain links between Davidson's analytic semantics and hermeneutic theory.
A broad range of evidence regarding the functional organization of the vertebrate brain – spanning from comparative neurology to experimental psychology and neurophysiology to clinical data – is reviewed for its bearing on conceptions of the neural organization of consciousness. A novel principle relating target selection, action selection, and motivation to one another, as a means to optimize integration for action in real time, is introduced. With its help, the principal macrosystems of the vertebrate brain can be seen to form a centralized functional design in which an upper brain stem system organized for conscious function performs a penultimate step in action control. This upper brain stem system retained a key role throughout the evolutionary process by which an expanding forebrain – culminating in the cerebral cortex of mammals – came to serve as a medium for the elaboration of conscious contents. This highly conserved upper brainstem system, which extends from the roof of the midbrain to the basal diencephalon, integrates the massively parallel and distributed information capacity of the cerebral hemispheres into the limited-capacity, sequential mode of operation required for coherent behavior. It maintains special connective relations with cortical territories implicated in attentional and conscious functions, but is not rendered nonfunctional in the absence of cortical input. This helps explain the purposive, goal-directed behavior exhibited by mammals after experimental decortication, as well as the evidence that children born without a cortex are conscious. Taken together, these circumstances suggest that brainstem mechanisms are integral to the constitution of the conscious state, and that an adequate account of neural mechanisms of conscious function cannot be confined to the thalamocortical complex alone. Key words: action selection; anencephaly; central decision making; consciousness; control architectures; hydranencephaly; macrosystems; motivation; target selection; zona incerta.
Ethnic slur terms and other group-based slurs must be differentiated from general pejoratives and pure expressives. As these terms pejoratively refer to certain groups of people, they are a typical feature of hate speech contexts where they serve xenophobic speakers in expressing their hatred for an entire group of people. However, slur terms are actually far more frequently used in other contexts and are more often exchanged among friends than between enemies. Hate speech can be identified as the most central, albeit not the most frequent, mode of use. I broadly distinguish between hate speech, other pejorative uses, parasitic uses, neutral mentioning, and unaware uses. In this paper, authentic examples of use and frequency estimates from empirical research will help provide accurate definitions and insight into these different modes that purely theoretical approaches cannot achieve.
Michael Bratman’s work is established as one of the most important philosophical approaches to group agency so far, and Shared Agency: A Planning Theory of Acting Together confirms that impression. In this paper I attempt to challenge the book’s central claim that considerations of theoretical simplicity will favor Bratman’s theory of collective action over its main rivals. I do that, firstly, by questioning whether there must be a fundamental difference in kind between Searle-style we-intentions and I-intentions within that type of framework. If not, Searle’s type of theory need not be less qualitatively parsimonious than Bratman’s. This hangs on how we understand the notions of modes and contents of intentional states, and the relations between modes, contents, and categorizations of such states. Secondly, I question whether Bratman’s theory steers clear of debunking or dismissing collectivity. Elsewhere I have claimed that the manoeuvres Bratman suggested to avoid circularity in his conceptual analysis (in 1992 and 1997) undermine the strength of his resulting notion of collective action. Bratman responds in detail to this objection in his new book, and I return to the issue towards the end of the paper.
The issue of the biological origin of consciousness is linked to that of its function. One source of evidence in this regard is the contrast between the types of information that are and are not included within its compass. Consciousness presents us with a stable arena for our actions—the world—but excludes awareness of the multiple sensory and sensorimotor transformations through which the image of that world is extracted from the confounding influence of self-produced motion of multiple receptor arrays mounted on multijointed and swivelling body parts. Likewise excluded are the complex orchestrations of thousands of muscle movements routinely involved in the pursuit of our goals. This suggests that consciousness arose as a solution to problems in the logistics of decision making in mobile animals with centralized brains, and has correspondingly ancient roots.
Many studies in big data focus on the uses of data available to researchers, leaving aside data that are on the servers but of which researchers are unaware. We call this dark data, and in this article we present and discuss it in the context of high-performance computing (HPC) facilities. To this end, we provide statistics from a major HPC facility in Europe, the High-Performance Computing Center Stuttgart. We also propose a new position tailor-made for coping with dark data and general data management. We call it the scientific data officer (SDO), and we distinguish it from other standard positions in HPC facilities such as chief data officers, system administrators, and security officers. In order to understand the role of the SDO in HPC facilities, we discuss two kinds of responsibilities, namely technical responsibilities and ethical responsibilities. While the former are intended to characterize the position, the latter raise concerns about—and propose solutions to—the control and authority that the SDO would acquire.
According to a common claim, a necessary condition for a collective action (as opposed to a mere set of intertwined or parallel actions) to take place is that the notion of collective action figures in the content of each participant’s attitudes. Insofar as this claim is part of a conceptual analysis, it gives rise to a circularity challenge that has been explicitly addressed by Michael Bratman and Christopher Kutz. I will briefly show how the problem arises within Bratman’s and Kutz’s analyses, and then proceed to criticize some possible responses, including the ones proposed by Bratman and Kutz. My conclusion is that in order to avoid circularity and retain the features that are supposed to make this sort of account attractive, we need a notion of collectivity that does not presuppose intention. I suggest that we should make a distinction between collective and noncollective activity merely in terms of dispositions and causal agency. There are independent reasons to think that we actually possess such a distinct causal conception of collectivity. It is not necessary for the participants in a jointly intentional collective action to possess a stronger notion of their intended collective activity than this. In particular, they do not need to possess the concept of a jointly intentional collective action.
The aim of this paper is to discuss anonymity and the threats against it—in the form of deanonymization technologies. The question in the title is approached by conceptual analysis: I ask what kind of concept we need and how it ought to be conceptualized given what is really at stake. By what is at stake I mean the values that are threatened by various deanonymization technologies. It will be argued that while previous conceptualizations of anonymity may be reasonable—given a standard lexical, or common-sense, understanding of the term—the concept of anonymity is not sufficient given what is really at stake. I will argue that what is at stake is our ability to be anonymous, which I will broadly characterize as a reasonable control over what we communicate.
‘The Second Mistake’ (TSM) is to think that if an act is right or wrong because of its effects, the only relevant effects are the effects of this particular act. This is not (as some think) a truism, since ‘the effects of this particular act’ and ‘its effects’ need not co-refer. Derek Parfit's rejection of TSM is based mainly on intuitions concerning sets of acts that over-determine certain harms. In these cases, each act belongs to the relevant set in virtue of a causal relation (other than marginal contribution) to a specific harmful event. This feature may make an act wrong, in a fashion consequentialists could admit. That explication of TSM does not rely on the questionable assumption that the set of acts is what harms here. Independently of this, there are several other reasons to prefer it to the ‘mere participation’ approach.
The ‘Guiding Principles on Business and Human Rights’ (Principles) that provide guidance for the implementation of the United Nations’ ‘Protect, Respect and Remedy’ framework (Framework) will probably succeed in making human rights matters more customary in corporate management procedures. They are likely to contribute to higher levels of accountability and awareness within corporations in respect of the negative impact of business activities on human rights. However, we identify tensions between the idea that the respect of human rights is a perfect moral duty for corporations and the Principles’ ‘human rights due diligence’ requirement. We argue that the effectiveness of ‘human rights due diligence’ is in many respects dependent upon the moral commitment of corporations. The Principles leave room for an instrumental or strategic implementation of due diligence, which in some cases could result in a depreciation of the fundamental norms they seek to promote. We reveal some limits of pragmatic approaches to coping with business-related human rights abuses. As these limits become more apparent, not only does the case for further progress in international and extraterritorial human rights law become more compelling, but so too does the argument for a more forceful discussion on the moral foundations of human rights duties for corporations.
Sometimes it seems intuitively plausible to hold loosely structured sets of individuals morally responsible for failing to act collectively. Virginia Held, Larry May, and Torbjörn Tännsjö have all drawn this conclusion from thought experiments concerning small groups, although they apply the conclusion to large-scale omissions as well. On the other hand, it is commonly assumed that (collective) agency is a necessary condition for (collective) responsibility. If that is true, then how can we hold sets of people responsible for not having acted collectively? This paper argues that loosely structured inactive groups sometimes meet this requirement if we employ a weak (but nonetheless non-reductionist) notion of collective agency. This notion can be defended on independent grounds. The resulting position on distribution of responsibility is more restrictive than Held's, May's, or Tännsjö's, and this consequence seems intuitively attractive.
Through the vehicle of Nicolas Sarkozy’s so-called “Dakar Address” we will analyse the West’s persisting lack of insight into the need for a Western decolonization. We will try to identify the dangers that come from this refusal, such as the persistence of colonial patterns, the enduring self-understanding as superior compared to Africa, and the persisting unwillingness to accept colonial guilt. Decolonization has to be understood as a twofold business: it is the overcoming of endured as well as perpetrated violence. It is not only important that the colonially oppressed regain strength; it is equally important that the perpetrators of colonial violence understand their excess of violence and work on making its return impossible. We explain how Western thinking remains dangerous and untrustworthy if it refuses its own decolonization. Finally, we will draw attention to a fundamental pattern of Western thought that clashes with the fundamental values of the West: contempt. In a final step we suggest how this contempt can be overcome through desuperiorisation and the establishment of elative ethics.
Different versions of the idea that individualism about agency is the root of standard game-theoretical puzzles have been defended by Regan (1980), Bacharach, Hurley, Sugden (2003: 165–181), and Tuomela (2013), among others. While collectivistic game theorists like Michael Bacharach provide formal frameworks designed to avert some of the standard dilemmas, philosophers of collective action like Raimo Tuomela aim at substantive accounts of collective action that may explain how agents overcoming such social dilemmas would be motivated. This paper focuses on the conditions on collective action and intention that need to be fulfilled for Bacharach’s “team reasoning” to occur. Two influential approaches to collective action are related to the idea of team reasoning: Michael Bratman’s theory of shared intention and Raimo Tuomela’s theory of a we-mode of intending. I argue that neither captures the “agency transformation” that team reasoning requires. That might be an acceptable conclusion for Bratman but more problematic for Tuomela, who claims that Bacharach’s results support his theory. I sketch an alternative framework in which the perspectival element that is required for team reasoning – the ‘we-perspective’ – can be understood and functionally characterized in relation to the traditional distinction between mode and content of intentional states. I claim that the latter understanding of a collective perspective provides the right kind of philosophical background for team reasoning, and I discuss some implications in relation to Tuomela’s assumption that switching between individual and collective perspectives can be a matter of rational choice.
In discussions of moral responsibility for collectively produced effects, it is not uncommon to assume that we have to abandon the view that causal involvement is a necessary condition for individual co-responsibility. In general, considerations of cases where there is “a mismatch between the wrong a group commits and the apparent causal contributions for which we can hold individuals responsible” motivate this move. According to Brian Lawson, “solving this problem requires an approach that deemphasizes the importance of causal contributions”. Christopher Kutz’s theory of complicitious accountability in Complicity from 2000 is probably the most well-known approach of that kind. Standard examples are supposed to illustrate mismatches of three different kinds: an agent may be morally co-responsible for an event to a high degree even if her causal contribution to that event is (a) very small, (b) imperceptible, or (c) non-existent (in overdetermination cases). From such examples, Kutz and others conclude that principles of complicitious accountability cannot include a condition of causal involvement. In the present paper, I defend the causal involvement condition for co-responsibility. These are my lines of argument: First, overdetermination cases can be accommodated within a theory of co-responsibility without giving up the causality condition. Kutz and others oversimplify the relation between counterfactual dependence and causation, and they overlook the possibility that causal relations other than marginal contribution could be morally relevant. Second, harmful effects are sometimes overdetermined by non-collective sets of acts. Over-farming, or the greenhouse effect, might be cases of that kind. In such cases, there need not be any formal organization, any unifying intentions, or any other non-causal criterion of membership available. If we give up the causal condition for co-responsibility it will be impossible to delimit the morally relevant set of acts related to those harms. Since we sometimes find it fair to blame people for such harms, we must question the argument from overdetermination. Third, although problems about imperceptible effects or aggregation of very small effects are morally important, e.g. when we consider degrees of blameworthiness or epistemic limitations in reasoning about how to assign responsibility for specific harms, they are irrelevant to the issue of whether causal involvement is necessary for complicity. Fourth, the costs of rejecting the causality condition for complicity are high. Causation is an explicit and essential element in most doctrines of legal liability and it is central in common-sense views of moral responsibility. Giving up this condition could have radical and unwanted consequences for legal security and predictability. However, it is not only for pragmatic reasons and because it is a default position that we should require stronger arguments (than conflicting intuitions about “mismatches”) before giving up the causality condition. An essential element in holding someone to account for an event is the assumption that her actions and intentions are part of the explanation of why that event occurred. If we give up that element, it is difficult to see which important function responsibility assignments could have.
Social entrepreneurship increasingly involves collective, voluntary organizing efforts where success depends on generating and sustaining members’ participation. To investigate how such participatory social ventures achieve member engagement in pluralistic institutional settings, we conducted a qualitative, inductive study of German Renewable Energy Source Cooperatives (RESCoops). Our findings show how value tensions emerge from differences in RESCoop members’ relative prioritization of community, environmental, and commercial logics, and how cooperative leaders manage these tensions and sustain member participation through temporal, structural, and collaborative compromise strategies. We unpack the mechanisms by which each strategy enables members to justify organizational decisions that violate their personal value priorities and demonstrate their varying implications for organizational growth. Our findings contribute new insights into the challenges of collective social entrepreneurship, the capacity of hybrid organizing strategies to mitigate value concessions, and the importance of logic combinability as a key dimension of pluralistic institutional settings.
‘Natural selection’ is, it seems, an ambiguous term. It is sometimes held to denote a consequence of variation, heredity, and environment, and at other times a force that creates adaptations. I argue that the latter, the force interpretation, is a redundant notion of natural selection. I will point to difficulties in making sense of this linguistic practice, and argue that it is frequently at odds with standard interpretations of evolutionary theory. I provide examples to show this: one involving the relation between adaptations and other traits, and a second involving the relation between selection and drift.
Although popular, control accounts of privacy suffer from various counterexamples. In this article, it is argued that two such counterexamples—while individually resolvable—can be combined to yield a dilemma for control accounts of privacy. Furthermore, it is argued that it is implausible that control accounts of privacy can defend against this dilemma. Thus, it is concluded that we ought not define privacy in terms of control. Lastly, it is argued that since the concept of privacy is the object of the right to privacy, if the former cannot be defined in terms of control, neither can the latter.
It is a plain fact that biology makes use of terms and expressions commonly spoken of as teleological. Biologists frequently speak of the function of biological items. They may also say that traits are 'supposed to' perform some of their effects, claim that traits are 'for' specific effects, or that organisms have particular traits 'in order to' engage in specific interactions. There is general agreement that there must be something useful about this linguistic practice, but it is controversial whether it is entirely appropriate, and if so why it is. Many theorists have defended the use of seemingly teleological terms by appeal to an etiological notion of function (Wright, 1973; Millikan, 1984, 2002; Neander, 1991; …).
Health technology assessment (HTA) is an evaluation of health technologies in terms of facts and evidence. However, the relationship between facts and values is still not clear in HTA. This is problematic in an era of fake facts and truth production. Accordingly, the objective of this study is to clarify the relationship between facts and values in HTA. We start with the perspectives of the traditional positivist account of evaluating facts and the social-constructivist account of facting values. Our analysis reveals diverse relationships between facts and a spectrum of values, ranging from basic human values, to the values of health professionals, and values of and in HTA, as well as for decision making. We argue for sensitivity to the relationship between facts and values on all levels of HTA, for being open and transparent about the values guiding the production of facts, and for a primacy for the values close to the principal goals of health care, i.e., relieving suffering. We maintain that philosophy may have an important role in addressing the relationship between facts and values in HTA. Philosophy may help us to avoid fallacies of inferring values from facts; to disentangle the normative assumptions in the production or presentation of facts and to tease out implicit value judgements in HTA; to analyse evaluative argumentation relating to facts about technologies; to address conceptual issues of normative importance; and to promote reflection on HTA's own value system. In this we argue for a middle way between the traditional positivist account of evaluating facts and the social-constructivist account of facting values, which we call factuation. We conclude that HTA is unique in bringing together facts and values and that being conscious and explicit about this factuation is key to making HTA valuable to both individual decision makers and society as a whole.
In this paper, we try to shed light on the ontological puzzle pertaining to models and to contribute to a better understanding of what models are. Our suggestion is that models should be regarded as a specific kind of sign according to the sign theory put forward by Charles S. Peirce, and, more precisely, as icons, i.e. as signs which are characterized by a similarity relation between sign (model) and object (original). We argue for this (1) by analyzing from a semiotic point of view the representational relation which is characteristic of models. We then corroborate our hypothesis (2) by discussing the conceptual differences between icons, i.e. models, and indexical and symbolic signs and (3) by putting forward a general classification of all icons into three functional subclasses (images, diagrams, and metaphors). Subsequently, we (4) integratively refine our results by resorting to two influential and, as can be shown, complementary philosophy of science approaches to models. This yields the following result: models are determined by a semiotic structure in which a subject intentionally uses an object, i.e. the model, as a sign for another object, i.e. the original, in the context of a chosen theory or language in order to attain a specific end by instituting a representational relation in which the syntactic structure of the model, i.e. its attributes and relations, represents by way of a mapping the properties of the original, which hence are regarded as similar in a relevant manner.
"Forgetting" plays an important role in the lives of individuals and communities. Although a few Holocaust scholars have begun to take forgetting more seriously in relation to the task of remembering—in popular parlance as well as in academic discourse on the Holocaust—forgetting is usually perceived as a negative force. In the decades following 1945, the terms remembering and forgetting have often been used antithetically, with the communities of victims insisting on the duty to remember and a society of perpetrators desiring (...) to forget. Thus, the discourse on Holocaust memory has become entrenched on this issue. This essay counters the swift rejection of forgetting and its labeling as a reprehensible act. It calls attention to two issues: first, it offers a critical argument for different forms of forgetting; second, it concludes with suggestions of how deliberate performative practices of forgetting might benefit communities affected by a genocidal past. Is it possible to conceive of forgetting not as the ugly twin of remembering but as its necessary companion? (shrink)
The focus of this article is the analysis of generative mechanisms, a basic concept and phenomenon within the metatheoretical perspective of critical realism. It is emphasized that research questions and methods, as well as the knowledge it is possible to attain, depend on the basic view – ontologically and epistemologically – regarding the phenomenon under scrutiny. A generative mechanism is described as a trans-empirical but really existing entity, explaining why observable events occur. Mechanisms can mostly be grasped only indirectly, through analytical work that is nonetheless based on empirical observations. In order to achieve such an explanatory analysis, five methodological steps are suggested and discussed, among them abduction and retroduction. These steps are illustrated throughout by examples drawn from empirical research regarding social work practice. The article concludes with a discussion of the need for knowledge of generative mechanisms.
A popular strategy for meeting over-determination and pre-emption challenges to the comparative counterfactual conception of harm is Derek Parfit’s suggestion, more recently defended by Neil Feit, that a plurality of events harms A if and only if that plurality is the smallest plurality of events such that, if none of them had occurred, A would have been better off. This analysis of ‘harm’ rests on a simple but natural mistake about the relevant counterfactual comparison. Pluralities fulfilling these conditions make no difference to the worse for anyone in the over-determination cases that prompted the need for revising the comparative conception of harm to begin with. We may choose to call them harmful anyway, but then we must abandon the idea that making a difference to the worse for someone is essential to harming. I argue that we should hold on to the difference-making criterion and give up the plural harm principle. I offer an explanation of why Parfit’s and Feit’s plural harm approach seems attractive. Finally, I argue that the consequences of giving up the plural harm principle and holding on to the simple comparative counterfactual analysis of harm are less radical than we may think, in relation to questions about wrongness and responsibility.
In this commentary, I discuss the effects of the liar paradox on Floridi’s definition of semantic information. In particular, I show that there is at least one sentence that creates a contradictory result for Floridi’s definition of semantic information while leaving the standard definition unaffected.
OBJECTIVES: To study whether linguistic analysis and changes in information leaflets can improve readability and understanding. DESIGN: Randomised, controlled study. Two information leaflets concerned with trials of drugs for conditions/diseases which are commonly known were modified, and the original was tested against the revised version. SETTING: Denmark. PARTICIPANTS: 235 persons in the relevant age groups. MAIN MEASURES: Readability and understanding of contents. RESULTS: Both readability and understanding of contents were improved: readability with regard to both information leaflets and understanding with regard to one of the leaflets. CONCLUSION: The results show that both readability and understanding can be improved by increased attention to the linguistic features of the information.
This study investigates the interrelation of outer appearance and spatial configuration of modern Chinese court buildings with the party-state’s strategy of building regime legitimacy. The spatial element of this relation is explored in four different court buildings in Kunming, Chongqing, Shanghai and Xi’an. It is argued that court buildings contribute to the empowerment of individuals who appear as parties in trials. Courthouses also facilitate the courts’ function of exercising social control and the application of an instrumentalist approach to the principle of public trials. Both the grounding of court buildings in the past and their compliance with international models of a modern independent judiciary are aspects of consolidating regime legitimacy.
This article introduces compliance disclosure regimes to business ethics research. Compliance disclosure is a relatively recent regulatory technique whereby companies are obliged to disclose the extent to which they comply with codes, ‘best practice standards’ or other extra-legal texts containing norms or prospective norms. Such ‘compliance disclosure’ obligations are often presented as flexible regulatory alternatives to substantive, command-and-control regulation. However, based on a report on experiences of existing compliance disclosure obligations, this article will identify major weaknesses that prevent them from becoming effective mechanisms to discipline a certain type of behaviour. It will be argued that regulatory recourse to compliance disclosure obligations is nonetheless worthwhile if we view them as mechanisms that can initiate a dialogue about norm interpretation, application and norm desirability. From this perspective, compliance disclosure obligations serve less to discipline companies by making corporate practices transparent, and more to trigger a process of norm development, in which the law, companies and their stakeholders interact. This article provides an illustration of how mandatory disclosure, if it is restricted to a unilateral communication process, may produce no effective results (or even prove counterproductive), whilst highlighting the alternative potential of disclosure as an initiator of dialogue, supported by laws, geared towards the development and refinement of norms applicable to business in a global context and the values they promote.
It is argued that utilitarianism should be reformulated as a scalar theory admitting of degrees of wrongdoing. It is also argued that the degree of wrongness of an action should be sensitive both to the relative value loss the action results in and to the difficulty of having acted better. A version of utilitarianism meeting these specifications is formulated.
In health care priority setting, different criteria are used to reflect the relevant values that should guide decision-making. During recent years there has been a development of value frameworks implying the use of multiple criteria, a development that has not been accompanied by a structured conceptual and normative analysis of how different criteria relate to each other and to underlying normative considerations. Examples of such criteria are unmet need and severity. In this article these crucial criteria are conceptually clarified and analyzed in relation to each other. We argue that disease-severity and condition-severity should be distinguished, and we find that the latter concept better reflects underlying normative values. We further argue that unmet need does not fulfil an independent and relevant role in relation to condition-severity, except in some limited situations when having to distinguish between conditions of equal severity.
This study aims to provide existential-philosophical groundwork for a phenomenology of normativity. Through interpretations of, above all, Greek, Middle High German, and classical German poetry, the basic concepts of facticity and existentiality are first worked out. On this basis, the occasion and form of normative practice are then described. Finally, this description is proposed as an anthropological foundation for philosophical ethics.
Global university rankings have become increasingly important ‘calculative devices’ for assessing the ‘quality’ of higher education and research. Their ability to make characteristics of universities ‘calculable’ is here exemplified by the first proper university ranking ever, produced as early as 1910 by the American psychologist James McKeen Cattell. Our paper links the epistemological rationales behind the construction of this ranking to the sociopolitical context in which Cattell operated: an era in which psychology became institutionalized against the backdrop of the eugenics movement, and in which statistics of science came to be used to counter a perceived decline in ‘great men.’ Over time, however, the ‘eminent man,’ shaped foremost by heredity and upbringing, came to be replaced by the excellent university as the emblematic symbol of scientific and intellectual strength. We also show that Cattell’s ranking was generative of new forms of the social, traces of which can still be found today in the enactment of ‘excellence’ in global university rankings.
In the course of their disciplinary consolidation during the 19th and 20th centuries, the social sciences came increasingly to be less historically orientated. Analogously, global history became increasingly a marginal concern for professional historical scholarship. At the present juncture, however, there is a coincidence of a rethinking of the formation of modernity in cultural terms and the need to locate European modernity in a global context. Social theory must be able to provide an account of global historical developments that is less constrained and biased than modernization theory, even in the new garb of globalization studies, but significantly more elaborate in conceptual terms than current contributions to global history. A rethinking of the formation of modernity has already contributed to a greater appreciation of processes of cultural and ideational transformations. It has also suggested new ways of studying institutional change. It must, however, also be able to locate the specific European trajectory in a global context. The core element in such a research programme is the analysis of three major periods of global cultural crystallization, namely the Axial Age, the ecumenical renaissance, and the formation of modernity. The rationale and the contours of this research programme are outlined.
Summary: In two articles Friedrich Rapp argues that there is a methodological symmetry between falsification and verification, in contradistinction to the logical asymmetry that obtains between them (The Methodological Symmetry between Verification and Falsification, Ztschr. f. Allg. Wissth., Band VI/1 (1975), pp. 139–144; A Helpful Argument – Reply to K. Eichner, Ztschr. f. Allg. Wissth., Band VII/1 (1976), pp. 121–123). Rapp puts forward the thesis that methodological falsification of a theory T implies the acceptance of an inference from ~(x)Tx to (x)~Tx. However, this thesis does not have to be accepted even if the premises of Rapp's argument were accepted. Furthermore, Rapp has not shown that the falsification of a theory T implies that T will not be retained. Neither has Rapp formulated assumptions that are sufficient to guarantee that the outcome of an intended test of a theory T can be considered as an outcome of an actual test of T.
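For readers less familiar with the older tilde notation used in this abstract, the disputed inference can be restated in modern quantifier notation; this is merely a notational restatement of the abstract's own formula, not a quotation from Rapp's argument:

\[
\neg \forall x\, Tx \;\Longrightarrow\; \forall x\, \neg Tx
\]

As a matter of first-order logic this inference is invalid (a single counterinstance to T does not establish that T fails everywhere), which is the logical asymmetry against which Rapp's claimed methodological symmetry is being assessed.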
It is commonly thought that natural selection explains the rise of adaptive complexity. Razeto-Barry and Frick have recently argued in favour of this view, dubbing it the Creative View. I argue that the Creative View is mistaken if it claims that natural selection serves to answer Paley’s question. This is shown by a case that brings out the contrastive structure inherent in this demand for explanation. There is, however, a rather trivial sense in which specific environmental conditions are crucial for the rise of specific adaptations, but this is hardly what opponents of the Creative View are denying.
Conflation of our unique human endowment for language with innate, so-called universal, grammar has banished language from its biological home. The facts reviewed by Evans & Levinson (E&L) fit the biology of cultural transmission. My commentary highlights our dedicated learning capacity for vocal production learning as the form of our language endowment compatible with those facts.