Asserting that humanitarian intervention is a highly ambiguous principle, Pasic and Weiss warn of the dangers of politically driven rescues that often force trade-offs between the pursuit of rescue and political order.
In recent years, firms have greatly increased the amount of resources allocated to activities classified as Corporate Social Responsibility (CSR). While an increase in CSR expenditure may be consistent with firm value maximization if it is a response to changes in stakeholders' preferences, we argue that a firm's insiders (managers and large blockholders) may seek to overinvest in CSR for their private benefit, to the extent that doing so improves their reputations as good global citizens and has a "warm-glow" effect. We test this hypothesis by investigating the relation between firms' CSR ratings and their ownership and capital structures. Employing a unique data set that categorizes the largest 3,000 U.S. corporations as either socially responsible (SR) or socially irresponsible (SI), we find that, on average, insiders' ownership and leverage are negatively related to a firm's social rating, while institutional ownership is uncorrelated with it. Assuming that higher CSR ratings are associated with higher CSR expenditure, these results support our hypothesis that insiders induce firms to overinvest in CSR when they bear little of the cost of doing so.
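The cross-sectional test described above can be sketched as a regression of a firm's social rating on its ownership and capital-structure variables. The sketch below uses entirely synthetic data and hypothetical variable names (insider_own, leverage, inst_own); the paper's actual ratings data, controls, and estimator are not reproduced here.

```python
import numpy as np

# Illustrative sketch only: synthetic data, hypothetical variable names.
rng = np.random.default_rng(0)
n = 3000  # mirrors the sample of the largest 3,000 U.S. corporations

insider_own = rng.uniform(0.0, 0.6, n)  # fraction of shares held by insiders
leverage = rng.uniform(0.0, 0.9, n)     # debt-to-assets ratio
inst_own = rng.uniform(0.0, 0.8, n)     # institutional ownership

# Simulate a binary social rating consistent with the stated hypothesis:
# negative loadings on insider ownership and leverage, none on
# institutional ownership.
latent = 0.5 - 1.2 * insider_own - 0.8 * leverage + rng.normal(0.0, 0.3, n)
sr = (latent > 0).astype(float)  # 1 = socially responsible (SR), 0 = SI

# Linear probability model estimated by ordinary least squares.
X = np.column_stack([np.ones(n), insider_own, leverage, inst_own])
beta, *_ = np.linalg.lstsq(X, sr, rcond=None)
# Hypothesized pattern: beta[1] < 0, beta[2] < 0, beta[3] close to zero
```

On data generated this way, the estimated slopes on insider ownership and leverage come out negative while the slope on institutional ownership stays near zero, which is the qualitative pattern the abstract reports.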
Contributors to the recent disagreement debate have sought to provide a uniform response to cases in which epistemic peers disagree about the epistemic import of a shared body of evidence, no matter what kind of evidence they are disagreeing about. The varied cases addressed in the literature have included examples of disagreement about restaurant bills, court verdicts, weather forecasting, chess, morality, religious beliefs, and even disagreements about philosophical disagreements. The equal treatment of these varied cases has motivated the search for a uniform response to peer disagreement wherever it is encountered. In this article I challenge this prevalent approach in the literature. I grant the notion of an epistemic peer and accept that being a peer may amount to the same thing in different domains; nonetheless, I contend that different domains appear to call for different responses to disagreement. I argue that the appropriate response to finding out about a disagreement with a peer is different in different domains.
Using the Stroop paradigm, we have previously shown that a specific suggestion can remove or reduce involuntary conflict and alter information processing in highly suggestible individuals (HSIs). In the present study, we carefully matched less suggestible individuals (LSIs) to HSIs on a number of factors. We hypothesized that suggestion would influence HSIs more than LSIs and reduce the Stroop effect in the former group. We also conducted secondary post hoc analyses to examine negative priming (NP) – the apparent disruption of the response to a previously ignored item. Our present findings indicate that suggestion reduces Stroop effects in HSIs. Secondary analyses show that LSIs had an NP effect at baseline and that suggestion influenced the NP condition. Thus, at least in this experimental context, suggestion seems to dampen a deeply ingrained and largely automatic process – reading – by wielding a larger influence on HSIs relative to comparable LSIs.
Ptolemy presents only one argument for the eccentricity in his models of the superior planets, while each of them has two eccentricities: one for the center of uniform motion, the other for the center of constant distance. To account for the first eccentricity, he introduces the equant point, but he provides no argument for the eccentricity of the center of the deferent. Why is the second eccentricity different from the first? The 13th-century astronomer Quṭb al-Dīn al-Shīrāzī, a member of the famous school of Marāgha, who was interested in this problem, suggests the “retrograde arcs” as the empirical origin of the second eccentricity and develops an argument to justify this conjecture. Although his argument is not without difficulty, his suggestion is in line with those made by some historians of astronomy in recent decades.
The acquaintance principle (AP) and the view it expresses have recently been tied to a debate surrounding the possibility of aesthetic testimony, which, plainly put, deals with the question whether aesthetic knowledge can be acquired through testimony—typically aesthetic and non-aesthetic descriptions communicated from person to person. In this context a number of suggestions have been put forward opting for a restricted acceptance of AP. This paper is an attempt to restrict AP even more.
Cognitive scientists distinguish between automatic and controlled mental processes. Automatic processes are either innately involuntary or become automatized through extensive practice. For example, reading words is a purportedly automatic process for proficient readers, and the Stroop effect is consequently considered the “gold standard” of automated performance. Although the question of whether it is possible to regain control over an automatic process is mostly unasked, we provide compelling data showing that posthypnotic suggestion reduced and even removed Stroop interference in highly hypnotizable individuals. Drawing on a large sample of highly hypnotizable participants, we examined the effects of suggestion on Stroop performance both with and without a posthypnotic suggestion to perceive the input stream as meaningless symbols. We show that suggestion administered to highly hypnotizable persons significantly reduced Stroop interference and derailed a seemingly automatic process.
Religious diversity is a key topic in contemporary philosophy of religion. One way religious diversity has been of interest to philosophers is in the epistemological questions it gives rise to. In other words, religious diversity has been seen to pose a challenge for religious belief. In this study four approaches to dealing with this challenge are discussed. These approaches correspond to four well-known philosophers of religion, namely, Richard Swinburne, Alvin Plantinga, William Alston, and John Hick. The study is concluded by suggesting four factors which shape one’s response to the challenge religious diversity poses to religious belief.
This paper shows how we can plausibly extend the guise of the good thesis in a way that avoids the intellectualist challenge, allows animals to be included, and is consistent with the possibility of performing actions under the cognition of their badness. The paper also presents some independent arguments for the plausibility of this interpretation of the thesis. To this end, a teleological conception of practical attitudes, as well as a cognitivist account of arational desires, is offered.
Plagiarism has been on the rise amongst university students in recent decades. This study puts university teachers in the spotlight and investigates their role in raising students’ awareness about plagiarism. To that end, plagiarism policies in 207 Iranian university TEFL teachers’ syllabuses were analyzed. The researchers analyzed the syllabuses to find out whether they contained a plagiarism policy and, if so, how the term was defined; whether they approached the issue of plagiarism directly; whether they offered students any guidelines on how to avoid plagiarism; and whether the consequences of committing plagiarism were specified. The results indicated that the majority of the syllabuses lacked a plagiarism policy, and those that did include a policy were often vague in their definition of the phenomenon. However, when there was a plagiarism policy in the syllabuses, the teachers tried to address the issue directly half of the time and offered students brief guidelines on how to avoid plagiaristic behavior, which was a small step in the right direction. It is recommended that other higher education institutions make it obligatory for their academic staff to include a plagiarism policy in their syllabuses if they wish to cultivate academic integrity in students.
Recent data indicate that under a specific posthypnotic suggestion to circumvent reading, highly suggestible subjects successfully eliminated the Stroop interference effect. The present study examined whether an optical explanation could account for this finding. Using cyclopentolate hydrochloride eye drops to pharmacologically prevent visual accommodation in all subjects, behavioral Stroop data were collected from six highly hypnotizable and six less suggestible subjects using an optical setup that guaranteed either sharply focused or blurred vision. The highly suggestible subjects performed the Stroop task when naturally vigilant, under posthypnotic suggestion not to read, and while visually blurred; the less suggestible subjects ran naturally vigilant, while looking away, and while visually blurred. Although visual accommodation was precluded for all subjects, posthypnotic suggestion effectively eliminated Stroop interference and was comparable to looking away in controls. These data strengthen the view that Stroop interference is neither robust nor inevitable and support the hypothesis that posthypnotic suggestion may exert a top-down influence on neural processing.
Moral realism faces two worries: How can we have knowledge of moral norms if they are independent of us, and why should we care about them if they are independent of the rational activities they govern? Kantian constitutivism tackles both worries simultaneously by claiming that practical norms are constitutive principles of practical reason. In particular, on Stephen Engstrom’s account, willing involves making a practical judgment. To will well, and thus to have practical knowledge (i.e., knowledge of what is good), the content of one’s will needs to conform to the formal presuppositions of practical knowledge. Practical norms are thus constitutive of practical knowledge. However, I will argue that the universality principles from which Engstrom derives the formal presuppositions of practical knowledge are reflectively and psychologically unavailable. As a result, they cannot help Kantian constitutivism provide an answer to moral realism's worries.
Computational properties, it is standardly assumed, are to be sharply distinguished from semantic properties. Specifically, while it is standardly assumed that the semantic properties of a cognitive system are externally or non-individualistically individuated, computational properties are supposed to be individualistic and internal. Yet some philosophers (e.g., Tyler Burge) argue that content impacts computation and, further, that environmental factors impact computation. Oron Shagrir has recently argued for these theses in a novel way and has given them novel interpretations. In this paper I present a conception of computation in cognitive science that takes Shagrir's conception as its starting point, but develops it further in various directions and strengthens it. I argue that the explanatory role of computational properties emerges from the idea that syntactic properties and the relevant external factors presented by cognitive systems compose wide computational properties. I also elaborate upon the notion of content that is in play, and argue that it is contents of the kind ascribed by transparent interpretations of content ascriptions that impact computation. This enables the thesis that external factors impact computation to rebuff the challenge based on the claim that psychology must be individualistic.
This paper proposes a reading of the history of equivalence in mathematics. The paper has two main parts. The first part focuses on a relatively short historical period when the notion of equivalence is about to be decontextualized but does not yet have a commonly agreed-upon name. The method for this part is rather straightforward: following the clues left by others for the ‘first’ modern use of equivalence. The second part focuses on a relatively long historical period when equivalence is experienced in context. The method for this part is to strip the ideas from their set-theoretic formulations and methodically examine the variations in the ways equivalence appears in some prominent historical texts. The paper reveals several critical differences in the conceptions of equivalence at different points in history that are at variance with the standard account of the mathematical notion of equivalence encompassing the concepts of equivalence relation and equivalence class.
Corporate Social Responsibility (CSR) has existed in name for over 70 years. It is practiced in many countries and studied in academia around the world. However, CSR is not a universally adopted concept, as it is understood differently despite increasing pressures for its incorporation into business practices. This lack of a clear definition is complicated by the use of ambiguous terms in the proffered definitions and by disputes among many of the national bodies legislating, mandating, or recommending CSR as to where corporate governance is best addressed. This article explores the definitions of CSR as published on the Internet by governments in four countries: the United Kingdom, France, the United States, and Canada. We look for a consensus of understanding in an attempt to propose a more universal framework to enhance international adoption and practice of CSR using the triple bottom line. Our results concur with the findings of both national and international bodies and suggest that, both within and among the countries in our study, there exists no clear definition of the concept of CSR. While there are some similarities, there are substantial differences that must be addressed. We present a number of proposals for a more universal framework to define CSR.
The purpose of the paper is to show that semantic externalism (the thesis that contents are not determined by "individualistic" features of mental states) is mistaken. Externalist thinking, it is argued, rests on two mistaken assumptions: the assumption that if there is an externalist way of describing a situation, the situation exemplifies externalism, and the assumption that cases in which a difference in the environment of an intentional state entails a difference in the state's intentional object are cases in which environmental factors determine the state's content. Exposing these mistakes leads to seeing that the conditions that are required for the truth of externalism are inconsistent.
I will argue that Raz’s defense of the doctrine of the guise of the good rests on an over-intellectualized account of action. Raz holds that attributing evaluative beliefs to agents is justified on explanatory grounds. I argue that this account fails to do justice to the first-personal character of action explanation. Moreover, I will argue that Raz’s account of action has its roots in his restrictive and over-intellectualized understanding of normative explanation. I will suggest that we can have a more plausible understanding of the guise of the good that is not over-intellectualized if we adopt a broader understanding of normative explanation.
In 1907, German anthropologist Theodor Mollison invented a unique method for racial differentiation, called ‘deviation curves’. By transforming anthropometric data matrices into graphs, Mollison’s method enabled the simultaneous comparison of a large number of physical attributes of individuals and groups. However, the construction of deviation curves was highly desultory, and their interpretation was prone to various visual misjudgements. Despite their methodological shortcomings, deviation curves became very popular among racial anthropologists. This positive reception not only stemmed from the method’s utility, but was also related to additional interests of its protagonists, which the method helped promote. Deviation curves provided a unique solution to the holistic–atomistic controversy in German anthropology. By giving separate measurements a consolidated visual form, they substantiated the idea that the attributes of certain social groups were part of distinct racial compounds. Deviation curves thus reinforced racial suppositions in the face of severe criticism of the ontological reality of race itself. Finally, deviation curves emphasized the biological singularity of disadvantaged human groups – Jews, Africans, and also women – and their divergence from ideologically defined physical norms. Disciplinary and social interests thus became intertwined in the formation of a scientific method, which is used to this day in physical anthropology.
The Kripkean conception of natural kinds (kinds are defined by essences that are intrinsic to their members and that lie at the microphysical level) indirectly finds support in a certain conception of a law of nature, according to which generalizations must have unlimited scope and be exceptionless to count as laws of nature. On my view, the kinds that constitute the subject matter of special sciences such as biology may very well turn out to be natural despite the fact that their essences fail to be microphysical or micro-based. On the causal conception of natural kinds I privilege, the naturalness of a kind is a function of the fact that it figures prominently in at least one causal law. However, there is a strong tendency among contemporary philosophers to assume that, in order to count as proper laws, generalizations must be exceptionless. Since most generalizations tracked down by the special sciences turn out not to fulfill this criterion, this conception of a law implies that most of the generalizations the special sciences trade in are not proper laws. It follows that, on this view, most if not all of the kinds the special sciences deal with turn out not to constitute natural kinds, understood as kinds to which bona fide laws apply. In order to establish that the non-microstructurally defined kinds that fall within the domain of enquiry of the special sciences are eligible for the status of natural kind, I must therefore establish that generalizations need not have unlimited scope and be exceptionless to count as laws of nature. This is precisely what I seek to do in this paper.
I begin by arguing that the question “what is a law of nature?” is most naturally interpreted as the question “what features must generalizations exhibit in order to ground scientific explanations?” and by offering reasons to believe that generalizations need not be exceptionless and have unlimited scope to play the crucial role laws have been thought to play in scientific explanation. Drawing on Sandra Mitchell’s [Mitchell, S. (2000). Philosophy of Science, 67, 242–265] and James Woodward’s [Woodward, J. (1997). Philosophy of Science, 64 (Proceedings), 524–541; Woodward, J. (2000). British Journal for the Philosophy of Science, 51(2), 197–254; Woodward, J. (2001). Philosophy of Science, 68, 1–20] work, I subsequently develop an alternative account of the criteria generalizations must satisfy in order to count as laws of nature, which at least some of the generalizations of the special sciences turn out to fulfill. I thus give credence to the idea that at least some of the kinds that fall within the domain of the special sciences figure in laws of nature, and I thereby restore the possibility that some special science kinds deserve to be deemed natural.
In this paper I present a criticism of Sarah Moss's recent proposal to use scoring rules as a means of reaching epistemic compromise in disagreements between epistemic peers that have encountered conflict. The problem I have with Moss's proposal is twofold. First, it appears to involve a double counting of epistemic value. Second, it isn't clear whether the notion of epistemic value that Moss appeals to actually involves the type of value that would be acceptable and unproblematic to regard as epistemic.
In this paper, we explore three separate questions that are relevant to assessing the prudential value of life in infants with severe life-limiting illness. First, what is the value or disvalue of a short life? Is it in the interests of a child to save her life if she will nevertheless die in infancy or very early childhood? Second, how does profound cognitive impairment affect the balance of positives and negatives in a child’s future life? Third, if the life of a child with life-limiting illness is prolonged, how much suffering will she experience, and can any of it be alleviated? Is there a risk that negative experiences for such a child will remain despite the provision of palliative care? We argue that both the subjective and objective components of well-being for children could be greatly reduced if they are anticipated to have a short life that is affected by profound cognitive impairment. This does not mean that their overall well-being will be negative, but rather that there may be a higher risk of negative overall well-being if they are expected to experience pain, discomfort, or distress. Furthermore, we point to some of the practical limitations of therapies aimed at relieving suffering, such that there is a risk that suffering will go partially or completely unrelieved. Taken together, these considerations imply that some life-prolonging treatments are not in the best interests of infants with severe life-limiting illness.
The paper argues that Jackson's knowledge argument fails to undermine physicalist ontology. First, it is argued that, as this argument stands, it begs the question. Second, it is suggested that, by supplementing the argument, this flaw can be remedied insofar as the argument is taken to be an argument against type-physicalism; however, it cannot be remedied insofar as the argument is taken to be an argument against token-physicalism. The argument cannot be supplemented so as to show that experiences have properties which are illegitimate from a physicalist perspective.
To sum up, then, both kinds of Putnam's arguments establish externalism, though they suffer from several defects. Yet I think Searle's discussion of these arguments contributes to our understanding of what makes externalism true, and forces us to accept a moderate version of externalism. Searle's own account of the TE story shows us, within a solipsistic outline, how two identical mental states can be directed towards different objects, and further, that the content-determination of indexical thoughts does not necessarily involve external factors. We are thus led to search elsewhere (i.e., neither in the nature of indexical thoughts nor in the mere fact of there being identical thoughts with different intentionalities) for what makes the thoughts in question ‘external’. Searle formulates the thesis that intension determines extension as asserting that intension sets certain conditions that anything has to meet in order to fall under its extension. I showed that this is a trivial and implausible understanding of that thesis. Yet it leads us to distinguish between an intension's setting conditions for falling under its extension and its fully determining such conditions, and thus to see in what sense externalism is true: in the sense that there are intensions that do not fully determine the conditions for falling under their extensions. Rather, they leave indeterminacies. This version of externalism is a moderate one, since though the intensions do not fully determine extensions, they, so to speak, determine their indeterminacies, by specifying the possible external facts that can complete the determination of extension. (The intensions, as I said, function like open sentences, and can be viewed as narrow contents.) So what's in the head plays a much more important role in determining content than Putnam takes it to play.
Searle's pointing out that Hilary's concepts ‘elm’ and ‘beech’ are different also contributes to seeing this phenomenon: we realize that in that case the difference between the concepts is what is responsible for the fact that the completions of the extension-determinations are different. I think that this way of viewing the facts shows that ‘the externalist turn’ is not a great revolution, and that with the help of the concept of narrow content we can accept it without abandoning the traditional views about the mind as the source of content, and without being embarrassed by the very idea of (realistic) belief-desire psychology.
We present and defend a view labeled “practiceism,” which provides a solution to the incompatibility problems. The classic incompatibility problem is the inconsistency of:

1. Someone who intentionally violates the rules of a game is not playing the game.
2. In many cases, players intentionally violate the rules as part of playing the game.

The problem has a normative counterpart:

1'. In normal cases, it is wrong for a player to intentionally violate the rules of the game.
2'. In many normal cases, it is not wrong for a player to intentionally violate the rules of the game.

According to both formalism and informalism, the rules of the game include the formal rules of the game. Both traditional positions avoid the incompatibility problems by rejecting 1 and 1'. Practiceism rejects 2 and 2': it maintains that the rules are the rules manifested in playing the game, not the formal rules. Practiceism presents two theses: …
In "Contents just are in the head" (Erkenntnis 54, pp. 321–324), I presented two arguments against the thesis of semantic externalism. In "Contents just aren't in the head" (Erkenntnis 58, pp. 1–6), Anthony Brueckner has argued that my arguments are unsuccessful, since they rest upon some misconceptions regarding the nature of this thesis. In the present paper I will attempt to clarify and strengthen the case against semantic externalism, and show that Brueckner misses the point of my arguments.
Fault interpretation is one of the routine processes used for subsurface structure mapping and reservoir characterization from 3D seismic data. Various techniques have been developed for computer-aided fault imaging in the past few decades; for example, the conventional methods of edge detection, curvature analysis, and red-green-blue rendering, and popular machine-learning methods such as the support vector machine (SVM), the multilayer perceptron (MLP), and the convolutional neural network (CNN). However, most of the conventional methods are performed at the sample level, with the local reflection pattern ignored, and are correspondingly sensitive to the coherent noise and processing artifacts present in seismic signals. The CNN has proven its efficiency in utilizing such local seismic patterns to assist seismic fault interpretation, but it is quite computationally intensive and often demands a higher hardware configuration. We have developed an innovative scheme for improving seismic fault detection by integrating the computationally efficient SVM/MLP classification algorithms with local seismic attribute patterns, here denoted as super-attribute-based classification. Its added value is verified through application to a 3D seismic data set over the Great South Basin in New Zealand, where the subsurface structure is dominated by polygonal faults. A good match is observed between the original seismic images and the detected lineaments, and the generated fault volume proves usable with existing advanced fault interpretation tools and modules, such as seeded picking and automatic extraction. We conclude that the improved performance of our scheme results from its two components. First, the SVM/MLP classifier is computationally efficient in parsing as many seismic attributes as interpreters specify and in maximizing the contributions from each attribute, which helps minimize the negative effects of using a less useful or "wrong" attribute. Second, the use of super attributes incorporates local seismic patterns into training a fault classifier, which helps exclude the random noises and/or artifacts of distinct reflection patterns.
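The core of the super-attribute idea, stacking each sample's local neighborhood of attribute values into one feature vector before feeding a conventional sample-level classifier, can be sketched as follows. This is a minimal, vertical-window-only illustration under stated assumptions, not the authors' implementation; the function name, window scheme, and edge handling are all hypothetical.

```python
import numpy as np

def super_attributes(attr_cube, half_win=2):
    """Stack each sample's local vertical neighborhood of attribute values
    into a single feature vector (a "super attribute").

    attr_cube : array of shape (n_traces, n_samples, n_attrs) holding
                per-sample seismic attributes (e.g., coherence, curvature).
    Returns   : array of shape (n_traces, n_samples, (2*half_win+1)*n_attrs).
    Samples near the top/bottom edges are handled by edge replication.
    """
    n_tr, n_sm, n_at = attr_cube.shape
    padded = np.pad(attr_cube, ((0, 0), (half_win, half_win), (0, 0)),
                    mode="edge")
    win = 2 * half_win + 1
    # For each offset k in the window, take the shifted view of the cube,
    # then stack the window positions along a new axis.
    feats = np.stack([padded[:, k:k + n_sm, :] for k in range(win)], axis=2)
    return feats.reshape(n_tr, n_sm, win * n_at)

# The flattened feature rows can then be fed, with fault/non-fault labels,
# to any computationally cheap classifier such as an SVM or MLP.
cube = np.random.rand(10, 50, 3)            # 10 traces, 50 samples, 3 attrs
X = super_attributes(cube).reshape(-1, 15)  # one feature row per sample
```

Because the classifier now sees a (2·half_win + 1)-sample reflection pattern rather than one isolated sample, a noise spike that corrupts a single attribute value is less likely to flip the fault/non-fault decision, which is the robustness benefit the abstract attributes to the super-attribute component.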
Jonathan Schaffer has provided three putative counterexamples to the transitivity of grounding, and has argued that a contrastive treatment of grounding is able to provide a resolution to them, which in turn provides some motivation for accepting such a treatment. In this article, I argue that one of these cases can easily be turned into a putative counterexample to a principle which Schaffer calls differential transitivity. Since Schaffer's proposed resolution rests on this principle, this presents a dilemma for the contrastivist: either he dismisses the third case, which weakens the motivation for accepting his treatment of grounding, or else he accepts it, in which case he is faced with a counterexample to a principle that his proposed resolution to the original cases depends on. In the remainder of the article, I argue that the prima facie most promising strategy the contrastivist could take, which is to place some restriction on which contrastive facts are admissible so as to rule out the purported counterexample to differential transitivity, faces some important difficulties. Although these difficulties are not insurmountable, they do pose a substantial challenge for the contrastivist.