The key question in this three-way debate is the role of the collectivity and of agency. Collins and Shrager debate whether cognitive psychology has, like the sociology of knowledge, always taken the mind to extend beyond the individual. They agree that, irrespective of the history, socialization is key to understanding the mind and that this is compatible with Clark’s position; the novelty in Clark’s “extended mind” position appears to be the role of the material rather than the role of other minds. Collins and Clark debate the relationship between self, agency, and the human collectivity. Collins argues that Clark’s extended mind fails to stress the asymmetry of the relationship between the self and its material “scaffolding.” Clark accepts that there is asymmetry but argues that an asymmetrical ensemble is sufficient to explain the self. Collins says that we know too little about the material world to pursue such a model to the exclusion of other approaches, including the view that both the collectivity and language have agency. The collectivity must be kept in mind! (Though what follows is a robust exchange of views, it is also a cooperative effort, the authors communicating “backstage” with each other to try to make the disagreements as clear and to the point as possible.)
A précis is presented of Randall Collins's book, The Sociology of Philosophies: A Global Theory of Intellectual Change. The book presents a sociological theory of intellectual networks that connect thinkers in chains of masters and pupils, colleagues and rivals, and of the internalized conversations that constitute the social processes of thinking. The theory is used to analyze long-term developments in the intellectual communities of philosophers in ancient Greece, ancient and medieval China and India, medieval and modern Japan, medieval Islam and Judaism, medieval Christendom, and modern Europe through the early 20th century.
This fascinating study in the sociology of science explores the way scientists conduct, and draw conclusions from, their experiments. The book is organized around three case studies: replication of the TEA-laser, detecting gravitational radiation, and some experiments in the paranormal. "In his superb book, Collins shows why the quest for certainty is disappointed. He shows that standards of replication are, of course, social, and that there is consequently no outside standard, no Archimedean point beyond society from which we can lever the intellects of our fellows."--Donald M. McCloskey, Journal of Economic Psychology "Collins is one of the genuine innovators of the sociology of scientific knowledge. . . . Changing Order is a rich and entertaining book."--Isis "The book gives a vivid sense of the contingent nature of research and is generally a good read."--Augustine Brannigan, Nature "This provocative book is a review of [Collins's] work, and an attempt to explain how scientists fit experimental results into pictures of the world. . . . A promising start for new explorations of our image of science, too often presented as infallibly authoritative."--Jon Turney, New Scientist.
The problem of the unity of the proposition is almost as old as philosophy itself, and was one of the central themes of early analytical philosophy, greatly exercising the minds of Frege, Russell, Wittgenstein, and Ramsey. The problem is how propositions or meanings can be simultaneously unities (single things) and complexes, made up of parts that are autonomous of the positions they happen to fill in any given proposition. The problem has been associated with numerous paradoxes and has motivated general theories of thought and meaning, but has eluded any consensual resolution; indeed, the problem is sometimes thought to be wholly erroneous, a result of atomistic assumptions we should reject. In short, the problem has been thought to be of merely historical interest. Collins argues that the problem is very real and poses a challenge to any theory of linguistic meaning. He seeks to resolve the problem by laying down some minimal desiderata on a solution and presenting a uniquely satisfying account. The first part of the book surveys and rejects extant 'solutions' and dismissals of the problem from (especially) Frege and Russell, and a host of more contemporary thinkers, including Davidson and Dummett. The book's second part offers a novel solution based upon the properties of a basic syntactic principle called 'Merge', which may be said to create objects inside objects, thus showing how unities can be both single things but also made up of proper parts. The solution is defended from both philosophical and linguistic perspectives. The overarching ambition of the book, therefore, is to strengthen the ties between current linguistics and contemporary philosophy of language in a way that is genuinely sensitive to the history of both fields.
This note briefly responds to Devitt’s (2008) riposte to Collins’s (2008a) argument that linguistic realism prima facie fails to accommodate unvoiced elements within syntax. It is argued that such elements remain problematic, for it is unclear how conventions might target the distribution of PRO and how they might explain the hierarchical structure that is presupposed by such distribution and is not witnessed in concrete strings.
Collins, John Francis
In October this year there are to be two events at the Vatican. Beginning on 7 October and going through to 28 October, bishops from all over the world are to gather at a Synod on 'New Evangelization for the Transmission of the Christian Faith.' On 11 October, midway through the Synod, the whole Church will mark the fiftieth anniversary of the opening of the Second Vatican Council. The bishops who are to gather this year at the Synod follow in the footsteps of the more than 2000 bishops who gathered at the Second Vatican Council. John XXIII opened the Second Vatican Council with the following words: 'Looked at one way there is the deposit of faith, or the truths which are contained in our doctrine which we venerate; looked at another way there is the way by which the same (the deposit of faith) is enunciated, both in its meaning and its spirit.' In a recent interview for Salt and Light Television the inaugural head of the Pontifical Council for the Promotion of the New Evangelisation, Archbishop Salvatore Fisichella, noted that what Vatican II did for the Church is still present in our community. Later in the interview the Archbishop stated that the 'New Evangelisation is not a new work, it is a new mentality; a new language, a new enthusiasm for announcing the gospel.' There is continuity between both the spirit and letter of the Archbishop's words recorded in 2012 and the words of John XXIII in opening Vatican II. That is, as a Church, what we are seeking is new ways to announce the meaning and spirit of the deposit of faith, the truths contained in doctrine. What would later be called the new evangelisation permeated Vatican II.
A range of positions persists concerning the proper interpretation of generative linguistics. The paper responds to recent work in this area that either weakly or strongly diverges from the non-contentful, internalist model presented in Collins (2008a). Against the sympathetic criticisms of Matthews (2008) and Smith (2008), it is argued that a crucial role for content in our understanding of linguistic theories remains obscure, although the discussion here will hopefully clarify the divergence between the parties as merely perspectival. Rey (2008) more strongly argues that the non-contentful model is prey to some classic complaints. The charges are rebutted. Finally, the position of Devitt (2008a, b) is considered. It is argued that his most recent presentation of his brand of realism fails to speak to the fundamental complaints levelled against it, especially as regards the putative role of conventions in the explanation of unvoiced syntax.
I respond to Selinger and Mix (Selinger, E. and Mix, J. 2004. On interactional expertise: Pragmatic and ontological considerations. Phenomenology and the Cognitive Sciences 3: 145–163), concentrating on their charges that Collins (Collins, H. M. 2004a. Interactional expertise as a third form of knowledge. Phenomenology and the Cognitive Sciences 3: 125–143) underrates the importance of interactional expertise as an expertise sui generis and that the paper fails to analyse the idea of embodiment sufficiently holistically, misleadingly treating the ‘body’ as no more than the linear sum of its parts.
The Allegiance of Thomas Hobbes offers a revisionist interpretation of Thomas Hobbes's evolving response to the English Revolution. It rejects the prevailing understanding of Hobbes as a consistent, if idiosyncratic, royalist, and vindicates the contemporaneous view that the publication of Leviathan marked Hobbes's accommodation with England's revolutionary regime. In sustaining these conclusions, Professor Collins foregrounds the religious features of Hobbes's writings, and maintains a contextual focus on the broader religious dynamics of the English Revolution itself. Hobbes and the Revolution are both placed within the tumultuous historical process that saw the emerging English state coercively secure jurisdictional control over national religion and the corporate church. Seen in the light of this history, Thomas Hobbes emerges as a theorist who moved with, rather than against, the revolutionary currents of his age. The strongest claim of the book is that Hobbes was motivated by his deep detestation of clerical power to break with the Stuart cause and to justify the religious policies of England's post-regicidal masters, including Oliver Cromwell.

Methodologically, Professor Collins supplements intellectual or linguistic contextual analysis with original research into Hobbes's biography, the prosopography of his associates, the reception of Hobbes's published works, and the nature of the English Revolution as a religious conflict. This multi-dimensional contextual approach produces, among other fruits: a new understanding of the political implications of Leviathan; an original interpretation of Hobbes's civil war history, Behemoth; a clearer picture of Hobbes's career during the neglected period of the 1650s; and a revisionist interpretation of Hobbes's reaction to the emergence of English republicanism.
By presenting Thomas Hobbes as a political actor within a precisely defined political context, Professor Collins has recovered the significance of Hobbes's writings as artefacts of the English Revolution.
Among the many philosophers who hold that causal facts are to be explained in terms of—or more ambitiously, shown to reduce to—facts about what happens, together with facts about the fundamental laws that govern what happens, the clear favorite is an approach that sees counterfactual dependence as the key to such explanation or reduction. The paradigm examples of causation, so advocates of this approach tell us, are examples in which events c and e—the cause and its effect—both occur, but: had c not occurred, e would not have occurred either. From this starting point ideas proliferate in a vast profusion. But the remarkable disparity among these ideas should not obscure their common foundation. Neither should the diversity of opinion about the prospects for a philosophical analysis of causation obscure their importance. For even those philosophers who see these prospects as dim—perhaps because they suffer post-Quinean queasiness at the thought of any analysis of any concept of interest—can often be heard to say such things as that causal relations among events are somehow “a matter of” the patterns of counterfactual dependence to be found in them. It was not always so. Thirty-odd years ago, so-called “regularity” analyses (so-called, presumably, because they traced back to Hume’s well-known analysis of causation as constant conjunction) ruled the day, with Mackie’s Cement of the Universe embodying a classic statement. But they fell on hard times, both because of internal problems—which we will review in due course—and because dramatic improvements in philosophical understanding of counterfactuals made possible the emergence of a serious and potent rival: a counterfactual analysis of causation resting on foundations firm enough to repel the kind of philosophical suspicion that had formerly warranted dismissal.
In this article, I examine and criticize John Searle's account of the relation between mind and body. Searle rejects dualism and argues that the traditional mind-body problem has a 'simple solution': mental phenomena are both caused by biological processes in the brain and are themselves features of the brain. More precisely, mental states and events are macro-properties of neurons in much the same way that solidity and liquidity are macro-properties of molecules. However, Searle also maintains that the mental is 'ontologically irreducible' to the physical, a view which follows from his understanding of the status and nature of consciousness. Consciousness is essential to the mind; subjectivity is essential to consciousness; and no purely objective, physical description of consciousness could ever capture or explain its essentially subjective character. None the less, Searle maintains that irreducibility is a 'trivial' result of our 'definitional practices' and is entirely compatible with his theory. I contend that this latter claim is based on an equivocation: Searle's conclusion only seems to follow because he alters and trivializes what philosophers ordinarily mean by 'reduction'. I also maintain that Searle's position is reductionist in the ordinary, nontrivial sense. For this reason, his theory fails to accommodate the subjective character of consciousness and fails to solve the traditional mind-body problem. Finally, I briefly discuss Searle's claim that he is not an epiphenomenalist, and argue that given the assumptions of his view there is no interesting causal role for consciousness in the physical world.
Intersectionality has attracted substantial scholarly attention in the 1990s. Rather than examining gender, race, class, and nation as distinctive social hierarchies, intersectionality examines how they mutually construct one another. I explore how the traditional family ideal functions as a privileged exemplar of intersectionality in the United States. Each of its six dimensions demonstrates specific connections between family as a gendered system of social organization, racial ideas and practices, and constructions of U.S. national identity.
Most content externalists concede that even if externalism is compatible with the thesis that one has authoritative self-knowledge of thought contents, it is incompatible with the stronger claim that one is always able to tell by introspection whether two of one’s thought tokens have the same, or different, content. If one lacks such authoritative discriminative self-knowledge of thought contents, it would seem that brute logical error – non-culpable logical error – is possible. Some philosophers, such as Paul Boghossian, have argued that this would present a serious problem for externalism, forcing the externalist to overhaul our norms of rationality. I consider several externalist strategies to block this possibly unhappy epistemological consequence, but I argue that they all fail.
A counterfactual is a conditional statement in the subjunctive mood. For example: If Suzy hadn’t thrown the rock, then the bottle wouldn’t have shattered. The philosophical importance of counterfactuals stems from the fact that they seem to be closely connected to the concept of causation. Thus it seems that the truth of the above conditional is just what is required for Suzy’s throw to count as a cause of the bottle’s shattering. If philosophers were reluctant to exploit this idea prior to 1970, it was because of a widespread feeling that the truth-conditions of the counterfactual conditional were not sufficiently well understood. The development of a formal semantics for counterfactuals by Robert Stalnaker and David Lewis [1973b] stands as a major recent achievement in philosophical logic.
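The Stalnaker–Lewis semantics credited here can be sketched in one clause; the notation (the box-arrow for the counterfactual conditional, a similarity ordering of worlds) is standard in the literature but assumed for exposition rather than drawn from the text:

```latex
% Lewis-style truth condition for the counterfactual A []-> C at world w
% (requires amsmath and amssymb):
\[
  w \Vdash A \mathrel{\Box\!\!\rightarrow} C
  \;\iff\;
  \begin{cases}
    \text{vacuously true,} & \text{if there is no } A\text{-world;} \\
    \text{some } (A \wedge C)\text{-world is closer to } w & \\
    \quad \text{than any } (A \wedge \neg C)\text{-world,} & \text{otherwise.}
  \end{cases}
\]
```

On this semantics the opening example holds just in case some world where Suzy refrains and the bottle survives is more similar to actuality than any world where she refrains and it shatters anyway.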
Newcomb’s problem is a decision puzzle whose difficulty and interest stem from the fact that the possible outcomes are probabilistically dependent on, yet causally independent of, the agent’s options. The problem is named for its inventor, the physicist William Newcomb, but first appeared in print in a 1969 paper by Robert Nozick. Closely related to, though less well-known than, the Prisoners’ Dilemma, it has been the subject of intense debate in the philosophical literature. After three decades, the issues remain unresolved. Newcomb’s problem is of genuine importance because it poses a challenge to the theoretical adequacy of orthodox Bayesian decision theory. It has led both to the development of causal decision theory and to efforts aimed at defending the adequacy of the orthodox theory.
I defend a theory of mental representation that satisfies naturalistic constraints. Briefly, we begin by distinguishing (i) what makes something a representation from (ii) given that a thing is a representation, what determines what it represents. Representations are states of biological organisms, so we should expect a unified theoretical framework for explaining both what it is to be a representation as well as what it is to be a heart or a kidney. I follow Millikan in explaining (i) in terms of teleofunction, explicated in terms of natural selection.

To explain (ii), we begin by recognizing that representational states do not have content, that is, they are neither true nor false except insofar as they both “point to” or “refer” to something, as well as “say” something regarding whatever it is they are about. To distinguish veridical from false representations, there must be a way for these separate aspects to come apart; hence, we explain (ii) by providing independent theories of what I call f-reference and f-predication (the ‘f’ simply connotes ‘fundamental’, to distinguish these things from their natural language counterparts).

Causal theories of representation typically founder on error, or on what Fodor has called the disjunction problem. Resemblance or isomorphism theories typically founder on what I’ve called the non-uniqueness problem, which is that isomorphisms and resemblance are practically unconstrained and so representational content cannot be uniquely determined. These traditional problems provide the motivation for my theory, the structural preservation theory, as follows. F-reference, like reference, is a specific, asymmetric relation, as is causation. F-predication, like predication, is a non-specific relation, as predicates typically apply to many things, just as many relational systems can be isomorphic to any given relational system.
Putting these observations together, a promising strategy is to explain f-reference via causal history and f-predication via something like isomorphism between relational systems.

This dissertation should be conceptualized as having three parts. After motivating and characterizing the problem in chapter 1, the first part is the negative project, where I review and critique Dretske’s, Fodor’s, and Millikan’s theories in chapters 2-4. Second, I construct my theory about the nature of representation in chapter 5 and defend it from objections in chapter 6. In chapters 7-8, which constitute the third and final part, I address the question of how representation is implemented in biological systems. In chapter 7 I argue that single-cell intracortical recordings taken from awake Macaque monkeys performing a cognitive task provide empirical evidence for structural preservation theory, and in chapter 8 I use the empirical results to illustrate, clarify, and refine the theory.
Managers of organizations should be aware of the attitudes of employees concerning whistleblowing. Employee views should affect how employers choose to respond to whistleblowers through the evolving law of wrongful discharge. This article reports on a survey of employee attitudes toward the legal protection of whistleblowers and presents an analysis of the results of that survey.
The dead donor rule justifies current practice in organ procurement for transplantation and states that organ donors must be dead prior to donation. The majority of organ donors are diagnosed as having suffered brain death and hence are declared dead by neurological criteria. However, a significant amount of unrest in both the philosophical and the medical literature has surfaced since this practice began forty years ago. I argue, first, that declaring death by neurological criteria is both unreliable and unjustified but, further, that the ethical principles which themselves justify the dead donor rule are better served by abandoning that rule and instead allowing individuals who have suffered severe and irreversible brain damage to become organ donors, even though they are not yet dead and even though the removal of their organs would be the proximal cause of death.
Borg (2009) surveys and rejects a number of arguments in favour of semantic internalism. This paper, in turn, surveys and rejects all of Borg's anti-internalist arguments. My chief moral is that, properly conceived, semantic internalism is a methodological doctrine that takes its lead from current practice in linguistics. The unifying theme of internalist arguments, therefore, is that linguistics neither targets nor presupposes externalia. To the extent that this claim is correct, we should be internalists about linguistic phenomena, including semantics.
Jerry Fodor argues that the massive modularity thesis – the claim that (human) cognition is wholly served by domain specific, autonomous computational devices, i.e., modules – is a priori incoherent, self-defeating. The thesis suffers from what Fodor dubs the input problem: the function of a given module (proprietarily understood) in a wholly modular system presupposes non-modular processes. It will be argued that massive modularity suffers from no such a priori problem. Fodor, however, also offers what he describes as a really real input problem (i.e., an empirical one). It will be suggested that this problem is real enough, but it does not selectively strike down massive modularity – it is a problem for everyone.
In his opening case, Quentin Smith has presented an ingenious argument for the claim that the universe is self-caused, and hence its existence is self-explanatory. He then goes on to claim that the fact that the universe is self-caused, and hence self-explanatory, is inconsistent with theism. His main argument is based on the assumption that each temporal part of the universe has an explanation in terms of the temporal parts existing prior to it. The fundamental temporal parts that Smith uses are instantaneous universe states. Before going into Smith's argument, we need to mention two technicalities that the impatient reader may skip.
I argue that the dispute between two leading theories of the interpretation of legal texts, textual originalism and textual evolutionism, depends on the false presupposition that changes in the way a word is used necessarily require a change in the word's meaning. Semantic externalism goes a long way towards reconciling these views by showing how a word's semantic properties can be stable over time, even through vicissitudes of usage. I argue that temporal externalism can account for even more semantic stability, however. Temporal externalism is the theory that the content of an utterance at time t may be determined by developments in linguistic usage subsequent to t. If this semantic theory is correct, then the originalist and evolutionist positions effectively collapse. Originalism is correct in that the original meaning of the text is the meaning that is binding on jurists, but evolutionism is vindicated, as it is the current practices and standards that determine the meaning the text now has, and has always had. Objections to temporal externalism, and to its application to the interpretation of legal texts, are considered and addressed.
We explicate representational content by addressing how representations that explain intelligent behavior might be acquired through processes of Darwinian evolution. We present the results of computer simulations of evolved neural network controllers and discuss the similarity of the simulations to real-world examples of neural network control of animal behavior. We argue that focusing on the simplest cases of evolved intelligent behavior, in both simulated and real organisms, reveals that evolved representations must carry information about the creature’s environments and further can do so only if their neural states are appropriately isomorphic to environmental states. Further, these informational and isomorphism relations are what are tracked by content attributions in folk-psychological and cognitive scientific explanations of these intelligent behaviors.
Temporal externalism (TE) is the thesis (defended by Jackman (1999)) that the contents of some of an individual’s thoughts and utterances at time t may be determined by linguistic developments subsequent to t. TE has received little discussion so far, Brown 2000 and Stoneham 2002 being exceptions. I defend TE by arguing that it solves several related problems concerning the extension of natural kind terms in scientifically ignorant communities. Gary Ebbs (2000) argues that no theory can reconcile our ordinary, practical judgments of sameness of extension over time with the claim that linguistic usage determines word extensions. I argue that Ebbs shows at most that no theory other than TE can effect this reconciliation. Furthermore, while Ebbs’ argument undermines Jessica Brown’s solutions to two closely related problems about natural kind term extensions (Brown 1998), TE can solve both problems without difficulty. Some criticisms of TE are briefly addressed as well.
Psychologists and philosophers tend to treat expertise as a property of special individuals. These are individuals who have devoted much more time than the general population to the acquisition of their specific expertises. They are often said to pass through stages as they move toward becoming experts, for example, passing from an early stage, in which they follow self-conscious rules, to an expert stage in which skills are executed unconsciously. This approach is ‘one-dimensional’. Here, two extra dimensions are added. They are drawn from the programme known as Studies of Expertise and Experience (SEE) and its ‘Periodic Table of Expertises’. SEE, which is sociological, and/or Wittgensteinian, in inspiration, takes expertise to be the property of groups; there are ‘domains’ of expertise. Under SEE, level of expertise grows with embedding in the society of domain experts; the key is the transmission of domain-specific tacit knowledge. Thus, one extra dimension is degree of exposure to tacit knowledge. Under SEE, domains can be big or small so there can be ‘ubiquitous tacit knowledge’, such as natural-language-speaking or other elements of general social behaviour, which belong to every member of a society. The second extra dimension is, therefore, ‘esotericity’. The resulting three-dimensional ‘expertise-space’ can be explored in a number of ways which reveal the narrowness of the analysis and the mistakes that have been made under the one-dimensional model.
This paper has as its topic two recent philosophical disputes. One of these disputes is internal to the project known as decision theory, and while by now familiar to many, may well seem to be of pressing concern only to specialists. It has been carried on over the last twenty years or so, but by now the two opposing camps are pretty well entrenched in their respective positions, and the situation appears to many observers (as well as to some of the parties involved) to have reached a sort of stalemate. The second of these two disputes is, on the other hand, very much alive. While it has been framed in decision theoretic terms, it is definitely not a dispute internal to that enterprise. It is, rather, a debate about the very coherence of the notion of objective value, and as such touches on issues of central importance to, for example, meta–ethics and moral psychology.
There is an incompatibility between the deflationist approach to truth, which makes truth transparent on the basis of an antecedent grasp of meaning, and the traditional endeavour, exemplified by Davidson, to explicate meaning through a theory of truth. I suggest that both parties are in the explanatory red: deflationists lack a non-truth-involving theory of meaning and Davidsonians lack a non-deflationary account of truth. My focus is on the attempts of the latter party to resolve their problem. I look in detail at Davidson's more recent work and suggest that it seeks to articulate a primitive notion of truth that may balance between a notion that collapses into deflationism and one that is wholly subsumed under a general theory of interpretation. I conclude that this tightrope walk is ultimately unsuccessful. Some reasons are provided, however, for thinking that deflationism may be equally unsuccessful with its own problem. 'Truth or meaning?' remains an open question.
Knowledge entails the truth of the proposition known; that which is merely believed may be false. If I have beliefs about your beliefs, then I may believe that some of your beliefs are false. I may believe, for example, that you mistakenly believe that it is now raining outside. This is a coherent belief for me, though not for you. You cannot coherently believe that you believe falsely that it is raining, and this despite the fact that your having that false belief is clearly a logical possibility. The proposition is, for you, a kind of doxastic blindspot.
Humean supervenience is the doctrine that there are no necessary connections in the world. David Lewis identifies one 'big bad bug' in the programme of providing Humean analyses for apparently non-Humean features of the world. The bug is chance. We put the bug under the microscope, and conclude that chance is no special problem for the Humean.
Rationalizations of deliberation often make reference to two kinds of mental state, which we call belief and desire. It is worth asking whether these kinds are necessarily distinct, or whether it might be possible to construe desire as belief of a certain sort — belief, say, about what would be good. An expected value theory formalizes our notions of belief and desire, treating each as a matter of degree. In this context the thesis that desire is belief might amount to the claim that the degree to which an agent desires any proposition A equals the degree to which the agent believes the proposition that A would be good. We shall write this latter proposition ‘A◦’ (pronounced ‘A halo’). The Desire-as-Belief Thesis states, then, that to each proposition A there corresponds another proposition A◦, where the probability of A◦ equals the expected value of A.
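The closing definition can be put in a single line; the symbols V and P below stand for the agent's expected value and subjective probability functions, notation assumed for exposition rather than taken from the text:

```latex
% Desire-as-Belief (DAB): for every proposition A there is a
% proposition A° ("A halo") such that the agent's degree of desire
% for A equals her degree of belief that A would be good:
\[
  V(A) \;=\; P(A^{\circ})
\]
```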
My paper defends the use of the poverty of stimulus argument (POSA) for linguistic nativism against Cowie's (1999) counter-claim that it leaves empiricism untouched. I first present the linguistic POSA as arising from a reflection on the generality of the child's initial state in comparison with the specific complexity of its final state. I then show that Cowie misconstrues the POSA as a direct argument about the character of the pld (primary linguistic data). In this light, I first argue that the data Cowie marshals about the pld do not begin to suggest that the POSA is unsound. Second, through a discussion of the so-called `auxiliary inversion rule', I show, by way of diagnosis, that Cowie misunderstands both the methodology of current linguistics and the complexity of the data it is obliged to explain.
Jerry Fodor, among others, has maintained that Chomsky's language faculty hypothesis is an epistemological proposal, i.e. the faculty comprises propositional structures known (cognized) by the speaker/hearer. Fodor contrasts this notion of a faculty with an architectural (directly causally efficacious) notion of a module. The paper offers an independent characterisation of the language faculty as an abstractly specified nonpropositional structure of the mind/brain that mediates between sound and meaning—a function in intension that maps to a pair of structures that determine sound-meaning convergence. This conception will be elaborated and defended against a number of likely complaints deriving from Fodor's faculty/module distinction and other positions which seek to credit knowledge of language with an empirical or theoretical significance. A recent explicit argument from Fodor that Chomsky must share his conception will be diagnosed and the common appeal to implicit knowledge as a foundation for linguistic competence will be rejected.
Sociology is split into two antagonistic or mutually oblivious wings: quantitative and nonquantitative. Statistics does not occupy a privileged methodological position vis-a-vis qualitative, verbal sociology. Probability is a theory like any other, and each statistical method contains its particular theoretical bias. Such biases should be brought into the open and tested. Statistics may continue to be useful, though, as a substantive theory of change processes in the social world. A reorientation in our views of statistics may bring mathematical and antimathematical branches of sociology back into a common enterprise.
Moral hypocrisy is motivation to appear moral yet, if possible, avoid the cost of actually being moral. In business, moral hypocrisy allows one to engender trust, solve the commitment problem, and still relentlessly pursue personal gain. Indicating the power of this motive, research has provided clear and consistent evidence that, given the opportunity, many people act to appear fair (e.g., they flip a coin to distribute resources between themselves and another person) without actually being fair (they accept the flip only if it favors them). New evidence also indicates the power of moral hypocrisy in a situation more obviously relevant to business: resource allocation when one party has information about relative resource value that the other does not. Characteristics of modern business situations likely to encourage moral hypocrisy are outlined. We conclude that moral hypocrisy is not only a pragmatic virtue in modern business but is also fast becoming a prescriptive one.
Much of the best contemporary work in the philosophy of language and content makes appeal to the theories developed in generative syntax. In particular, there is a presumption that—at some level and in some way—the structures provided by syntactic theory mesh with or support our conception of content/linguistic meaning as grounded in our first-person understanding of our communicative speech acts. This paper will suggest that there is no such tight fit. Its claim will be that, if recent generative theories are on the right lines, syntactic structure provides both too much and too little to serve as the structural partner for content, at least as that notion is generally understood in philosophy. The paper will substantiate these claims by an assessment of the recent work of King, Stanley, and others.
Deflationism is perhaps the prevailing conception of truth within contemporary philosophy. The chief reason for this ascendancy, I think, is that deflationary theories present themselves as neutral between all disputes in epistemology and metaphysics. This offers deflationism a straightforward dialectical advantage over the more traditional theories that seek to explicate...
Do lexical items have internal structure that contributes to, or determines, the stable interpretation of their potential hosts? One argument in favour of the claim that lexical items are so structured is that certain putative verbs appear to be ‘impossible’, where the intended interpretation of them is apparently precluded by the character of their internal structure. The adequacy of such reasoning has recently been debated by Fodor and Lepore and Johnson, but to no apparent resolution. The present paper argues that such ‘impossible word arguments' for internal lexical structure, although not apodictic, do constitute inferences to the best explanation for such structure. Alternative explanations for the ‘impossible words' are considered and rejected.
The paper considers our ordinary mentalistic discourse in relation to what we should expect from any genuine science of the mind. A meta-scientific eliminativism is commended and distinguished from the more familiar eliminativism of Skinner and the Churchlands. Meta-scientific eliminativism views folk psychology qua folksy as unsuited to offer insight into the structure of cognition, although it might otherwise be indispensable for our social commerce and self-understanding. This position flows from a general thesis that scientific advance is marked by an eschewal of folk understanding. The latter half of the paper argues that, contrary to the received view, Chomsky's review of Skinner offers not just an argument against Skinner's eliminativism, but, more centrally, one in favour of the second eliminativism.
In recent years, a number of philosophers have argued against a biological understanding of the innate in favor of a narrowly psychological notion. On the other hand, Ariew (1996, "Innateness and canalization", Philosophy of Science, 63, S19–S27; 1999, "Innateness is canalization: In defense of a developmental account of innateness", in V. Hardcastle (Ed.), Where biology meets psychology: Philosophical essays, pp. 117–138, Cambridge, MA: MIT Press) has developed a novel substantial account of innateness based on developmental biology: canalization. The governing thought of this paper is that the notion of the innate, as it re-emerged with the work of Chomsky, is a general notion that applies equally to all biological traits. On this basis, the paper recommends canalization as a promising candidate account of the notion of the innate.
To become an expert in a technical domain means acquiring the tacit knowledge pertaining to the relevant domain of expertise, at least, according to the programme known as “Studies of Expertise and Experience” (SEE). We know only one way to acquire tacit knowledge and that is through some form of sustained social contact with the group that has it. Those who do not have such contact cannot acquire the expertise needed to make technical judgments. They can, however, use social expertise to judge between experts or expert claims. Where social expertise is used to make technical judgments we refer to it as “transmuted expertise”. The various kinds of transmuted expertise are described and analysed.
Ecological research and conservation practice frequently raise difficult and varied ethical questions for scientific investigators and managers, including duties to public welfare, nonhuman individuals (i.e., animals and plants), populations, and ecosystems. The field of environmental ethics has contributed much to the understanding of general duties and values to nature, but it has not developed the resources to address the diverse and often unique practical concerns of ecological researchers and managers in the field, lab, and conservation facility. The emerging field of “ecological ethics” is a practical or scientific ethics that offers a superior approach to the ethical dilemmas of the ecologist and conservation manager. Even though ecological ethics necessarily draws from the principles and commitments of mainstream environmental ethics, it is normatively pluralistic, including as well the frameworks of animal, research, and professional ethics. It is also methodologically pragmatic, focused on the practical problems of researchers and managers and informed by these problems in turn. The ecological ethics model offers environmental scientists and practitioners a useful analytical tool for identifying, clarifying, and harmonizing values and positions in challenging ecological research and management situations. Just as bioethics provides a critical intellectual and problem-solving service to the biomedical community, ecological ethics can help inform and improve ethical decision making in the ecology and conservation communities.
"It as little occurs to me to get involved in the philosophical quarrels and arguments of my times as to go down an alley and take part in a scuffle when I see the mob fighting there." — Arthur Schopenhauer, 1828-30, 'Adversaria', in Manuscript Remains, Vol. 3: Berlin Manuscripts (1818-1830). Oxford: Berg Publishers.
The concept that people have of themselves as a 'person' is one of the most intimate notions that they hold. Yet the way in which the category of the person is conceived varies over time and space. In this volume, anthropologists, philosophers, and historians examine the notion of the person in different cultures, past and present. Taking as their starting point a lecture on the person as a category of the human mind, given by Marcel Mauss in 1938, the contributors critically assess Mauss's speculation that notions of the person, rather than being primarily philosophical or psychological, have a complex social and ideological origin. Discussing societies ranging from ancient Greece, India, and China to modern Africa and Papua New Guinea, they provide fascinating descriptions of how these different cultures define the person. But they also raise deeper theoretical issues: What is universally constant and what is culturally variable in people's thinking about the person? How can these variations be explained? Has there been a general progressive development toward the modern Western view of the person? What is distinctive about this? How do one's notions of the person inform one's ability to comprehend alternative formulations? These questions are of compelling interest for a wide range of anthropologists, philosophers, historians, psychologists, sociologists, orientalists, and classicists. The book will appeal to any reader concerned with understanding one of the most fundamental aspects of human existence.
Prinz (Furnishing the Mind: Concepts and Their Perceptual Basis, MIT Press, 2002) presents a new species of concept empiricism, under which concepts are off-line long-term memory networks of representations that are ‘copies’ of perceptual representations – proxytypes. An apparent obstacle to any such empiricism is the prevailing nativism of generative linguistics. The paper critically assesses Prinz’s attempt to overcome this obstacle. The paper argues that, prima facie, proxytypes are as incapable of accounting for the structure of the linguistic mind as are the more traditional species of empiricism. This position is then confirmed by looking in detail at two suggestions (one derived from recent connectionist research) from Prinz of how certain aspects of syntactic structure might be accommodated by the proxytype theory. It is shown that the suggestions fail to come to terms with both the data and theory of contemporary linguistics.
I argue for a cognitive architecture in which folk psychology is supported by an interface of a ToM module and the language faculty, the latter providing the former with interpreted LF structures which form the content representations of ToM states. I show that LF structures satisfy a range of key features asked of contents. I confront this account of ToM with eliminativism and diagnose and combat the thought that "success" and innateness are inconsistent with the falsity of folk psychology. I show that, while my ensemble account of ToM and language refutes the culturalist presuppositions that tend to underlie eliminativist arguments, the falsity of folk psychology is consistent with the account.
A plausible thought about vagueness is that it involves a form of semantic incompleteness. To say that a predicate is vague is to say (at the very least) that its extension is incompletely specified. And where there is incomplete specification of extension there is indeterminacy—an indeterminacy between various ways that the specification of the predicate might be completed or, as some like to say, sharpened (or precisified). We shall argue that this idea is defective insofar as there are vague predicates that cannot be sharpened. At least, there are predicates that are vague but that cannot be sharpened in such a way as to meet certain basic constraints that we think must be imposed on the very notion of a sharpening.
Philosophical discussions of apologies have focused on apologizing for wrong actions. Such a focus overlooks an important dimension of moral failures, namely, failures of character. However, when one attempts to revise the standard account of apology to make room for failures of character, two objections emerge. The first is rooted in the psychology of shame. The second stems from the purported social function of apologies. This paper responds to these objections and, in so doing, sheds further light both on why we apologize (when we are in the wrong) and on why we accept apologies (when others are).
Conflict produces group solidarity in four phases: (1) an initial few days of shock and idiosyncratic individual reactions to attack; (2) one to two weeks of establishing standardized displays of solidarity symbols; (3) two to three months of high solidarity plateau; and (4) gradual decline toward normalcy in six to nine months. Solidarity is not uniform but is clustered in local groups supporting each other's symbolic behavior. Actual solidarity behaviors are performed by minorities of the population, while vague verbal claims to performance are made by large majorities. Commemorative rituals intermittently revive high emotional peaks; participants become ranked according to their closeness to a center of ritual attention. Events, places, and organizations claim importance by associating themselves with national solidarity rituals and especially by surrounding themselves with pragmatically ineffective security ritual. Conflicts arise over access to centers of ritual attention; clashes occur between pragmatists deritualizing security and security zealots attempting to keep up the level of emotional intensity. The solidarity plateau is also a hysteria zone; as a center of emotional attention, it attracts ancillary attacks unrelated to the original terrorists as well as alarms and hoaxes. In particular historical circumstances, it becomes a period of atrocities.
The American Institute of Certified Public Accountants (AICPA) is responsible for the Code of Professional Conduct that governs the actions of CPAs. In 1988, the Code was revised by the AICPA, but a number of issues still remain unresolved or confounded by the new Code. These issues are examined in light of the profession's stated commitment to the public good, a commitment that is discussed at length in the new Code. Specifically, this paper reviews the following issues: (1) client confidentiality and whistleblowing, (2) limited liability, and (3) auditor independence. We argue that, in each of these areas, the AICPA promotes a position that is potentially harmful to the public good.
(i) Languages are indefinitely various along every dimension. (ii) Languages are essentially systems of habit/dispositions. (iii) Languages are learnt from experience via analogy and generalisation. (iv) There is no component of the speaker/hearer’s psychology that is specifically linguistic. (v) Syntactic relations are ones of surface immediate constituency. (vi) Linguistics is a descriptive/taxonomic science - there is nothing to explain.
In July 2008, Pacific Rim Mining, a socially responsive Canadian gold mining Multinational Corporation (MNC) with $77 million invested in El Salvador, experienced a 30% decline in stock price when it suspended exploration drilling for gold there. In April 2009, the company filed a lawsuit against the government of El Salvador through the Central American Free Trade Agreement to recover its investments plus damages. This corporate failure is explored based on: (1) four globalization economic development models, (2) the social, political, and economic history of El Salvador, (3) the El Salvador gold mining industry, and (4) social movement reactions to international mining companies. MNCs must carefully engage "Social Justice" Nongovernmental Organizations when pursuing economic development projects to ensure a nation's successful integration into the global economy.
Between formal propositional knowledge and embodied skill lies ‘interactional expertise’—the ability to converse expertly about a practical skill or expertise, but without being able to practice it, learned through linguistic socialisation among the practitioners. Interactional expertise is exhibited by sociologists of scientific knowledge, by scientists themselves and by a large range of other actors. Attention is drawn to the distinction between the social and the individual embodiment theses: a language does depend on the form of the bodies of its members but an individual within that community can learn the language without the body. The idea has significance for our understanding of colour-blindness, deafness and other abilities and disabilities.
In 1999, the Journal of Business Ethics published its 1,500th article. This article commemorates the journal's quest "to improve the human condition" (Michalos, 1988, p. 1) with a summary and assessment of the first eighteen volumes. The first part provides an overview of JBE, highlighting the journal's growth, types of methodologies published, and the breadth of the field. The second part provides a detailed account of the quantitative research findings. Major research topics include (1) prevalence of ethical behavior, (2) ethical sensitivities, (3) ethics codes and programs, (4) corporate social performance and policies, (5) human resource practices and policies, and (6) professions – accounting, marketing/sales, and finance/strategy. Much remains to be done.
Phenomenologists such as Merleau-Ponty have argued that the ordinary teleological relation between an embodied agent and the world is neither 'subjective' nor 'cognitive', i.e. that it is not normally mediated by a chain of explicit cognition occurring within a distinct mental subject. Yet, while this seems true from a first-person, phenomenological perspective, I argue that teleological forms of explanation require the ascription of Intentional states. Intentional states, however, are usually regarded as subjective, cognitive states. In order to reconcile the phenomenology with the logic of teleology, I introduce the notion of 'body-intentionality'. I maintain that we can use a modified version of Jonathan Bennett's concept of a teleological law to specify third-person empirical criteria for a pre-cognitive, pre-subjective kind of Intentionality. I also argue that this notion of body-intentionality provides us with at least a partial solution to the mind-body problem that avoids the inadequacies of the computational theory of mind.