Why ever assert clarity? If It is clear that p is true, then saying so should be at best superfluous. Barker and Taranto (2003) and Taranto (2006) suggest that asserting clarity reveals information about the beliefs of the discourse participants, specifically, that they both believe that p. However, mutual belief is not sufficient to guarantee clarity (It is clear that God exists). I propose instead that It is clear that p means (roughly) 'the publicly available evidence justifies concluding that p'. Then what asserting clarity reveals is information concerning the prevailing epistemic standard that determines whether a body of evidence is sufficient to justify a claim. If so, the semantics of clarity constitutes a grammatical window into the discourse dynamics of inference and skepticism.
We present a general theory of scope and binding in which both crossover and superiority violations are ruled out by one key assumption: that natural language expressions are normally evaluated (processed) from left to right. Our theory is an extension of Shan’s (2002) account of multiple-wh questions, combining continuations (Barker, 2002) and dynamic type-shifting. Like other continuation-based analyses, but unlike most other treatments of crossover or superiority, our analysis is directly compositional (in the sense of, e.g., Jacobson, 1999). In particular, it does not postulate a level of Logical Form or any other representation distinct from surface syntax. One advantage of using continuations is that they are the standard tool for modeling order-of-evaluation in programming languages. This provides us with a natural and independently motivated characterization of what it means to evaluate expressions from left to right. We give a combinatory categorial grammar that models the syntax and the semantics of quantifier scope and wh-question formation. It allows quantificational binding but not crossover, in-situ wh but not superiority violations. In addition, the analysis automatically accounts for a variety of sentence types involving binding in the presence of pied piping, including reconstruction cases such as Whose criticism of his_i mother did each person_i resent?
One of the lasting fruits of the widespread experience of the renewal in the Catholic Church since the Second Vatican Council has been the surprising emergence of new expressions of consecrated life. The Missionaries of God's Love (MGL) is an Australian example of this renaissance. Founded in Canberra in 1986 as a small fraternity of young men around a priest, the MGL brothers have now grown to more than twenty in final vows and more than thirty in formation. The MGL sisters, founded in Canberra in 1987, possess the same charism, but with a separate identity and expression. They currently have six in final vows and fifteen members. To understand the new ecclesial energy that has generated this resurgence of desire for consecrated life, it is necessary to examine the ecclesial context in which the Missionaries of God's Love was born. Three main movements of the Spirit have provided this new ecclesial environment, which has proved conducive to the birth of a new way of consecrated life.
In this paper I respond to Jacquette’s criticisms, in (Jacquette, 2008), of my (Barker, 2008). In so doing, I argue that the Liar paradox is in fact a problem about the disquotational schema, and that nothing in Jacquette’s paper undermines this diagnosis.
Whom a prime minister or president will not shake hands with is still more noticed than with whom they will. Public identity can afford to be ambiguous about friends, but not about enemies. Rodney Barker examines the available accounts of how enmity functions in the cultivation of identity, how essential or avoidable it is, and what the consequences are for the contemporary world.
The standard view about counterfactuals is that a counterfactual (A > C) is true if and only if the A-worlds most similar to the actual world @ are C-worlds. I argue that the worlds conception of counterfactuals is wrong. I assume that counterfactuals have non-trivial truth-values under physical determinism. I show that the possible-worlds approach cannot explain many embeddings of the form (P > (Q > R)), which intuitively are perfectly assertable, and which must be true if the contingent falsity of (Q > R) is to be explained. If (P > (Q > R)) has a backtracking reading then the contingent facts that (Q > R) needs to be true in the closest P-worlds are absent. If (P > (Q > R)) has a forward-tracking reading, then the laws required by (Q > R) to be true in the closest P-worlds will be absent, because they are violated in those worlds. Solutions like lossy laws or denial of embedding won't work. The only approach to counterfactuals that explains the embedding is a pragmatic metalinguistic approach in which the whole idea that counterfactuals are about a modal reality, be it abstract or concrete, is given up.
Negative facts get a bad press. One reason for this is that it is not clear what negative facts are. We provide a theory of negative facts on which they are no stranger than positive atomic facts. We show that none of the usual arguments hold water against this account. Negative facts exist in the usual sense of existence and conform to an acceptable Eleatic principle. Furthermore, there are good reasons to want them around, including their roles in causation, chance-making and truth-making, and in constituting holes and edges.
I offer a new theory of faultless disagreement, according to which truth is absolute (non-relative) but can still be non-objective. What's relative is truth-aptness: a sentence like ‘Vegemite is tasty’ (V) can be truth-accessible and bivalent in one context but not in another. Within a context in which V fails to be bivalent, we can affirm that there is no issue of truth or falsity about V; still, the disputants, affirming and denying V, were not at fault, since in their context of assertion V was bivalent. This theory requires a theory of assertion that is a form of cognitive expressivism.
Essentialism is widely regarded as a mistaken view of biological kinds, such as species. After recounting why (sections 2-3), we provide a brief survey of the chief responses to the “death of essentialism” in the philosophy of biology (section 4). We then develop one of these responses, the claim that biological kinds are homeostatic property clusters (sections 5-6), illustrating this view with several novel examples (section 7). Although this view was first expressed 20 years ago, and has received recent discussion and critique, it remains underdeveloped and is often misrepresented by its critics (section 8).
Alexander Bird argues that David Armstrong’s necessitarian conception of physical modality and laws of nature generates a vicious regress with respect to necessitation. We show that precisely the same regress afflicts Bird’s dispositional-monist theory, and indeed, related views, such as that of Mumford and Anjum. We argue that dispositional monism is basically Armstrongian necessitarianism modified to allow for a thesis about property identity.
Are all instances of the T-schema assertable? I argue that they are not. The reason is the presence of conventional implicature in a language. Conventional implicature is meant to be a component of the rule-based content that a sentence can have, but it makes no contribution to the sentence's truth-conditions. One might think that a conventional implicature is like a force operator. But it is not, since it can enter into the scope of logical operators. It follows that the semantic content of a sentence is not given simply by its truth-conditional content. So not all instances of the T-schema are assertable in the relevant sense. Consequently, there is a strong case to be made against truth-conditional semantics of the disquotational variety and deflationism about truth.
My goal is to illuminate truth-making by way of illuminating the relation of making. My strategy is not to ask what making is, in the hope of a metaphysical theory about its nature. It's rather to look first to the language of making. The metaphor behind making refers to agency. It would be absurd to suggest that claims about making are claims about agency. It is not absurd, however, to propose that the concept of making somehow emerges from some feature to do with agency. That's the contention to be explored in this paper. The way to do this is through expressivism. Truth-making claims, and making-claims generally, are claims in which we express mental states linked to our manipulation of concepts, like truth. In particular, they express dispositions to undertake derivations using inference rules, in which introduction rules have a specific role. I then show how this theory explains our intuitions about truth's asymmetric dependence on being.
This paper argues that the new metaphysics of powers, also known as dispositional essentialism or causal structuralism, is an illusory metaphysics. I argue for this in the following way. I begin by distinguishing three fundamental ways of seeing how facts of physical modality — facts about physical necessitation and possibility, causation, disposition, and chance — are grounded in the world. The first way, call it the first degree, is that the actual world or all worlds, in their entirety, are the source of physical modality. Humeanism is the best known such approach, but there are other less well-known approaches. The second way, the second degree, is that the source of physical modality lies in certain second-order facts, involving a relation between universals. Armstrong’s necessitarianism and other views are second-degree views. The third way, the third degree, holds that properties themselves are the source of physical modality. This is the powers view. I examine four ways of developing the third degree: relational constitution, graph-theoretic structuralism, dispositional roles, and powerful qualities. All these ways are either incoherent, or just disguised versions of the first-degree. The new metaphysics of powers is illusory. With the collapse of the third degree, the second degree, the necessitarian view of law, collapses as well. I end the paper with some reflections on the first degree, on the problem of explaining necessary connections between distinct existences, and on the dim prospects of holist ontology.
I argue that conventional implicatures embed in logical compounds, and are non-truth-conditional contributors to sentence meaning. This, I argue, has significant implications for how we understand truth, truth-conditional content, and truth-bearers.
If we seek to analyse causation in terms of counterfactual conditionals then we must assume that there is a class of counterfactuals whose members (i) are all and only those we need to support our judgements of causation, (ii) have truth-conditions specifiable without any irreducible appeal to causation. I argue that (i) and (ii) are unlikely to be met by any counterfactual analysis of causation. I demonstrate this by isolating a class of counterfactuals called non-projective counterfactuals, or NP-counterfactuals, and indicate how counterfactual analyses of causation must appeal to them to account for the correct causal judgements we make. I show that the truth-conditions of NP-counterfactuals are specifiable only by irreducible appeal to causation. A dilemma then holds: if counterfactual analyses of causation eschew appeal to NP-counterfactuals they are empirically inadequate, but if they appeal to NP-counterfactuals they are circular and thus conceptually inadequate.
Are the sculpture and the mass of gold which permanently makes it up one object or two? In this paper, we argue that the monist, who answers ‘one object’, cannot accommodate the asymmetry of material constitution. To say ‘the mass of gold materially constitutes the sculpture, whereas the sculpture does not materially constitute the mass of gold’, the monist must treat ‘materially constitutes’ as an Abelardian predicate, whose denotation is sensitive to the linguistic context in which it appears. We motivate this approach in terms of modal analyses of material constitution, but argue that ultimately it fails. The monist must instead accept a deflationary, symmetrical use of ‘materially constitutes’. We argue that this is a serious cost for her approach.
There is a widespread belief amongst theorists of mind and language. This is that in order to understand the relation between language, thought, and reality we need a theory of meaning and content, that is, a normative, formal science of meaning, which is an extension and theoretical deepening of folk ideas about meaning. This book argues that this is false, offering an alternative idea: The form of a theory that illuminates the relation of language, thought, and reality is a theory of language agency. In a nutshell, the theory of language agency is a theory of competence, without being a theory of understanding or grasping rules. It is a theory of cognitive structure and language production. This theory distils all there is to say about language, thought, and reality. It does not supplement a theory of truth-conditions or semantic norms. It is not the explanation of how a speaker, qua cognitive system causally embedded in a larger reality, is able to use a language with some pre-existing semantic characterization. There is no pre-existing semantic characterization. Nevertheless, there are facts of meaning, as good as any other facts. The dissolution of the theory of meaning is accompanied by another disappearance. That is the disappearance of metaphysical questions in a number of domains. Once we complete the theory of language agency, then just as theoretical questions about meaning disappear, certain theoretical questions about existence disappear. Having provided a theory of the language agency for talk of meaning, fact, property, relation, and proposition, there is no question left over about what meanings, facts, properties, relations, and propositions are. There is no theory to be given of their natures. This is not because they have primitive irreducible natures. Rather it is because, in a sense to be clarified in this work, they lack natures. I call this approach to language agency Global Expressivism.
That is because it generalizes some of the insights brought to the study of value-language by expressivists. However, it removes these insights from the clouding effects of attempting to make expressivism a semantic theory. Expressivism about value fails as a semantic theory of value talk. However, global expressivism can succeed as a theory of all talk because it is not a semantic theory but a theory of language agency, wherein the theory of meaning is replaced by a theory of talk about meaning.
Kant wrote two versions of the Transcendental Deduction, the first, “A-”Deduction in 1781, and the second, “B-”Deduction in 1787. Since Henrich's “The Proof Structure of Kant's Transcendental Deduction”, most work on the Transcendental Deduction attempts to make sense of the B-Deduction's two-step argument structure. Though the A-Deduction has suffered comparative neglect, it has received some attention from interpreters who take its extended treatment of the “subjective” side of cognition to amount to a brand of proto-functionalism. Whatever the merits and demerits of these proto-functionalist approaches, they tend to deemphasize the two arguments that constitute the “objective” side of the A-Deduction, the “argument from above” and then the “argument from below”. Since Kant himself refers to this objective side of the A-Deduction as the “Deduction of the Pure Concepts of the Understanding”, it is surprising that the structure of these arguments has not received closer scrutiny. This is doubly true since Kant actually claims that his revisions for the 1787 version of the Deduction impacted only the “presentation” of it. Any lessons learned from the central arguments of the A-Deduction should help clarify the structure of its younger and more closely studied brother.
A far-reaching and influential view in evolutionary biology claims that species are cohesive units held together by gene flow. Biologists have recognized empirical problems facing this view; after sharpening the expression of the view, we present novel conceptual problems for it. At the heart of these problems is a distinction between two importantly different concepts of cohesion, what we call integrative and response cohesion. Acknowledging the distinction problematizes both the explanandum of species cohesion and the explanans of gene flow that are central to the view we discuss. We conclude by tracing four broader implications for the study and conceptualization of species.
The Tidal Model represents a significant alternative to mainstream mental health theories, emphasizing how those suffering from mental health problems can benefit from taking a more active role in their own treatment. Based on extensive research, The Tidal Model charts the development of this approach, outlining the theoretical basis of the model to illustrate the benefits of a holistic model of care which promotes self-management and recovery. Clinical examples are also employed to show how, by exploring rather than ignoring a client's narrative, practitioners can encourage the individual's greater involvement in the decisions affecting their assessment and treatment. The Tidal Model 's comprehensive coverage of the theory and practice of this model will be of great use to a range of mental health professionals and those in training in the fields of mental health nursing, social work, psychotherapy, clinical psychology and occupational therapy.
Over the last 2,300 years or so, many philosophers have believed that species are individuated by essences that are at least in part intrinsic. Psychologists tell us most folks also believe this view. But most philosophers of biology have abandoned the view, in light of evolutionary conceptions of species. In defiance, Michael Devitt has attempted in this journal to resurrect a version of the view, which he calls Intrinsic Biological Essentialism. I show that his arguments for the resurrection fail, and I identify challenges that face anyone wishing to defend Intrinsic Biological Essentialism.
This paper continues my application of theories of concepts developed in cognitive psychology to clarify issues in Kuhn's mature account of scientific change. I argue that incommensurability is typically neither global nor total, and that the corresponding form of scientific change occurs incrementally. Incommensurability can now be seen as a local phenomenon restricted to particular points in a conceptual framework represented by a set of nodes. The unaffected parts in the framework constitute the basis for continued communication between the communities supporting alternative structures. The importance of a node is a measure of the severity of incommensurability introduced by replacing it. Such replacements occur incrementally so that changes like that from the conceptual structure of Aristotelian celestial physics to the conceptual structure of Newtonian celestial physics occur in small stages over time, and for each change it is in principle possible to identify the arguments and evidence that led historical actors to make the revisions. Thus the process of scientific change is a rational one, even when its beginning and end points are incommensurable conceptual structures. It is also apparent, from a detailed examination of the conceptual structure of astronomy at the time of Copernicus, that the kind of conceptual difficulty identified as incommensurability may occur within a single scientific tradition as well as between two rival traditions.
Two critiques of simple adaptationism are distinguished: anti-adaptationism and extended adaptationism. Adaptationists and anti-adaptationists share the presumption that an evolutionary explanation should identify the dominant simple cause of the evolutionary outcome to be explained. A consideration of extended-adaptationist models such as coevolution, niche construction and extended phenotypes reveals the inappropriateness of this presumption in explaining the evolution of certain important kinds of features—those that play particular roles in the regulation of organic processes, especially behavior. These biological or behavioral ‘levers’ are distinctively available for adaptation and exaptation by their possessors and for co-optation by other organisms. As a result they are likely to result from a distinctive and complex type of evolutionary process that conforms neither to simple adaptationist nor to anti-adaptationist styles of explanation. Many of the human features whose evolutionary explanation is most controversial belong to this category, including the female orgasm.
In a previous article we have shown that Kuhn's theory of concepts is independently supported by recent research in cognitive psychology. In this paper we propose a cognitive re-reading of Kuhn's cyclical model of scientific revolutions: all of the important features of the model may now be seen as consequences of a more fundamental account of the nature of concepts and their dynamics. We begin by examining incommensurability, the central theme of Kuhn's theory of scientific revolutions, according to two different cognitive models of concept representation. We provide new support for Kuhn's mature views that incommensurability can be caused by changes in only a few concepts, that even incommensurable conceptual systems can be rationally compared, and that scientific change of the most radical sort—the type labeled revolutionary in earlier studies—does not have to occur holistically and abruptly, but can be achieved by a historically more plausible accumulation of smaller changes. We go on to suggest that the parallel accounts of concepts found in Kuhn and in cognitive science lead to a new understanding of the nature of normal science, of the transition from normal science to crisis, and of scientific revolutions. The same account enables us to understand how scientific communities split to create groups supporting new paradigms, and to resolve various outstanding problems. In particular, we can identify the kind of change needed to create a revolution rather precisely. This new analysis also suggests reasons for the unidirectionality of scientific change.
It seems to be generally accepted that (a) counterfactual conditionals are to be analysed in terms of possible worlds and inter-world relations of similarity and (b) causation is conceptually prior to counterfactuals. I argue here that both (a) and (b) are false. The argument against (a) is not a general metaphysical or epistemological one but simply that, structurally speaking, possible worlds theories are wrong: this is revealed when we try to extend them to cover the case of probabilistic counterfactuals. Indeed, a type of counterfactual probability exists which cannot be expressed in possible worlds terms at all. The argument against (b) emerges when we look at the form of an adequate account of both probabilistic and non-probabilistic counterfactuals. I do this by sketching and defending an approach to counterfactuals that, first, invokes a generalized notion of cause as primitive and, secondly, is algorithmic in form: counterfactuals are evaluated algorithmically in terms of other counterfactuals, without vicious circularity. Structures like possible worlds do not play a role either in general truth-conditions or in evaluation. They are simply the wrong sorts of structures.
Drawing on the results of modern psychology and cognitive science we suggest that the traditional theory of concepts is no longer tenable, and that the alternative account proposed by Kuhn may now be seen to have independent empirical support quite apart from its success as part of an account of scientific change. We suggest that these mechanisms can also be understood as special cases of general cognitive structures revealed by cognitive science. Against this background, incommensurability is not an insurmountable obstacle to accepting Kuhn's position, as many philosophers of science still believe. Rather it becomes a natural consequence of cognitive structures that appear in all human beings.
Emotivist, or non-descriptivist, metaethical theories hold that value-statements do not function by describing special value-facts, but are the mere expressions of naturalistically describable motivational states of (valuing) agents. Non-descriptivism has typically been combined with the claim that value-statements are non-cognitive: they are not the manifestations of genuine belief states. However, all the linguistic, logical and phenomenological evidence indicates that value-statements are cognitive. Non-descriptivism then has a problem. Horgan and Timmons propose to solve it by boldly combining a non-descriptivist thesis about value with the claim that value-judgements are after all cognitive. Although their framework possesses many attractive features, I argue that it fails to deliver the promised results; it suffers from a certain internal incoherence about the concept of content and mischaracterizes the descriptive/non-descriptive content distinction required by non-descriptivism.
In this paper we examine the pattern of conceptual change during scientific revolutions by using methods from cognitive psychology. We show that the changes characteristic of scientific revolutions, especially taxonomic changes, can occur in a continuous manner. Using the frame model of concept representation to capture structural relations within concepts and the direct links between concept and taxonomy, we develop an account of conceptual change in science that more adequately reflects the current understanding that episodes like the Copernican revolution are not always abrupt. When concepts are represented by frames, the transformation from one taxonomy to another can be achieved in a piecemeal fashion not preconditioned by a crisis stage, and a new taxonomy can arise naturally out of the old frame instead of emerging separately from the existing conceptual system. This cognitive mechanism of continuous change demonstrates the constructive roles of anomaly and incommensurability in promoting the progress of science.
This paper brings needed clarity to the influential view that species are cohesive entities held together by gene flow, and then develops an empirical argument against that view: neglected data suggest gene flow is neither necessary nor sufficient for species cohesion. Implications are discussed.
This book examines the hypothesis of "direct compositionality", which requires that semantic interpretation proceed in tandem with syntactic combination. Although associated with the dominant view in formal semantics of the 1970s and 1980s, the feasibility of direct compositionality remained unsettled, and more recently the discussion as to whether or not this view can be maintained has receded. The syntax-semantics interaction is now often seen as a process in which the syntax builds representations which, at the abstract level of logical form, are sent for interpretation to the semantics component of the language faculty. In the first extended discussion of the hypothesis of direct compositionality for twenty years, this book considers whether its abandonment might have been premature and whether in fact direct compositionality is not after all a simpler and more effective conception of the grammar than the conventional account of the syntax-semantics interface in generative grammar. It contains contributions from both sides of the debate, locates the debate in the setting of a variety of formal theories, and draws on examples from a range of languages and a range of empirical phenomena.
Biomedical ontologies are emerging as critical tools in genomic and proteomic research where complex data in disparate resources need to be integrated. A number of ontologies exist that describe the properties that can be attributed to proteins; for example, protein functions are described by Gene Ontology, while human diseases are described by Disease Ontology. There is, however, a gap in the current set of ontologies—one that describes the protein entities themselves and their relationships. We have designed a PRotein Ontology (PRO) to facilitate protein annotation and to guide new experiments. The components of PRO extend from the classification of proteins on the basis of evolutionary relationships to the representation of the multiple protein forms of a gene (products generated by genetic variation, alternative splicing, proteolytic cleavage, and other post-translational modification). PRO will allow the specification of relationships between PRO, GO and other OBO Foundry ontologies. Here we describe the initial development of PRO, illustrated using human proteins from the TGF-beta signaling pathway (http://pir.georgetown.edu/pro).