Patricia Williams made a number of claims concerning the methods and practice of cladistic analysis and classification. Her argument rests upon a distinction between two kinds of hierarchy: a divisional hierarchy depicting evolutionary descent, and the Linnaean hierarchy describing taxonomic groups in a classification. Williams goes on to outline five problems with cladistics that lead her to the conclusion that systematists should eliminate cladism as a school of biological taxonomy and replace it either with something that is philosophically coherent or with pure methodology, untainted by theory (Williams 1992, 151). Williams makes a number of points which she feels collectively add up to insurmountable problems for cladistics. We examine Williams' views concerning the two hierarchies and consider what cladists currently understand about the status of ancestors. We will demonstrate that Williams has seriously misunderstood many modern commentators on this subject and that all five of her persistent problems derive from this misunderstanding. 'Some persons believe and argue, on grounds approaching faith it seems to me, that phylogeny comes from our knowledge of evolution. Others have found to their surprise, and sometimes dismay, that phylogeny comes from our knowledge of systematics.' Nelson (1989, 67).
Two essays on utilitarianism, written from opposite points of view, by J. J. C. Smart and Bernard Williams. In the first part of the book Professor Smart advocates a modern and sophisticated version of classical utilitarianism; he tries to formulate a consistent and persuasive elaboration of the doctrine that the rightness and wrongness of actions are determined solely by their consequences, and in particular their consequences for the sum total of human happiness. This is a revised version of Professor Smart's famous essay 'An Outline of a System of Utilitarian Ethics', first published in 1961 but long unobtainable. In Part II Bernard Williams offers a sustained and vigorous critique of utilitarian assumptions, arguments and ideals. He finds inadequate the theory of action implied by utilitarianism, and he argues that utilitarianism fails to engage at a serious level with the real problems of moral and political philosophy, and fails to make sense of notions such as integrity, or even human happiness itself. Both authors are agreed on utilitarianism's importance: it cuts across a number of different philosophical disputes and combines a systematic account of meta-ethical problems with a distinctive and substantive moral stand. It thus is, or involves, philosophy in both the traditional and the narrower, professional sense of the word, and is a key topic (often the first topic) in introductory philosophy courses. This book should also be of interest to welfare economists, political scientists and decision-theorists.
This new volume of philosophical papers by Bernard Williams is divided into three sections: the first, 'Action, Freedom, Responsibility'; the second, 'Philosophy, Evolution and the Human Sciences', in which appears the essay that gives the collection its title; and the third, 'Ethics', which contains essays closely related to his 1985 book Ethics and the Limits of Philosophy. Like the two earlier volumes of Williams's papers published by Cambridge University Press, Problems of the Self and Moral Luck, this volume will be welcomed by all readers with a serious interest in philosophy. It is published alongside a volume of essays on Williams's work, World, Mind, and Ethics: Essays on the Ethical Philosophy of Bernard Williams, edited by J. E. J. Altham and Ross Harrison, which provides a reappraisal of his work by other distinguished thinkers in the field.
Chancy counterfactuals are a headache. Dylan Dodd (2009) presents an interesting argument against a certain general strategy for accounting for them, instances of which are found in the appendices to Lewis (1979) and in Williams (2008). I will argue (i) that Dodd's argument understates the counterintuitiveness of the conclusions he can reach; (ii) that the counterintuitiveness can be thought of as an instance of more general oddities arising when we treat vagueness and indeterminacy in a classical setting; and (iii) that the underlying source of discontent which animates Dodd's complaints is to be found in a certain general constraint one might impose on conditionals—what I'll call the counterfactual Ramsey bound. Unfortunately, the counterfactual Ramsey bound is just as problematic as its famous indicative cousin. The moral is that there's no comfortable resting place in this area; for violations of the counterfactual Ramsey bound are going to lead to prima facie surprising results.
Jeff Paris (2001) proves a generalized Dutch Book theorem. If a belief state is not a generalized probability (a kind of probability appropriate for generalized distributions of truth-values) then one faces 'sure loss' books of bets. In Williams (manuscript) I showed that Joyce's (1998) accuracy-domination theorem applies to the same set of generalized probabilities. What is the relationship between these two results? This note shows that (when 'accuracy' is treated via the Brier score) both results are easy corollaries of the core result that Paris appeals to in proving his Dutch Book theorem (Minkowski's separating hyperplane theorem). We see that every point of accuracy-domination defines a Dutch Book, but we only have a partial converse.
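The link between accuracy-domination and Dutch Books that the note describes can be illustrated with a toy sketch (my own construction with hypothetical numbers, not code from the paper): a credence function over a proposition and its negation that fails to be a probability is both Brier-dominated by a genuine probability function and exposed to a sure-loss book.

```python
# Illustrative sketch (not from the paper): an incoherent credence function
# over {p, not-p} is accuracy-dominated (Brier score) and Dutch-bookable.
# All numbers are hypothetical.

def brier(credence, world):
    """Sum of squared distances between credences and truth values."""
    return sum((credence[s] - world[s]) ** 2 for s in credence)

# Incoherent credences: c(p) + c(not-p) = 0.6 != 1
c = {"p": 0.3, "not_p": 0.3}
# A genuine probability function that dominates it
q = {"p": 0.5, "not_p": 0.5}

worlds = [{"p": 1, "not_p": 0}, {"p": 0, "not_p": 1}]

# q has strictly lower (better) Brier score in every world: accuracy-domination.
assert all(brier(q, w) < brier(c, w) for w in worlds)

# The same incoherence defines a Dutch Book: the agent regards $0.30 as the
# fair price for selling a $1 bet on each of p and not-p, so selling both
# brings in $0.60 while exactly one bet must pay out $1 -- a sure loss of $0.40.
stake = 1.0
received = c["p"] * stake + c["not_p"] * stake   # $0.60
guaranteed_payout = stake                        # exactly one of p, not-p is true
sure_loss = guaranteed_payout - received
assert abs(sure_loss - 0.4) < 1e-9
```

The converse direction is only partial, as the abstract notes: the sketch shows one direction (this accuracy-dominated point yields a sure-loss book), not a general equivalence.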
Toleration has a rich tradition in Western political philosophy. It is, after all, one of the defining topics of political philosophy—historically pivotal in the development of modern liberalism, prominent in the writings of such canonical figures as John Locke and John Stuart Mill, and central to our understanding of the idea of a society in which individuals have the right to live their own lives by their own values, left alone by the state so long as they respect the similar interests of others.

Toleration and Its Limits, the latest addition to the NOMOS series, explores the philosophical nuances of the concept of toleration and its scope in contemporary liberal democratic societies. Editors Melissa S. Williams and Jeremy Waldron have carefully compiled essays that address the tradition's key historical figures; its role in the development and evolution of Western political theory; its relation to morality, liberalism, and identity; and its limits and dangers.

Contributors: Lawrence A. Alexander, Kathryn Abrams, Wendy Brown, Ingrid Creppell, Noah Feldman, Rainer Forst, David Heyd, Glyn Morgan, Glen Newey, Michael A. Rosenthal, Andrew Sabl, Steven D. Smith, and Alex Tuckness.
This book demonstrates that law can be newly interrogated when examined through the lens of literature. Like its forerunner, Empty Justice, the book creates simple pathways which energise and illustrate the links between legal theory, legal science and doctrine, through the wider visions of history, literature and culture. This broadening approach is integral to understanding law in the context of wider debates and media in the community. The book provides a collection of essays, with additional commentary which reflects upon very recent scholarship and debate on a range of ethico-legal topics; it also illustrates how conventional legal matters may be rendered lively and palatable, as an adjunct to approaching doctrine and cases 'cold' in the conventional textbook manner. The chapters range from an examination of current thought on cohabitation and marriage laws (via Jude the Obscure), and 19th-century medico-legal cases relevant to current narratives of insanity in women and to the nature and status of expert evidence generally, through assisted suicide and autonomy (via a poem by Jon Stallworthy), to an essay on the nature of race and ethnicity (via a poem by R. S. Thomas), a discussion of obscenity and moral philosophy (via an essay on Crash by J. G. Ballard and the philosophy of Bernard Williams), and a history-of-ideas discussion of positivism, natural law and political crisis, war and terrorism, through legal and political theory texts and a poem by Auden. The materials refer to case law where appropriate.
The behavioral sciences have come under attack for writings and speech that affront sensitivities. At such times, academic freedom and tenure are invoked to forestall efforts to censure and terminate jobs. We review the history and controversy surrounding academic freedom and tenure, and explore their meaning across different fields, at different institutions, and at different ranks. In a multifactorial experimental survey, 1,004 randomly selected faculty members from top-ranked institutions were asked how colleagues would typically respond when confronted with dilemmas concerning teaching, research, and wrong-doing. Full professors were perceived as being more likely to insist on having the academic freedom to teach unpopular courses, research controversial topics, and blow the whistle on wrong-doing than were lower-ranked professors (even associate professors with tenure). Everyone thought that others were more likely to exercise academic freedom than they themselves were, and that promotion to full professor was a better predictor of who would exercise academic freedom than was the awarding of tenure. Few differences emerged related either to gender or to type of institution, and behavioral scientists' beliefs were similar to those of scholars from other fields. In addition, no support was found for glib celebrations of tenure's sanctification of broadly defined academic freedoms. These findings challenge the assumption that tenure can be justified on the basis of fostering academic freedom, suggesting the need for a re-examination of the philosophical foundation and practical implications of tenure in today's academy. (Published Online February 8 2007) Key Words: academia; academic freedom; ethical issues; faculty beliefs; professoriate; promotion; scientific misconduct; tenure; whistle-blowing.
In our target article, we took the position that tenure conveys many important benefits but that its original justification – fostering academic freedom – is not one of them. Here we respond to various criticisms of our study as well as to proposals to remedy the current state of affairs. Undoubtedly, more research is needed to confirm and extend our findings, but the most reasonable conclusion remains the one we offered – that the original rationale for tenure is poorly served by the current system as practiced at top-ranked colleges and universities. (Published Online February 8 2007)
Inscrutability arguments threaten to reduce interpretationist metasemantic theories to absurdity. Can we find some way to block the arguments? A highly influential proposal in this regard is David Lewis' 'eligibility' response: some theories are better than others, not because they fit the data better, but because they are framed in terms of more natural properties. The purposes of this paper are (1) to outline the nature of the eligibility proposal, making the case that it is not ad hoc, but instead flows naturally from three independently motivated elements; and (2) to show that severe limitations afflict the proposal. In conclusion, I pick out the element of the eligibility response that is responsible for the limitations: future work in this area should therefore concentrate on amending this aspect of the overall theory.
We discuss arguments against the thesis that the world itself can be vague. The first section of the paper distinguishes dialectically effective from ineffective arguments against metaphysical vagueness. The second section constructs an argument against metaphysical vagueness that promises to be of the dialectically effective sort: an argument against objects with vague parts. First, cases of vague parthood commit one to cases of vague identity. But we argue that Evans' famous argument against vague identity will not on its own enable one to complete the reductio in the present context. We provide a metaphysical premise that would complete the reductio, but note that it seems deniable. We conclude by drawing general morals from our case study.
Worlds where things divide forever ("gunk" worlds) are apparently conceivable. The conceivability of such scenarios has been used as an argument against "nihilist" or "near-nihilist" answers to the special composition question. I argue that the mereological nihilist has the resources to explain away the illusion that gunk is possible.
Might it be that the world itself, independently of what we know about it or how we represent it, is metaphysically indeterminate? This article tackles in turn a series of questions: In what sorts of cases might we posit metaphysical indeterminacy? What is it for a given case of indefiniteness to be 'metaphysical'? How does the phenomenon relate to 'ontic vagueness', the existence of 'vague objects', 'de re indeterminacy' and the like? How might the logic work? Are there reasons for postulating this distinctive sort of indefiniteness? Conversely, are there reasons for denying that there is indefiniteness of this sort?
In the literature on supervaluationism, a central source of concern has been the acceptability, or otherwise, of its alleged logical revisionism. I attack the presupposition of this debate, arguing that when properly construed, there is no sense in which supervaluational consequence is revisionary. I provide new considerations supporting the claim that supervaluational consequence should be characterized in a 'global' way. But pace Williamson (1994) and Keefe (2000), I argue that supervaluationism does not give rise to counterexamples to familiar inference-patterns such as reductio and conditional proof.
Some things, argues Lewis, are just better candidates to be referents than others. Even at the cost of attributing false beliefs, we interpret people as referring to the most interesting kinds in their vicinity. How should this be accounted for? In section 1, I look at Lewis's interpretationism, and the reference magnetism it builds in (not just for 'perfectly natural' properties, but for certain kinds of auxiliary apparatus). In section 2, I draw on Field (1975) to argue that which properties are reference magnetic may be an ultimately conventional matter—though in the Lewisian setting, there may be an objectively best conventional choice to make. But Lewis's own account has implausible commitments, so in section 3 I consider variations and alternatives, all of which have problems. In section 4, I look in more detail at eligibility-based interpretationisms that do not appeal to naturalness, arguing that there are credible metasemantic theories of this form.
Lewis (1973) gave a short argument against conditional excluded middle, based on his treatment of 'might' counterfactuals. Bennett (2003), with much of the recent literature, gives an alternative take on 'might' counterfactuals. But Bennett claims the might-argument against CEM still goes through. This turns on a specific claim I call Bennett's Hypothesis. I argue that independently of issues to do with the proper analysis of might-counterfactuals, Bennett's Hypothesis is inconsistent with CEM. But Bennett's Hypothesis is independently objectionable, so we should resolve this tension by dropping the Hypothesis, not by dropping CEM.
I formulate a counterfactual version of the notorious ‘Ramsey Test’. Even in a weak form, this makes counterfactuals subject to the very argument that Lewis used to persuade the majority of the philosophical community that indicative conditionals were in hot water. I outline two reactions: to indicativize the debate on counterfactuals; or to counterfactualize the debate on indicatives.
Some argue that theories of universals should incorporate structural universals, in order to allow for the metaphysical possibility of worlds of 'infinite descending complexity' ('onion worlds'). I argue that the possibility of such worlds does not establish the need for structural universals. So long as we admit the metaphysical possibility of emergent universals, there is an attractive alternative description of such cases.
Joyce (1998) gives an argument for probabilism: the doctrine that rational credences should conform to the axioms of probability. In doing so, he provides a distinctive take on how the normative force of probabilism relates to the injunction to believe what is true. But Joyce presupposes that the truth values of the propositions over which credences are defined are classical. I generalize the core of Joyce's argument to remove this presupposition. On the same assumptions as Joyce uses, the credences of a rational agent should always be weighted averages of truth-value assignments. In the special case where the truth values are classical, the weighted averages of truth-value assignments are exactly the probability functions. In the more general case, probabilistic axioms formulated in terms of classical logic are violated, but we show that generalized versions of the axioms, formulated in terms of non-classical logics, are satisfied.
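As a toy illustration of the abstract's central claim (my own sketch with hypothetical numbers, not code from the paper): in a gappy three-valued setting, a weighted average of truth-value assignments can violate the classical requirement that credences in a proposition and its negation sum to 1, while still being coherent in the generalized sense.

```python
# Illustrative sketch, not from the paper. Truth-value assignments include a
# "gap" world in which neither p nor not-p receives value 1. A weighted
# average of these assignments is a generalized probability, but it violates
# the classical axiom c(p) + c(not_p) = 1.

assignments = [
    {"p": 1.0, "not_p": 0.0},   # world where p is true
    {"p": 0.0, "not_p": 1.0},   # world where p is false
    {"p": 0.0, "not_p": 0.0},   # world where p is gappy (neither true nor false)
]
weights = [1/3, 1/3, 1/3]       # hypothetical weighting over the assignments

credence = {
    s: sum(w * v[s] for w, v in zip(weights, assignments))
    for s in ("p", "not_p")
}

# Classical additivity fails: credences sum to 2/3, not 1 ...
assert abs(credence["p"] + credence["not_p"] - 2/3) < 1e-9
# ... yet the credence remains a weighted average of truth-value assignments,
# which is all the generalized (non-classical) axioms require.
```

In the special case where only the two classical assignments get nonzero weight, the same recipe returns an ordinary probability function, matching the abstract's claim about the classical limit.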
I outline and motivate a way of implementing a closest-world theory of indicatives, appealing to Stalnaker's framework of open conversational possibilities. Stalnakerian conversational dynamics helps us resolve two outstanding puzzles for such a theory of indicative conditionals. The first puzzle—concerning so-called 'reverse Sobel sequences'—can be resolved by conversational dynamics in a theory-neutral way: the explanation works as much for Lewisian counterfactuals as for the account of indicatives developed here. Resolving the second puzzle, by contrast, relies on the interplay between the particular theory of indicative conditionals developed here and Stalnakerian dynamics. The upshot is an attractive resolution of the so-called 'Gibbard phenomenon' for indicative conditionals.
Byrne & Hájek (1997) argue that Lewis's (1988; 1996) objections to identifying desire with belief do not go through if our notion of desire is 'causalized' (characterized by causal, rather than evidential, decision theory). I argue that versions of the argument go through on certain assumptions about the formulation of decision theory. There is one version of causal decision theory on which the original arguments cannot be formulated—the 'imaging' formulation that Joyce (1999) advocates. But I argue this formulation is independently objectionable. If we want to maintain the desire-as-belief thesis, there's no shortcut through causalization.
How are permutation arguments for the inscrutability of reference to be formulated in the context of a Davidsonian truth-theoretic semantics? Davidson (1979) takes these arguments to establish that there are no grounds for favouring a reference scheme that assigns London to "Londres", rather than one that assigns Sydney to that name. We shall see, however, that it is far from clear whether permutation arguments work when set out in the context of the kind of truth-theoretic semantics which Davidson favours. The principle required to make the argument work allows us to resurrect Foster problems against the Davidsonian position. The Foster problems and the permutation inscrutability problems stand or fall together: they are one puzzle, not two.
Taking away grains from a heap of rice, at what point is there no longer a heap? It seems small changes – removing a single grain – can’t make a difference to whether or not something is a heap; but big changes obviously do. How can this be, since big changes are nothing but small changes chained together?
This study examined the influence of corporate giving programs on the link between certain categories of corporate crime and corporate reputation. Specifically, firms that violate EPA and OSHA regulations should, to some extent, experience a decline in their reputations, while firms that contribute to charitable causes should see their reputations enhanced. The results of this study support both of these contentions. Further, the results suggest that corporate giving significantly moderates the link between the number of EPA and OSHA violations committed by a firm and its reputation. Thus, while a firm's reputation can be diminished through its violation of various government regulations, the extent of the decline in reputation may be significantly reduced through charitable giving.
There are advantages to thrift over honest toil. If we can make do without numbers we avoid challenging questions over the metaphysics and epistemology of such entities; and we have a good idea, I think, of what a nominalistic metaphysics should look like. But minimizing ontology brings its own problems; for it seems to lead to error theory—saying that large swathes of common sense and best science are false. Should recherché philosophical arguments really convince us to give all this up? Such Moorean considerations are explicitly part of the motivation for the recent resurgence of structured metaphysics, which allows a minimal (perhaps nominalistic) fundamental ontology while avoiding error theory by adopting a permissive stance towards ontology that can be argued to be grounded in the fundamental. This paper evaluates the Moorean arguments, identifying key epistemological assumptions. On the assumption that Moorean arguments can be used to rule out error theory, I examine deflationary 'representationalist' rivals to the structured-metaphysics reaction. Quinean paraphrase and fictionalist claims about syntax and semantics are considered and criticized. In the final section, a 'direct' deflationary strategy is outlined and the theoretical obligations that it faces are articulated. The position advocated may have us talking a lot like a friend of structured metaphysics—but with a very different conception of what we're up to.
I formulate a counterfactual version of the notorious 'Ramsey Test'. Whereas the Ramsey Test for indicative conditionals links credence in indicatives to conditional credences, the counterfactual version links credence in counterfactuals to expected conditional chance. I outline two forms: a Ramsey Identity, on which the probability of the conditional should be identical to the corresponding conditional probability/expectation of chance; and a Ramsey Bound, on which credence in the conditional should never exceed the latter. Even in the weaker, bound, form, the counterfactual Ramsey Test makes counterfactuals subject to the very argument that Lewis used to argue against the indicative version of the Ramsey Test. I compare the assumptions needed to run each, pointing to assumptions about the time-evolution of chances that can replace the appeal to Bayesian assumptions about credence update in motivating the assumptions of the argument. I finish by outlining two reactions to the discussion: to indicativize the debate on counterfactuals; or to counterfactualize the debate on indicatives.
Two major themes in the literature on indicative conditionals are (1) that the content of indicative conditionals typically depends on what is known;[1] and (2) that conditionals are intimately related to conditional probabilities.[2] In possible world semantics for counterfactual conditionals, a standard assumption is that conditionals whose antecedents are metaphysically impossible are vacuously true.[3] This aspect has recently been brought to the fore, and defended, by Tim Williamson, who uses it to characterize alethic necessity by exploiting equivalences such as: □A ⇔ (¬A □→ A). One might wish to postulate an analogous connection for indicative conditionals, with indicatives whose antecedents are (in some relevant sense) epistemically impossible being vacuously true: and indeed, the modal account of indicative conditionals of Brian Weatherson has exactly this feature.[4] This allows one to characterize an epistemic modal □ by the equivalence □A ⇔ (¬A → A). For simplicity, in what follows we write □A as KA and think of it as expressing that subject S knows that A.[5] The connection to probability has received much attention. Stalnaker (1970) suggested, as a way of articulating the 'Ramsey Test', the following very general schema for indicative conditionals relative to some probability function P: P(A→B) = P(B|A). If it is more delicately formulated, we might be able to read □ as the epistemic modal 'must'.

[1] For example, Nolan (2003); Weatherson (2001); Gillies (2007). [2] For example, Stalnaker (1970); McGee (1989); Adams (1975). [3] Lewis (1973); see Nolan (1997) for criticism. [4] 'Epistemically impossible' here means incompatible with what is known (where 'what is known' is to be cashed out in some relevant sense). [5] This idea was suggested to me in conversation by John Hawthorne. I do not know of it being explored in print. The plausibility of this characterization will depend on the exact sense of 'epistemically possible' in play; if it is compatibility with what a single subject knows, then □p can be read 'the relevant subject knows that p'.
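One way to see that the Stalnaker schema P(A→B) = P(B|A) is substantive rather than trivial (an illustrative toy model of my own, not from the text): if '→' were read as the material conditional, the identity would generally fail, since the probability of the material conditional typically exceeds the conditional probability.

```python
# Illustrative sketch, not from the text: in a toy four-world model with
# hypothetical equal weights, P(B|A) differs from the probability of the
# material conditional A => B, so Stalnaker's schema is a real constraint
# on how '->' can be interpreted.

# Worlds specified by the truth values of (A, B), each with probability 1/4.
worlds = [(1, 1), (1, 0), (0, 1), (0, 0)]
prob = {w: 0.25 for w in worlds}

p_A = sum(pr for (a, b), pr in prob.items() if a)
p_AB = sum(pr for (a, b), pr in prob.items() if a and b)
conditional_prob = p_AB / p_A                                 # P(B|A)

# Material conditional: true everywhere except where A is true and B false.
p_material = sum(pr for (a, b), pr in prob.items() if (not a) or b)

# The material reading overshoots the conditional probability.
assert conditional_prob == 0.5
assert p_material == 0.75
assert p_material > conditional_prob
```

This is only a sanity check on the schema's nontriviality; the well-known triviality results about sustaining the identity across all probability functions are a separate matter.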
When should we believe an indicative conditional, and how much confidence in it should we have? Here's one proposal: one supposes the antecedent to be actual, and sees under that supposition what credence attaches to the consequent. Thus we suppose that Oswald did not shoot Kennedy, and note that under this supposition, Kennedy was assassinated by someone other than Oswald. Thus we are highly confident in the indicative: if Oswald did not kill Kennedy, someone else did.
This paper posits that organizational variables are the factors that led to the moral decline of companies like Enron and WorldCom. The individuals involved created environments within the organizations that precipitated a spiral of unethical decision-making. It is proposed that at the executive level, it is the organizational factors associated with power and decision-making that have the critical influence on moral and ethical behavior. The study used variables that were deemed to be surrogate measures of ethical violations (OSHA and EPA violations), the risky-shift phenomenon (executive team size), banality of wrong-doing (reputation score for firms) and escalating commitment (tenure with the firm/change in revenue for declining firms). The research found that there were small correlations between ethical violations and the three organizational variables.
We discuss the impact of horizontal gene transfer (HGT) on phylogenetic reconstruction and taxonomy. We review the power of HGT as a creative force in assembling new metabolic pathways, and we discuss the impact that HGT has on phylogenetic reconstruction. On the one hand, shared derived characters are created through transferred genes that persist in the recipient lineage, either because they were adaptive in that lineage or because they resulted in a functional replacement. On the other hand, taxonomic patterns in microbial phylogenies might also be created through biased gene transfer. The agreement between different molecular phylogenies has encouraged interpretation of the consensus signal as reflecting organismal history, or as the tree of cell divisions; however, to date, the extent to which the consensus reflects shared organismal ancestry, and to which it reflects highways of gene sharing and biased gene transfer, remains an open question. Preferential patterns of gene exchange act as a homogenizing force in creating and maintaining microbial groups, generating taxonomic patterns that are indistinguishable from those created by shared ancestry. To understand the evolution of higher bacterial taxonomic units, concepts usually applied in population genetics need to be applied.
Burning fossil fuel on the North American continent contributes more to the global CO2 warming problem than burning it on any other continent. The resulting climate changes are expected to alter food production. The overall changes in temperature, moisture, carbon dioxide, insect pests, plant pathogens, and weeds associated with global warming are projected to reduce food production in North America. However, in Africa, the projected slight rise in rainfall is encouraging, especially since Africa already suffers from severe shortages of rainfall. For all regions, a reduction in fossil fuel burning is vital. Adoption of sound ecological resource management, especially soil and water conservation and the prevention of deforestation, is important. Together, these steps will benefit agriculture, the environment, farmers, and society as a whole.
Research in education and cognitive development suggests that explaining plays a key role in learning and generalization: When learners provide explanations—even to themselves—they learn more effectively and generalize more readily to novel situations. This paper proposes and tests a subsumptive constraints account of this effect. Motivated by philosophical theories of explanation, this account predicts that explaining guides learners to interpret what they are learning in terms of unifying patterns or regularities, which promotes the discovery of broad generalizations. Three experiments provide evidence for the subsumptive constraints account: prompting participants to explain while learning artificial categories promotes the induction of a broad generalization underlying category membership, relative to describing items (Exp. 1), thinking aloud (Exp. 2), or free study (Exp. 3). Although explaining facilitates discovery, Experiment 1 finds that description is more beneficial for learning item details. Experiment 2 additionally suggests that explaining anomalous observations may play a special role in belief revision. The findings provide insight into explanation's role in discovery and generalization.
A venerable story in the history of medieval philosophy has it that the eleventh century saw a debate between certain 'dialecticians', who exalted the role of reason and disdained theological authority, and 'anti-dialecticians', who carefully limited—or even rejected—the application of dialectical reasoning to Christian doctrine. A number of authors have called into question certain details of this story, but in...
Recent empirical work indicates that reduced autobiographical memory specificity can act as an avoidant processing style. By truncating the memory search before specific elements of traumatic memories are accessed, one can ward off the affective impact of negative reminiscences. This avoidant processing style can be viewed as an instance of what Erdelyi describes as the “subtractive” class of repressive processes.
Revising semantics and logic has consequences for the theory of mind. Standard formal treatments of rational belief and desire make classical assumptions, and if we are to challenge those presuppositions, we must indicate what kind of theory is going to take their place. Consider probability theory, interpreted as an account of ideal partial belief. If some propositions are neither true nor false, or are half true, or whatever, then it's far from clear that our degrees of belief in a proposition and in its negation should sum to 1, as classical probability theory requires. There are extant proposals in the literature for generalizing (categorical) probability theory to a non-classical setting, and we will use these below. But subjective probabilities themselves stand in functional relations to other mental states, and we need to trace the knock-on consequences of revisionism for this interrelationship (arguably, degrees of belief only count as kinds of belief in virtue of standing in these functional relationships).
Does feature evolution stop once we have acquired sufficient features to perform a recognition task? With extended practice, novices may develop a more sophisticated feature space that allows them to perform more accurately or quickly. Our work on perceptual expertise indicates that feature learning and reorganization can continue even after an initial set of features is available to represent a novel class of objects.