Paul Boghossian (1997) has argued that there is much to be said on behalf of the notion of analyticity so long as we distinguish epistemic analyticity and metaphysical analyticity. In particular, (1) epistemic analyticity isn’t undermined by Quine’s critique of the analytic-synthetic distinction, (2) it can explain the a prioricity of logic, and (3) epistemic analyticity can’t be rejected short of embracing semantic irrealism. In this paper, we argue that all three of these claims are mistaken.
In a series of ten articles from leading American and European scholars, Pragmatist Epistemologies explores the central themes of epistemology in the pragmatist tradition through a synthesis of new and old pragmatist thought, engaging contemporary issues while also examining them from a historical perspective. It opens a new avenue of research in contemporary pragmatism, continuous with the main figures of the pragmatist tradition and incorporating contemporary trends in philosophy. Students and scholars of American philosophy will find this book indispensable.
The philosophy of cognitive science is concerned with fundamental philosophical and theoretical questions connected to the sciences of the mind. How does the brain give rise to conscious experience? Does speaking a language change how we think? Is a genuinely intelligent computer possible? What features of the mind are innate? Advances in cognitive science have given philosophers important tools for addressing these sorts of questions; and cognitive scientists have, in turn, found themselves drawing upon insights from philosophy, insights that have often taken their research in novel directions. The Oxford Handbook of Philosophy of Cognitive Science brings together twenty-one newly commissioned chapters by leading researchers in this rich and fast-growing area of philosophy. It is an indispensable resource for anyone who seeks to understand the implications of cognitive science for philosophy, and the role of philosophy within cognitive science.
In [Laurence, Margolis 2003] the authors try, within their polemic against F. Jackson's views in [Jackson 1998], to decide the question whether concepts are a priori (in their formulation, whether they are "to be defined a priori"). Their discussion suffers, as do a number of similar articles, from a typical drawback: a problem whose solution requires an exact notion of concept is handled as if that notion were quite clear. The consequences of this 'conceptual laxity' are that (a) the topic of the discussion is not very clear (what does the phrase 'concepts must be defined a priori' mean?); (b) the relevance of the Quinean criticism of the "second dogma of empiricism", i.e., of Quine's claim that "science sometimes overturns our most cherished beliefs" and that there is therefore no sharp boundary between the analytic and the synthetic, is uncritically accepted; and (c) no distinction is made between the question whether the relation between an expression and its meaning is a priori and the question whether the relation between a concept and the object identified by the concept is a priori. The present article intends to elucidate, and then to answer, the questions that can be asked when we say something like "concepts are a priori".
Stephen Laurence and Eric Margolis have recently argued that certain kinds of regress arguments against the language of thought (LOT) hypothesis as an account of how we understand natural languages have been answered incorrectly or inadequately by supporters of LOT ('Regress arguments against the language of thought', Analysis, 57 (1), 60-66, January 1997). They argue further that this does not undermine the LOT hypothesis, since the main sources of support for LOT are (or might be) independent of its providing an account of how we understand natural language. In my paper I seek to refute both these claims, and to reinstate the putative explanation of natural language understanding as a necessarily central part of the support for LOT. The main argument exploits the fact that Laurence and Margolis give too little weight to the ideas (a) that LOT might be innate and (b) that for LOT supporters a semantic theory must apply to in-the-head tokens, not linguistic utterances.
A central concern of economics is how society allocates its resources. Modern economies rely on two institutions to do the allocating: markets and governments. But how much of the allocating should be performed by markets, and how much by governments? This collection of readings will help students appreciate the power of the market. It supplements theoretical explanations of how markets work with concrete examples, addresses questions about whether markets actually work well, and offers evidence that supposed "market failures" are not as serious as claimed. Featuring readings from Hayek, William Baumol, Harold Demsetz, Daniel Fischel and Edward Lazear, Benjamin Klein and Keith B. Leffler, Stanley J. Liebowitz and Stephen E. Margolis, and John R. Lott, Jr., this book covers key topics such as: • Why markets are efficient allocators • How markets foster economic growth • Property rights • How markets choose standards • Asymmetric information • Whether firms abuse their power • Non-excludable goods • Monopolies The selections should be accessible to undergraduate students who have had an introductory course in economics. This reader can also be used as a supplement for courses in intermediate microeconomics, industrial organization, business and government, law and economics, and public policy.
The strong weak truth table (sw) reducibility was suggested by Downey, Hirschfeldt, and LaForte as a measure of relative randomness, alternative to the Solovay reducibility. It also occurs naturally in proofs in classical computability theory as well as in the recent work of Soare, Nabutovsky, and Weinberger on applications of computability to differential geometry. We study the sw-degrees of c.e. reals and construct a c.e. real which has no random c.e. real (i.e., Ω number) sw-above it.
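For readers unfamiliar with the notion, the sw-reducibility studied in this abstract is standardly defined as follows; this is our paraphrase of the Downey-Hirschfeldt-LaForte definition, and the notation may differ from the paper's own:

```latex
% Strong weak truth table (sw) reducibility, stated for sets of naturals;
% c.e. reals are compared via their binary expansions.
A \leq_{sw} B \iff \exists\,\Gamma\ \exists\, c \in \mathbb{N}\;
\bigl( \Gamma^{B} = A \;\wedge\; \forall n\ \mathrm{use}\bigl(\Gamma^{B}(n)\bigr) \le n + c \bigr)
```

Here $\Gamma$ ranges over oracle Turing machines and $\mathrm{use}(\Gamma^{B}(n))$ denotes the largest oracle position queried in the computation of $\Gamma^{B}(n)$; the restriction to use $n + c$ is what distinguishes sw-reducibility from ordinary weak truth table reducibility, where the use bound may be any computable function.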
G. E. Moore's ‘A Defence of Common Sense’ has generated the kind of interest and contrariety which often accompany what is new, provocative, and even important in philosophy. Moore himself reportedly agreed with Wittgenstein's estimate that this was his best article, while C. D. Broad has lamented its very great but largely unfortunate influence. Although the essay inspired Wittgenstein to explore the basis of Moore's claim to know many propositions of common sense to be true, A. J. Ayer judges its enduring value to lie in provoking a more sophisticated conception of the very type of metaphysics which disputes any such unqualified claim of certainty.
This article shows the significance of the unity of ethics and aesthetics in the Tractatus Logico-Philosophicus. First, it presents the main aspects of Tractarian ethics: that it does not rank facts hierarchically, that it is eudaimonistic, and that it does not propose any end external to the actions of the ethical subject. Second, it shows that the work of art is the expression of life from an ethical point of view, that is, it is the expression of the meaning of life from the point of view of eternity. In conclusion, it shows that this conception proposes an absolute delimitation separating what is art from what is not art.
This paper provides a new analysis of e-trust, trust occurring in digital contexts among the artificial agents of a distributed artificial system. The analysis endorses a non-psychological approach and rests on a Kantian regulative ideal of a rational agent, able to choose the best option for itself, given a specific scenario and a goal to achieve. The paper first introduces e-trust, describing its relevance for contemporary society, and then presents a new theoretical analysis of this phenomenon. The analysis first focuses on an agent's trustworthiness, which is presented as the necessary requirement for e-trust to occur. Then a new definition of e-trust as a second-order property of first-order relations is presented. It is shown that the second-order property of e-trust has the effect of minimising an agent's effort and commitment in the achievement of a given goal. On this basis, a method is provided for the objective assessment of the levels of e-trust occurring among the artificial agents of a distributed artificial system.
The E-Z Reader model (Reichle et al. 1998; 1999) provides a theoretical framework for understanding how word identification, visual processing, attention, and oculomotor control jointly determine when and where the eyes move during reading. In this article, we first review what is known about eye movements during reading. Then we provide an updated version of the model (E-Z Reader 7) and describe how it accounts for basic findings about eye movement control in reading. We then review several alternative models of eye movement control in reading, discussing both their core assumptions and their theoretical scope. On the basis of this discussion, we conclude that E-Z Reader provides the most comprehensive account of eye movement control during reading. Finally, we provide a brief overview of what is known about the neural systems that support the various components of reading, and suggest how the cognitive constructs of our model might map onto this neural architecture. Key Words: attention; eye-movement control; E-Z Reader; fixations; lexical access; models; reading; regressions; saccades.
The central problems of political philosophy (e.g., legitimate authority, distributive justice) mirror the central problems of business ethics. The question naturally arises: should political theories be applied to problems in business ethics? If a version of egalitarianism is the correct theory of justice for states, for example, does it follow that it is the correct theory of justice for businesses? If states should be democratically governed by their citizens, should businesses be democratically managed by their employees? Most theorists who have considered these questions, including John Rawls in Political Liberalism, and Robert Phillips and Joshua Margolis in a 1999 article, have said “no.” They claim that states and businesses are different kinds of entities, and hence require different theories of justice. I challenge this claim. While businesses differ from states, the difference is one of degree, not one of kind. Business ethics has much to learn from political philosophy.
E. F. Carritt (1876-1964) was educated at and taught at Oxford University. He made substantial contributions both to aesthetics and to moral philosophy. The focus of this entry is his work in moral philosophy. His most notable works in this field are The Theory of Morals (1928) and Ethical and Political Thinking (1947). Carritt developed views in both metaethics and normative ethics. In metaethics he defends a cognitivist, non-naturalist moral realism and was among the first to respond to A. J. Ayer's emotivist challenge to this view. In normative ethics he advocates a deontological view in which there is a plurality of obligations and of non-instrumental goods. In the context of defending this view he raised some penetrating and novel criticisms of ideal utilitarianism. He held that it is not acceptable to revise our reflective common-sense moral attitudes in the face of philosophical moral theories, and that moral philosophy is only indirectly practical.
Contemporary health care relies on electronic devices. These technologies are not ethically neutral but change the practice of care. In light of Sennett's work and that of other thinkers (Dewey, Dreyfus, Borgmann), one worry is that “e-care”—care by means of new information and communication technologies—does not promote skilful and careful engagement with patients and hence is conducive neither to the quality of care nor to the virtues of the care worker. Attending to the kinds of knowledge involved in care work and their moral significance, this paper explores what “craftsmanship” means in the context of medicine and health care and discusses whether today the caregiver's craftsmanship is eroded. It is argued that this is a real danger, especially under modern conditions and in the case of telecare, but that whether it happens, and to what extent it happens, depends on whether in a specific practice and given a specific technology e-carers can develop the know-how and skill to engage more intensely with those under their care and to cooperate with their co-workers.
There is a Standard Objection to the idea that concepts might be prototypes (or exemplars, or stereotypes): because they are productive, concepts must be compositional; prototypes aren't compositional, so concepts can't be prototypes (see, e.g., Margolis, 1994). However, two recent papers (Osherson and Smith, 1988; Kamp and Partee, 1995) reconsider this consensus. They suggest that, although the Standard Objection is probably right in the long run, the cases where prototypes fail to exhibit compositionality are relatively exotic and involve phenomena which any account of compositionality is likely to find hard to deal with; for example, the effects of quantifiers, indexicals, contextual constraints, etc. KP are even prepared to indulge a guarded optimism: "... when a suitably rich compositional theory... is developed, prototypes will be seen... as one property among many which only when taken altogether can support a compositional theory of combination" (p. 56). In this paper, we argue that the Standard Objection to prototype theory was right after all: the problems about compositionality are insuperable in even the most trivial sorts of examples; it is therefore as near to certain as anything in cognitive science ever gets that the structure of concepts is not statistical. Theories of categorization, concept acquisition, lexical meaning and the like which assume the contrary simply don't work. We commence with a general discussion of the constraints that an account of concepts must meet if their compositionality is to explain their productivity. We'll then turn to a criticism of proposals that OS and KP make for coping with some specific cases.
In this contribution, I explore the treatment that Plato devotes to Protagoras’ relativism in the first section of the Theaetetus (151 E 1–186 E 12) where, among other things, the definition that knowledge is perception is put under scrutiny. What I aim to do is to understand the subtlety of Plato’s argument about Protagorean relativism and, at the same time, to assess its philosophical significance by revealing the inextricability of ontological and epistemological aspects on which it is built (for this latter aspect, I refer to contemporary discussions of relativism, mainly to Margolis’ robust relativism). I then turn to Aristotle’s treatment of Protagoras’ relativism in Metaphysics Γ, sections 5 and 6, in order to show that Plato and Aristotle surprisingly share the same view as regards the philosophical content of Protagoras’ relativism (in doing so, I take position against the standard opinion among scholars that Plato and Aristotle understand Protagoras’ relativism in different, even incompatible, ways). What I ultimately aim to demonstrate is that Protagoras’ relativism, as understood by both Plato and Aristotle, is a coherent, even attractive, philosophical position.
I have two aims in this paper. In §§2-4 I contend that Moore has two arguments (not one) for the view that ‘good’ denotes a non-natural property not to be identified with the naturalistic properties of science and common sense (or, for that matter, the more exotic properties posited by metaphysicians and theologians). The first argument, the Barren Tautology Argument (or the BTA), is derived, via Sidgwick, from a long tradition of anti-naturalist polemic. But the second argument, the Open Question Argument proper (or the OQA), seems to have been Moore’s own invention and was probably devised to deal with naturalistic theories, such as Russell’s, which are immune to the Barren Tautology Argument. The OQA is valid and not (as Frankena (1939) has alleged) question-begging. Moreover, if its premises were true, it would have disposed of the desire-to-desire theory. But as I explain in §5, from 1970 onwards, two key premises of the OQA were successively called into question, the one because philosophers came to believe in synthetic identities between properties and the other because it led to the Paradox of Analysis. By 1989 a philosopher like Lewis could put forward, with a clean intellectual conscience, precisely the kind of theory that Moore professed to have refuted. However, in §§6-8 I shall argue that all is not lost for the OQA. I first press an objection to the desire-to-desire theory derived from Kripke’s famous epistemic argument. On reflection this argument looks uncannily like the OQA. But the premise on which it relies is weaker than the one that betrayed Moore by leading to the Paradox of Analysis. This suggests three conclusions: 1) that the desire-to-desire theory is false; 2) that the OQA can be revived, albeit in a modified form; and 3) that the revived OQA poses a serious threat to what might be called semantic naturalism.
The effectiveness of information retrieval technology in electronic discovery (E-discovery) has become the subject of judicial rulings and practitioner controversy. The scale and nature of E-discovery tasks, however, have pushed traditional information retrieval evaluation approaches to their limits. This paper reviews the legal and operational context of E-discovery and the approaches to evaluating search technology that have evolved in the research community. It then describes a multi-year effort carried out as part of the Text Retrieval Conference to develop evaluation methods for responsive review tasks in E-discovery. This work has led to new approaches to measuring effectiveness in both batch and interactive frameworks, large data sets, and some surprising results for the recall and precision of Boolean and statistical information retrieval methods. The paper concludes by offering some thoughts about future research in both the legal and technical communities toward the goal of reliable, effective use of information retrieval in E-discovery.
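For readers outside information retrieval, the recall and precision measures this abstract refers to are computed as follows; this is a generic sketch of the standard definitions, not code from the TREC effort itself:

```python
def precision_recall(retrieved, relevant):
    """Precision: fraction of retrieved documents that are relevant.
    Recall: fraction of relevant documents that were retrieved."""
    retrieved, relevant = set(retrieved), set(relevant)
    true_positives = len(retrieved & relevant)
    precision = true_positives / len(retrieved) if retrieved else 0.0
    recall = true_positives / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical example: a Boolean query returns documents {1, 2, 3, 4},
# while the responsive (relevant) set is {2, 3, 5, 6}.
p, r = precision_recall({1, 2, 3, 4}, {2, 3, 5, 6})  # p = 0.5, r = 0.5
```

In responsive review, recall is typically the contested measure, since a producing party must show it has found substantially all responsive documents, not merely that what it produced was on point.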
Given the very large numbers of documents involved in e-discovery investigations, lawyers face a considerable challenge of collaborative sensemaking. We report findings from three workplace studies which looked at different aspects of how this challenge was met. From a sociotechnical perspective, the studies aimed to understand how investigators collectively and individually worked with information to support sensemaking and decision making. Here, we focus on discovery-led refinement; specifically, how engaging with the materials of the investigations led to discoveries that supported refinement of the problems and new strategies for addressing them. These refinements were essential for tractability. We begin with observations which show how new lines of enquiry were recursively embedded. We then analyse the conceptual structure of a line of enquiry and consider how reflecting this in e-discovery support systems might support scalability and group collaboration. We then focus on the individual activity of manual document review where refinement corresponded with the inductive identification of classes of irrelevant and relevant documents within a collection. Our observations point to the effects of priming on dealing with these efficiently and to issues of cognitive ergonomics at the human–computer interface. We use these observations to introduce visualisations that might enable reviewers to deal with such refinements more efficiently.
The information overload in E-Discovery proceedings makes reviewing expensive and increases the risk of failure to produce results on time and consistently. New interactive techniques have been introduced to increase reviewer productivity. In contrast, the techniques presented in this article propose an alternative method that tries to reduce information during culling so that less information needs to be reviewed. The proposed method first focuses on mapping the email collection universe using straightforward statistical methods based on keyword filtering combined with date-time and custodian identities. Subsequently, a social network is constructed from the email collection, which is analyzed by filtering on date-time and keywords. By using the network context we expect to provide a better understanding of the keyword hits and the ability to discard certain parts of the collection.
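The culling pipeline this abstract describes can be sketched in a few lines. The field names, data layout, and helper functions below are illustrative assumptions for exposition, not the authors' implementation:

```python
from collections import Counter
from datetime import datetime

def cull(emails, keywords, start, end, custodians=None):
    """Keep emails within the date window that contain a keyword (and,
    optionally, that belong to one of the named custodians).
    Each email is assumed to be a dict with 'from', 'to', 'date',
    'custodian', 'subject', and 'body' fields (a hypothetical schema)."""
    hits = []
    for e in emails:
        if not (start <= e["date"] <= end):
            continue
        if custodians and e["custodian"] not in custodians:
            continue
        text = (e["subject"] + " " + e["body"]).lower()
        if any(k.lower() in text for k in keywords):
            hits.append(e)
    return hits

def social_network(emails):
    """Build a weighted edge list for the sender-recipient network:
    (sender, recipient) -> number of messages between them."""
    edges = Counter()
    for e in emails:
        for recipient in e["to"]:
            edges[(e["from"], recipient)] += 1
    return edges
```

Running `cull` first and `social_network` on the surviving hits gives the network context the abstract mentions: reviewers can see which custodians actually exchanged keyword-bearing mail in the relevant window and discard isolated parts of the collection.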
This article is aimed at contributing to the study of the relationship that new religious movements entertain with technology and science. It focuses on an object that is central in Scientology's teachings and practice: the Electropsychometer or E-meter. In interaction with the general public, such as in a 2014 Super Bowl TV advertisement, Scientology seems to claim a unique relationship with science and technology in the form of a “combination” and a “connection” evoked while displaying this very E-meter. Hence, exploring the teachings related to it is relevant in order to understand how such a combination or connection is conceptualized.