According to the Generality Constraint, mental states with conceptual content must be capable of recombining in certain systematic ways. Drawing on empirical evidence from cognitive science, I argue that so-called analogue magnitude states violate this recombinability condition and thus have nonconceptual content. I further argue that this result has two significant consequences: it demonstrates that nonconceptual content seeps beyond perception and infiltrates cognition; and it shows that whether mental states have nonconceptual content is largely an empirical matter determined by the structure of the neural representations underlying them.
In “Why the generality problem is everybody’s problem,” Michael Bishop argues that every theory of justification needs a solution to the generality problem. He contends that a solution is needed in order for any theory to be used in giving an acceptable account of the justificatory status of beliefs in certain examples. In response, first I will describe the generality problem that is specific to process reliabilism and two other sorts of problems that are essentially the same. Then I will argue that the examples that Bishop presents pose no such problem for some theories. I will illustrate the exempt theories by describing how an evidentialist view can account for the justification in the examples without having any similar problem. It will be clear that other views about justification are likewise unaffected by anything like the generality problem.
The No-Miracles Argument (NMA) is often used to support scientific realism. We can formulate this argument as an inference to the best explanation, a formulation that has drawn the accusation of circularity; realists have responded to this accusation by appealing to reliabilism, an externalist epistemology. In this paper I argue that this retreat fails. Reliabilism suffers from a potentially devastating difficulty known as the Generality Problem, and attempts to solve this problem require adopting both epistemic and metaphysical assumptions regarding local scientific theories. Although the externalist can happily adopt the former, if he adopts the latter then the Generality Problem arises again, but now at the level of scientific methodology. Answering this new version of the Generality Problem is impossible for the scientific realist without making the important further assumption that a unique rule of inference exists. Doing this, however, would make the NMA viciously premise-circular.
Reliabilist theories of knowledge face the “generality problem”: any token of a belief-forming process instantiates types at different levels of generality, which can vary in reliability. I argue that we exploit this situation in epistemic evaluation; we appraise beliefs in different ways by adverting to reliability at different levels of generality. We can detect at least two distinct uses of reliability, which underlie different sorts of appraisals of beliefs and believers.
The generality of a derivation is an equivalence relation on the set of occurrences of variables in its premises and conclusion such that two occurrences of the same variable are in this relation if and only if they must remain occurrences of the same variable in every generalization of the derivation. The variables in question are propositional or of another type. A generalization of the derivation consists in diversifying variables without changing the rules of inference. This paper examines, in the setting of categorial proof theory, the conjecture that two derivations with the same premises and conclusions stand for the same proof if and only if they have the same generality. For that purpose generality is defined within a category whose arrows are equivalence relations on finite ordinals, where composition is rather complicated. Several examples are given of deductive systems of derivations covering fragments of logic, with the associated map into the category of equivalence relations of generality. This category is isomorphically represented in the category whose arrows are binary relations between finite ordinals, where composition is the usual simple composition of relations. This representation is related to a classical representation result of Richard Brauer.
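As a concrete aside (my own sketch, not drawn from the paper), the "usual simple composition of relations" between finite ordinals mentioned in this abstract can be rendered in a few lines; the particular relation values below are invented purely for illustration:

```python
# Sketch: a binary relation between finite ordinals m and n is a set of
# pairs (i, j) with 0 <= i < m and 0 <= j < n. Composition is the usual
# relational composition: (i, k) is in r;s iff some middle element j
# links i to k through both r and s.

def compose(r, s):
    """Relational composition of r ⊆ m×n with s ⊆ n×p, yielding r;s ⊆ m×p."""
    return {(i, k) for (i, j1) in r for (j2, k) in s if j1 == j2}

# Invented example: think of pairs as linking occurrences of variables.
r = {(0, 0), (0, 1)}          # one source occurrence linked to two targets
s = {(0, 0), (1, 0)}          # both targets linked back to one occurrence
print(compose(r, s))          # {(0, 0)}

# Identity relation on the 2-element ordinal acts as a unit:
id2 = {(0, 0), (1, 1)}
print(compose(r, id2) == r)   # True
```

In the category of equivalence relations that the paper works with, composition is more involved than this; the point of the isomorphic representation the abstract mentions is precisely that it replaces that complicated composition with the simple one sketched here.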
The generality relativist has been accused of holding a self-defeating thesis. Kit Fine proposed a modal version of generality relativism that tries to resist this claim. We discuss his proposal and argue that one of its formulations is self-defeating.
A prized property of theories of all kinds is that of generality, of applicability or at least relevance to a wide range of circumstances and situations. The purpose of this article is to present a pair of distinctions that suggest that three kinds of generality are to be found in mathematics and logics, not only at some particular period but especially in developments that take place over time: ‘omnipresent’ and ‘multipresent’ theories, and ‘ubiquitous’ notions that form dependent parts, or moments, of theories. The category of ‘facets’ is also introduced, primarily to assess the roles of diagrams and notations in these two disciplines. Various consequences are explored, starting with means of developing applied mathematics, and then reconsidering several established ways of elaborating or appraising theories, such as analogising, revolutions, abstraction, unification, reduction and axiomatisation. The influence of theories already in place upon theory-building is emphasised. The roles in both mathematics and logics of set theory, abstract algebras, metamathematics, and model theory are assessed, along with the different relationships between the two disciplines adopted in algebraic logic and in mathematical logic. Finally, the issue of monism versus pluralism in these two disciplines is rehearsed, and some suggestions are made about the special character of mathematical and logical knowledge, and also the differences between them. Since the article is basically an exercise in historiography, historical examples and case studies are described or noted throughout.
Williamson (2000) [Knowledge and its Limits, Oxford: Oxford University Press] argues that attempts to substitute narrow mental states or narrow/environmental composites for broad and factive mental states will result in poorer explanations of behavior. I resist Williamson's argument.
Epistemic luck has been the focus of much discussion recently. Perhaps the most general knowledge-precluding type is veritic luck, where a belief is true but might easily have been false. Veritic luck has two sources, and so eliminating it requires two distinct conditions for a theory of knowledge. I argue that, when one sets out those conditions properly, a solution to the generality problem for reliabilism emerges.
According to reliabilists about epistemic justification, what makes a belief epistemically justified is that it was produced by a reliable process of belief-formation. Earl Conee and Richard Feldman have forcefully presented a problem for such reliabilism, "the generality problem." The generality problem arises once we realize that the notion of reliability applies straightforwardly only to types of process—for only types of process are repeatable entities which can produce true or false beliefs in each of their instances. Moreover, any token process will be an instance of indefinitely many types of process. Which of these types must be reliable for my belief to be justified, according to reliabilism? That question, generalized to cover every case of belief-formation, is the generality problem for reliabilism. In this paper I propose a solution to the generality problem. The solution makes use of the basing relation, and so, given that it isn't clear how to characterize that relation, it might be thought to replace one problem with another. I argue that, however difficult it is to characterize the basing relation, every adequate epistemological theory must make use of it implicitly or explicitly. Therefore, it is perfectly legitimate to appeal to the basing relation in solving a problem for an epistemological theory.
This paper is about the claim that, necessarily, a subject who can think that a is F must also have the capacities to think that a is G, a is H, a is I, and so on (for some reasonable range of G, H, I), and that b is F, c is F, d is F, and so on (for some reasonable range of b, c, d). I set out, and raise objections to, two arguments for a strong version of this claim (Gareth Evans' generality constraint). I present a new argument for a weaker version of the claim, and sketch some directions of enquiry which this new argument opens up.
Is human behavior exclusively motivated by self-interest? Common sense indicates that we should flatly deny this, or so it seems to me. Yet the doctrine of universal self-interest, psychological egoism for short, has gained the support of many researchers in science. Common sense also seems to allow the rejection of ethical egoism, the doctrine that human behavior should be motivated exclusively by self-interest. It appears to be at variance with widely endorsed moralities. Yet it is a perennial subject of research in ethics. What stance should we take in the face of these discrepancies? Two views suggest themselves: either commonsensical views of egoism and altruism are flawed, or research on the subject in science and ethics is misguided. Considering ethics, I argue in this article that research is misguided to the extent that it is conducted at inappropriately high levels of generality. I argue that both ethical egoism and psychological egoism are mistaken.
The generality problem is widely considered to be a devastating objection to reliabilist theories of justification. My goal in this paper is to argue that a version of the generality problem applies to all plausible theories of justification. Assume that any plausible theory must allow for the possibility of reflective justification—S's belief, B, is justified on the basis of S's knowledge that she arrived at B as a result of a highly (but not perfectly) reliable way of reasoning, R. The generality problem applies to all cases of reflective justification: Given that B is the product of a process-token that is an instance of indefinitely many belief-forming process-types (or BFPTs), why is the reliability of R, rather than the reliability of one of the indefinitely many other BFPTs, relevant to B's justificatory status? This form of the generality problem is restricted because it applies only to cases of reflective justification. But unless it is solved, the generality problem haunts all plausible theories of justification, not just reliabilist ones.
One of Laurence BonJour’s main arguments for the existence of the a priori is an argument that a priori justification is indispensable for making inferences from experience to conclusions that go beyond experience. This argument has recently come under heavy fire from Albert Casullo, who has dubbed BonJour’s argument “The Generality Argument.” In this paper I (i) defend the Generality Argument against Casullo’s criticisms, and (ii) develop a new, more plausible, version of the Generality Argument in response to some other objections of my own. Two of these objections stem from BonJour’s failure to fully consider the importance of the distinction between being justified in believing that an inference is good and being justified in making an inference. The final version of the argument that I develop sees the Generality Argument as one part of a cumulative case argument for the existence of a priori justification, rather than as a stand-alone knock-down argument.
In conversations between native speakers, words such as ‘same’ and ‘identical’ do not usually cause much difficulty. We take it for granted that others use them with the same sense as we do. If it is unclear whether numerical or qualitative identity is intended, a brief gloss such as ‘one thing not two’ for the former or ‘exactly alike’ for the latter removes the unclarity. In this paper, numerical identity is intended. A particularly conscientious and logically aware speaker might explain what ‘identical’ means in her…
Many commentators have attempted to say, more clearly than Wittgenstein did in his Tractatus logico-philosophicus, what sort of things the ‘simple objects’ spoken of in that book are. A minority approach, but in my view the correct one, is to reject all such attempts as misplaced. The Tractarian notion of an object is categorially indeterminate: in contrast with both Frege's and Russell's practice, it is not the logician's task to give a specific categorial account of the internal structure of elementary propositions or atomic facts, nor, correlatively, to give an account of the forms of simple objects. The few commentators who have hitherto maintained this view have mainly devoted themselves to establishing that this was Wittgenstein's intention, and do not much address the question why Wittgenstein held that it is not the logician's business to say what the objects are. The present paper means to fill this lacuna by placing this view in the context of the Tractatus's treatment of logic generally, and in particular by connecting it with Wittgenstein's treatment of generality and with his reaction to Russell's approach to logical form.
The problem addressed is that of finding a sound characterization of ambiguity. Two kinds of characterizations are distinguished: tests and definitions. Various definitions of ambiguity are critically examined and contrasted with definitions of generality and indeterminacy, concepts with which ambiguity is sometimes confused. One definition of ambiguity is defended as being more theoretically adequate than others which have been suggested by both philosophers and linguists. It is also shown how this definition of ambiguity obviates a problem thought to be posed by ambiguity for truth-theoretical semantics. In addition, the best known test for ambiguity, namely the test by contradiction, is set out, its limitations discussed, and its connection with ambiguity's definition explained. The test is contrasted with a test for vagueness first proposed by Peirce and a test for generality propounded by Margalit.
The problem of absolute generality has attracted much attention in recent philosophy. Agustin Rayo and Gabriel Uzquiano have assembled a distinguished team of contributors to write new essays on the topic. They investigate the question of whether it is possible to attain absolute generality in thought and language and the ramifications of this question in the philosophy of logic and mathematics.
Demands for generality sometimes exert a powerful influence on our thinking, pressing us to treat more general moral positions, such as consequentialism, as superior to more specific ones, like those which incorporate agent-centered restrictions or prerogatives. I articulate both foundationalist and coherentist versions of the demands for generality and argue that we can best understand these demands in terms of a certain underlying metaphysical commitment. I consider and reject various arguments which might be offered in support of this commitment, and argue that generality may not be the weapon in moral argument that it is sometimes thought to be.
General statements have been the chief subject matter of logic since Aristotle’s syllogistic. They have also been a fundamental concern of metaphysics, though only since Frege invented modern quantification theory. Indeed, logicians and even metaphysicians seldom ask what, if anything, general statements correspond to in the world. But Frege and Russell did, and the question became a major theme in Wittgenstein’s early (pre-1929) and Gustav Bergmann’s later (post-1959) works. All four were aware that, as Bergmann put it in his posthumously published New Foundations of Ontology, there could not be any laws of nature if generality were not in the world.[i] Generality must be in the world if the world is at all how science, indeed any cognition beyond that of babes, takes it to be. This is why all four were also aware of the tie of the topic to what became known as the realism/antirealism issue.[ii]
The purpose of this paper is to offer a diagnosis of, and a resolution to, the generality problem. I state the generality problem and suggest a distinction between criteria of relevance and what I call a theory of determination. The generality problem may concern either of these. While plausible criteria of relevance would be convenient for the externalist, he does not need them. I discuss various theories of determination, and argue that no existing theory of determination is plausible. This provides a case for the no determination view: there are no facts that determine relevant types. This is the diagnosis of the generality problem. The externalist, however, may embrace the no determination view. This is what provides a resolution to the generality problem.
The dilemma referred to in the title occurs in many contexts concerned with expressive meaning in art, and especially music, which suggests that the issue it raises will be central to any complete theory of musical expressiveness. One notable attempt to resolve the paradox of simultaneous generality and particularity in music is in Aaron Ridley's book Music, Value and the Passions. I show why I consider his account unsatisfactory and then propose my own resolution of the paradox. It takes the form of distinguishing between two distinct notions of generality (which I term ‘generality’ and ‘abstractness’) and of particularity (‘specificity’ and ‘concreteness’), and of constructing two relatively independent oppositions: the concrete versus the abstract and the specific versus the general. Finally, I show that a description of music's expressive meaning as abstract, but specific, rightly captures what is usually thought about music, and does not entail any contradictions.
In his debates with Daniel Kahneman and Amos Tversky, Gerd Gigerenzer puts forward a stricter standard for the proper representation of judgment heuristics. I argue that Gigerenzer’s stricter standard contributes to naturalized epistemology in two ways. First, Gigerenzer’s standard can be used to winnow away cognitive processes that are inappropriately characterized and should not be used in the epistemic evaluation of belief. Second, Gigerenzer’s critique helps to recast the generality problem in naturalized epistemology and cognitive psychology as the methodological problem of identifying criteria for the appropriate specification and characterization of cognitive processes in psychological explanations. I conclude that naturalized epistemologists seeking to address the generality problem should turn their focus to methodological questions about the proper characterization of cognitive processes for the purposes of psychological explanation.
I argue that we should not adopt categorial restrictions on the significance of syntactically well-formed strings. Even syntactically well-formed but semantically absurd strings, such as ‘Life is but a walking shadow’ and ‘Caesar is a prime number’, can express thoughts; and competent thinkers both can and ought to be able to grasp such thoughts. A more specific way of putting this claim is that Gareth Evans’ Generality Constraint should be viewed as a fully general constraint on concept possession and propositional thought, even though Evans himself accepted only a categorially-restricted version of the Constraint. I establish this by arguing, first, that even well-formed but semantically cross-categorial strings often do possess substantive inferential roles; second, that hearers exploit these inferential roles in interpreting such strings metaphorically; and third, that there is no good reason to deny truth-conditions to strings with inferential roles.
In this paper I discuss Heck's (2007) new argument for content dualism. This argument is based on the claim that conceptual states, but not perceptual states, meet Evans's Generality Constraint. Heck argues that this claim, together with the idea that the kind of content we should attribute to a mental state depends on which generalizations the state satisfies, implies that conceptual states and perceptual states have different kinds of contents. I argue, however, that it is unlikely that there is a plausible reading of the Generality Constraint under which it is non-trivially true both that conceptual states meet it and that perceptual states do not. Therefore, the soundness of Heck's argument is dubious.
The generality problem is a well-known problem for process reliabilist theories of justification. Here’s how the problem usually gets started. In the first instance, token processes of belief formation are not themselves reliable or unreliable. Rather, it is types of processes of belief formation that are reliable or unreliable. But any token process is an instance of many different types. And these types may differ in reliability.
Some thirty years ago, two proposals were made concerning criteria for identity of proofs. Prawitz proposed to analyze identity of proofs in terms of the equivalence relation based on reduction to normal form in natural deduction. Lambek worked on a normalization proposal analogous to Prawitz's, based on reduction to cut-free form in sequent systems, but he also suggested understanding identity of proofs in terms of an equivalence relation based on generality, two derivations having the same generality if after generalizing maximally the rules involved in them they yield the same premises and conclusions up to a renaming of variables. These two proposals proved to be extensionally equivalent only for limited fragments of logic. The normalization proposal stands behind very successful applications of the typed lambda calculus and of category theory in the proof theory of intuitionistic logic. In classical logic, however, it did not fare well. The generality proposal was rather neglected in logic, though related matters were much studied in pure category theory in connection with coherence problems, and there are also links to low-dimensional topology and linear algebra. This proposal seems more promising than the other one for the general proof theory of classical logic.
From 1905–1908 onward, Russell thought that his new ‘substitutional theory’ provided him with the right framework to resolve the set-theoretic paradoxes. Even if he did not finally retain this resolution, the substitutional strategy was instrumental in the development of his thought. The aim of this paper is not historical, however. It is to show that Russell's substitutional insight can shed new light on current issues in philosophy of mathematics. After having briefly expounded Russell's key notion of a ‘structured variable’, I connect it to two ongoing discussions: the Caesar problem, and absolute generality.
I present what might seem to be a local, deterministic model of the EPR-Bohm experiment, inspired by recent work by Joy Christian, that appears at first blush to be in tension with Bell-type theorems. I argue that the model ultimately fails to do what a hidden variable theory needs to do, but that it is interesting nonetheless because the way it fails helps clarify the scope and generality of Bell-type theorems. I formulate and prove a minor proposition that makes explicit how Bell-type theorems rule out models of the sort I describe here.
This article is concerned with a statistical proposal due to James R. Beebe for how to solve the generality problem for process reliabilism. The proposal is highlighted by Alvin I. Goldman as an interesting candidate solution. However, Goldman raises the worry that the proposal may not always yield a determinate result. We address this worry by proving a dilemma: either the statistical approach does not yield a determinate result or it leads to trivialization, i.e. reliability collapses into truth (and anti-reliability into falsehood). Various strategies for avoiding this predicament are considered, including revising the statistical rule or restricting its application to natural kinds. All amendments are seen to have serious problems of their own. We conclude that reliabilists need to look elsewhere for a convincing solution to the generality problem.
This paper deals with Wittgenstein's statement that our "craving for generality" is a main source of confusion in philosophy. It is argued that difficulties connected with this tendency also affect most attempts to explain or elaborate Wittgenstein's philosophical thinking, since most commentaries elucidate his thinking in general terms, in the notions and classificatory apparatus of some prevalent vocabulary of professional philosophy. It is argued that this craving for generality is closely tied up with another tendency of traditional philosophy, namely the tendency to impose substantive normative claims. The effort to dissociate himself from this tendency is a main feature of the late Wittgenstein's philosophy that has not been sufficiently observed.
Although it is common to attribute to Wittgenstein in the Tractatus a treatment of general propositions as equivalent to conjunctions and disjunctions of instance propositions, the evidence for this is not perfectly clear. This article considers Wittgenstein’s comments in 5.521, which can be read as rejecting such a treatment. It argues that properly situating the Tractatus historically allows for a revised reading of 5.521 and other parts of the Tractatus relevant to Wittgenstein’s theory of generality. The result is that 5.521 does not conflict with the view that general propositions are truth-functions of instance propositions. Common problems with such a view are to some extent obviated by the fact that Wittgenstein, following Russell and Moore, was not concerned with a syntactically defined language, but with propositions conceived as independent of a fixed language.
Object identity, the apprehension that two glimpses refer to the same object, is offered as an example of combining generality, mathematics, and evolution. We argue that it applies to glimpses in time (apparent motion), modality (ventriloquism), and space (Gestalt grouping); that it has a mathematically elegant solution of nested geometries (Euclidean, Similarity, Affine, Projective, Topology); and that it is evolutionarily sound despite our Euclidean world. [Shepard].
Naturalistic epistemologists frequently assume that their aim is to identify generalities (i.e. general laws) about the effectiveness of particular reasoning processes and methods. This paper argues that the search for this kind of generality fails. Work that has been done thus far to identify generalities (e.g. by Goldman, Kitcher and Thagard) overlooks both the complexity of reasoning and the relativity of assessments to particular contexts (domain, stage and goal of inquiry). Examples of human reasoning which show both complexity and contextuality are given. The paper concludes with a discussion of the kind of multivariate model of reasoning that naturalistic epistemologists can use to evaluate processes and methods for specific domains.
Does ethics have adequate general theories? Our analysis shows that this question does not have a straightforward answer since the key terms are ambiguous. So we should not concentrate on the answer but on the question itself. “Ethics” stands for many things, but we let that pass. “Adequate” may refer to varied arrays of methodological principles which are seldom fully articulated in ethics. “General” is a notion with at least three meanings. Different kinds of generality may be at cross-purposes, so we must not expect theories to be general in sundry senses. “Theory,” for that matter, is itself ambiguous. Some thinkers say that ethics cannot have theories, while others deny it. We doubt whether opposing parties are talking about the same things. No wonder, then, that controversies in ethics are long-lasting and unproductive. We hope that the methodology we have presented will alleviate some of them. The examples we chose show that this is feasible. Views such as Hare's and Jonsen and Toulmin's, which are seemingly wide apart, show convergence if we put them in a methodological perspective. Our analysis also suggests that many alleged differences between science and ethics could fade away if methodology is brought to bear on them. Specifically, the idea that ethics compares poorly with science in view of limited generality, or poor means of justification, is unfounded. Those who defend this view over-rate the powers of science.
I distinguish between being cognisant and being able to perform intelligent operations. The former, but not the latter, minimally involves the capacity to make adequate judgements about one's relation to objects in the environment. The referential nature of cognisance entails that the mental states of cognisant systems must be inter-related holistically, such that an individual thought becomes possible because of its relation to a system of potential thoughts. I use Gareth Evans' 'Generality Constraint' as a means of describing how the reference and holism of mental states in cognisant systems are mutually dependent. Next, I describe attempts to deny the relevance of holism and reference by positing a mentalese. These attempts fail because the meanings of symbols are underdetermined, with there being no principled means of distinguishing between the mental tokening of a symbol and its disambiguation. I argue that the connectionist meta-theory does not encounter this problem because it is able to encompass the holism of the mental. Recent attempts to show that symbol-processing theories of thought must be preferred to connectionist theories do not work. Despite appearances to the contrary, the Generality Constraint favours connectionist, not symbol-processing, theories.
In America by the 1930s, albino rats had become a kind of generic standard in research on physiology and behavior that de-emphasized diversity across species. However, prior to about 1915, the early work of many of the pioneer rat researchers in America and in central Europe reflected a strong interest in species differences and a deep regard for diversity. These scientists sought broad, often medical, generality, but their quest for generality using a standard animal did not entail a (...) de-emphasis of organic diversity. They chose white rats as test animals for two primary reasons. First, rats develop very slowly. They therefore made features of physiological, neural and psychological development accessible to the experimental method at a time when its application to the phenomena of development remained controversial. Second, rats were thought to have unusually strong sex drives. For this reason they became central to the experimental study of sexuality and, in the work of the reproductive physiologist Eugen Steinach, sexual development. Connections among three research institutes that stressed experimental approaches to the study of brain and development demonstrate the importance of the rat's institutional role. As the emphasis on experimentation in the study of development grew, two of these institutes bred rats to provide uniform materials. Eventually, however, their reasons for selecting rats were lost; and the ready availability of a uniform test animal led to a shift in scientists' presumptions about diversity, as the standard rat became a tool for assuring generality. (shrink)
Webb has articulated a clear, multi-dimensional framework for discussing simulation models and modelling strategies. This framework will likely co-evolve with modelling. As such, it will be important to continue to clarify these dimensions and perhaps add to them. I discuss the dimension of generality and suggest that a dimension of integrativeness may also be needed.
We introduce the notion of an alphabetic trace of a cut-free intuitionistic propositional proof and show that it serves to characterize the equality of arrows in cartesian closed categories. We also show that alphabetic traces improve on the notion of the generality of proofs proposed in the literature. The main theorem of the paper yields a new and considerably simpler solution of the coherence problem for cartesian closed categories than those in [11, 14].
In this article, I aim to present some difficulties concerning the possibility that Model Theory might constitute a General Theory of Interpretation. Specifically, my claim is that what Orayen's Paradox shows us is that interpretations can be neither sets nor objects. Hence, any elucidation of the intuitive concept of interpretation that appeals to such entities is doomed to failure. Secondarily, I show that no set-theoretic assumption is indispensable for the paradox to arise: all that is needed is that interpretations be objects. I argue that if interpretations are objects, as is presupposed by the possibility of quantifying over them in order to give a satisfactory characterization of logical consequence, then self-application (as a case of applying the semantic resources of Model Theory to find an interpretation with maximal generality) is impossible. Finally, I discuss each of the two solutions that Orayen himself envisaged for his paradox and show that each faces its own difficulties. (shrink)
ID: 89 / Parallel 4k: 2 Single paper Topics: Philosophy of mind, Philosophy of science Keywords: Cognitive Science, Cognitive Neuroscience, Mechanistic explanations, Reductionism, Normativity, Generality, Emerging School of Philosophers of Science. The role of philosophy in cognitive science: mechanistic explanations, normativity, generality Mohammadreza Haghighi Fard Leiden University, The Netherlands; email@example.com Introduction -/- Cognitive science, as an interdisciplinary research endeavour, seeks to explain mental activities such as reasoning, remembering, language use, and problem solving, and the explanations it advances commonly (...) involve descriptions of the mechanisms responsible for these activities. Cognitive mechanisms are distinguished from the mechanisms invoked in other domains of biology by involving the processing of information, and many of the philosophical issues discussed in the context of cognitive science concern the nature of information processing. For philosophy of science, a central question is what counts as a scientific explanation. But what is a mechanistic explanation, how does it work, and how can philosophy of science use it to address the problem of integration in cognitive science? By answering these questions and combining my answers with a discussion of the concepts of normativity and generality, I will investigate the following claim. -/- I claim that philosophy, by deploying strong concepts such as normativity and generality together with a mechanistic philosophy of explanation, can be one of the most important contributors to cognitive science. I also investigate how philosophy of science can serve as a bridge between psychology and neuroscience. We need a distinction between philosophy of cognitive science and philosophy in cognitive science; I am concerned here with the latter. -/- This claim matters greatly for the integration and the future of the interdisciplinary field known as cognitive science.
-/- Philosophy as a true cognitive science -/- When the Cognitive Science Society was founded, in the late 1970s, philosophy, neuroscience, and anthropology played smaller roles. The three disciplines that formed the core group were artificial intelligence, psychology, and linguistics. The curious thing is that George Miller, a psychologist and an important founder of cognitive science, placed philosophy at the top and neuroscience at the very bottom of the hexagon diagram he presented. There is broad agreement now that neuroscience is the most important contributor to cognitive science, and there are substantial connections between philosophy and neuroscience; in that diagram there was almost no connection between the two. -/- The development and rise of cognitive science over the last half-century have been accompanied by a considerable amount of philosophical activity. Perhaps no other area within analytic philosophy in the second half of that period has attracted more attention or produced more publications (Bechtel and Graham 1998; Rumelhart and Bly 1999; Bechtel, Mandik, and Mundale 2001; Thagard 2007; Bennett and Dennett et al. 2007; Bennett and Hacker 2008; Andler 2009; Frankish and Ramsey 2012). -/- Many philosophers of science offer conclusions that have a direct bearing on cognitive science, and its practitioners can profit from closer engagement with the rest of the field. For example, William Bechtel has discussed three projects pursued during the past 30 years, two in naturalistic philosophy of mind and one in naturalistic philosophy of science, which, he contends, can make theoretical and methodological contributions to cognitive science (Bechtel, 2009). Paul Thagard is another example of the emerging school of philosophers of science who define cognitive science as the interdisciplinary investigation of mind and intelligence (Thagard, 2006).
By posing general but important philosophical questions, such as “What is the nature of the explanations and theories developed in cognitive science?”, and by providing answers to them, Thagard has shown how philosophy of science can help cognitive science through the advantage of its generality. Andrew Brook, by contrast, believes that philosophical approaches have never had a settled place in cognitive science, yet he too belongs to the group of philosophers of science who engage very closely with cognitive science (Brook, 2009). Daniel Dennett, also a member of the aforementioned group of naturalistic philosophers of science, believes that there is much good work for philosophers to do in cognitive science if they adopt the constructive attitude that prevails in science. -/- What are mechanisms? Let us begin abstractly before considering an example. Mechanisms are collections of entities and activities organized together to do something (cf. Machamer, Darden, & Craver, 2000; Craver & Darden, 2001; Bechtel & Richardson, 1993; Glennan, 1996). Explanations that describe such mechanisms are known as ‘mechanistic explanations’. By using and developing these mechanistic explanations, philosophy of science can draw normative consequences for cognitive science. Paul Thagard (Thagard, 2006 and 2009), William Bechtel (Bechtel, 2008 and 2009), and Andrew Brook (Brook, 2008) have investigated and promoted appeals to normativity in philosophy in order to show the crucial role philosophy of science can play in the interdisciplinary field of cognitive science. Some philosophers have thought that, in order to pursue this normative function, philosophy must distance itself from empirical matters, but there are examples where the investigation of descriptive and normative issues goes hand in hand (Thagard, 2009). -/- I will investigate how we can reduce a higher-level science such as psychology to neuroscience, not by way of classical reductionism and its problems, but via mechanistic explanations.
By avoiding those problems I mean, in particular, that psychology does not lose its autonomy. -/- Conclusion -/- If cognitive science is all about understanding the human mind, or is the interdisciplinary investigation of mind and intelligence, then, since philosophy has always been concerned with ways of knowing (epistemology) and conceptions of reality (metaphysics), and has long addressed the so-called mind-body problem (identity theory, functionalism, and heuristic identity theory), philosophy deserves to be one of the most important contributors to cognitive science. I have argued for this by appealing to three advantages of philosophy, normativity, generality, and mechanistic explanation, and by introducing an emerging school of mechanistic (not mechanical) philosophers. One thing remains: since cognitive science is a two-way street, philosophers also need to stop at the station of cognitive science and learn from the most important advances in brain science and neuroscience. (shrink)