The epistemic modal auxiliaries must and might are vehicles for expressing the force with which a proposition follows from some body of evidence or information. Standard approaches model these operators using quantificational modal logic, but probabilistic approaches are becoming increasingly influential. According to a traditional view, must is a maximally strong epistemic operator and might is a bare possibility one. A competing account—popular amongst proponents of a probabilistic turn—says that, given a body of evidence, must p entails that Pr(p) is high but non-maximal and might p that Pr(p) is significantly greater than 0. Drawing on several observations concerning the behavior of must, might and similar epistemic operators in evidential contexts, deductive inferences, downplaying and retraction scenarios, and expressions of epistemic tension, I argue that those two influential accounts have systematic descriptive shortcomings. To better make sense of their complex behavior, I propose instead a broadly Kratzerian account according to which must p entails that Pr(p) = 1 and might p that Pr(p) > 0, given a body of evidence and a set of normality assumptions about the world. From this perspective, must and might are vehicles for expressing a common mode of reasoning whereby we draw inferences from specific bits of evidence against a rich set of background assumptions—some of which we represent as defeasible—which capture our general expectations about the world. I show that the predictions of this Kratzerian account can be substantially refined once it is combined with a specific yet independently motivated ‘grammatical’ approach to the computation of scalar implicatures. Finally, I discuss some implications of these results for more general discussions concerning the empirical and theoretical motivation to adopt a probabilistic semantic framework.
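The three accounts contrasted in this abstract can be summarized schematically. The notation below is an illustrative reconstruction, not the paper's own formalism: E stands for the body of evidence, N for the normality assumptions, and the threshold θ for the probabilistic account's "high but non-maximal" bar.

```latex
% Schematic sketch of the three accounts of epistemic "must" (notation assumed):
\begin{align*}
\text{Quantificational:} &\quad \text{must } p \text{ is true iff every world compatible with } E \text{ is a } p\text{-world}\\
\text{Probabilistic:}    &\quad \text{must } p \text{ is true iff } \theta < \Pr(p \mid E) < 1\\
\text{Kratzerian:}       &\quad \text{must } p \text{ is true iff } \Pr(p \mid E, N) = 1,
  \qquad \text{might } p \text{ is true iff } \Pr(p \mid E, N) > 0
\end{align*}
```

On the Kratzerian line, the maximal probability is relative to both the evidence and the defeasible normality assumptions, which is what distinguishes it from the traditional maximally strong reading.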
The aim of this paper is to reconcile two claims that have long been thought to be incompatible: that we compositionally determine the meaning of complex expressions from the meaning of their parts, and that prototypes are components of the meaning of lexical terms such as fish, red, and gun. Both hypotheses are independently plausible, but most researchers think that reconciling them is a difficult, if not hopeless, task. In particular, most linguists and philosophers agree that compositionality is not negotiable, so they tend to reject the claim that prototypes are meaning components. Recently, there have been some attempts to reconcile these claims, but they all adopt an implausibly weak notion of compositionality. Furthermore, parties to this debate tend to fall into a problematic way of individuating prototypes that is too externalistic. In contrast, I propose that we can reconcile the two claims if we adopt, instead, an internalist and pluralist conception of prototypes and a context-sensitive but strong notion of compositionality. I argue that each of these proposals is independently plausible and that, taken together, they provide the basis for a satisfactory account of prototype compositionality.
This paper defends the view that common nouns have a dual semantic structure that includes extension-determining and non-extension-determining components. I argue that the non-extension-determining components are part of linguistic meaning because they play a key compositional role in certain constructions, especially in privative noun phrases such as "fake gun" and "counterfeit document". Furthermore, I show that if we modify the compositional interpretation rules in certain simple ways, this dual content account of noun phrase modification can be implemented in a type-driven formal semantic framework. I also argue against traditional accounts of privative noun phrases which can be paired with the assumption that nouns do not have a dual semantic structure. At the most general level, this paper presents a proposal for how we can begin to integrate a psychologically realistic account of lexical semantics with a linguistically plausible compositional semantic framework.
Discussions in social psychology overlook an important way in which biases can be encoded in conceptual representations. Most accounts of implicit bias focus on ‘mere associations’ between features and representations of social groups. While some have argued that some implicit biases must have a richer conceptual structure, they have said little about what this richer structure might be. To address this lacuna, we build on research in philosophy and cognitive science demonstrating that concepts represent dependency relations between features. These relations, in turn, determine the centrality of a feature f for a concept C: roughly, the more features of C depend on f, the more central f is for C. In this paper, we argue that the dependency networks that link features can encode significant biases. To support this claim, we present a series of studies that show how a particular brilliance-gender bias is encoded in the dependency networks which are part of the concepts of female and male academics. We also argue that biases which are encoded in dependency networks have unique implications for social cognition.
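The centrality measure described here ("the more features of C depend on f, the more central f is for C") can be sketched as a toy computation. The feature names and the simple dependent-count measure below are hypothetical illustrations under one natural reading of the definition, not the authors' actual model or stimuli.

```python
# Toy sketch of feature centrality in a concept's dependency network.
# Edges map a feature to the features that depend on it; centrality of f
# is the number of features that depend on f directly or indirectly.
from collections import deque

def centrality(network, feature):
    """Count features reachable from `feature` along dependency edges."""
    seen, queue = set(), deque(network.get(feature, []))
    while queue:
        g = queue.popleft()
        if g not in seen:
            seen.add(g)
            queue.extend(network.get(g, []))
    return len(seen)

# Hypothetical concept: each feature lists the features depending on it.
concept = {
    "brilliance": ["publishes", "wins_grants"],
    "diligence": ["publishes"],
    "publishes": ["is_cited"],
    "wins_grants": [],
    "is_cited": [],
}

print(centrality(concept, "brilliance"))  # prints 3
print(centrality(concept, "is_cited"))    # prints 0
```

On this sketch, "brilliance" is more central than "is_cited" because more of the concept's other features depend on it, which is the structural asymmetry the authors argue can encode bias.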
The aim of this article is to discuss the conditions under which functional neuroimaging can contribute to the study of higher cognition. We begin by presenting two case studies—on moral and economic decision making—which will help us identify and examine one of the main ways in which neuroimaging can help advance the study of higher cognition. We agree with critics that functional magnetic resonance imaging (fMRI) studies seldom “refine” or “confirm” particular psychological hypotheses, or even provide details of the neural implementation of cognitive functions. However, we suggest that neuroimaging can support psychology in a different way—namely, by selecting among competing hypotheses of the cognitive mechanisms underlying some mental function. One of the main ways in which neuroimaging can be used for hypothesis selection is via reverse inferences, which we here examine in detail. Despite frequent claims to the contrary, we argue that successful reverse inferences do not assume any strong or objectionable form of reductionism or functional locationism. Moreover, our discussion illustrates that reverse inferences can be successful at early stages of psychological theorizing, when models of the cognitive mechanisms are only partially developed.
This paper defends the view that the Faculty of Language is compositional, i.e., that it computes the meaning of complex expressions from the meanings of their immediate constituents and their structure. I argue that compositionality and other competing constraints on the way in which the Faculty of Language computes the meanings of complex expressions should be understood as hypotheses about innate constraints on the Faculty of Language. I then argue that, unlike compositionality, most of the currently available non-compositional constraints predict incorrect patterns of early linguistic development. This supports the view that the Faculty of Language is compositional. More generally, this paper presents a way of framing the compositionality debate (by focusing on its implications for language acquisition) that can lead to its eventual resolution, so it will hopefully also interest theorists who disagree with its main conclusion.
Grammatical theories of Scalar Implicatures make use of an exhaustivity operator exh, which asserts the conjunction of the prejacent with the negation of excludable alternatives. We present a new Grammatical theory of Scalar Implicatures according to which exh is replaced with pex, an operator that contributes its prejacent as asserted content, but the negation of scalar alternatives at a non-at-issue level of meaning. We show that by treating this non-at-issue level as a presupposition, this theory resolves a number of empirical challenges faced by the old formulation of exh (as well as by standard neo-Gricean theories). The empirical challenges include projection of scalar implicatures from certain embedded environments (‘some under some’ sentences, some under negative factives), their restricted distribution under negation, and the existence of common ground-mismatching and oddness-inducing implicatures. We argue that these puzzles have a uniform solution given a pex-based Grammatical theory of implicatures and some independently motivated principles concerning presupposition projection, cancellation and accommodation.
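The contrast between the two operators can be sketched schematically. The notation below is an illustrative reconstruction (Alt(p) for the set of excludable scalar alternatives), not the paper's own definitions:

```latex
% Schematic contrast between exh and pex (notation assumed):
\begin{align*}
\mathrm{exh}(p) &\;=\; p \,\wedge\, \bigwedge_{q \in \mathrm{Alt}(p)} \neg q
  && \text{(prejacent and exclusions both asserted)}\\
\mathrm{pex}(p) &:\; \text{asserts } p;\ \text{contributes } \bigwedge_{q \in \mathrm{Alt}(p)} \neg q \text{ as non-at-issue content}
  && \text{(exclusions presupposed)}
\end{align*}
```

Treating the excluded alternatives as presuppositional rather than asserted is what lets them project, cancel, and accommodate by the independently motivated mechanisms the abstract mentions.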
This article presents and discusses one of the most prominent inferential strategies currently employed in cognitive neuropsychology, namely, reverse inference. Simply put, this is the practice of inferring, in the context of experimental tasks, the engagement of cognitive processes from locations or patterns of neural activation. This technique is notoriously controversial because, critics argue, it presupposes the problematic assumption that neural areas are functionally selective. We proceed as follows. We begin by introducing the basic structure of traditional “location-based” reverse inference and discuss the influential lack of selectivity objection. Next, we rehearse various ways of responding to this challenge and provide some reasons for cautious optimism. The second part of the essay presents a more recent development: “pattern-decoding reverse inference”. This inferential strategy, we maintain, provides an even more convincing response to the lack of selectivity charge. Due to this and other methodological advantages, it is now a prominent component in the toolbox of cognitive neuropsychology. Finally, we conclude by drawing some implications for philosophy of science and philosophy of mind.
Recent advancements in the brain sciences have enabled researchers to determine, with increasing accuracy, patterns and locations of neural activation associated with various psychological functions. These techniques have revived a longstanding debate regarding the relation between the mind and the brain: while many authors now claim that neuroscientific data can be used to advance our theories of higher cognition, others defend the so-called ‘autonomy’ of psychology. Settling this significant question requires understanding the nature of the bridge laws used at the psycho-neural interface. While these laws have been the topic of extensive discussion, such debates have mostly focused on a particular type of link: reductive laws. Reductive laws are problematic: they face notorious philosophical objections and they are too scarce to substantiate current research at the interface of psychology and neuroscience. The aim of this article is to provide a systematic analysis of a different kind of bridge laws—associative laws—which play a central, albeit often overlooked, role in scientific practice.
The concepts expressed by social role terms such as artist and scientist are unique in that they seem to allow two independent criteria for categorization, one of which is inherently normative. This study presents and tests an account of the content and structure of the normative dimension of these “dual character concepts.” Experiment 1 suggests that the normative dimension of a social role concept represents the commitment to fulfill the idealized basic function associated with the role. Background information can affect which basic function is associated with each social role. However, Experiment 2 indicates that the normative dimension always represents the relevant commitment as an end in itself. We argue that social role concepts represent the commitments to basic functions because that information is crucial to predict the future social roles and role-dependent behavior of others.
Recent work in formal semantics suggests that the language system includes not only a structure building device, as standardly assumed, but also a natural deductive system which can determine when expressions have trivial truth‐conditions (e.g., are logically true/false) and mark them as unacceptable. This hypothesis, called the ‘logicality of language’, accounts for many acceptability patterns, including systematic restrictions on the distribution of quantifiers. To deal with apparent counter‐examples consisting of acceptable tautologies and contradictions, the logicality of language is often paired with an additional assumption according to which logical forms are radically underspecified: i.e., the language system can see functional terms but is ‘blind’ to open class terms to the extent that different tokens of the same term are treated as if independent. This conception of logical form has profound implications: it suggests an extreme version of the modularity of language, and can only be paired with non‐classical—indeed quite exotic—kinds of deductive systems. The aim of this paper is to show that we can pair the logicality of language with a different and ultimately more traditional account of logical form. This framework accounts for the basic acceptability patterns which motivated the logicality of language, can explain why some tautologies and contradictions are acceptable, and makes better predictions in key cases. As a result, we can pursue versions of the logicality of language in frameworks compatible with the view that the language system is not radically modular vis‐à‐vis its open class terms and employs a deductive system that is basically classical.
The meaning that expressions take on particular occasions often depends on the context in ways which seem to transcend its direct effect on context-sensitive parameters. ‘Truth-conditional pragmatics’ is the project of trying to model such semantic flexibility within a compositional truth-conditional framework. Most proposals proceed by radically ‘freeing up’ the compositional operations of language. I argue, however, that the resulting theories are too unconstrained, and predict flexibility in cases where it is not observed. These accounts end up in this position because they rarely, if ever, take advantage of the rich information made available by lexical items. I hold, instead, that lexical items encode both extension and non-extension determining information. Under certain conditions, the non-extension determining information of an expression e can enter into the compositional processes that determine the meaning of more complex expressions which contain e. This paper presents and motivates a set of type-driven compositional operations that can access non-extension determining information and introduce bits of it into the meaning of complex expressions. The resulting multidimensional semantics has the tools to deal with key cases of semantic flexibility in appropriately constrained ways, making it a promising framework to pursue the project of truth-conditional pragmatics.
According to Kratzer’s influential account of epistemic must and might, these operators involve quantification over domains of possibilities determined by a modal base and an ordering source. Recently, this account has been challenged by invoking contexts of ‘epistemic tension’: i.e., cases in which an assertion that must p is conjoined with the possibility that not-p, and cases in which speakers try to downplay a previous assertion that must p, after finding out that not-p. Epistemic tensions have been invoked from two directions. Von Fintel and Gillies (2010: 351–383) propose a return to a simpler modal logic-inspired account: must and might still involve universal and existential quantification, but the domains of possibilities are determined solely by realistic modal bases. In contrast, Lassiter (2016: 117–163), following Swanson, proposes a more revisionary account which treats must and might as probabilistic operators. In this paper, we present a series of experiments to obtain reliable data on the degree of acceptability of various contexts of epistemic tension. Our experiments include novel variations that, we argue, are required to make progress in this debate. We show that restricted quantificational accounts à la Kratzer fit the overall pattern of results better than either of their recent competitors. In addition, our results help us identify the key components of restricted quantificational accounts, and on that basis propose some refinements and general constraints that should be satisfied by any account of the modal auxiliaries.
How are biases encoded in our representations of social categories? Philosophical and empirical discussions of implicit bias overwhelmingly focus on salient or statistical associations between target features and representations of social categories. These are the sorts of associations probed by the Implicit Association Test and various priming tasks. In this paper, we argue that these discussions systematically overlook an alternative way in which biases are encoded, that is, in the dependency networks that are part of our representations of social categories. Dependency networks encode information about how features in a conceptual representation depend on each other. This information determines the degree of centrality of a feature for a conceptual representation. Importantly, centrally encoded biases systematically dissociate from those encoded in salient-statistical associations. Furthermore, the degree of centrality of a feature determines its cross-contextual stability: in general, the more central a feature is for a concept, the more likely it is to survive into a wide array of cognitive tasks involving that concept. Accordingly, implicit biases that are encoded in the central features of concepts are predicted to be more resilient across different tasks and contexts. As a result, the distinction between centrally encoded and salient-statistical biases has important theoretical and practical implications.
The logicality of language is the hypothesis that the language system has access to a ‘natural’ logic that can identify and filter out as unacceptable expressions that have trivial meanings—that is, that are true/false in all possible worlds or situations in which they are defined. This hypothesis helps explain otherwise puzzling patterns concerning the distribution of various functional terms and phrases. Despite its promise, logicality vastly over-generates unacceptability assignments. Most solutions to this problem rest on specific stipulations about the properties of logical form—roughly, the level of linguistic representation which feeds into the interpretation procedures—and have substantial implications for traditional philosophical disputes about the nature of language. Specifically, contextualism and semantic minimalism, construed as competing hypotheses about the nature and degree of context-sensitivity at the level of logical form, suggest different approaches to the over-generation problem. In this paper, I explore the implications of pairing logicality with various forms of contextualism and semantic minimalism. I argue that to adequately solve the over-generation problem, logicality should be implemented in a constrained contextualist framework.
This article offers an overview of Enrique Dussel’s early work and examines the development of political and philosophical categories such as “Latin American identity/otherness” and “Latin American thought”, as well as his political and historical subjects. The corpus covered in this study includes the author’s anthropological-philosophical research on the origins and foundations of Western culture as it appears in the trilogy El humanismo semita, El humanismo helénico and El dualismo en la antropología de la Cristiandad; the first developments of otherness and analectics, and the proposal of a new political subject, in Método para una filosofía de la liberación and Para una ética de la liberación latinoamericana. The study also reviews the author’s first historiographical work through his book Hipótesis para una historia de la Iglesia en América Latina.
The book reviewed here was published in 2003 as the first volume of the Thesys collection of the Editorial de la Universidad Católica de Córdoba, Argentina. A short note on the inside back flap informs us that this collection offers “...a selection of works from multiple disciplines, developed from graduate thesis projects publicly presented and defended by teachers and researchers of the U.C. de Córdoba, at various institutions...”
This volume brings together a group of renowned German, Argentine, Spanish, Venezuelan and Colombian philosophers around the figure of Professor Dr. Guillermo Hoyos Vásquez. With contributions in the fields of phenomenology, political philosophy and ethics, it offers the specialized reader, but also the philosophy student and anyone interested in philosophical reflection, a series of first-rate and highly topical essays.
Among the various theoretical contributions that made Max Weber one of the principal figures of twentieth-century social thought is his sociology of law, whose theme is the process of rationalization of Western law. Weber studied this process as a rationalization of a formal type, accompanied, even in its last, modern stage, by contrary tendencies directed toward the materialization of law. On the basis of these works by Weber, a critique of the social state has been constructed, together with an opposition to treating social rights as fundamental rights. This position holds that such a political project conspires against the rationality of the rule of law, since it introduces material demands of justice that degrade the highest norm of the legal order, namely the Constitution.
This paper asks whether, despite Heidegger’s “silence” regarding Hölderlin’s theory of poetic genres, it is not this very theory, and especially the lyric poem that characterizes modernity, on which Heidegger relies to ground his “history of being,” so that the strong notion of _history_ at work in his writings, and ultimately the marked term _das Ereignis_, would be in tune with the “poetic trajectories” that Hölderlin theorizes.
William of Ockham was a Franciscan friar, a theologian and a very singular philosopher. He lived at a time of crisis and during the transition of philosophy and theology. His secularism is manifested in the defense of a radical separation between the religious and secular powers. Assigned to the philosophical current of nominalism, he dealt a severe blow to the metaphysical realism of Aristotle and Thomas Aquinas, and he advocated the separation of reason and faith, between philosophy and theology, and thus he undermined the ideological foundations of the church of his time. He was accused of heresy because of his nominalism, although he himself condemned Pope John XXII as heretical for his conception of poverty, a concept far removed from evangelical principles and especially from the notion of the Franciscan order. He defended the separation of church and state and he denied the Pope’s authority in secular matters. He flatly asserted freedom of conscience, and Luther took him as a teacher.
In this paper I adopt, as a working philosophical hypothesis, the claim that moral pluralism may be a better option than moral monism. Starting from this hypothesis, I characterize the main features that define a “reasonable” pluralism, especially a moral one. I argue that these defining features, which form part of the starting premises of reasonable pluralism, may carry consequences that the pluralist would not readily accept: in particular, a strong moral relativism, a strong moral particularism, and a fragmentation of values that could lead to moral disintegration. Whether these consequences follow depends, in turn, on how much conceptual pressure the philosopher’s preferred conceptual analysis exerts on the defining features of reasonable pluralism.
An essential characteristic of the new model of criminal policy is the substitution of the principle of guilt by that of the actor’s potential dangerousness. This old concept of authoritarian criminal law demands that the security of the State be elevated to the category of an autonomous legal-penal good. One of the consequences of elevating security to the status of a legal good is that danger comes to occupy the role of the basis of repression. This “new” criminal law can be said to treat the subject as an emanation of danger, as a risk for the security of the State. We speak, then, of a singular criminal law of emergency whose aim is to combat dangers, essentially through security measures; a law in which what is considered is not so much the action as the potential risk to security, and in which certain fundamental rights are restricted in the name of reasons of State. It is precisely this new conception of danger, the result of the return to a criminal law of the actor, which leads to the creation of spaces where law is absent, where torture and the absence of guarantees are habitual. The US criminal model against its “enemies”, and the collaboration of European governments in the illegal detentions and torture practiced by US officials, is the clearest example of this.