Derek Parfit, in Persons, Rationality and Time, argues that although it is possible to conceive of experiences without referring to persons, experiences depend for their existence on persons, and, in turn, experiences would depend for their identity on some other entity not identical to the person. This thesis, which derives from certain thought experiments of Parfit's, specifically the My Division argument and the Hospital Argument, is reviewed here through certain metaphysical notions of E. J. Lowe, specifically the thesis that identity dependence implies existential dependence, so that if x depends for its identity on y, then x likewise depends for its existence on y. This assumption makes it possible to develop certain problems for what Parfit holds in his account of the dependence of experiences.
Reichenbachian approaches to indexicality contend that indexicals are "token-reflexives": semantic rules associated with any given indexical type determine the truth-conditional import of properly produced tokens of that type relative to certain relational properties of those tokens. Such a view may be understood as sharing the main tenets of Kaplan's well-known theory regarding content, or truth-conditions, but differs from it regarding the nature of the linguistic meaning of indexicals and also regarding the bearers of truth-conditional import and truth-conditions. Kaplan has criticized these approaches on different counts, the most damaging of which is that they make a "logic of demonstratives" impossible. The reason is that the token-reflexive approach entails that no two tokens of the same sentential type including indexicals are guaranteed to have the same truth-conditions. In this paper I rebut this and other criticisms of the Reichenbachian approach. Additionally, I point out that Kaplan's original theory of "true demonstratives" is empirically inadequate, and claim that any modification capable of accurately handling the linguistic data would face problems similar to those attributed to the Reichenbachian approach. This is intended to show that the difficulties, however real, are caused not by idiosyncrasies of the "token-reflexive" view, but by deep facts about indexicality.
The paper examines an alleged distinction, claimed by Van Gelder, between two different but equally acceptable ways of accounting for the systematicity of cognitive output (two "varieties of compositionality"): "concatenative compositionality" vs. "functional compositionality." The second is supposed to provide an explanation alternative to the Language of Thought Hypothesis. I contend that if the definition of "concatenative compositionality" is taken in a way different from the official one given by Van Gelder (but one suggested by some of his formulations), then there is indeed a different sort of compositionality; in that case, however, the second variety is not an alternative to the language of thought. On the other hand, if the concept of concatenative compositionality is taken along the lines of Van Gelder's explicit definition, then there is no reason to think that there is an alternative way of explaining systematicity.
Descriptive semantic theories purport to characterize the meanings of the expressions of languages in whatever complexity they might have. Foundational semantics purports to identify the kinds of considerations relevant to establishing that a given descriptive semantics accurately characterizes the language used by a given individual or community. Foundational Semantics I presents three contrasting approaches to the foundational matters, and the main considerations relevant to appraising their merits. These approaches contend that we should look at the contents of speakers' intuitions; at the deep psychology of users and its evolutionary history, as revealed by our best empirical theories; or at the personal-level rational psychology of those subjects. Foundational Semantics II examines a fourth view, according to which we should look instead at norms enforced among speakers. The two papers also aim to determine the extent to which the approaches are genuinely rivals, or rather complementary.
Espino, Santamaria, and Garcia-Madruga (2000) report three results on the time taken to respond to a probe word occurring as an end term in the premises of a syllogistic argument. They argue that these results can only be predicted by the theory of mental models. It is argued here that two of these results, on differential reaction times to end terms occurring in different premises and in different figures, are consistent with Chater and Oaksford's (1999) probability heuristics model (PHM). It is further argued that the third finding, on different reaction times between figures, does not address the issue of processing difficulty, where PHM predicts no differences between figures. It is concluded that Espino et al.'s results do not discriminate between theories of syllogistic reasoning as effectively as they propose.
We propose a nonmonotonic Description Logic of typicality able to account for the phenomenon of the combination of prototypical concepts. The proposed logic relies on the logic of typicality ALC + TR, whose semantics is based on the notion of rational closure, as well as on the distributed semantics of probabilistic Description Logics, and is equipped with a cognitive heuristic used by humans for concept composition. We first extend the logic of typicality ALC + TR by typicality inclusions of the form p :: T(C) ⊑ D, whose intuitive meaning is that "we believe with degree p that typical Cs are Ds". As in the distributed semantics, we define different scenarios containing only some of the typicality inclusions, each scenario having a suitable probability. We then exploit such scenarios in order to ascribe typical properties to a concept C obtained as the combination of two prototypical concepts. We also show that reasoning in the proposed Description Logic is EXPTIME-complete, as for the underlying standard Description Logic ALC.
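The scenario construction described in this abstract can be illustrated with a small sketch. Under the distributed semantics, each typicality inclusion p :: T(C) ⊑ D is independently kept or discarded, and a scenario's probability is the product of p for the kept inclusions and 1 − p for the discarded ones. The following is a minimal sketch of that bookkeeping only; the example inclusions and all names are illustrative assumptions, not the paper's actual knowledge base.

```python
from itertools import product

# Hypothetical typicality inclusions p :: T(C) ⊑ D with degrees of belief p.
# These axioms are invented for illustration.
inclusions = {
    "T(Bird) ⊑ Fly": 0.9,
    "T(Penguin) ⊑ ¬Fly": 0.95,
}

def scenarios(incls):
    """Enumerate all scenarios: each inclusion is independently kept or dropped.

    A scenario's probability is the product of p for kept inclusions
    and (1 - p) for dropped ones, as in the distributed semantics.
    """
    axioms = list(incls)
    for choice in product([True, False], repeat=len(axioms)):
        kept = [a for a, c in zip(axioms, choice) if c]
        prob = 1.0
        for a, c in zip(axioms, choice):
            prob *= incls[a] if c else 1.0 - incls[a]
        yield kept, prob

# The scenario probabilities form a distribution: they sum to 1.
total = sum(p for _, p in scenarios(inclusions))
```

Typical properties of a combined concept would then be ascribed by reasoning within the most probable scenarios; that step is omitted here.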
In this paper we propose a computational framework aimed at extending the problem-solving capabilities of cognitive artificial agents through the introduction of a novel, goal-directed, dynamic knowledge generation mechanism obtained via a nonmonotonic reasoning procedure. In particular, the proposed framework relies on the assumption that certain classes of problems cannot be solved by simply learning or injecting new external knowledge into the declarative memory of a cognitive artificial agent but, on the contrary, require a mechanism for the automatic and creative re-framing, or re-formulation, of the available knowledge. We show how such a mechanism can be obtained through a framework of dynamic knowledge generation that is able to tackle the problem of commonsense concept combination. In addition, we show how such a framework can be employed in the field of cognitive architectures in order to overcome situations like the impasse in SOAR by extending the possible options of its subgoaling procedures.
This essay discusses the main contentions of The Antinomies of Antonio Gramsci by Perry Anderson in a critical reading both of the positions of the British historian and of those of his critics among 'Togliattian Gramscianists'.
We propose a nonmonotonic Description Logic of typicality able to account for the phenomenon of combining prototypical concepts, an open problem in the fields of AI and cognitive modelling. Our logic extends the logic of typicality ALC + TR, based on the notion of rational closure, by inclusions p :: T(C) ⊑ D ("we have probability p that typical Cs are Ds"), coming from the distributed semantics of probabilistic Description Logics. Additionally, it embeds a set of cognitive heuristics for concept combination. We show that the complexity of reasoning in our logic is EXPTIME-complete, as in ALC.
We obtain in this paper a representation of the formulae of extensions of L by generalized quantifiers through functors between categories of first-order structures and partial isomorphisms. The main tool in the proofs is the back-and-forth technique. As a corollary we obtain Caicedo's version of Fraïssé's theorem characterizing elementary equivalence for such languages. We also discuss informally some geometrical interpretations of our results.
In this article we present an advanced version of Dual-PECCS, a cognitively inspired knowledge representation and reasoning system aimed at extending the capabilities of artificial systems in conceptual categorization tasks. It combines different sorts of common-sense categorization (prototypical and exemplar-based categorization) with standard monotonic categorization procedures. These different types of inferential procedures are reconciled according to the tenets of the dual process theory of reasoning. From a representational perspective, on the other hand, the system relies on the hypothesis of conceptual structures represented as heterogeneous proxytypes. Dual-PECCS has been experimentally assessed in a conceptual categorization task in which a target concept, illustrated by a simple common-sense linguistic description, had to be identified by resorting to a mix of categorization strategies, and its output has been compared to human responses. The obtained results suggest that our approach can be beneficial for improving the representational and reasoning conceptual capabilities of standard cognitive artificial systems and, in addition, that it may plausibly be applied to different general computational models of cognition. The current version of the system, in fact, extends our previous work in that Dual-PECCS is now integrated and tested in two cognitive architectures, ACT-R and CLARION, which implement different assumptions about the underlying invariant structures governing human cognition. This integration allowed us to extend our previous evaluation.
John Broome has developed an account of rationality and reasoning which gives philosophical foundations for choice theory and the psychology of rational agents. We formalize his account into a model that differs from ordinary choice-theoretic models by focusing on psychology and the reasoning process. Within that model, we ask Broome's central question of whether reasoning can make us more rational: whether it allows us to acquire transitive preferences, consistent beliefs, non-akratic intentions, and so on. We identify three structural types of rationality requirements: consistency requirements, completeness requirements, and closedness requirements. Many standard rationality requirements fall under this typology. Based on three theorems, we argue that reasoning is successful in achieving closedness requirements, but not in achieving consistency or completeness requirements. We assess how far our negative results reveal gaps in Broome's theory, or deficiencies in choice theory and behavioral economics.
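The contrast between closedness and other requirement types can be made concrete with a small sketch (an illustration of the general idea only, not Broome's or the authors' formal model; the preference pairs are invented). A closedness requirement, such as having one's strict preferences closed under transitivity, can be satisfied by adding attitudes, which is the kind of operation reasoning performs; a consistency requirement can only be satisfied by removing an attitude.

```python
# Illustrative sketch: meeting a closedness requirement by adding attitudes.
# Preferences are strict, represented as pairs (a, b) meaning "a over b".
def transitive_closure(prefs):
    """Close a set of strict preference pairs under transitivity."""
    closed = set(prefs)
    changed = True
    while changed:
        changed = False
        for a, b in list(closed):
            for c, d in list(closed):
                if b == c and (a, d) not in closed:
                    closed.add((a, d))  # reasoning adds the entailed preference
                    changed = True
    return closed

# Hypothetical attitudes of an agent (invented example data).
prefs = {("tea", "coffee"), ("coffee", "water")}
closed = transitive_closure(prefs)
# The pair ("tea", "water") is now in `closed`: the requirement is met purely
# by adding an attitude. No such additive procedure can repair, say, the
# inconsistent pair {("tea", "coffee"), ("coffee", "tea")}; something must go.
```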
In this paper a possible general framework for the representation of concepts in cognitive artificial systems and cognitive architectures is proposed. The framework is inspired by the so-called proxytype theory of concepts and combines it with the heterogeneity approach to concept representation, according to which concepts do not constitute a unitary phenomenon. The contribution of the paper is twofold: on the one hand, it aims at providing a novel theoretical hypothesis for the debate about concepts in the cognitive sciences by drawing unexplored connections between different theories; on the other hand, it aims at sketching a computational characterization of the problem of concept representation in cognitively inspired artificial systems and in cognitive architectures.
This paper offers a methodical review of the scientific literature of the last decade that concerns itself with online services offering supportive advocacy for anorexia nervosa and bulimia nervosa. The main question is whether these studies reproduce the traditional divide in the study of eating disorders between clinical and social science perspectives, with limited mutual exchanges. Having first identified a specific body of literature, the authors investigate its content, methods, and approaches, and analyse the network of cross-citations its components generate and share. On this basis, the authors argue that the scientific literature touching on pro-ana websites can be regarded as a single transdisciplinary body of knowledge. What is more, they show that the literature on computer-mediated sociabilities centred on eating disorders displays different structural characteristics from the traditional, non-Web-related research on eating disorders. In the latter, the social sciences have usually provided a critical counterpoint to the development of a health sciences mainstream. In the case of Web-related research, however, the social sciences have taken the lead role in defining the field, with the health sciences following suit.
Ladies and gentlemen, dear friends: It is for me an occasion of great joy and deep satisfaction to deliver, on behalf of the Universidad Bolivariana, this tribute to our friend and teacher Franz Josef Hinkelammert. I would like to present briefly a portrait of his intellectual personality and of his vast body of research. He was born in Germany in 1931. A Doctor of Economics from the Free University of Berlin, he pursued his postgraduate training at the Instituto de Europa Oriental...