The paper investigates the epistemological and communicative competences that experts need in order to use and communicate evidence in the reasoning process leading to diagnosis. Diagnosis and diagnosis communication are presented as intertwined processes that should be jointly addressed in medical consultations, in order to support patients’ compliance in illness management. The paper presents defeasible reasoning as specific to diagnostic practice, showing how this type of reasoning threatens effective diagnosis communication and entails that diagnostic evidence should itself be understood as defeasible. It argues that metaphors can be effective communicative devices for helping patients understand the relevant defeaters in the diagnostic reasoning process, thereby improving diagnosis communication and encouraging a change in patients’ beliefs and attitudes about their own experience of illness and its management.
The paper suggests a distinction between two dimensions of the grasp of concepts within an inferentialist approach to conceptual content: a common-sense “minimum” version, in which an ordinary speaker needs only a few inferences to grasp a concept C, and an expert version, in which a specialist masters a wide range of inferential transitions involving C. The paper defends this distinction and explores some of its basic implications.
Anaphoric deflationism is a prosententialist account of the use of “true.” Prosentences are, for sentences, the equivalent of what pronouns are for nouns: as pronouns refer back to previously introduced nouns, so prosentences like “that’s true” inherit their content from previously introduced sentences. This kind of deflationism about the use of “true” (especially in Brandom’s version) is an explanation in terms of anaphora: the prosentence depends anaphorically on the sentence providing its content. A relevant implication of this theory is that “true” is not understood as a predicate and that truth is not a property. Primitivism, defended by Frege, Moore, and Davidson, is associated with two ideas: (1) that truth is a primitive and central trait of our conceptual system and (2) that truth, as such, cannot be defined. This second claim can be called “negative primitivism,” and it points in particular to the facts about the “indefinability” of truth generally advocated by primitivists. In what follows, a connection is established between the deflationist’s rejection of the predicate and of the property, on the one hand, and the facts (and primitivist ideas) about the indefinability of truth, on the other. This connection establishes a common framework that lends further explanatory power to both options. According to the resulting view, this indefinability can explain the appeal and soundness of a deflationist dismissal of predicates and properties dealing with truth.
Hilary Putnam spent much of his career criticizing the fact/value dichotomy, and this was already apparent during the phase in which he defended internal realism. He later changed his epistemological and metaphysical view by endorsing natural realism, with the consequence of embracing alethic pluralism, the idea that truth works differently in different discourse domains. Despite these changes of mind in epistemology and in the theory of truth, Putnam went on criticizing the fact/value dichotomy. However, alethic pluralism entails drawing distinctions among discourse domains, especially between factual and nonfactual domains, and these distinctions are in tension with the rejection of the fact/value dichotomy, which would in principle undermine the standing of factual domains as genuinely factual. This issue raises, prima facie, some doubts about the actual compatibility of these views.
Anaphoric deflationism is a kind of prosententialist account of the use of “true.” It holds that “true” is an expressive operator and not a predicate. In particular, “is true” is explained as a “prosentence.” Prosentences are, for sentences, the equivalent of what pronouns are for nouns: as pronouns refer back to previously introduced nouns, so prosentences like “that’s true” inherit their semantic content from previously introduced sentences. So, if Jim says, “The candidate is going to win the election,” and Bill replies “that’s true,” the real meaning of Bill’s statement is “It is true that the candidate is going to win the election.” This kind of prosententialist deflationism about the use of “true,” especially in Robert Brandom’s version, is an explanation given in terms of anaphora: the prosentence is an anaphoric dependent of the sentence providing its content. Båve (Philosophical Studies, 145, 297–310, 2009) argued that the anaphoric account is not as general as prosententialists claim, and that the analogy between prosentences and pronouns is explanatorily idle, since it does not do any real explanatory work. The two criticisms are connected: the lack of unity within the anaphoric theory can be used to show its poor explanatory value. The plurality of uses of “is true,” the objection goes, indeed exceeds what the anaphoric account covers; prosententialism is therefore just a superficial re-description, and the real work is done by more general semantic notions, namely the “semantic equivalence and consequence” between “p” and ““p” is true.” I analyze Båve’s arguments and highlight that he fails to acknowledge the importance of a pragmatic and expressive dimension explained by the anaphoric account, a dimension that semantic “equivalence” and “consequence” are not capable of explaining. I then show that the anaphoric account can actually explain semantic equivalence and consequence, and this is crucial because equivalence and consequence do not explain anaphoric dependence. This reverses the allegation of generality: the anaphoric account is the more general one. Moreover, the cases typically used to defend prosententialism, if correctly described, show a unitary structure: they are all versions of lazy anaphoric dependence. Therefore, the unifying principle doing the explanatory work here is lazy anaphora.
The recent book 'Putnam' by Massimo Dell’Utri concerns the philosophical and argumentative journey of Hilary Putnam, which led him to explore the implications of Quine’s views about analyticity, the many ways in which realism can be understood in epistemology, philosophy of science, and philosophy of mathematics, its main entailments for the philosophy of mind, and, more recently, issues concerning ethics, meta-ethics, and value theory. The present critical review briefly recollects the reading presented in the book and then highlights some of the implications of this reconstruction, especially those concerning a key passage in the evolution of Putnam’s thought: his abandonment of internal realism and his endorsement of natural realism.
The reality of the past is one of the main problems facing the justificationist semantics proposed by Michael Dummett. The antirealism typical of this perspective yields a rather counterintuitive conception of the past, according to which the past “ceases to exist” when it leaves no traces or testimonies. In Truth and the Past, Dummett returned to the issue, abandoning antirealism about the past with the aim of avoiding this conception. This turn represents an unprecedented shift in the direction of realism, limited however by his firm refusal to adopt a bivalent notion of truth. My contribution reconstructs and critically analyzes the reasons behind this turn of Dummett’s, and tries to probe the solidity and coherence of this reworking of justificationism.
What does it mean to “use concepts”? What relation holds between the use of a conceptual system and the use of a natural language? Do the social practices in which human beings are involved influence the meanings of their linguistic expressions? What connects reasoning with the use of concepts? These are some of the central questions in the work of the American philosopher Robert Brandom. Starting from such questions, and through a detailed engagement with authors such as Kant, Hegel, Frege, Wittgenstein, Sellars, and Dummett, Brandom has elaborated a complex theory of language and of discursive practice. This volume is devoted, first, to presenting the guidelines and main argumentative strategies of Brandom’s theory and, second, to discussing some key questions for the inferentialist model, selected with particular attention to the main discussions raised in the theoretical debate of recent decades.
Among the many features that go hand in hand with the recent rise of populism in many countries, an interesting phenomenon is surely the shift of public discourse toward social media. Is there anything special about communication on social media that is particularly suitable for the development of such movements and ideas? In what follows, I attempt to read Facebook comments as exhibiting an anaphoric structure. This analysis allows me to emphasize a number of interesting features that such communications exhibit. Finally, I also try to highlight some of the main implications of this model in comparison with ordinary communication.
Robert Brandom has developed an account of conceptual content as instituted by social practices, where such practices are understood as implicitly normative. Brandom proposed the idea of implicit norms in order to meet some requirements imposed by Wittgenstein’s remarks on rule-following: escaping the regress of rules on the one hand, and avoiding the reduction of norms to mere regular behavior on the other. Anandi Hattiangadi has criticized this account as failing to meet these requirements. In what follows, I try to show how a correct understanding of sanctions and an expressivist reading of the issue can meet these challenges.
Inferentialism, especially in Brandom’s version, is the project of understanding meaning as determined by inferences, and language as a social practice governed by rational discursive norms. Discursive practice is thus understood as the basic rational practice, in which the commitments undertaken by participants are evaluated as correct or incorrect. This model of explanation is also intended to vindicate, by means of reasons, the commitments we undertake ourselves and to assess the commitments we attribute to others, in an objective sense: starting from our subjective normative and doxastic attitudes, we should be able to use the normative discursive resources apt to assess our commitments, referring not only to what we take to be correct, but also to how things actually are. My hypothesis is that this objectivity is not achieved on the basis of the rational structure of discursive practice alone. The main worry is that material inferences, those responsible for the content of our concepts (and commitments), are in general non-monotonic. These inferences put experts in an advantageous position, namely as those capable of defeasible reasoning. I propose a view on which this asymmetry among language users is the crucial factor in assessing the objectivity of claims within discursive practice.
Wilfrid Sellars’ denunciation of the Myth of the Given was meant to clarify, against empiricism, that perceptual episodes alone are insufficient to ground and justify perceptual knowledge. Sellars showed that, in order to accomplish such epistemic tasks, further resources and capacities are needed, such as those involved in using concepts. Perceptual knowledge belongs to the space of reasons and not to an independent realm of experience. Dan Hutto and Eric Myin have recently presented the Hard Problem of Content as an ensemble of reasons against naturalistic accounts of content. In a nutshell, it states that covariance relations, even though they are naturalistically acceptable explanatory resources, do not constitute content. The authors exploit this move in order to promote their preferred radical enactivist and anti-representationalist option, according to which basic minds, the lower stratum of cognition, do not involve content. Although it is controversial whether the Hard Problem of Content effectively dismisses naturalistic theories of representation, a central aspect of it, the idea that information as covariance does not suffice to explain content, finds support among defenders of classical cognitive representationalism, such as Marcin Miłkowski. This support, together with the acknowledgment that this remark about covariance is a point already made by Sellars in his criticism of the Myth of the Given, has a number of interesting implications. Not only is it of interest for the debates about representationalism in cognitive science, where it can be understood as an anticipatory move, but it also offers some clues and insights for reconsidering several issues along Sellarsian lines: a conflation between two concepts of representation that is often assumed in cognitive science, a distinction between two types of relevant normativity, and a reconsideration of the naturalism involved in such explanations.
In this broad interview, Robert Brandom talks about many themes concerning his work, his career, and his education. Brandom reconstructs the main debts he owes to colleagues and teachers, especially Wilfrid Sellars, Richard Rorty, and David Lewis, and discusses the projects he is currently working on. He also addresses contemporary and classical pragmatism, and the importance of classical thinkers like Kant and Hegel for contemporary debates. Other themes go deeper into the principal topics of his theoretical work: in particular, his later understanding of expressivism, his take on the debate between representationalists and anti-representationalists in semantics, the main open problems for his wide-ranging inferentialist project, and his methodological preference for normative vocabulary in his account of discursive practice. Finally, Brandom touches on the epistemic role of perception and on his views about the importance of the phenomenological aspects of perceptual experience.
It is often argued that inferential role semantics (IRS) entails semantic holism as long as theorists fail to answer the question of which inferences, among the many, are meaning-constitutive. Since analyticity, as truth in virtue of meaning, is a widely dismissed notion for indicating which inferences determine meaning, it seems that holism follows. Semantic holism is often understood as facing problems with the stability of content and with many usual explanations of communication. Thus, we should choose between giving up IRS, to avoid these holistic entailments, and defending holism against this charge, to rescue IRS. I try to pursue the second goal by analyzing certain patterns of counterfactual reasoning. Wilfrid Sellars and Robert Brandom claim, in defense of IRS, that the content-constitutive inferences are the counterfactually robust ones. While it is difficult to assess the merits of such a view, it nonetheless entails that counterfactually non-robust inferences (which I call “modally ruled out inferences”) are not content-constitutive. If this is true, and if we take certain remarks about the grasp of concepts on board, there is a way to restrict the scope of the holism entailed by IRS, to the extent of reshaping the problems with the stability of content.
It is a common opinion that chance events cannot be understood in causal terms. Conversely, according to a causal view of chance, intersections between independent causal chains originate accidental events, called “coincidences.” The present paper takes this causal conception of chance into proper consideration and tries to shed new light on it. More precisely, starting from Hart and Honoré’s view of coincidental events, the paper furnishes a more detailed account of the nature of coincidences, according to which coincidental events are hybrids constituted by ontic components, that is, the intersections between independent causal chains, plus epistemic aspects, where by “epistemic” we mean what is related, in some sense, to knowledge: for example, access to information, but also expectations, relevance, and significance, that is, psychological aspects. In particular, the paper investigates the role of these epistemic aspects in our understanding of what coincidences are. In fact, although the independence of the causal lines involved plays a crucial role in understanding coincidental events, that condition turns out to be insufficient for a satisfactory definition of coincidences. The main aim of the present work is to show that the epistemic aspects of coincidences are, together with the independence of the intersecting causal chains, a constitutive part of coincidental phenomena. Many examples are offered throughout the paper to reinforce this idea. This conception, contrary to the views of, for example, Antoine Augustin Cournot and Jacques Monod, entails that a purely objectivist view of coincidences is not tenable.
Wittgenstein’s Investigations proposed an egalitarian view of language games, emphasizing their plurality (“language has no downtown”). Uses of words depend on the game one is playing, and may change when one plays another. Furthermore, there is no privileged game dictating the rules for the others: there are as many games as there are purposes. This view is pluralist and egalitarian, but it says little about the connection between meaning and use, and about how a set of rules is responsible for them in practice. Brandom’s Making It Explicit attempted a straightforward answer to these questions by developing Wittgensteinian insights: the primacy of social practice over meanings, the idea that meaning is use, and the idea of rule-following as the key to understanding participation in social practices. Nonetheless, Brandom defended a non-Wittgensteinian conception of discursive practice: language has a “downtown,” the game of “giving and asking for reasons.” This is the idea of a normative structure of language, consisting of advancing claims and drawing inferences. By means of assertions, speakers undertake “commitments” that can be challenged or defended in terms of reasons (those successfully justified can gain “entitlement”). This game is not one among many: it is indispensable to the very idea of discursive practice. In this paper, my aim is to explore the main motivations and implications of both perspectives.
The review presents Alva Noë’s enactivist view and discusses some of its specific claims. According to Noë’s enactivism, thought, consciousness, and cognition cannot be fully understood without taking into proper consideration the role played by the body and the environment. It would therefore be wrong to go on thinking that the brain alone is responsible for human cognitive processes: the program searching for the neural correlates of consciousness is hopeless in principle, because it leaves out the body and the environment from the start; research programs in artificial intelligence are likewise compromised, not only by a computationalist view of cognition, but also by the old idea that an artificial brain would suffice for cognition just as a natural one does. The review then examines two experimental perspectives on which Noë relies heavily: Mriganka Sur’s studies on the neural plasticity of ferrets, and Paul Bach-y-Rita’s tactile-visual substitution system. Noë takes these results to be crucial in supporting his view, but, as the discussion highlights, their pro-enactivist significance is far less clear and unambiguous than he assumes. Finally, a brief assessment of the book follows, in which its many elements of interest are presented together with some weak points that it shares with most proposals building on the embodiment insight: on the one hand, a structural difficulty concerns the possibility of clearly identifying the constitutive elements of the mental; on the other hand, the appeal to radical versions of externalism seems to entail many difficulties as well.
This paper is a detailed and critical report of the debate between Rorty and Habermas (published in R. Brandom (ed.), Rorty and His Critics, Blackwell, Oxford 2000) about the importance of truth and epistemic justification in communicative practices. The two authors present two different versions of the idea of communicative reason. I try to compare them and to evaluate their respective vices and virtues.
An attempt at a philosophical redescription of how encyclopedic knowledge, and specifically the nature of encyclopedic information, should be understood in the wake of the radical theoretical changes of the twentieth century, namely Quine’s theses on analyticity and the decline of the Neopositivist project. A proposal follows for characterizing encyclopedic information that does not rest on a rigid distinction between analytic and synthetic statements, but rather on a more nuanced theoretical grid.
This chapter explores some key themes of Huw Price's global expressivist program and his appropriation of inferentialist views. Some remarks concerning certain internal tensions within that program follow.
Anti-representationalism is the hallmark of Richard Rorty's critique of the epistemological tradition. According to it, knowledge does not "mirror" reality and the human mind is not a representational device. Anti-representationalism is, however, a family of philosophical theses, each dealing with the notion of "representation" in a different way. Though prima facie one may feel entitled to think of anti-representationalism as a uniform philosophical movement, things stand quite differently. In fact, among the many anti-representationalist options, we can identify two main versions: a global anti-representationalism that entirely rejects philosophical uses of the notion of "representation", and a local version that merely removes the notion of "representation" from the explanatory toolbox. In this chapter, I compare Rorty's global anti-representationalism and Robert Brandom's local version, drawing on a recent discussion by Brandom and on a famous exchange between Rorty and Bjørn Ramberg about Donald Davidson's take on the special role of the intentional vocabulary.
Persuasion is a special aspect of our social and linguistic practices: one in which an interlocutor, or an audience, is induced to perform a certain action or to endorse a certain belief, and these episodes are not due to the force of the better reason. When we approach persuasion, it seems that, in general, we are somehow giving up factual discourse and the principles of logic, since persuading must be understood as something quite different from rationally convincing. Sometimes, for example, we find persuasive a political speech that relies on our feelings, emotions, and values, but we may also find persuasive a dodger busy with his own questionable activities, intentionally performed in order to mislead and manipulate other people. However, I do not want to try to define a general notion of persuasion from the outset. I would rather start with a conception that already has a place, even if a controversial one, in the philosophical debate. In particular, the version that I have always found particularly provocative is that provided by Wittgenstein’s On Certainty. This peculiar version of the idea of persuasion, which is often associated with the possibility of overcoming deep disagreements, is quite famous in the literature and often understood as indicating certain intrinsic limits of our reason-giving practices. The following are Wittgenstein’s famous remarks on persuasion:

608. Is it wrong for me to be guided in my actions by the propositions of physics? Am I to say I have no good ground for doing so? Isn’t precisely this what we call a ‘good ground’?

609. Suppose we met people who did not regard that as a telling reason. Now, how do we imagine this? Instead of the physicist, they consult an oracle. (And for that we consider them primitive.) Is it wrong for them to consult an oracle and be guided by it? If we call this “wrong” aren’t we using our language-game as a base from which to combat theirs?

611. Where two principles really do meet which cannot be reconciled with one another, then each man declares the other a fool and heretic.

612. I said I would ‘combat’ the other man, – but wouldn’t I give him reasons? Certainly; but how far do they go? At the end of reasons comes persuasion. (Think what happens when missionaries convert natives.)

This paper is not devoted, as far as possible, to interpretative matters about Wittgenstein’s late philosophy. Rather, it aims to investigate whether there are problems and incompatibilities between this particular conception of persuasion and our contemporary understanding of our reason-giving practices and of our belief-revision procedures. The first part of this study is concerned with assessing this conception of persuasion and trying to shed new light on it (and on its consequences); most of the work here is done by looking at some truisms and normative features of our practices of rationally updating our beliefs. It also addresses the question: is the resistance to reasons involved in Wittgenstein’s persuasion capable of escaping the strict dynamics of belief revision? The second section of this study concerns the possibility of limiting the scope of this conception of persuasion, thanks to some of our contemporary ways of understanding rational discursive practices (I will focus especially on Robert Brandom’s game of “giving and asking for reasons”).
If our rational practices require at least a certain degree of epistemic responsibility, then how is it possible to invoke the end of reasons (which would explicitly mean giving up this responsibility)? A third section attempts, on this basis, to revise our contrasting conception(s) of persuasion (with an eye on the neighboring doxastic territory). Are there other conceptions of persuasion that are more compatible with our rational practices and with our individual and social epistemic responsibility, and that do not entail any version of the “end of reasons”?
This paper highlights the fundamental difference in the criteria adopted to explain original intentionality, the basic stratum of intentional phenomena, between the mentalist mainstream and the minority inspired by the rejection of the Myth of the Given. Among the latter attempts, inferentialism has become a view of particular interest. According to inferentialism, full intentionality is a feature of cognitive subjects who participate in normative discursive practice. Therefore, the criteria by which the basic intentionality of the mind can be attributed are fundamentally linguistic and not mentalist; for example, perception alone is not sufficient to ground full intentional states. This situation poses an interesting challenge to mainstream intentionalism. Furthermore, it can be argued that this view is a good basis for developing a different conception of collective intentionality in social ontology. In fact, inferentialism makes use of the basic normative notion of discursive commitment to track and evaluate the inferential moves of speakers, and this notion can plausibly be extended in the direction of the idea of a ‘joint commitment’ developed by Margaret Gilbert. In what follows, a number of questions and remarks are raised as a first explorative attempt. The role that language plays in social ontology on this approach is of particular interest because of its strict connection with collective intentionality: language is an ‘active’ institution (perhaps a meta-institution), the medium and the practice through which other institutions are created and established.
A collection of essays dedicated to Pier Luigi Lecis on the occasion of his retirement. Contributors include: Mariano Bianca, Silvana Borutti, Vinicio Busacchi, Massimo Dell'Utri, Rosaria Egidi, Roberta Lanfredini, Giuseppe Lorini, Diego Marconi, Francesco Orilia, Paolo Parrini, Alberto Peruzzi, Simonluca Pinna, Pietro Salis, Paolo Spinicci.
We are now far from the season in which the propulsive force of the “linguistic turn” established itself as the dominant tendency in philosophical debate. From various angles, recent years have seen talk of a pictorial turn as an antidote to the hegemony of the linguistic paradigm in philosophy. The volume Verità, Immagine, Normatività. Truth, Image, and Normativity does not belong directly to this new tendency, but it nonetheless revolves around questions arising from the same background. The 24 essays collected here take into account the different roles that images can play, depending on whether their descriptive or normative dimension is in focus, and investigate some aspects of the relationship between image and language. These problems in the philosophy of the image, understood both as cognitive image and as graphic image, constitute the common thread of the volume. Such a reflection on figurative languages and on non-linguistic visual communication forces us to reformulate in new terms the big questions concerning the traditional notions of truth, objectivity, normativity, consensus, and persuasion.
In recent years, the social world has quickly been gaining the focus of attention within philosophical debates. The work of authors such as John Searle, Barry Smith, Margaret Gilbert, and Raimo Tuomela, to name just a few, is becoming increasingly important within the philosophical community. Hence, topics in social ontology dealing with the nature of institutions, collective actions, collective self/personhood, collective intentionality, shared goals and commitments, and so on are increasingly addressed by contemporary philosophical investigations. The discussion of these topics today teems with ambitious mainstream views contributing to a complete understanding of the proper structures and dynamics of the social world. The present collection of essays aims at exploring side issues in social ontology, trying to go somewhat beyond mainstream topics. Another important aim of this book is to connect the typical discussions of social ontology to the philosophical tradition, and to debates in the social sciences and in anthropology. The collection does not offer a perspective on the social world shared among the authors; it is rather pluralistic in nature: our shared belief is that pluralism and openness, especially in explorative attempts, are methodologically fit to promote dialogue and interaction among different ideas, strategies, and perspectives. In this spirit, the book comprises a number of articles and critical notes that target different problems, traditions, and methodologies.
The recent publication of the books La razionalità, by Paolo Labinaz, and I modi della razionalità, edited by Massimo Dell’Utri and Antonio Rainone, offers the opportunity to provide an overview of some important discussions on rationality. In particular, I highlight how the modern, Cartesian ideal of this notion is undergoing a transformation driven by results coming from empirical studies in the field of cognitive science. These transformations are visible in many domains concerning rationality; this discussion privileges the traditional ones, i.e., logical, theoretical, and practical rationality.