In this paper we compare the strategies applied by two successful biological components of the ecosystem, viruses and human beings, to interact with the environment. Viruses have exerted, and still exert, deep and vast effects on the ecosystem, especially at the genome level of most of its biotic components. We discuss the importance of the human being as a maker of contraptions, in particular of robots: machines capable of automatically carrying out complex series of actions. Beside the relevance of designing and assembling these contraptions, it is of basic importance to consider the goal for which they are assembled and the future scenarios of their possible impact on the ecosystem. We cannot postpone the development and implementation of a highly inspired and stringent "ethical code" for human beings and humanoid robots, because it will be a crucial aspect for the wellbeing of mankind and of the entire ecosystem.
Galvani's discovery provoked an animated debate that lasted for about a decade. So far, historians have studied only the controversy between Volta and Galvani. I show that a more extensive examination of the response to Galvani's treatise reveals a number of important issues that were characteristic of contemporary physics and physiology but have attracted little attention from historians. In particular, the analysis shows the need to reappraise Galvani's role in establishing animal electricity.
Simple assumptions represent a decisive reason to prefer one theory to another in everyday scientific praxis. But this praxis has little philosophical justification, since there exist many notions of simplicity, and those that can be defined precisely depend strongly on the language in which the theory is formulated. The language dependence is, to some extent, a natural feature, but it is also believed to be a fatal problem, because, according to a common general argument, the simplicity of a theory is always trivial in a suitably chosen language. This trivialization argument, however, is typically either applied to toy models of scientific theories or applied with little regard for the empirical content of the theory. This paper shows that the trivialization argument fails when one considers realistic theories and requires their empirical content to be preserved. In fact, the concepts that enable a very simple formulation are not, in general, necessarily measurable. Moreover, inspection of a theory describing a chaotic billiard shows that precisely those concepts that naturally make the theory extremely simple are provably not measurable. This suggests that, whenever a theory possesses sufficiently complex consequences, the constraint of measurability prevents excessively simple formulations in any language. This explains why scientists often regard their assessments of simplicity as largely unambiguous. In order to reveal a cultural bias in scientists' assessments, one should explicitly identify different characterizations of simplicity of the assumptions that lead to different theory selections. General arguments are not sufficient.
Luigi Giussani, a high school religion teacher throughout the 1950s and 1960s, grounded his teachings in the vast body of experience to be found in Christianity's two-thousand-year history. He told his students, “I'm not here to make you adopt the ideas I will give you as your own, but to teach you a method for judging the things I will say.” Throughout his life, education was one of Giussani's primary intellectual interests. He believed that effective education required an adequate background in the Christian tradition, presented within a lived experience that underscored the capacity of the faith to answer universal questions. What he proposed was a process that allowed one to sift through tradition, critically examining it and comparing it against the ultimate criteria for judgment: the desires of the heart. In Giussani's view, the primary concern was to “educate the human heart as God made it.” In The Risk of Education he states that fear leads students to associate this process of criticism with negativity or doubt. Yet, without an education in criticism, students cannot develop conviction. At a time when young people are abandoning the church and questioning the value of faith, Giussani's method of judging and verifying Christianity as an experience seems a necessary intervention. In The Risk of Education he argues that, ultimately, education and the Christian message reveal themselves through human freedom.
One of the main goals of scientific research is to provide a description of the empirical data that is as accurate and comprehensive as possible, while relying on as few and as simple assumptions as possible. In this paper, I propose a definition of the notion of few and simple assumptions that is not affected by known problems. This leads to the introduction of a simple model of scientific progress based only on empirical accuracy and conciseness. An essential point in this task is understanding the role played by measurability in the formulation of a scientific theory. This is the key to preventing artificially concise formulations. The model is confronted here with many possible objections and with challenging cases of real progress. Although I cannot exclude that the model might have some limitations, it includes all the cases of genuine progress examined here, and no spurious ones. In this model, I stress the role of the state of the art: the collection of those theories that are the only legitimate source of scientific predictions. Progress is a global upgrade of the state of the art.
In this article we introduce the reader to the reasons that led to this collection: an interdisciplinary exploration aimed at renewing interest in Luigi Einaudi’s search for «good government», broadly understood as «good society». Prompted by the Einaudian quest, the essays – exploring philosophy of law, economics, politics and epistemology – develop the issue of good government in several forms, including the relationship between public and private, public governance, the question of freedom and the complexity of the human in contemporary societies. The common thread of these essays is that problematic but indissoluble knot that tells us something deeply human: our being torn between homing and roaming, institutional and individual, law and freedom, real and ideal.
In my article I propose a reading of §§ 79-84 of the Critique of Judgment, part of the Methodology of Teleological Judgment section. I first ask what a Methodenlehre of teleological Judgment means, locating it in a metareflective activity of Judgment; I then proceed to an analytical reading of the text in its various articulations, reviewing the questions concerning the specificity of the epistemological status of teleology, the possible coexistence of finalism and mechanism in natural science, the origin of life, the ultimate end of nature and the final end of creation; finally, I arrive at an interpretation according to which the entire teleologia rationis humanae, extending from natural teleology to moral teleology by way of anthropology and the philosophy of history, is rearticulated in this section around a new conceptual fulcrum at the transcendental level, one that can guarantee the effective connection between these different parts of philosophy precisely because it is exercised not in the form of legislative dominion, but in the form of reflection on the empirical manifold and on the teleological causality proper to the human being.
Since the 1980s, the concepts of “neoliberalism” and “technoscience,” although both were coined earlier, have almost simultaneously become prominent conceptual tools in various fields of social science research. The starting point of Neoliberalism and Technoscience: Critical Assessments, edited by Luigi Pellizzoni and Marja Ylönen, is the assumption that this temporal overlap is not just a coincidence and that it would be “quite surprising, then, to find no or merely casual connections between neoliberalization processes and technoscience”. There is already some work in science and technology studies and the sociology of science investigating the impact of neoliberalism on science, frequently focusing on the management of scientific institutions; in addition, there are a number of studies, often inspired by Michel Foucault’s work on biopolitics and governmentality, on the close relations of science and neoliberalism in the field of biomedicine.
I argue that the key to understanding many fundamental issues in the philosophy of science lies in understanding the subtle relation between the non-empirical cognitive values used in science and the constraints imposed by measurability. In fact, although we are not able to fix the interpretation of a scientific theory through its formulation, I show that measurability imposes constraints that can at least exclude some implausible interpretations. This turns out to be enough to define at least one cognitive value that is able to penalize, without collateral damage, precisely those bad features that deceive purely empirical assessments. This leads to the formulation of a simple model of scientific progress based only on empirical accuracy and conciseness. The model is confronted here with many possible objections and with challenging cases of real progress. Although I cannot exclude that the model might be incomplete, it includes all the cases of genuine progress examined here, and no spurious ones. In this model, I stress the role of the state of the art: the collection of those theories that are the only legitimate source of scientific predictions. Progress is a global upgrade of the state of the art.
The point of view advocated over the last ten years by quantum probability concerning the foundations of quantum mechanics is based on the investigation of the mathematical consequences of a deep and elementary idea, developed by the founding fathers of quantum mechanics and accepted nowadays as a truism by most physicists: one should be careful when applying rules derived from the experience of macroscopic physics to experiments that are mutually incompatible in the sense of quantum mechanics.
The first part of this paper aims to highlight the analogies between Schutz’s vision of the natural attitude and Wittgenstein’s vision of a phenomenon that concerns the same problematic field, i.e. certainty: the common-sense belief, free of all doubt, that the world “out there” is as it appears, absolutely real. These certainties form the basis, the foundation, of language games and therefore of knowledge in general and in its entirety. This foundation is unfounded and yet indispensable. The second part of the paper examines an important topic analysed by Wittgenstein, related to the aforementioned problem: the linguistic transposition of pre-predicative, pre-reflective and non-propositional certainties, the cornerstones of which are “hinge propositions”, whose hybrid nature can be identified in the shift from empirical propositions to grammatical rules.
It is widely agreed that Popper's criterion of falsifiability fails to provide a useful demarcation between science and pseudo-science, because ad hoc assumptions can always save any theory that conflicts with the empirical data, and a characterization of ad hoc assumptions is lacking. Moreover, adding some testable predictions is not very difficult. It should be emphasized that the Duhem-Quine argument does not simply make the demarcation approximate: it makes it totally useless. Indeed, no philosophical criterion of demarcation is presently able to rule out even some of the most blatant cases of pseudo-science, not even approximately. This is in sharp contrast with our firm belief that some theories are clearly not scientific. Where does this belief come from? In this paper I argue that it is necessary, and possible, to identify a notion of syntactic simplicity that is able to tell the difference between empirically equivalent scientific and non-scientific theories, with a precision adequate to many important practical purposes, and that fully agrees with the judgments generally held in the scientific community.
Probabilism, the view that agents have numerical degrees of belief that conform to the axioms of probability, has been defended by the vast majority of its proponents by way of one of two arguments: the Dutch Book Argument and the Representation Theorems Argument. In this paper I argue that both arguments are flawed. The Dutch Book Argument is based on an unwarranted, ad hoc premise that cannot be dispensed with. The Representation Theorems Argument hinges on an invalid implication.
KOINONIA/ASETT Minga/Mutirão of Latin American Theology Journals: Tiempos oscuros, tiempos de monstruos: Teología de la liberación y desafíos culturales (Dark times, times of monsters: liberation theology and cultural challenges).
This book is the first of its kind to provide a comprehensive overview of happiness in economics. Although it is comparatively unusual to put happiness and economics together, the association appears increasingly exciting and fruitful. A number of studies have been produced following Richard Easterlin's and Tibor Scitovsky's pioneering works throughout the 1970s. The essays collected in this book provide an authoritative and comprehensive assessment (theoretical, applied and partly experimental) of the whole field, moving from the so-called paradoxes of happiness in economics. The book breaks new ground, particularly on the more recent directions of research on happiness, well-being, interpersonal relations and reciprocity. The meaning of happiness is thoroughly explored and the tension between a hedonic-subjective idea of happiness and a eudaimonic-objective one is discussed. The volume opens with Richard Easterlin's own assessment of the main issues. Other authors include Robert H. Frank, Robert Sugden, Bruno S. Frey, Alois Stutzer, Richard Layard, Martha C. Nussbaum, Matt Matravers, Bernard M.S. van Praag, Oded Stark, You Q. Wang, Ruut Veenhoven, Charlotte Phelps, Stefano Zamagni, and Luigi Pasinetti.
The relentless exploitation of the earth's resources and technology's boundless growth are a matter of urgent concern. When did this race towards the limitless begin? The Greeks, who shaped the basis of Western thinking, lived in mortal fear of humanity's hidden hunger for the infinite and referred to it as hubris, the one true sin in their moral code. Whoever desired or possessed too much was implacably punished by nemesis, yet the Greeks themselves were to pioneer an unprecedented level of ambition that began to reverse that taboo. If it is true that no culture can truly repudiate its origins, and that gods who are no longer potent can vanish but still leave behind a body of myth which continues to live and assert itself in modernized garb, then our concern with the limits of growth reflects something more than an awareness of new technological problems: it also brings to light a psychic wound and a feeling of guilt which are infinitely more ancient.
Psychophysical experiments have demonstrated large and highly systematic perceptual distortions of tactile space. Tactile space refers to our experience, at the representational level, of the spatial organisation of objects through touch, by analogy with the familiar concept of visual space. We investigated the neural basis of tactile space by analysing activity patterns induced by tactile stimulation of nine points on a 3 × 3 square grid on the hand dorsum using functional magnetic resonance imaging. We used a searchlight approach within pre-defined regions of interest to compute the pairwise Euclidean distances between the activity patterns elicited by tactile stimulation. Then, we used multidimensional scaling to reconstruct tactile space at the neural level and compare it with skin space at the perceptual level. Our reconstructions of the shape of skin space in contralateral primary somatosensory and motor cortices reveal that it is distorted in a way that matches the perceptual shape of skin space. This suggests that early sensorimotor areas critically contribute to the distorted internal representation of tactile space on the hand dorsum.
Chapter 1 First Person Access to Mental States. Mind Science and Subjective Qualities

Abstract. The philosophy of mind as we know it today starts with Ryle. What defines it, and at the same time differentiates it from the previous tradition of study on mind, is the conviction that any rigorous approach to mental phenomena must conform to the criteria of scientificity applied by the natural sciences, i.e. its investigations and results must be intersubjectively and publicly controllable. In Ryle’s view, philosophy of mind needs to adopt an antimentalist stance to achieve this aim. Antimentalism not only definitively rejects the idea that mind is a substance separated from the body; it also denies that mental phenomena radically differ from physical phenomena by virtue of several unique features. Most problematically, mental phenomena have a conscious character (mental states are related to specific qualitative feelings) and are accessible only to the first person (only the subject knows directly what s/he is experiencing inside his/her mind). Ryle takes a strong stance on antimentalism, going so far as to maintain that an approach to mind which aims to meet the criteria of scientificity set by the natural sciences must avoid any reference to internal, unobservable mental states. In his view (which is considered a specifically philosophical version of psychological behaviorism and also addresses questions put forward in psychological research), mental states can be redescribed in terms of behavioral dispositions. In this chapter, we address the historical roots of the antimentalist view and analyze its relation to the later tradition of research on mind. We show that, compared to the antimentalist stance, functionalism and cognitivism take a step back when they maintain that direct reference to mental states is necessary since mental states cause and therefore explain human behavior. This step backwards is often interpreted as a return to mentalism. 
However, this is only partially true. Indeed, we suggest that these later traditions retain one important element of Ryle’s antimentalism, i.e. the idea that mental states must be uniquely identified using external and publicly observable criteria, while excluding any reference to introspection and to those qualitative dimensions of a mental state which are accessible only to the first person. According to the perspective we put forward, this epistemological stance has continued to influence contemporary research on mind and current philosophical and psychological theories, which both tend to exclude the subjective qualities of human experience from their accounts of how the mind works. The issue we raise here is whether this is legitimate or whether subjective qualities do play a role with respect to the way our mind works. The conclusion of this chapter anticipates the argument the book makes in favor of this latter position, starting from a particular angle, i.e. the problem of how we categorize concepts related to our internal states.

Chapter 2 The Misleading Aspects of the Mind/Computer Analogy. The Grounding Problem and the Thorny Issue of Propriosensitive Information

Abstract. After the crisis of behaviorism, cognitivism and functionalism became the predominant models in the fields of psychology and of philosophy, respectively. Their success is mainly due to the new key they use for interpreting mental processes: the mind/computer analogy. On the basis of this analogy, mental operations are seen as cognitive processes based on computations, i.e. on manipulations of abstract symbols which are in turn understood as informational unities (representations). This chapter identifies two main problems with this model. The first is how these symbols can relate to and communicate with perception and thus allow us to identify and classify what we perceive through the senses. 
Here we limit ourselves to presenting this issue in relation to the classical symbol grounding problem originally put forward by Harnad on the basis of Searle’s Chinese room argument. An attempt to address the problem raised here will be made in chap. 3. The second point we discuss in relation to the mind/computer analogy concerns the idea of information it fosters. Indeed, following this analogy, information is something available in the external world which can be captured by the senses and transmitted to the central system without being influenced or modified by the procedures of transmission. This perspective does not take into account that – unlike computers – in living beings information is acquired by means of the body. As Ulric Neisser has already pointed out, the body is itself an informational source that provides us with additional sensory experience that influences (modifies or complements) the information extracted from the external world by the senses. To develop this line of analysis and to determine exactly what information is provided by the body and how this might influence cognition, we examine Sherrington’s and Gibson’s positions. Moving on from their views, we qualify bodily information in terms of ‘proprioception’. We use ‘proprioception’ in a broad sense to describe any kind of experience we have of our internal states (including postural information as well as sensations related to the general state of the body and its parts). Following Damasio’s and Craig’s studies, we further elaborate this position, arguing that living beings are equipped with an internal propriosensitive monitoring system which maps all the changes that constantly occur in our body and that give us perceptual (‘proprioceptive’ or propriosensitive) information about what happens inside us. 
Moreover, relying on Goldie’s and Ratcliffe’s view, we show that emotional information can also be considered as a form of ‘proprioception’ which contributes to determining everything we perceive. This analysis leads us to the second main thesis of this book: ‘proprioception’ is a form of internal perception and it is an essential component of the sensory information we can access and use for all cognitive purposes.

Chapter 3 Semantic Competence from the Inside: Conceptual Architecture and Composition

Abstract. Concepts are essential constituents of thought: they are the instruments we use to categorize our experience, i.e. to classify things and group them together in homogeneous sets. Here we define concepts as the internal mental information (representations) that allows us, among other things, to master words in natural language. By analyzing the way in which individuals master word meanings we explore a number of hypotheses regarding the nature of concepts. Following Diego Marconi’s research, we differentiate between two kinds of abilities that underpin lexical competence – so-called ‘referential’ and ‘inferential competence’ – and we suggest that, in order to support these abilities, concepts must also include two corresponding kinds of information, i.e. inferential and referential information. We point out that the most widely used and acknowledged theories of concepts do not make this distinction, instead broadly characterizing the information used for categorization in terms of propositionally described feature lists. However, we show that while feature lists can explain inferential competence, they do not account for referential competence. 
To address the issue of referential competence we examine Ray Jackendoff’s hypothesis that to account for the possibility of linking perceptual and conceptual information we need to assume the existence of a (visual) representation that encodes the geometric and topological properties of objects and bridges the gap between the percept and the concept. Furthermore, we analyze the extension of this work by Jesse Prinz who introduced the notion of a proxytype, a perceptual representation of a class of objects that incorporates structural and parametric information related to their appearance. However, as we point out, proxytypes can only explain the relationship between perception and concepts with respect to instances that can be perceived through the senses and that belong to the same class by virtue of their physical similarity. We suggest that this notion be extended to include larger conceptual classes. To accomplish this, we further develop Mark Johnson, George Lakoff and Jean Mandler’s idea of a schematic image and argue that conceptual representations include a perceptual schema. Perceptual schemata are non-linguistic, structured experiential gestalts (patterns or maps) that make use of information taken from all sensory modalities, including body perception. They accomplish a quasi-conceptual function: they allow us to recognize and to classify different instances. In this work, we hypothesize that perceptual schemata are an essential component of concepts, but not identical to them. Instead, we suggest that concepts include both perceptual and propositional information with perceptual schemata providing the ‘perceptual core concept’ that grounds related propositional information.

Chapter 4 In the Beginning There Were Categories. The Bodily Origin of Prelinguistic Categorical Organization: The Example of Folkbiological Taxonomies

Abstract. 
Studies of categorization in psychology and the cognitive sciences have made use of the notions of ‘category’ and ‘concept’ without precisely defining what is meant by either; in fact, often these terms have been used as synonyms, making it difficult to address specific issues related to conceptual development. This chapter begins by discussing the definitions of, as well as the distinctions between, ‘categories’ and ‘concepts’ in the classical philosophical tradition (Aristotle, Kant and Husserl). We introduce our view of categories with reference to Husserl. Categorization is defined as the way in which our experience is originally (pre-linguistically) organized in a passive and fully unconscious manner on the basis of universal structuring principles. Conceptualization is explained as a later process in which the earlier categorical macro-classes are further subdivided into more specific and detailed sets; this later process also relies on linguistic learning and exposure to culture. On the basis of Ray Jackendoff, Jean Mandler, George Lakoff and Mark Johnson’s work, we hypothesize that categorization is a two-step process that begins with the formation of homogeneous sets of entities partitioned into regions that describe the ontological boundaries of the objects that humans perceive (categories) and continues at a later stage with the development of more specific classifications (concepts). In this chapter, we mainly address three related issues: Why should we assume that there is categorical organization which precedes the development of a conceptual system? How do categories and concepts relate to each other? Shall we hypothesize that categories are innate or that they are formed before concepts on the basis of information and organizational structure available at a very early developmental stage? 
We show that categorical partitions are necessary for categorization and, following Mandler, that the general categories we form at an early age do not match our adult superordinate concepts. As for the third issue, we argue that there might be no need to assume that categories are innately present in the human mind, since their formation can be explained – at least in certain cases – by basic mechanisms that work on body (propriosensitive) information. This hypothesis will not be discussed in general, but in relation to a particularly relevant example of categorical partition, i.e. the folk-biological dichotomy between ANIMATE and INANIMATE. This is compared with other dichotomies that derive from it but are not directly categorical, such as LIVING/NON-LIVING and BIOLOGICAL/NON-BIOLOGICAL.

Chapter 5 Internal States: From Headache to Anger. Conceptualization and Semantic Mastery

Abstract. Here we ask whether the distinction between referential and inferential competence introduced in Chapter 3 can also be applied to words that do not refer to things that are perceived using the external senses, especially to words/concepts that denote bodily experiences (such as pain, thirst, hunger, etc.) or emotions. We introduce and discuss the hypothesis that – even though such words/concepts do not refer to intersubjectively identifiable entities in the external world – they do have a kind of referent that can be accessed via direct perception, more specifically ‘proprioception’, as we have defined it in terms of all propriosensitive information we can consciously access. In the first part of the chapter, we specifically consider terms denoting bodily experiences such as ‘pain’ or ‘hunger’ and argue that their referents are identified and classified from a first-personal point of view on the basis of four main characteristics: their specific intensity, their localization in the body, their co-occurrence with other signals and above all their specific qualitative sensations. 
Emotions are addressed in the second part of the chapter. We suggest that there is a continuity between bodily experiences and emotions. In particular, we argue for a perceptual theory of emotions in line with that proposed by James and Lange at the end of the 19th century and developed more recently by authors such as Damasio (see also chap. 2§5, §6). The hypothesis we put forward is that the referential information that supports the categorization of emotions, and therefore also our mastery of terms referring to emotions, consists in the perception (i.e. the ‘proprioception’) of those bodily states and changes in bodily states which constitute our emotional experience. In the context of this discussion we examine some objections to this line of reasoning that arise from a cognitivist perspective and, following authors such as Oatley, Johnson-Laird and Frijda, we distinguish between basic emotions that can be identified and classified solely on the basis of how they feel and complex emotions whose identification and classification additionally depends on cognitive factors. To describe how emotions are identified and classified on the basis of how they feel, we rely on Marcel and Lambie’s distinction between an ‘emotion state’ and an ‘emotion experience’. Both notions indicate kinds of feelings that we consciously experience. However, they describe first-order and second-order emotion awareness respectively. The emotion state is the feeling we have of the bodily states and changes that occur when we are experiencing an emotion, while the emotion experience is the fully developed and integrated emotion we both experience and are, with reflection, aware of experiencing. 
On the basis of this differentiation, we also show that the same characteristics that aid in the identification and classification of bodily experiences (specific qualitative sensations; somatic localization; specific intensity; presence/absence of specific concomitant sensations) can also be used for the identification and classification of emotions – at least basic emotions. In the last two sections of the chapter we present some clinical evidence on the semantic competence of people who suffer from Alexithymia and Autism Spectrum Disorder which supports the conclusions of our preceding analyses.

Chapter 6 The ‘Proprioceptive’ Component of Abstract Concepts

Abstract. In this chapter, we address the issue of whether the mastery of abstract words requires only inferential knowledge and thus whether the concepts that support the mastery of abstract words include only linguistic information. We start by differentiating the notions of ‘abstract’ and ‘general’, which are often erroneously confused. We then identify a strict definition of abstract, as contrasted with ‘concrete’, that applies to words or concepts whose referent cannot be experienced by the senses. We argue that abstract words/concepts would be better described as theoretical, because they are usually conceived as structured sets of inferential knowledge expressed linguistically; that is, as small theories. Pointing out parallels with Carnap’s analysis of this issue in philosophy of science, we hypothesize that words/concepts denoting non-observable entities are not all ‘equally theoretical’, because their link to sensory experience can be stronger or weaker. We revive the distinction, inspired by Quine, between theoretical and intertheoretical concepts/words. This distinction relies on the fact that the former – unlike the latter – retain a strong, although indirect, connection with perception. In Quine’s discussion, perception is understood solely in terms of observability, i.e. 
of external sensory experience. Here we argue, however, that bodily, ‘proprioceptive’ (i.e. propriosensitive) experience can also serve to referentially ground theoretical (i.e. abstract) concepts/words. We frame this issue using the example of the theoretical concept ‘freedom’ and Lakoff’s hypothesis that this concept is developed on the basis of bodily information. We contrast this with the example of ‘democracy’, which more closely resembles an intertheoretical concept/word. Furthermore, we show that one of the classical views put forward in psycholinguistic research to explain how abstract concepts are mentally represented – i.e. Paivio’s Dual Coding Theory – points in the same direction as our analysis. The same is true of Barsalou’s work suggesting that we use internal information to understand at least some abstract words. To support this position, we offer two lines of evidence: the first comes from psycholinguistic studies, while the second examines deficits of semantic competence exhibited by people with Autism Spectrum Disorder. On the basis of our analysis, we put forward a classification that distinguishes between different kinds of concreteness and different degrees of abstraction: concepts/words referring to bodily experiences and basic emotions are described as analogous to concrete concepts/words because they are grounded in perceptual (i.e. propriosensitive) experience, while abstract concepts/words are considered more or less abstract depending on whether they are intertheoretical (and rely entirely on inferential information) or theoretical (and are partially grounded in perceptual – or more often in propriosensitive perceptual – information). In the last section of the chapter we consider two scales that have been used in psycholinguistic research to measure the degree of concreteness vs. abstractness of words, and we show that – used conjointly – they can provide a measure of the internal vs. external grounding of specific words.
The anti-fascist cantata Il canto sospeso, the string quartet Fragmente – Stille, an Diotima and the ‘Tragedy of Listening’ Prometeo cemented Luigi Nono’s place in music history. In this study, Carola Nielinger-Vakil examines these major works in the context of Nono’s amalgamation of avant-garde composition with Communist political engagement. Part I discusses Il canto sospeso in the context of all of Nono’s anti-fascist pieces, from the unfinished Fučík project to Ricorda cosa ti hanno fatto in Auschwitz. Nielinger-Vakil explores Nono’s position at the Darmstadt Music Courses, the evolution of his compositional technique, his penchant for music theatre, and his use of spatial and electronic techniques, setting the composer and his works against the diverging circumstances in Italy and Germany after 1945. Part II further examines these concerns and shows how they live on in Nono’s work after 1975, culminating in a thorough analysis of Prometeo.
In some sections of On Certainty, Wittgenstein uses the term “persuasion”, pitting it, on the one hand, against “giving reasons”, and comparing it, on the other, to conversion, while, finally, defining it as “giving someone one’s own picture of the world”. In this essay, I analyse these sections in an effort to fit them into the broader context of On Certainty and to clarify the meaning and the limits of the comparison between persuasion and conversion. My aim is to show that persuasion, as Wittgenstein understands it here, is quite similar to what we could call “re-education”.
The open-ended character of natural languages calls for the hypothesis that humans are endowed with a recursive procedure generating sentences which are hierarchically organized. Structural relations such as c-command, expressed on hierarchical sentential representations, determine all sorts of formal and interpretive properties of sentences. The relevant computational principles are well beyond the reach of conscious introspection, so that studying such properties requires the formulation of precise formal hypotheses and their empirical testing. This article illustrates all these aspects of linguistic research through the discussion of non-coreference effects. It argues in favor of the formal linguistic approach based on hierarchical structures, and against alternatives based on vague notions of “analogical generalization” and/or exploiting mere linear order. In the final part, the issue of cross-linguistic invariance and variation of non-coreference effects is addressed.

Keywords: Linguistic Knowledge; Morphosyntactic Properties; Unconscious Computations; Coreference; Linguistic Representations.
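As a minimal illustration of the structural notion at stake (our own sketch, not the article's formalism; the node labels and the toy tree are invented for demonstration), c-command over a binary-branching tree can be computed directly. The tree for “He thinks John is smart” shows the configuration in which the pronoun c-commands the name, so coreference between them is excluded:

```python
# Toy sketch: c-command over a binary-branching constituent tree,
# illustrating a non-coreference configuration. Illustrative only.

class Node:
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)

    def dominates(self, other):
        """True if other is a (proper) descendant of self."""
        return other in self.children or any(
            c.dominates(other) for c in self.children)

def all_nodes(root):
    yield root
    for c in root.children:
        yield from all_nodes(c)

def c_commands(a, b, root):
    """a c-commands b iff neither dominates the other and every
    branching node dominating a also dominates b."""
    if a is b or a.dominates(b) or b.dominates(a):
        return False
    branching = [n for n in all_nodes(root)
                 if len(n.children) >= 2 and n.dominates(a)]
    return all(n.dominates(b) for n in branching)

# "He thinks [John is smart]"
john = Node("John"); smart = Node("is smart")
emb = Node("TP", [john, smart])          # embedded clause
he = Node("he"); vp = Node("VP", [Node("thinks"), emb])
s = Node("TP", [he, vp])                 # matrix clause

print(c_commands(he, john, s))   # → True: coreference excluded
print(c_commands(john, he, s))   # → False: no such effect
```

By contrast, in “His mother thinks John is smart” the pronoun is embedded inside the subject and no longer c-commands the name, which is why coreference becomes possible; linear precedence alone cannot distinguish the two cases.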
This paper exhibits a general and uniform method to prove axiomatic completeness for certain modal fixpoint logics. Given a set Γ of modal formulas of the form γ(x), where x occurs only positively in γ, we obtain the flat modal fixpoint language by adding to the language of polymodal logic a connective ♯γ for each γ ∈ Γ. The term ♯γ(φ) is meant to be interpreted as the least fixed point of the functional interpretation of the formula γ(x). We consider the following problem: given Γ, construct an axiom system which is sound and complete with respect to the concrete interpretation of the language on Kripke structures. We prove two results that solve this problem. First, let K♯(Γ) be the logic obtained from basic polymodal logic by adding a Kozen–Park style fixpoint axiom and a least fixpoint rule for each fixpoint connective ♯γ. Provided that each indexing formula γ satisfies a certain syntactic criterion, we prove this axiom system to be complete. Second, addressing the general case, we prove the soundness and completeness of an extension of K♯(Γ). This extension is obtained via an effective procedure that, given an indexing formula γ as input, returns a finite set of axioms and derivation rules for ♯γ, of size bounded by the length of γ. Thus the axiom system is finite whenever Γ is finite.
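Read schematically (our rendering in standard least-fixpoint notation, not necessarily the paper's exact formulation), the Kozen–Park style axiom states that the fixpoint connective applied to φ yields a prefixpoint of γ, and the least fixpoint rule states that this prefixpoint implies every other prefixpoint:

```latex
% Kozen--Park style fixpoint axiom: \sharp_\gamma(\varphi) is a prefixpoint of \gamma
\gamma\bigl(\sharp_\gamma(\varphi)\bigr) \rightarrow \sharp_\gamma(\varphi)

% Least fixpoint rule: any prefixpoint \psi lies above \sharp_\gamma(\varphi)
\frac{\gamma(\psi) \rightarrow \psi}{\sharp_\gamma(\varphi) \rightarrow \psi}
```

Together these two ingredients say exactly that ♯γ(φ) behaves as the least prefixpoint, which by the Knaster–Tarski theorem coincides with the least fixed point on Kripke structures.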