Kitsch is not just a category that has defined one of the possible aesthetic grammars of modernity, but also an anthropological dimension that has had different configurations in the course of historical processes. The essay offers a historical-critical look at the transformations that led from early twentieth-century kitsch to contemporary neokitsch: from the genesis of kitsch to its affirmation as one of the most tangible manifestations of mass culture. Integrating with postmodern aesthetics, kitsch turns into neokitsch, an aesthetic that deliberately uses kitsch as its own syntax in the complex scenario of contemporary aesthetics.
"How can human beings, who are liable to error, possess knowledge, since the grounds on which we believe do not rule out that we are wrong? Andrea Kern argues that we can disarm this skeptical doubt by conceiving knowledge as an act of a rational capacity. In this book, she develops a metaphysics of the mind as existing through knowledge of itself."--Provided by publisher.
Chapter 14. Andrea Timár engages with literary representations of the experience of perpetrators of dehumanization. Her chapter focuses on perpetrators of dehumanization who do not violate laws of their society (i.e., they are not criminals) but exemplify what Simona Forti, inspired by Hannah Arendt, calls “the normality of evil.” Through the parallel examples of Dezső Kosztolányi’s Anna Édes (1926) and Doris Lessing’s The Grass is Singing (1950), Timár first explores a possible clash between criminals and perpetrators of dehumanization, showing literature’s exceptional ability to reveal the gap between ethics and law. Second, she examines novels focalized through perpetrators and the difficult narrative empathy they provoke, arguing that only the critical reading of these novels can make one engage with the potential perpetrator in oneself. As case studies, Timár examines Daniel Defoe’s Robinson Crusoe (1719), which may potentially turn its reader into an accomplice in the process of dehumanization, and J.M. Coetzee’s Foe (1986), which puts on critical display the dehumanizing potentials of both aesthetic representation and sympathy as imaginative violence. Third, she reads Jonathan Littell’s The Kindly Ones [Les Bienveillantes, 2006], which can make the reader question, through the polyphony of the voice of its protagonist, the notions of narrative voice and readerly empathy, only to reveal that the difficulty involved in empathizing with perpetrator characters lies not so much in the characters’ being perpetrators, but rather in their being literary characters. Eventually, Timár briefly touches upon the problem of the aesthetic and the comic via Nabokov’s Lolita (1955) to ask whether one can avoid some necessarily dehumanizing aspects of humor.
Andreas Stokke presents a comprehensive study of lying and insincere language use. He investigates how lying relates to other forms of insincerity and explores the kinds of attitudes that go with insincere uses of language.

Part I develops an account of insincerity as a linguistic phenomenon. Stokke provides a detailed theory of the distinction between lying and speaking insincerely, and accounts for the relationship between lying and deceiving. A novel framework of assertion underpins the analysis of various kinds of insincere speech, including false implicature and forms of misleading with presuppositions, prosodic focus, and semantic incompleteness.

Part II sets out the relationship between what is communicated and the speaker's attitudes. Stokke develops the view of insincerity as a shallow phenomenon that is dependent on conscious attitudes rather than deeper motivations. The various ways of speaking while being indifferent toward what one communicates are covered, and the phenomenon of 'bullshitting' is distinguished from lying and other forms of insincerity. Finally, an account of insincere uses of interrogative, imperative, and exclamative utterances is also given.
She was intensely sympathetic. She was immensely charming. She excelled in the difficult arts of family life. She sacrificed herself daily. If there was chicken, she took the leg; if there was a draught, she sat in it—in short, she was so constituted that she never had a mind or wish of her own, but preferred to sympathise always with the minds and wishes of others. — Virginia Woolf (1979, 59).
This paper discusses Aristotle’s thesis and Boethius’ thesis, the most distinctive theorems of connexive logic. Its aim is to show that, although there is something plausible in Aristotle’s thesis and Boethius’ thesis, the intuitions that may be invoked to motivate them are consistent with any account of indicative conditionals that validates a suitably restricted version of them. In particular, these intuitions are consistent with the view that indicative conditionals are adequately formalized as strict conditionals.
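For reference, these theses are standardly formalized as follows, with → the conditional and ¬ negation; this is a conventional rendering from the connexive logic literature, not notation taken from the paper itself:

```latex
% Aristotle's thesis: no proposition follows from its own negation
% (given in both of its usual variants)
\[ \neg(\neg A \to A) \qquad \neg(A \to \neg A) \]

% Boethius' thesis: if A implies B, then A does not imply the negation of B
\[ (A \to B) \to \neg(A \to \neg B) \]
```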
'Microphysicalism', the view that whole objects behave the way they do in virtue of the behaviour of their constituent parts, is an influential contemporary view with a long philosophical and scientific heritage. In _What's Wrong With Microphysicalism?_ Andreas Hüttemann offers a fresh challenge to this view. Hüttemann agrees with the microphysicalists that we can explain compound systems by explaining their parts, but claims that this does not entail a fundamentalism that gives hegemony to the micro-level. At most, it shows that there is a relationship of determination between parts and wholes, but there is no justification for taking this relationship to be asymmetrical rather than one of mutual dependence. Hüttemann argues that if this is the case, then microphysicalists have no right to claim that the micro-level is the ultimate agent: neither the parts nor the whole have 'ontological priority'. Hüttemann advocates a pragmatic pluralism, allowing for different ways to describe nature. _What's Wrong With Microphysicalism?_ is a convincing and original contribution to central issues in contemporary philosophy of mind, philosophy of science and metaphysics.
Many philosophers believe that there exist distinctive obstacles to relying on moral testimony. In this paper, I criticize previous attempts to identify these obstacles and offer a new theory. I argue that the problems associated with moral deference can't be explained in terms of the value of moral understanding, nor in terms of aretaic considerations related to subjective integration. Instead, our uneasiness with moral testimony is best explained by our attachment to an ideal of authenticity that places special demands on our moral beliefs.
By virtue of what do alarm calls and facial expressions carry natural information? The answer I defend in this paper is that they carry natural information by virtue of changing the probabilities of various states of affairs, relative to background data. The Probabilistic Difference Maker Theory of natural information that I introduce here is inspired by Dretske's seminal analysis of natural information, but parts ways with it by eschewing the requirements that information transmission must be nomically underwritten, mind-independent, and knowledge-yielding. PDMT includes both a qualitative account of information transmission and a measure of natural information in keeping with the basic principles of Shannon's communication theory and Bayesian confirmation theory. It also includes a new account of the informational content of a signal, understood as the combination of the incremental and overall support that the signal provides for all states of affairs at the source. Finally, I compare and...
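A minimal sketch of the qualitative condition described above, in standard Bayesian notation; the symbols r, s, and d are illustrative placeholders, not the paper's own notation:

```latex
% A signal r carries natural information about a state of affairs s,
% relative to background data d, just in case r makes a probabilistic
% difference to s:
\[ P(s \mid r \wedge d) \neq P(s \mid d) \]

% The incremental support that r provides for s can then be measured,
% for instance, by a log-ratio familiar from Bayesian confirmation
% theory and Shannon-style analyses:
\[ \log \frac{P(s \mid r \wedge d)}{P(s \mid d)} \]
```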
Although the modern age is often described as the age of democratic revolutions, the subject of popular foundings has not captured the imagination of contemporary political thought. Most of the time, democratic theory and political science treat as the object of their inquiry normal politics, institutionalized power, and consolidated democracies. The aim of Andreas Kalyvas' study is to show why it is important for democratic theory to rethink the question of its beginnings. Is there a founding unique to democracies? Can a democracy be democratically established? What are the implications of expanding democratic politics in light of the question of whether and how to address democracy's beginnings? Kalyvas addresses these questions and scrutinizes the possibility of democratic beginnings in terms of the category of the extraordinary, as he reconstructs it from the writings of Max Weber, Carl Schmitt, and Hannah Arendt and their views on the creation of new political, symbolic, and constitutional orders.
This article works out the main characteristics of 'practice theory', a type of social theory which has been sketched by such authors as Bourdieu, Giddens, Taylor, late Foucault and others. Practice theory is presented as a conceptual alternative to other forms of social and cultural theory, above all to culturalist mentalism, textualism and intersubjectivism. The article shows how practice theory and the three other cultural-theoretical vocabularies differ in their localization of the social and in their conceptualization of the body, mind, things, knowledge, discourse, structure/process and the agent.
Traditionally, the manufacturer/operator of a machine is held (morally and legally) responsible for the consequences of its operation. Autonomous, learning machines, based on neural networks, genetic algorithms and agent architectures, create a new situation, in which the manufacturer/operator is in principle no longer capable of predicting the machine's future behaviour, and thus cannot be held morally responsible or liable for it. Society must decide between no longer using this kind of machine (which is not a realistic option) and facing a responsibility gap, which cannot be bridged by traditional concepts of responsibility ascription.
Andrea Falcon's work is guided by the exegetical ideal of recreating the mind of Aristotle and his distinctive conception of the theoretical enterprise. In this concise exploration of the significance of the celestial world for Aristotle's science of nature, Falcon investigates the source of discontinuity between celestial and sublunary natures and argues that the conviction that the natural world exhibits unity without uniformity is the ultimate reason for Aristotle's claim that the heavens are made of a special body, unique to them. This book presents Aristotle as a totally engaged, systematic investigator whose ultimate concern was to integrate his distinct investigations into a coherent interpretation of the world we live in, all the while mindful of human limitations to what can be known. Falcon reads in Aristotle the ambition of an extraordinarily curious mind and the confidence that that ambition has been largely fulfilled.
Organoids are three-dimensional biological structures grown in vitro from different kinds of stem cells that self-organise, mimicking real organs with organ-specific cell types. Recently, researchers have managed to produce human organoids which have structural and functional properties very similar to those of different organs, such as the retina, the intestines, the kidneys, the pancreas, the liver and the inner ear. Organoids are considered a great resource for biomedical research, as they allow for a detailed study of the development and pathologies of human cells; they also make it possible to test new molecules on human tissue. Furthermore, organoids have helped research take a step forward in the field of personalised medicine and transplants. However, some ethical issues have arisen concerning the origin of the cells that are used to produce organoids and their properties. In particular, there are new, relevant and so-far overlooked ethical questions concerning cerebral organoids. Scientists have created so-called mini-brains that are as developed as the brain of a fetus of a few months, albeit smaller and with many structural and functional differences. However, cerebral organoids exhibit neural connections and electrical activity, raising the question of whether they are, or will one day be, somewhat sentient. In principle, this can be measured with some techniques that are already available, which are used for brain-injured non-communicating patients. If brain organoids were to show a glimpse of sentience, an ethical discussion on their use in clinical research and practice would be necessary.
Upon discovering that certain beliefs we hold are contingent on arbitrary features of our background, we often feel uneasy. I defend the proposal that if such cases of contingency anxiety involve defeaters, this is because of the epistemic significance of disagreement. I note two hurdles to our accepting this Disagreement Hypothesis. Firstly, some cases of contingency anxiety apparently involve no disagreement. Secondly, the proposal may seem to make our awareness of the influence of arbitrary background factors irrelevant in determining whether to revise our beliefs. I show that each of these problems can be successfully accommodated by the Disagreement Hypothesis.
Is logic masculine? Is women's lack of interest in the "hard core" philosophical disciplines of formal logic and semantics symptomatic of an inadequacy linked to sex? Is the failure of women to excel in pure mathematics and mathematical science a function of their inability to think rationally? Andrea Nye undermines the assumptions that inform these questions, assumptions such as: logic is unitary, logic is independent of concrete human relations, and logic transcends historical circumstances as well as gender. In a series of studies of the logics of historical figures--Parmenides, Plato, Aristotle, Zeno, Abelard, Ockham, and Frege--she traces the changing interrelationships between logical innovation and oppressive speech strategies, showing that logic is not transcendent truth but abstract forms of language spoken by men, whether Greek ruling citizens or scientists.
This paper tries to explain why racial profiling involves a serious injustice and to do so in a way that avoids the problems of existing philosophical accounts. An initially plausible view maintains that racial profiling is pro tanto wrong in and of itself by violating a constraint on fair treatment that is generally violated by acts of statistical discrimination based on ascribed characteristics. However, consideration of other cases involving statistical discrimination suggests that violating a constraint of this kind may not be an especially serious wrong in and of itself. To fully capture the significant wrong that occurs when racial profiling is targeted at black Americans or other similarly situated groups, it is argued that we should appeal to the idea that this basic injustice is exacerbated when it forms part of a larger pattern of similar actions that collectively realize a state of cumulative injustice.
Name any valued human trait—intelligence, wit, charm, grace, strength—and you will find an inexhaustible variety and complexity in its expression among individuals. Yet we insist that such diversity does not provide grounds for differential treatment at the most basic level. Whatever merit, blame, praise, love, or hate we receive as beings with a particular past and a particular constitution, we are always and everywhere due equal respect merely as persons.

But why? Most who attempt to answer this question appeal to the idea that all human beings possess an intrinsic dignity and worth—grounded in our capacities, for example, to reason, reflect, or love—that raises us up in the order of nature. Andrea Sangiovanni rejects this predominant view and offers a radical alternative.

To understand our commitment to basic equality, Humanity without Dignity argues that we must begin with a consideration not of equality but of inequality. Rather than search for a chimerical value-bestowing capacity possessed to an equal extent by each one of us, we ought to ask: Why and when is it wrong to treat others as inferior? Sangiovanni comes to the conclusion that our commitment to moral equality is best explained by a rejection of cruelty rather than a celebration of rational capacity. He traces the impact of this fundamental shift for our understanding of human rights and the norms of anti-discrimination that underlie it.
Many moral philosophers accept the Debunking Thesis, according to which facts about natural selection provide debunking explanations for certain of our moral beliefs. I argue that philosophers who accept the Debunking Thesis beg important questions in the philosophy of biology. They assume that past selection can explain why you or I hold certain of the moral beliefs we do. A position advanced by many prominent philosophers of biology implies that this assumption is false. According to the Negative View, natural selection cannot explain the traits of individuals. Hence, facts about past selection cannot provide debunking explanations for any of our moral beliefs. The aim of this paper is to explore the conflict between the Debunking Thesis and the Negative View.
In _The Boundaries of Babel_, Andrea Moro tells the story of an encounter between two cultures: contemporary theoretical linguistics and the cognitive neurosciences. The study of language within a biological context has been ongoing for more than fifty years. The development of neuroimaging technology offers new opportunities to enrich the "biolinguistic perspective" and extend it beyond an abstract framework for inquiry. As a leading theoretical linguist in the generative tradition and also a cognitive scientist schooled in the new imaging technology, Moro is uniquely equipped to explore this encounter. Moro examines what he calls the "hidden" revolution in contemporary science: the discovery that the number of possible grammars is not infinite and that their number is biologically limited. This radical but little-discussed change in the way we look at language, he claims, will require us to rethink not just the fundamentals of linguistics and neurosciences but also our view of the human mind. Moro searches for neurobiological correlates of "the boundaries of Babel" -- the constraints on the apparent chaotic variation in human languages -- by using an original experimental design based on artificial languages. He offers a critical overview of some of the fundamental results from linguistics over the last fifty years, in particular regarding syntax, then uses these essential aspects of language to examine two neuroimaging experiments in which he took part. He describes the two neuroimaging techniques used, but makes it clear that techniques and machines do not provide interesting data without a sound theoretical framework. Finally, he discusses some speculative aspects of modern research in biolinguistics regarding the impact of the linear structure of linguistic expression on grammar, and more generally, some core aspects of language acquisition, genetics, and evolution.
In recent years, Brentano’s theory of consciousness has been systematically reassessed. The reconstruction that has received the most attention is the so-called identity reconstruction. It says that secondary consciousness and the mental phenomenon it is about are one and the same. Crucially, it has been claimed that this thesis is the only one which can make Brentano’s theory immune to what he considers the main threat to it, namely, the duplication of the primary object. In this paper, I argue that the identity reconstruction is untenable, and I defend an alternative, which I name the unity reconstruction. According to the unity reconstruction, secondary consciousness is a real part of the mental phenomenon it is about, and hence is distinct from it. I contend that this thesis does not in itself lead to the duplication of the primary object, and that what should be blamed is rather a controversial thesis about the intentional structure of secondary consciousness—a thesis which Brentano ultimately abandoned.
I argue that many of the priority rankings that have been proposed by effective altruists seem to be in tension with apparently reasonable assumptions about the rational pursuit of our aims in the face of uncertainty. The particular issue on which I focus arises from recognition of the overwhelming importance and inscrutability of the indirect effects of our actions, conjoined with the plausibility of a permissive decision principle governing cases of deep uncertainty, known as the maximality rule. I conclude that we lack a compelling decision theory that is consistent with a longtermist perspective and does not downplay the depth of our uncertainty, while also supporting orthodox effective altruist conclusions about cause prioritization.
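For concreteness, the maximality rule is standardly stated as follows, where deep uncertainty is modelled by a set P of probability functions (a representor); this is a conventional formulation, not notation taken from the paper:

```latex
% Maximality: an option X is permissible iff no alternative Y has
% strictly greater expected utility than X according to every
% probability function p in the representor \mathcal{P}:
\[ X \text{ is permissible} \iff
   \neg \exists Y \, \forall p \in \mathcal{P} :
   \mathrm{EU}_p(Y) > \mathrm{EU}_p(X) \]
```

The rule is permissive in that an option survives as long as it is undominated, which is what generates the tension with determinate priority rankings that the paper explores.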
This paper reviews the uneven history of the relationship between Anthropology and Cognitive Science over the past 30 years, from its promising beginnings, followed by a period of disaffection, on up to the current context, which may lay the groundwork for reconsidering what Anthropology and (the rest of) Cognitive Science have to offer each other. We think that this history has important lessons to teach and has implications for contemporary efforts to restore Anthropology to its proper place within Cognitive Science. The recent upsurge of interest in the ways that thought may shape and be shaped by action, gesture, cultural experience, and language sets the stage for, but so far has not fully accomplished, the inclusion of Anthropology as an equal partner.
This article revisits Miranda Fricker’s Epistemic Injustice through one specific aspect of Axel Honneth’s recognition theory. Taking a first cue from Honneth’s critique of the limitations of the “language-theoretic framework” in Habermas’ discourse ethics, it floats the idea that the two categories of Fricker’s groundbreaking analysis—testimonial and hermeneutical injustice—likewise lean towards a speech-based metric. If we accept, however, that there are also implicit, preverbal, affective, and embodied ways of knowing and channels of knowledge transmission, this warrants an expansion of Fricker’s original concept. By drawing on Honneth’s recognition theory, I argue it is possible to extend the account of epistemic injustice beyond Fricker’s two central categories, to glimpse yet another register of serious “wrong done to someone specifically in their capacity as a knower.” I define this harm as prediscursive epistemic injury and offer two central cases to illustrate this additional form of epistemic injustice.
Several attempts have been made to apply the choice-sensitive theory of distributive justice, luck egalitarianism, in the context of health and healthcare. This article presents a framework for this discussion by highlighting different normative decisions to be made in such an application, some of the objections to which luck egalitarians must provide answers and some of the practical implications associated with applying such an approach in the real world. It is argued that luck egalitarians should address distributions of health rather than healthcare, endorse an integrationist theory that combines health concerns with general distributive concerns and be pluralist in their approach. It further suggests that choice-sensitive policies need not be the result of applying luck egalitarianism in this context.
Many philosophers believe that natural selection explanations debunk our moral beliefs or would do so if moral realism were true, relying on the assumption that explanations of this kind show that moral facts play no role in explaining human moral beliefs. Here I argue that this assumption rests on a confusion of proximate and ultimate explanatory factors. Insofar as evolutionary debunking arguments hinge on the assumption that moral facts play no role in explaining human moral beliefs, these arguments fall short.
Many of our endeavors -- be they personal or communal, technological or artistic -- aim at eradicating all traces of dissatisfaction from our daily lives. They seek to cure us of our discontent in order to deliver us a fuller and flourishing existence. But what if ubiquitous pleasure and instant fulfilment make our lives worse, not better? What if discontent isn't an obstacle to the good life but one of its essential ingredients? In Propelled, Andreas Elpidorou makes a lively case for the value of discontent and illustrates how boredom, frustration, and anticipation are good for us. Weaving together stories from sources as wide-ranging as classical literature, social and cognitive psychology, philosophy, art, and video games, Elpidorou shows that these psychological states aren't unpleasant accidents of our lives. Rather, they illuminate our desires and expectations, inform us when we find ourselves stuck in unpleasant and unfulfilling situations, and motivate us to furnish our lives with meaning, interest, and value. Boredom, frustration, and anticipation aren't obstacles to our goals--they are our guides, propelling us into lives that are truly our own.
Many arguments are affected by context sensitivity, because they include sentences that have different truth conditions in different contexts. Therefore, it is natural to think that a general criterion for evaluating arguments must take context sensitivity into account. One way to give substance to that thought is provided by the definition of validity offered by David Kaplan within his theory of indexicals. However, the route indicated by Kaplan is hindered by a problem whose importance is often underestimated. This paper explores a different route, and outlines a definition of validity that does not run into that problem. Its moral is that Kaplan's definition is not the only plausible definition. This is not to say that the definition outlined is the only plausible definition or that it is correct in some absolute sense. There might be equally important problems with it that the paper does not take into account. But until such problems are found and brought up, the departure from Kaplan's route remains a viable option.
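For orientation, Kaplan's notion of validity for languages with indexicals can be rendered as follows; this is a standard textbook formulation of validity in his Logic of Demonstratives, not a quotation from the paper:

```latex
% An argument from premises \Gamma to conclusion \varphi is valid iff
% truth is preserved at every context c, where each sentence is
% evaluated at the world w_c and time t_c of c itself:
\[ \Gamma \vDash \varphi \iff \forall c :
   \bigl( \forall \psi \in \Gamma : \psi \text{ true at } (c, w_c, t_c) \bigr)
   \Rightarrow \varphi \text{ true at } (c, w_c, t_c) \]
```

The distinctive feature is that premises and conclusion are all evaluated at the same context, rather than at arbitrary circumstances of evaluation.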
Moral bioenhancement is the attempt to improve human behavioral dispositions, especially in relation to the great ethical challenges of our age. To this end, scientists have hypothesised new molecules or even permanent changes in the genetic makeup to achieve such moral bioenhancement. The philosophical debate has focused on the permissibility and desirability of that enhancement and the possibility of making it mandatory, given the positive result that would follow. However, there might be another way to enhance the overall moral behavior of us humans, namely that of targeting people with a lower propensity to trust and altruism. According to attachment theory, people who have a pattern of insecure attachment are less inclined to prosocial behavior. We know that these people are influenced by negative childhood memories: this negative emotional component may be erased or reduced by the administration of propranolol when the bad memory is reactivated, thereby improving prosocial skills. It could be objected that memory-editing might be a threat to the person’s identity and authenticity. However, if the notion of rigid identity is replaced by that of extended identity, this objection loses validity. If identity is understood as something that changes over time, moral bioenhancement through memory-editing seems indeed legitimate and even desirable.
According to a widespread view, a complete explanatory reduction of all aspects of the human mind to the electro-chemical functioning of the brain is at hand and will certainly produce vast and positive cultural, political and social consequences. However, notwithstanding the astonishing advances generated by the neurosciences in recent years for our understanding of the mechanisms and functions of the brain, the application of these findings to the specific but crucial issue of human agency can be considered a “pre-paradigmatic science” (in Thomas Kuhn’s sense). This implies that the situation is, at the same time, intellectually stimulating and methodologically confused. More specifically—because of the lack of a solid, unitary and coherent methodological framework as to how to connect neurophysiology and agency—it frequently happens that tentative approaches, bold but very preliminary claims, and even clearly flawed interpretations of experimental data are taken for granted. In this article some examples of such conceptual confusions and intellectual hubris will be presented, drawn from the most recent literature at the intersection between the neurosciences, on the one hand, and philosophy, politics and social sciences, on the other. It will also be argued that, in some of these cases, hasty and over-ambitious conclusions may produce negative social and political consequences. The general upshot will be that much still has to be clarified as to what and how the neurosciences can tell us about human agency and that, in the meantime, intellectual and methodological caution is to be recommended.