In the book, I argue that the mind can be explained computationally because it is itself computational—whether it engages in mental arithmetic, parses natural language, or processes the auditory signals that allow us to experience music. All these capacities arise from complex information-processing operations of the mind. By analyzing the state of the art in cognitive science, I develop an account of computational explanation used to explain the capacities in question.
This paper centers on the notion that internal, mental representations are grounded in structural similarity, i.e., that they are so-called S-representations. We show how S-representations may be causally relevant and argue that they are distinct from mere detectors. First, using the neomechanist theory of explanation and the interventionist account of causal relevance, we provide a precise interpretation of the claim that in S-representations, structural similarity serves as a “fuel of success”, i.e., a relation that is exploitable by the representation-using system. Then, we discuss crucial differences between S-representations and indicators or detectors, showing that, contrary to claims made in the literature, there is an important theoretical distinction to be drawn between the two.
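The exploitable-similarity idea admits a toy illustration (my own sketch, not the paper's formal account; the landmark names, distances, and helper functions are hypothetical):

```python
# A toy S-representation: relations among internal states mirror
# relations among world states, and that similarity is exploitable.

# Hypothetical terrain: pairwise distances between three landmarks.
world = {("A", "B"): 2, ("B", "C"): 3, ("A", "C"): 5}

# Internal "map": a stand-in whose distance ordering mirrors the world's.
inner = {("a", "b"): 20, ("b", "c"): 30, ("a", "c"): 50}
translate = {"A": "a", "B": "b", "C": "c"}

def mirror(pair):
    return (translate[pair[0]], translate[pair[1]])

def structurally_similar(world, inner):
    # Structural similarity here: relative orderings of distances coincide.
    return all(
        (world[p] < world[q]) == (inner[mirror(p)] < inner[mirror(q)])
        for p in world for q in world
    )

def shortest_route(representation, routes):
    # The system consults only its internal states to pick a route.
    return min(routes, key=lambda pair: representation[pair])

assert structurally_similar(world, inner)
# Because the similarity holds, the internally chosen route is also
# the shortest one in the terrain: similarity is a "fuel of success".
assert shortest_route(inner, [("a", "b"), ("a", "c")]) == ("a", "b")
```

The point of the sketch is that the system never compares world distances directly; its success is explained by the structure-preserving relation between map and terrain.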
The claim defended in the paper is that the mechanistic account of explanation can easily embrace idealization in large-scale brain simulations, and that only causally relevant detail should be present in explanatory models. The claim is illustrated with two methodologically different models: Blue Brain, used for particular simulations of the cortical column in hybrid models, and Eliasmith’s SPAUN model, which is both biologically realistic and able to explain eight different tasks. By drawing on the mechanistic theory of computational explanation, I argue that large-scale simulations require that the explanandum phenomenon be identified; otherwise, the explanatory value of such simulations is difficult to establish, and testing the model empirically by comparing its behavior with the explanandum remains practically impossible. The completeness of the explanation, and hence the explanatory value of the model, is to be assessed vis-à-vis the explanandum phenomenon, which is not to be conflated with raw observational data and may be idealized. I argue that idealizations, which include building models of a single phenomenon displayed by multi-functional mechanisms, lumping together multiple factors in a single causal variable, simplifying the causal structure of the mechanisms, and multi-model integration, are indispensable for complex systems such as brains; otherwise, the model may be as complex as the explanandum phenomenon, which would make it prone to the so-called Bonini paradox. I conclude by enumerating the dimensions of empirical validation of explanatory models according to the new mechanism, given in the form of a “checklist” for modelers.
In this paper, I argue that even if the Hard Problem of Content, as identified by Hutto and Myin, is important, it was already solved in naturalized semantics, and satisfactory solutions to the problem do not rely merely on the notion of information as covariance. I point out that Hutto and Myin have double standards for linguistic and mental representation, which leads to a peculiar inconsistency. Were they to apply the same standards to basic and linguistic minds, they would either have to embrace representationalism or turn to semantic nihilism, which is, as I argue, an unstable and unattractive position. Hence, I conclude, their book does not offer an alternative to representationalism. At the same time, it reminds us that representational talk in cognitive science cannot be taken for granted and that information is different from mental representation. Although this claim is not new, Hutto and Myin defend it forcefully and elegantly.
In this paper, we argue that several recent ‘wide’ perspectives on cognition (embodied, embedded, extended, enactive, and distributed) are only partially relevant to the study of cognition. While these wide accounts override traditional methodological individualism, the study of cognition has already progressed beyond these proposed perspectives towards building integrated explanations of the mechanisms involved, including not only internal submechanisms but also interactions with others, groups, cognitive artifacts, and their environment. The claim is substantiated with reference to recent developments in the study of “mindreading” and debates on emotions. We claim that the current practice in cognitive (neuro)science has undergone, in effect, a silent mechanistic revolution, and has turned from initial binary oppositions and abstract proposals towards the integration of wide perspectives with the rest of the cognitive (neuro)sciences.
In this paper, I argue that computationalism is a progressive research tradition. Its metaphysical assumptions are that nervous systems are computational, and that information processing is necessary for cognition to occur. First, the primary reasons why information processing should explain cognition are reviewed. Then I argue that early formulations of these reasons are outdated. However, by relying on the mechanistic account of physical computation, they can be recast in a compelling way. Next, I contrast two computational models of working memory to show how modeling has progressed over the years. The methodological assumptions of new modeling work are best understood in the mechanistic framework, which is evidenced by the way in which models are empirically validated. Moreover, the methodological and theoretical progress in computational neuroscience vindicates the new mechanistic approach to explanation, which, at the same time, justifies the best practices of computational modeling. Overall, computational modeling is deservedly successful in cognitive science. Its successes are related to deep conceptual connections between cognition and computation. Computationalism is not only here to stay; it grows stronger every year.
Predictive processing (PP) has been repeatedly presented as a unificatory account of perception, action, and cognition. In this paper, we argue that this is premature: As a unifying theory, PP fails to deliver general, simple, homogeneous, and systematic explanations. By examining its current trajectory of development, we conclude that PP remains only loosely connected both to its computational framework and to its hypothetical biological underpinnings, which makes its fundamentals unclear. Instead of offering explanations that refer to the same set of principles, we observe systematic equivocations in PP‐based models, or outright contradictions with its avowed principles. To make matters worse, PP‐based models are seldom empirically validated, and they are frequently offered as mere just‐so stories. The large number of PP‐based models is thus not evidence of theoretical progress in unifying perception, action, and cognition. On the contrary, we maintain that the gap between theory and its biological and computational bases contributes to the arrested development of PP as a unificatory theory. Thus, we urge the defenders of PP to focus on its critical problems instead of offering mere re‐descriptions of known phenomena, and to validate their models against possible alternative explanations that stem from different theoretical assumptions. Otherwise, PP will ultimately fail as a unified theory of cognition.
The purpose of this paper is to present a general mechanistic framework for analyzing causal representational claims, and to offer a way to distinguish genuinely representational explanations from those that invoke representations for honorific purposes. It is usually agreed that rats are capable of navigation because they maintain a cognitive map of their environment. Exactly how and why their neural states give rise to mental representations is a matter of ongoing debate. I will show that anticipatory mechanisms involved in rats’ evaluation of possible routes give rise to satisfaction conditions of contents, and this is why they are representationally relevant for explaining and predicting rats’ behavior. I argue that a naturalistic account of satisfaction conditions of contents answers the most important objections of antirepresentationalists.
Cognitive science is an interdisciplinary conglomerate of various research fields and disciplines, which increases the risk of fragmentation of cognitive theories. However, while most previous work has focused on theoretical integration, some kinds of integration may turn out to be monstrous, or result in superficially lumped and unrelated bodies of knowledge. In this paper, I distinguish theoretical integration from theoretical unification, and propose an analysis of the dimensions of theoretical unification. Moreover, two research strategies that are supposed to lead to unification are analyzed in terms of the mechanistic account of explanation. Finally, I argue that theoretical unification is not an absolute requirement from the mechanistic perspective, and that strategies aiming at unification may be premature in fields where there are multiple conflicting explanatory models.
In this paper, I show how semantic factors constrain the understanding of the computational phenomena to be explained so that they help build better mechanistic models. In particular, understanding what cognitive systems may refer to is important in building better models of cognitive processes. For that purpose, a recent study of some phenomena in rats that are capable of ‘entertaining’ future paths (Pfeiffer and Foster 2013) is analyzed. The case shows that the mechanistic account of physical computation may be complemented with semantic considerations, and in many cases, it actually should.
In this article, after presenting the basic idea of causal accounts of implementation and the problems they are supposed to solve, I sketch the model of computation preferred by Chalmers and argue that it is too limited to do full justice to computational theories in cognitive science. I also argue that it does not suffice to replace Chalmers’ favorite model with a better abstract model of computation; it is necessary to acknowledge the causal structure of physical computers that is not accommodated by the models used in computability theory. Additionally, an alternative mechanistic proposal is outlined.
In this paper, I focus on a problem related to teleological theories of content, namely: which notion of function makes content causally relevant? It has been claimed that some functional accounts of content make it causally irrelevant, or epiphenomenal; in which case, such notions of function could no longer act as the pillar of naturalized semantics. By looking closer at biological questions about behavior, I argue that past discussion has been oriented towards an ill-posed question. What I defend is a Very Boring Hypothesis: depending on the representational phenomenon and the explanatory question, different aspects might be important, and it is difficult to say a priori which ones these might be. There are multiple facets to biological functionality and causality relevant for explaining representational phenomena, and ignoring them will lead to unmotivated simplifications. In addition, accounting for different facets of functionality helps dispense with intuition-based specifications of cognitive phenomena.
In this paper, the author reviews the typical objections against the claim that brains are computers or, to be more precise, information-processing mechanisms. By showing that practically all the popular objections are based on uncharitable interpretations of the claim, he argues that the claim is likely to be true, relevant to contemporary cognitive science, and non-trivial.
In this paper, we defend a novel, multidimensional account of representational unification, which we distinguish from integration. The dimensions of unity are simplicity, generality and scope, non-monstrosity, and systematization. In our account, unification is a graded property. The account is used to investigate the issue of how research traditions contribute to representational unification, focusing on embodied cognition in cognitive science. Embodied cognition contributes to unification even if it fails to offer a grand unification of cognitive science. The study of this failure shows that unification, contrary to what defenders of mechanistic explanation claim, is an important mechanistic virtue of research traditions.
In this paper, an account of theoretical integration in cognitive (neuro)science from the mechanistic perspective is defended. It is argued that mechanistic patterns of integration can be better understood in terms of constraints on representations of mechanisms, not just on the space of possible mechanisms, as previous accounts of integration had it. This way, integration can be analyzed in more detail with the help of the constraint-satisfaction account of coherence between scientific representations. In particular, the account has the resources to talk of idealizations and research heuristics employed by researchers to combine separate results and theoretical frameworks. The account is subsequently applied to an example of successful integration in the research on the hippocampus and memory, and to a failure of integration in the research on mirror neurons as purportedly explanatory of sexual orientation.
In this paper, I review the objections against the claim that brains are computers, or, to be precise, information-processing mechanisms. By showing that practically all the popular objections are either based on an uncharitable interpretation of the claim or simply wrong, I argue that the claim is likely to be true, relevant to contemporary cognitive (neuro)science, and non-trivial.
In this paper, the role of the environment and physical embodiment of computational systems for explanatory purposes will be analyzed. In particular, the focus will be on cognitive computational systems, understood in terms of mechanisms that manipulate semantic information. It will be argued that the role of the environment has long been appreciated, in particular in the work of Herbert A. Simon, which has inspired the mechanistic view on explanation. From Simon’s perspective, the embodied view on cognition seems natural, but it is nowhere near as critical as its proponents suggest. The only point of difference between Simon and embodied cognition is the significance of body-based off-line cognition; however, it will be argued that it is notoriously over-appreciated in the current debate. The new mechanistic view on explanation stresses that even if it is critical to situate a mechanism in its environment and study its physical composition, or realization, not all detail counts, and some bodily features of cognitive systems should be left out of explanations.
Replicability and reproducibility of computational models has been somewhat understudied by “the replication movement.” In this paper, we draw on methodological studies into the replicability of psychological experiments and on the mechanistic account of explanation to analyze the functions of model replications and model reproductions in computational neuroscience. We contend that model replicability, or independent researchers' ability to obtain the same output using original code and data, and model reproducibility, or independent researchers' ability to recreate a model without original code, serve different functions and fail for different reasons. This means that measures designed to improve model replicability may not enhance (and, in some cases, may actually damage) model reproducibility. We claim that although both are undesirable, low model reproducibility poses more of a threat to long-term scientific progress than low model replicability. In our opinion, low model reproducibility stems mostly from authors' omitting to provide crucial information in scientific papers and we stress that sharing all computer code and data is not a solution. Reports of computational studies should remain selective and include all and only relevant bits of code.
Predictive processing models of psychopathologies are not explanatorily consistent with the present account of abstract thought. These models are based on latent variables that probabilistically map the structure of the world. As such, they cannot be informed by a representational ontology based on mental objects and states. What remains is merely a terminological affinity between subjective and informational uncertainty.
In most accounts of realization of computational processes by physical mechanisms, it is presupposed that there is a one-to-one correspondence between the causally active states of the physical process and the states of the computation. Yet such proposals either stipulate that only one model of computation is implemented, or they do not reflect upon the variety of models that could be implemented physically. In this paper, I claim that mechanistic accounts of computation should allow for a broad variation of models of computation. In particular, some non-standard models should not be excluded a priori. The relationship between mathematical models of computation and mechanistically adequate models is studied in more detail.
The purpose of this paper is to argue against the claim that morphological computation is substantially different from other kinds of physical computation. I show that some (but not all) purported cases of morphological computation do not count as specifically computational, and that those that do are solely physical computational systems. These latter cases are not, however, specific enough: all computational systems, not only morphological ones, may (and sometimes should) be studied in various ways, including their energy efficiency, cost, reliability, and durability. Second, I critically analyze the notion of “offloading” computation to the morphology of an agent or robot, by showing that, literally, computation is sometimes not offloaded but simply avoided. Third, I point out that while the morphology of any agent is indicative of the environment that it is adapted to, or informative about that environment, it does not follow that every agent has access to its morphology as the model of its environment.
In this paper, we focus on the development of geometric cognition. We argue that to understand how geometric cognition has been constituted, one must appreciate not only individual cognitive factors, such as phylogenetically ancient and ontogenetically early core cognitive systems, but also the social history of the spread and use of cognitive artifacts. In particular, we show that the development of Greek mathematics, enshrined in Euclid’s Elements, was driven by the use of two tightly intertwined cognitive artifacts: lettered diagrams and linguistic formulae. Together, these artifacts formed the professional language of geometry. In this respect, the case of Greek geometry clearly shows that explanations of geometric reasoning have to go beyond the confines of methodological individualism to account for how the distributed practice of artifact use has stabilized over time. This practice, as we suggest, has also contributed heavily to the understanding of what mathematical proof is; classically, it has been assumed that proofs are not merely deductively correct but also remain invariant over various individuals sharing the same cognitive practice. Cognitive artifacts in Greek geometry constrained the repertoire of admissible inferential operations, which made these proofs inter-subjectively testable and compelling. By focusing on the cognitive operations on artifacts, we also stress that mental mechanisms that contribute to these operations are still poorly understood, in contrast to those mechanisms which drive symbolic logical inference.
Explanations in cognitive science and computational neuroscience rely predominantly on computational modeling. Although this scientific practice is systematic, and there is little doubt about the empirical value of numerous models, the methodological account of computational explanation is not up to date. The current chapter offers a systematic account of computational explanation in cognitive science and computational neuroscience within a mechanistic framework. The account is illustrated with a short case study of the modeling of the mirror neuron system in terms of predictive coding.
In this paper, I want to deal with the triviality threat to computationalism. On the one hand, the controversial and vague claim that cognition involves computation is still denied. On the other hand, contemporary physicists and philosophers alike claim that all physical processes are indeed computational or algorithmic. The latter claim would justify computationalism by making it utterly trivial. I will show that even if these two claims were true, computationalism would not have to be trivial.
Herbert A. Simon is well known for his account of bounded rationality. Whereas classical economics idealized economic agency and framed rational choice in terms of decision theory, Simon insisted that agents need not be optimal in their choices. They might be mere satisficers, i.e., attain good-enough goals rather than optimal ones. At the same time, behaviorally as well as computationally, bounded rationality is much more realistic.
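The contrast between optimizing and satisficing can be sketched as a toy decision procedure (my own illustration with hypothetical payoffs, not Simon's formalism):

```python
# An optimizer scans all options for the maximum; a satisficer accepts
# the first option that meets an aspiration level and stops searching.

def optimize(options, utility):
    # Exhaustive search: examines every option.
    return max(options, key=utility)

def satisfice(options, utility, aspiration):
    for option in options:           # options examined in order encountered
        if utility(option) >= aspiration:
            return option            # good enough: stop searching
    return None                      # no acceptable option found

offers = [90, 120, 105, 200]         # hypothetical payoffs
assert optimize(offers, lambda x: x) == 200
assert satisfice(offers, lambda x: x, aspiration=100) == 120
```

The satisficer settles for 120 despite 200 being available later in the list; that computational economy, not suboptimality per se, is the point of the bounded-rationality picture.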
In this chapter, I argue that some aspects of cognitive phenomena cannot be explained computationally. In the first part, I sketch a mechanistic account of computational explanation that spans multiple levels of organization of cognitive systems. In the second part, I turn my attention to what cannot be explained about cognitive systems in this way. I argue that information-processing mechanisms are indispensable in explanations of cognitive phenomena, and this vindicates the computational explanation of cognition. At the same time, it has to be supplemented with other explanations to make the mechanistic explanation complete, and that naturally leads to explanatory pluralism in cognitive science. The price to pay for pluralism, however, is the abandonment of the traditional autonomy thesis asserting that cognition is independent of implementation details.
The paper defends the claim that the mechanistic explanation of information processing is the fundamental kind of explanation in cognitive science. These mechanisms are complex organized systems whose functioning depends on the orchestrated interaction of their component parts and processes. A constitutive explanation of every mechanism must appeal both to its environment and to the role the mechanism plays in it. This role has traditionally been dubbed competence. To fully explain how this role is played, it is necessary to explain the information processing inside the mechanism embedded in the environment. The most usual explanation on this level has the form of a computational model, for example a software program or a trained artificial neural network. However, this is not the end of the explanatory chain. What is left to be explained is how the program is realized (or what processes are responsible for information processing in the artificial neural network). By using two dramatically different examples from the history of cognitive science, I show the multi-level structure of explanations in cognitive science. These examples are (1) the explanation of human problem solving as proposed by A. Newell & H. Simon; and (2) the explanation of cricket phonotaxis via robotic models by B. Webb.
I discuss whether there are lessons for philosophical inquiry into the nature of simulation to be learnt from the practical methodology of reengineering. I will argue that reengineering serves a purpose similar to that of simulations in theoretical sciences such as computational neuroscience or neurorobotics, and that the procedures and heuristics of reengineering help to develop solutions to outstanding problems of simulation.
Many philosophers use “physicalism” and “naturalism” interchangeably. In this paper, I will distinguish ontological naturalism from physicalism. While broad versions of physicalism are compatible with naturalism, naturalism doesn't have to be committed to strong versions of physical reductionism, so it cannot be defined as equivalent to it. Instead of relying on the notion of ideal physics, naturalism can refer to the notion of ideal natural science, which doesn't imply the unity of science. The notion of ideal natural science, as well as the notion of ideal physics, will be vindicated. I will briefly explicate the notion of ideal natural science, and define ontological naturalism based on it.
The goal of the article is to show that a complete answer to the title question can be given only in the context of the natural sciences. We believe that the cognitive sciences are the most reliable source of information about mental processes. Making use of their achievements, we present a series of criteria for possessing a mind. We distinguish between many kinds of minds. We attempt to outline the conditions that must be fulfilled by an adequate model of the mind. In our opinion, such a model must make use of all available empirical data and of scientific theories constructed on the basis of such data. From the point of view of philosophy, the requirements placed upon such theories by ontology are especially important. Their reconstruction can serve as prolegomena to a future integrated ontology of the mind. We emphasize that the mind is not an independent thing. In speaking about the mind, we have in mind states, events, processes, functions, and dispositions that are derivative with respect to processes of a lower order. We assume that an adequate model of the mind is multi-dimensional, taking into account several mutually interacting levels of organization. We interpret the psychophysical problem as one of the relation between levels of organization, a relation that is constitutive for the actualization of mental states. Psychophysical relations turn out to be a particular case of the broader issue of relations between levels. In carrying out a preliminary conceptualization, we make use of the notion of emergence; this is why our position, which is mainly in opposition to substantial dualism, may be termed emergent monism or naturalism.
In this paper, I suggest that the notion of module explicitly defined by Peter Carruthers in The Architecture of the Mind (Carruthers 2006) is not really in use in the book. Instead, a more robust notion seems to be actually in play. The more robust notion, albeit implicitly assumed, seems to be far more useful for making claims about the modularity of mind; otherwise, the claims would become trivial. This robust notion will be reconstructed and improved upon by putting it into a more general framework of mental architecture. I defend the view that modules are the outcome of structural rather than functional decomposition and that they should be conceived as nearly decomposable systems.
Is the mathematical function being computed by a given physical system determined by the system’s dynamics? This question is at the heart of the indeterminacy of computation phenomenon (Fresco et al. [unpublished]). A paradigmatic example is a conventional electrical AND-gate that is often said to compute conjunction, but it can just as well be used to compute disjunction. Despite the pervasiveness of this phenomenon in physical computational systems, it has been discussed in the philosophical literature only indirectly, mostly with reference to the debate over realism about physical computation and computationalism. A welcome exception is Dewhurst’s () recent analysis of computational individuation under the mechanistic framework. He rejects the idea of appealing to semantic properties for determining the computational identity of a physical system. But Dewhurst seems to be too quick to pay the price of giving up the notion of computational equivalence. We aim to show that the mechanist need not pay this price. The mechanistic framework can, in principle, preserve the idea of computational equivalence even between two different enough kinds of physical systems, say, electrical and hydraulic ones.
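The gate example can be made concrete with a small sketch (my own illustration; the voltage-to-bit mappings are hypothetical labeling conventions, not drawn from the paper): the same physical behavior counts as AND under one convention and as OR under the dual one.

```python
# One physical device, one voltage-level behavior.
# Hypothetical gate: output is HIGH only when both inputs are HIGH.
def gate(v1, v2):
    return "HIGH" if v1 == "HIGH" and v2 == "HIGH" else "LOW"

# Two labeling conventions for the same voltages:
to_bit_1 = {"HIGH": 1, "LOW": 0}   # standard convention
to_bit_2 = {"HIGH": 0, "LOW": 1}   # dual convention

def interpret(mapping):
    # Truth table of the gate as seen through a given convention.
    table = {}
    for v1 in ("LOW", "HIGH"):
        for v2 in ("LOW", "HIGH"):
            table[(mapping[v1], mapping[v2])] = mapping[gate(v1, v2)]
    return table

AND = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
OR  = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1}

# Same dynamics, two computational identities:
assert interpret(to_bit_1) == AND
assert interpret(to_bit_2) == OR
```

Nothing about the device's dynamics changes between the two readings; only the mapping from physical states to computational states does, which is precisely why the dynamics alone seem not to determine the function computed.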
Naturalism is currently the most vibrantly developing approach to philosophy, with naturalised methodologies being applied across all the philosophical disciplines. One of the areas naturalism has been focussing upon is the mind, traditionally viewed as a topic hard to reconcile with the naturalistic worldview. A number of questions have been pursued in this context. What is the place of the mind in the world? How should we study the mind as a natural phenomenon? What is the significance of cognitive science research for philosophical debates? In this book, philosophical questions about the mind are asked in the context of recent developments in cognitive science, evolutionary theory, psychology, and the project of naturalisation. Much of the focus is upon what we have learned by studying natural mental mechanisms as well as designing artificial ones. In the case of natural mental mechanisms, this includes consideration of such issues as the significance of deficits in these mechanisms for psychiatry. The significance of the evolutionary context for mental mechanisms, as well as questions regarding rationality and wisdom, is also explored. Mechanistic and functional models of the mind are used to throw new light on discussions regarding issues of explanation, reduction and the realisation of mental phenomena. Finally, naturalistic approaches are used to look anew at such traditional philosophical issues as the correspondence of mind to world and the presuppositions of scientific research.
Multiple realizability (MR) is traditionally conceived of as a feature of computational systems, and has been used to argue for the irreducibility of higher-level theories. I will show that there are several ways a computational system may be seen to display MR. These ways correspond to (at least) five ways one can conceive of the function of the physical computational system. However, they do not match common intuitions about MR. I show that MR is deeply interest-related, and for this reason, difficult to pin down exactly. I claim that MR is of little importance for defending computationalism, and argue that computationalism should rather appeal to organizational invariance or substrate neutrality of computation, which are much more intuitive but cannot support strong antireductionist arguments.
Artificial models of cognition serve different purposes, and their use determines the way they should be evaluated. There are also models that do not represent any particular biological agents, and there is controversy as to how they should be assessed. At the same time, modelers do evaluate such models as better or worse. There is also a widespread tendency to call for publicly available standards of replicability and benchmarking for such models. In this paper, I argue that proper evaluation of models does not depend on whether they target real biological agents or not; instead, the standards of evaluation depend on the use of models rather than on the reality of their targets. I discuss how models are validated depending on their use and argue that all-encompassing benchmarks for models may be well beyond reach.
The debate between the defenders of explanatory unification and explanatory pluralism has been ongoing since the beginning of cognitive science and is one of the central themes of its philosophy. Does cognitive science need a grand unifying theory? Should explanatory pluralism be embraced instead? Or are local integrative efforts needed? What are the advantages of explanatory unification as compared to the benefits of explanatory pluralism? These questions, among others, are addressed in this special issue of Synthese. In the introductory paper, we discuss the background of these questions, distinguishing integrative theorizing from building unified theories. On the one hand, integrative efforts involve collaboration between various disciplines, fields, approaches, or theories. These efforts may be quite temporary, without establishing any long-term institutionalized fields or disciplines, but they may also contribute to developing new interfield theories. On the other hand, unification can rely on developing complete theories of the mechanisms and representations underlying all cognition, as in Newell’s “unified theories of cognition”, or may appeal to grand principles, such as predictive coding. We also show that unification in contemporary cognitive science goes beyond reductive unity and may involve various forms of joint effort and division of explanatory labor. This conclusion is one of the themes running through the contributions to the special issue.
It would be hard to find a more fervent advocate of the position that computers are of profound significance to philosophy than Aaron Sloman. Yet he is not a stereotypical proponent of Artificial Intelligence (AI). Far from it; in his writings, he undermines several popular convictions of functionalists. Through his drafts and polemics, Sloman exerts quite a substantial influence on the philosophy of Artificial Intelligence. Sloman's paper “Evolution: The Computer Systems Engineer Designing Minds” presents the bold hypothesis that the evolution of the human mind involved the development of several dozen virtual machines that support various forms of self-monitoring. This, in turn, helps explain different features of our cognitive functioning.
Nietzsche's treatment of Epicurus is an interesting example of philosophical hermeneutics. Epicurus has been notoriously misinterpreted, claims Nietzsche, because his mask has been taken for his true face. Traditionally, Epicurus is presented as a utilitarian or hedonist avant la lettre. This is a simplification motivated by a desire to deprecate his philosophy. To Nietzsche, Epicurus was “an idyllic hero”, a teacher with aristocratic predilections and his own concept of the good, critical of the traditional form of religion and of the “pre-existent form of Christianity”. As a hedonist he was much less convincing, as he was afraid of both pain and pleasure. He assumed the mask of an epicure in order to hide his true self. Nietzsche warns us over and over again not to trust the traditional interpretation of Epicurus and urges us to penetrate beyond the veil of a stylish disguise.