We review evidence regarding Tomasello et al.'s proposal that individuals with autism understand intentions but fail socially because of a lack of motivation to share intentions. We argue that they are often motivated to understand others but fail because they lack the perceptual integration skills that are needed to apply their basically intact theory of mind skills in complex social situations.
This article studies institutional investor allocations to the socially responsible asset class. We propose that two elements influence socially responsible institutional investment in private equity: internal organizational structure and internationalization. We study socially responsible investments by Dutch institutional investors in private equity funds, and compare socially responsible investment across different asset classes and different types of institutional investors (banks, insurance companies, and pension funds). The data indicate that socially responsible investment in private equity is 40–50% more common when the decision to implement such an investment plan is centralised with a single chief investment officer. Socially responsible investment in private equity is also more common among institutional investors with a greater international investment focus, and less common among fund-of-fund private equity investments.
In this contribution, I will develop a comprehensive tool for the reconstruction and evaluation of argumentation from expert opinion. This is done by analyzing and then combining two dialectical accounts of this type of argumentation. Walton’s account of the ‘appeal to expert opinion’ provides a number of useful, but fairly unsystematic suggestions for critical questions pertaining to argumentation from expert opinion. The pragma-dialectical account of ‘argumentation from authority’ offers a clear and systematic, but fairly general framework for the reconstruction and evaluation of this type of argumentation. The tool is developed by incorporating Walton’s critical questions into a pragma-dialectical framework.
Jean Wagemans: Redelijkheid en overredingskracht van argumentatie. Een historisch-filosofische studie over de combinatie van het dialectische en het retorische perspectief op argumentatie in de pragma-dialectische argumentatietheorie (Reasonableness and Persuasiveness of Argumentation: A Historical-Philosophical Study on the Combination of the Dialectical and the Rhetorical Perspective on Argumentation in Pragma-Dialectical Argumentation Theory). Reviewed by Paul Gillaerts, Lessius University College, Antwerp, Belgium. Argumentation, Volume 25, Number 1, pp. 123–125, DOI 10.1007/s10503-010-9197-0.
The relationships between logic and natural language are manifold. On the one hand, logic is a theory of argumentation, proving, and giving reasons, and such activities are primarily carried out in natural language. This means that logic is, in a certain loose sense, about natural language. On the other hand, logic has found it useful to develop its own linguistic means, which sometimes in a sense compete with those of natural language. This has led to the situation where the systems of logic can be taken as interesting "models" of various aspects of natural language. The alliance of logic and linguistics has flowered especially from the beginning of the seventies, when scholars like Montague, Lewis, Cresswell, Partee and others showed how the semantics of natural language can be explicated with the help of certain suitable logical calculi and the corresponding model theory. (Montague went so far as to claim that, in view of this, there is no principal difference between natural and formal languages; but this is, as far as I can see, rather misleading.) Since that time, the interdisciplinary movement of formal semantics (associating not only linguists and logicians, but also philosophers, computer scientists, cognitive psychologists and others) has yielded a rich repertoire of formal theories of natural language, some of them (like Hintikka's game-theoretical semantics or the dynamic logic of Groenendijk and Stokhof) being based directly on logic, others (like the situation semantics of Barwise and Perry or the DRT of Kamp) exploiting different formal strategies. Moreover, although the enterprise of formal semantics (i.e. of modeling natural language semantics by means of certain formal structures) seems to be the principal point of contact between linguistics and logic, there are also other cooperative enterprises.
One of the most fruitful of these seems to be the logical analysis of syntax, which has resulted from the elaboration of what was originally called categorial grammar. (However, even this enterprise can be seen as importantly stimulated by Montague.) All in all, the region in which logic and theoretical linguistics overlap has grown both in size and fertility.
Many Dutch words (dans, zet, oordeel, assertie, ...) denote both an act and the result of that act. The phenomenon occurs in virtually all languages, and it appears that the human cognitive apparatus has little trouble switching between a static perspective that sees results and a dynamic perspective oriented primarily toward the processes that led to those results. Philosophy has more difficulty switching between a static and a dynamic perspective. After a promising start, in which Heraclitus said that everything flows while Zeno and Parmenides proved that, on the contrary, everything stands still, the static approach nonetheless seems to have gained the upper hand. Judgments concern static propositions, and reasoning finds its expression in static proofs.
Continuous sedation until death (CSD), the act of reducing or removing the consciousness of an incurably ill patient until death, often provokes medical-ethical discussions in the opinion sections of medical and nursing journals. Some argue that CSD is morally equivalent to physician-assisted death (PAD), that it is a form of “slow euthanasia.” A qualitative thematic content analysis of opinion pieces was conducted to describe and classify arguments that support or reject a moral difference between CSD and PAD. Arguments pro and contra a moral difference refer basically to the same ambiguous themes, namely intention, proportionality, withholding artificial nutrition and hydration, and removing consciousness. This demonstrates that the debate is first and foremost a semantic rather than a factual dispute, focusing on the normative framework of CSD. Given the prevalent ambiguity, the debate on CSD appears to be a classical symbolic struggle for moral authority. (Sam Rys, Reginald Deschepper, Freddy Mortier, Luc Deliens, Douglas Atkinson, and Johan Bilsen, Journal of Bioethical Inquiry, DOI 10.1007/s11673-012-9369-8.)
Jonathan Dancy works within almost all fields of philosophy but is best known as the leading proponent of moral particularism. Particularism challenges “traditional” moral theories, such as Contractualism, Kantianism and Utilitarianism, in that it denies that moral thought and judgement rely upon, or are made possible by, a set of more or less well-defined, hierarchical principles. During the summer of 2006, the Philosophy Departments of Lund University (Sweden) and the University of Reading (England) began a series of exchanges to take place every other year, alternating between the departments. Andreas Lind and Johan Brännmark arranged to meet Dancy during the first meeting in Lund to talk about questions regarding particularism, moral theory and the shape of the analytical tradition. The major part of the conversation is printed below.
Labeling of food consumption is related to food safety, food quality, and environmental and social concerns. Future politics of food will be based on a redefinition of commodity food consumption as an expression of citizenship. “Citizen-consumers” realize that they can use their buying power to develop a new terrain of social agency and political action. This takes for granted kinds of moral selfhood in which human responsibility is bound into human agency based on knowledge and recognition, and it requires new kinds of food labeling practices. Existing research on consumers’ preferences often fails to recognize the full complexity of the motivations and intentions through which the identity of the moral self is built up in relation to food consumption practices. For citizens, not only food production practices matter but also the impact of what we eat on who we are and the ecological footprint of our foodstuffs. Two major drivers for this are the idea that we ourselves have to take care of our own bodies (“We are what we eat”) and the idea that we are responsible for Planet Earth. Since both obesity and climate change have become major public concerns, governments too develop an increasing interest in defining how citizens ought to behave as consumers and how retailers and producers should facilitate such responsible behavior. Since governments are supposed to defend the “bonum commune,” e.g., the public health of their citizens and a sound common future for all, a new consensus on “appropriate” consumption choices has to be found, balancing beneficence and autonomy. (Johan De Tavernier, Journal of Agricultural and Environmental Ethics, DOI 10.1007/s10806-011-9366-7.)
Sitting in the office of a distinguished philosopher of language recently, I watched him lean back (somewhat precariously) in his chair, look at the ceiling, and sigh: “Johan, we both write all this stuff about information, context, and communication; but isn’t the only time you really feel that you are making progress when you resolutely close your eyes, and shut out the world and the others?” I appreciated his point, and indeed, in most spheres of life on this planet, “l’Enfer” is most definitely “Les Autres”.
Arne Johan Vetlesen argues that to do evil is to intentionally inflict pain on another human being, against his or her will, and cause serious and foreseeable harm. Vetlesen investigates why and in what sort of circumstances such a desire arises, and how it is channeled, or exploited, into collective evildoing. He argues that such evildoing, pitting whole groups against each other, springs from a combination of character, situation, and social structure. Vetlesen shows how closely perpetrators, victims, and bystanders interact, and how aspects of human agency are recognized, denied, and projected by different agents.
John Locke’s account of personal identity is usually thought to have been proved false by Thomas Reid’s simple ‘Gallant Officer’ argument. Locke is traditionally interpreted as holding that your having memories of a past person’s thoughts or actions is necessary and sufficient for your being identical to that person. This paper argues that the traditional memory interpretation of Locke’s account is mistaken and defends a memory continuity view according to which a sequence of overlapping memories is necessary and sufficient for personal identity. On this view Locke is not vulnerable to the Gallant Officer argument.
Despite their divergent metaphysical assumptions, Reformed and evolutionary epistemologists have converged on the notion of proper basicality. Where Reformed epistemologists appeal to God, who has designed the mind in such a way that it successfully aims at the truth, evolutionary epistemologists appeal to natural selection as a mechanism that favors truth-preserving cognitive capacities. This paper investigates whether Reformed and evolutionary epistemological accounts of theistic belief are compatible. We will argue that their chief incompatibility lies in the noetic effects of sin and what may be termed the noetic effects of evolution, systematic tendencies wherein human cognitive faculties go awry. We propose a reconceptualization of the noetic effects of sin to mitigate this tension.
In recent controversies about Intelligent Design Creationism (IDC), the principle of methodological naturalism (MN) has played an important role. In this paper, an often neglected distinction is made between two different conceptions of MN, each with its respective rationale and with a different view on the proper role of MN in science. According to one popular conception, MN is a self-imposed or intrinsic limitation of science, which means that science is simply not equipped to deal with claims of the supernatural (Intrinsic MN or IMN). Alternatively, we will defend MN as a provisory and empirically grounded attitude of scientists, which is justified in virtue of the consistent success of naturalistic explanations and the lack of success of supernatural explanations in the history of science (Provisory MN or PMN). Science does have a bearing on supernatural hypotheses, and its verdict is uniformly negative. We will discuss five arguments that have been proposed in support of IMN: the argument from the definition of science, the argument from lawful regularity, the science stopper argument, the argument from procedural necessity, and the testability argument. We conclude that IMN, because of its philosophical flaws, proves to be an ill-advised strategy to counter the claims of IDC. Evolutionary scientists are on firmer ground if they discard supernatural explanations on purely evidential grounds, instead of ruling them out by philosophical fiat.
Our ability for scientific reasoning is a byproduct of cognitive faculties that evolved in response to problems related to survival and reproduction. Does this observation increase the epistemic standing of science, or should we treat scientific knowledge with suspicion? The conclusions one draws from applying evolutionary theory to scientific beliefs depend to an important extent on the validity of evolutionary arguments (EAs) or evolutionary debunking arguments (EDAs). In this paper we show through an analytical model that cultural transmission of scientific knowledge can lead toward representations that are more truth-approximating or more efficient at solving science-related problems under a broad range of circumstances, even under conditions where human cognitive faculties would be further off the mark than they actually are.
Ever since Chomsky, language has become the paradigmatic example of an innate capacity. Infants of only a few months old are aware of the phonetic structure of their mother tongue, such as stress-patterns and phonemes. They can already discriminate words from non-words and acquire a feel for the grammatical structure months before they voice their first word. Language reliably develops not only in the face of poor linguistic input, but even without it. In recent years, several scholars have extended this uncontroversial view into the stronger claim that natural language is a human-specific adaptation. As I shall point out, this position is more problematic because of a lack of conceptual clarity over what human-specific cognitive adaptations are, and how they relate to modularity, the notion that mental phenomena arise from several domain-specific cognitive structures. The main aim of this paper is not to discuss whether or not language is an adaptation, but rather, to examine the concept of modularity with respect to the evolution and development of natural language.
In a recent paper Johan van Benthem reviews earlier work done by himself and colleagues on ‘natural logic’. His paper makes a number of challenging comments on the relationships between traditional logic, modern logic and natural logic. I respond to his challenge by drawing what I think are the most significant lines dividing traditional logic from modern. The leading difference is in the way logic is expected to be used for checking arguments. For traditionals the checking is local, i.e. separate for each inference step. Between inference steps, several kinds of paraphrasing are allowed. Today we formalise globally: we choose a symbolisation that works for the entire argument, and thus we eliminate intuitive steps and changes of viewpoint during the argument. Frege and Peano recast the logical rules so as to make this possible. I comment also on the traditional assumption that logical processing takes place at the top syntactic level, and I question Johan’s view that natural logic is ‘natural’.
Providing a possible worlds semantics for a logic involves choosing a class of possible worlds models, and setting up a truth definition connecting formulas of the logic with statements about these models. This scheme is so flexible that a danger arises: perhaps, any (reasonable) logic whatsoever can be modelled in this way. Thus, the enterprise would lose its essential tension. Fortunately, it may be shown that the so-called incompleteness-examples from modal logic resist possible worlds modelling, even in the above wider sense. More systematically, we investigate the interplay of truth definitions and model conditions, proving a preservation theorem characterizing those types of truth definition which generate the minimal modal logic.
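The two ingredients of this scheme, a class of models and a truth definition, can be made concrete in a few lines of code. The following is a minimal illustrative sketch for the standard truth definition of basic modal logic; the formula encoding and all names (`holds`, `worlds`, `R`, `V`) are assumptions made for this sketch, not the paper's notation:

```python
# A possible worlds model is a set of worlds, an accessibility relation R,
# and a valuation V mapping atoms to the worlds where they hold.
# The truth definition connects formulas with statements about that model.

from typing import Set, Dict, Tuple

Formula = tuple  # e.g. ("atom", "p"), ("not", f), ("and", f, g), ("box", f)

def holds(f: Formula, w: str, worlds: Set[str],
          R: Set[Tuple[str, str]], V: Dict[str, Set[str]]) -> bool:
    """Standard truth definition for basic modal logic at world w."""
    tag = f[0]
    if tag == "atom":
        return w in V.get(f[1], set())
    if tag == "not":
        return not holds(f[1], w, worlds, R, V)
    if tag == "and":
        return holds(f[1], w, worlds, R, V) and holds(f[2], w, worlds, R, V)
    if tag == "box":  # true iff the subformula holds at every accessible world
        return all(holds(f[1], v, worlds, R, V) for (u, v) in R if u == w)
    raise ValueError(f"unknown connective: {tag}")

# A two-world model: p holds only at w2, and w1 sees w2.
worlds = {"w1", "w2"}
R = {("w1", "w2")}
V = {"p": {"w2"}}
print(holds(("box", ("atom", "p")), "w1", worlds, R, V))  # True: p holds at every world w1 sees
```

The `box` clause is the point where a truth definition in the paper's sense does its work: a formula is necessary at `w` exactly when it holds at every `R`-accessible world, and varying this clause (or the conditions imposed on `R`) is what the interplay of truth definitions and model conditions is about.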
The argument from design stands as one of the most intuitively compelling arguments for the existence of a divine Creator. Yet, for many scientists and philosophers, Hume's critique and Darwin's theory of natural selection have definitively undermined the idea that we can draw any analogy from design in artifacts to design in nature. Here, we examine empirical studies from developmental and experimental psychology to investigate the cognitive basis of the design argument. From this it becomes clear that humans spontaneously discern purpose in nature. When properly construed theologically and philosophically, the design argument is presented not as conclusive evidence for God's existence but rather as an abductive, probabilistic argument. We examine the cognitive basis of probabilistic judgments in relationship to natural theology. Placing emphasis on how people assess improbable events, we clarify the intuitive appeal of Paley's watch analogy. We conclude that the reason why some scientists find the design argument compelling and others do not lies not in any intrinsic differences in assessing design in nature but rather in the prior probability they place on complexity being produced by chance events or by a Creator. This difference provides atheists and theists with a rational basis for disagreement.
In historical claims for nativism, mathematics is a paradigmatic example of innate knowledge. Claims by contemporary developmental psychologists of elementary mathematical skills in human infants are a legacy of this. However, the connection between these skills and more formal mathematical concepts and methods remains unclear. This paper assesses the current debates surrounding nativism and mathematical knowledge by teasing them apart into two distinct claims. First, in what way does the experimental evidence from infants, nonhuman animals and neuropsychology support the nativist hypothesis? Second, granting that infants have some elementary mathematical skills, does this mean that such skills play an important role in the development of mathematical knowledge?
In this paper we shed new light on the Argument from Disagreement by putting it to test in a computer simulation. According to this argument, widespread and persistent disagreement on ethical issues indicates that our moral opinions are not influenced by any moral facts, either because no such facts exist or because they are epistemically inaccessible or inefficacious for some other reason. Our simulation shows that if our moral opinions were influenced at least a little bit by moral facts, we would quickly have reached consensus, even if our moral opinions were affected by factors such as false authorities, external political shifts, and random processes. Therefore, since no such consensus has been reached, the simulation gives us increased reason to take the Argument from Disagreement seriously. Our conclusion is, however, not decisive; the simulation also indicates what assumptions one has to make in order to reject the Argument from Disagreement. The simulation algorithm we use builds on the work of Hegselmann and Krause (J Artif Soc Social Simul 5(3), 2002; J Artif Soc Social Simul 9(3), 2006).
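The family of simulations the abstract refers to can be sketched as a bounded-confidence opinion dynamics model in the spirit of Hegselmann and Krause, extended with a weak attraction toward a designated "truth" value. The sketch below is a minimal illustration of that idea, not the authors' actual algorithm; the parameter names and values (`eps`, `truth`, `alpha`) are assumptions chosen for this example:

```python
# Bounded-confidence opinion dynamics with a weak pull toward a truth value.
# Each agent averages the opinions of peers within distance eps of its own
# opinion, then blends in the truth with weight alpha (the "influence of
# moral facts" in the argument's terms).

import random

def hk_step(opinions, eps=0.2, truth=0.5, alpha=0.1):
    """One synchronous update of all agents' opinions."""
    new = []
    for x in opinions:
        near = [y for y in opinions if abs(x - y) <= eps]  # confidence set
        social = sum(near) / len(near)                      # local averaging
        new.append((1 - alpha) * social + alpha * truth)    # truth-seeking term
    return new

def spread(opinions):
    """Width of the opinion distribution: 0 means full consensus."""
    return max(opinions) - min(opinions)

random.seed(0)
ops = [random.random() for _ in range(50)]  # 50 agents, opinions in [0, 1]
for _ in range(100):
    ops = hk_step(ops)
print(round(spread(ops), 3))  # the spread collapses as agents converge near the truth
```

Even with a weak truth-seeking term, the population collapses onto a consensus near the truth within a modest number of rounds; this rapid convergence under truth-influence is the qualitative behaviour the argument turns on, and setting `alpha = 0` recovers the plain bounded-confidence model in which distinct opinion clusters can persist.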
Contemporary value theory has been characterized by a renewed interest in the analysis of concepts like “good” or “valuable”, the most prominent pattern of analysis in recent years being the so-called buck-passing or fitting-attitude analysis, which reduces goodness to a matter of having properties that provide reasons for pro-attitudes. Here I argue that such analyses are best understood as metaphysical rather than linguistic and that while the buck-passing analysis has some virtues, it still fails to provide a suitably wide-ranging pattern of analysis for conceptualizing evaluative properties. Instead, a better alternative can be found in a metaphysical version of the Geachean view that goodness is always attributive and never predicative, namely that goodness is always a matter of relative placement in certain kinds of comparison classes. It is then suggested that the good and the valuable need to be separated from each other and that the latter is a species of the former.
This paper takes a cognitive perspective to assess the significance of some Late Palaeolithic artefacts (sculptures and engraved objects) for philosophical concepts of art. We examine cognitive capacities that are necessary to produce and recognize objects that are denoted as art. These include the ability to attribute and infer design (design stance), the ability to distinguish between the materiality of an object and its meaning (symbol-mindedness), and an aesthetic sensitivity to some perceptual stimuli. We investigate to what extent these cognitive processes played a role in the production and appreciation of some recently discovered Palaeolithic artefacts.
It is widely agreed that sentences containing a non-denoting description embedded in the scope of a propositional attitude verb have true de dicto interpretations, and Russell’s (1905) analysis of definite descriptions is often praised for its simple analysis of such cases, cf. e.g. Neale (1990). However, several people, including Elbourne (2005, 2009), Heim (1991), and Kripke (2005), have contested this by arguing that Russell’s analysis yields incorrect predictions in non-doxastic attitude contexts. Heim and Elbourne have subsequently argued that once certain facts about presupposition projection are fully appreciated, the Frege/Strawson analysis of definite descriptions has an explanatory advantage. In this paper, I argue that both Russell’s analysis and the Frege/Strawson analysis face a serious problem when it comes to the interaction of attitude verbs and definite descriptions. I argue that the problem observed by Elbourne, Heim, and Kripke is much more general than standardly assumed and that a solution requires a revision of the semantics of definite and indefinite descriptions. I outline the conditions that are required to solve the problem and present an analysis couched in dynamic semantics which can provide a solution. I conclude by discussing some further issues related to propositional attitude verbs that complicate a fully general solution to the problem.
This book contains selected papers from the First International Conference on the Ontology of Spacetime. Its fourteen chapters address two main questions: first, what is the current status of the substantivalism/relationalism debate, and second, what about the prospects of presentism and becoming within present-day physics and its philosophy? The overall tenor of the four chapters of the book’s first part is that the prospects of spacetime substantivalism are bleak, although different possible positions remain with respect to the ontological status of spacetime. Part II and Part III of the book are devoted to presentism, eternalism, and becoming, from two different perspectives. In the six chapters of Part II it is argued, in different ways, that relativity theory does not have essential consequences for these issues. It certainly is true that the structure of time is different, according to relativity theory, from the one in classical theory. But that does not mean that a decision is forced between presentism and eternalism, or that becoming has proved to be an impossible concept. It may even be asked whether presentism and eternalism really offer different ontological perspectives at all. The writers of the last four chapters, in Part III, disagree. They argue that relativity theory is incompatible with becoming and presentism. Several of them come up with proposals to go beyond relativity, in order to restore the prospects of presentism.
Contributing Authors: Lilli Alanen & Frans Svensson, David Alm, Gustaf Arrhenius, Gunnar Björnsson, Luc Bovens, Richard Bradley, Geoffrey Brennan & Nicholas Southwood, John Broome, Linus Broström & Mats Johansson, Johan Brännmark, Krister Bykvist, John Cantwell, Erik Carlson, David Copp, Roger Crisp, Sven Danielsson, Dan Egonsson, Fred Feldman, Roger Fjellström, Marc Fleurbaey, Margaret Gilbert, Olav Gjelsvik, Kathrin Glüer & Peter Pagin, Ebba Gullberg & Sten Lindström, Peter Gärdenfors, Sven Ove Hansson, Jana Holsanova, Nils Holtug, Victoria Höög, Magnus Jiborn, Karsten Klint Jensen, Sigurður Kristinsson, Isaac Levi, Kasper Lippert-Rasmussen, David Makinson, Anna-Sofia Maurin, Philippe Mongin, Kevin Mulligan, Lennart Nordenfelt, Jonas Olson, Erik J. Olsson, Ingmar Persson, Johannes Persson, Björn Petersson, Philip Pettit, Hans Rott, Toni Rønnow-Rasmussen, Krister Segerberg, John Skorupski, Howard Sobel, Fredrik Stjernberg, Fred Stoutland, Caj Strandberg, Pär Sundström, Folke Tersman, Torbjörn Tännsjö, Peter Vallentyne, Bruno Verbeek, Stella Villarmea, and Michael J. Zimmerman.
The leading Intelligent Design theorist William Dembski (Rowman & Littlefield, Lanham MD, 2002) argued that the first No Free Lunch theorem, first formulated by Wolpert and Macready (IEEE Trans Evol Comput 1: 67–82, 1997), renders Darwinian evolution impossible. In response, Dembski’s critics pointed out that the theorem is irrelevant to biological evolution. Meester (Biol Phil 24: 461–472, 2009) agrees with this conclusion, but still thinks that the theorem does apply to simulations of evolutionary processes. According to Meester, the theorem shows that simulations of Darwinian evolution, as these are typically set in advance by the programmer, are teleological and therefore non-Darwinian. Therefore, Meester argues, they are useless in showing how complex adaptations arise in the universe. Meester uses the term teleological inconsistently, however, and we argue that, no matter how we interpret the term, a Darwinian algorithm does not become non-Darwinian by simulation. We show that the NFL theorem is entirely irrelevant to this argument, and conclude that it does not pose a threat to the relevance of simulations of biological evolution.
The small-improvement argument is usually considered the most powerful argument against comparability, viz. the view that for any two alternatives an agent is rationally required either to prefer one of the alternatives to the other or to be indifferent between them. We argue that while there might be reasons to believe each of the premises in the small-improvement argument, there is a conflict between these reasons. As a result, the reasons do not provide support for believing the conjunction of the premises. Without support for the conjunction of the premises, the small-improvement argument for incomparability fails.
Recent experimental evidence from developmental psychology and cognitive neuroscience indicates that humans are equipped with unlearned elementary mathematical skills. However, formal mathematics has properties that cannot be reduced to these elementary cognitive capacities. The question then arises how human beings cognitively deal with more advanced mathematical ideas. This paper draws on the extended mind thesis to suggest that mathematical symbols enable us to delegate some mathematical operations to the external environment. In this view, mathematical symbols are not only used to express mathematical concepts—they are constitutive of the mathematical concepts themselves. Mathematical symbols are epistemic actions, because they enable us to represent concepts that are literally unthinkable with our bare brains. Using case-studies from the history of mathematics and from educational psychology, we argue for an intimate relationship between mathematical symbols and mathematical cognition.
The impact of science on ethics has long been the subject of intense debate. Although there is a growing consensus that science can describe morality and explain its evolutionary origins, there is less consensus about the ability of science to provide input to the normative domain of ethics. Whereas defenders of a scientific normative ethics appeal to naturalism, its critics either hold that it commits the naturalistic fallacy or argue that the relevance of science to normative ethics remains undemonstrated. In this paper, we argue that current scientific normative ethicists commit no fallacy, that criticisms of scientific ethics contradict each other, and that scientific insights are relevant to normative inquiries by informing ethics about the options open to the ethical debate. Moreover, when normative ethics is conceived as a nonfoundational ethics, science can be used to evaluate every possible norm. This stands in contrast to foundational ethics, in which some norms remain beyond scientific inquiry. Finally, we state that a difference in conception of normative ethics underlies the disagreement between proponents and opponents of a scientific ethics. Our argument is based on and preceded by a reconsideration of the notions of the naturalistic fallacy and foundational ethics. This argument differs from previous work in scientific ethics: whereas before the philosophical project of naturalizing the normative has been stressed, here we focus on the concrete consequences of biological findings for normative decisions, that is, on the day-to-day normative relevance of these scientific insights.
Though there exists a vast literature dealing with Hannah Arendt's thoughts on evil in general and Adolf Eichmann in particular, few attempts have been made to assess Arendt's position on evil by tracing its connection with her reflections on conscience. This essay examines the nature and significance of such a connection. Beginning with her doctoral dissertation on St Augustine and ending with her posthumously published studies in The Life of the Mind, Arendt's oeuvre exhibits strong thematic continuity: the triad thinking-conscience-evil forms its most enduring core. A puzzling core, to be sure, considering the controversies triggered, especially regarding her notion of the 'banality of evil'. By placing the role of conscience at the very center of Arendt's lifelong reflections, this essay explores the (in many ways related) influence exerted by St Augustine and Heidegger. Heidegger's conception of conscience in Sein und Zeit is identified as a crucial source for understanding, so the claim holds, why Arendt found Heidegger's philosophy particularly wanting as regards the question of evil. Key Words: Arendt, Augustine, conscience, evil, Heidegger, Socrates, thinking.
The Handbook of Modal Logic contains 20 articles, which collectively introduce contemporary modal logic, survey current research, and indicate the way in which the field is developing. The articles survey the field from a wide variety of perspectives: the underlying theory is explored in depth, modern computational approaches are treated, and six major application areas of modal logic (in Mathematics, Computer Science, Artificial Intelligence, Linguistics, Game Theory, and Philosophy) are surveyed. The book contains both well-written expository articles, suitable for beginners approaching the subject for the first time, and advanced articles, which will help those already familiar with the field to deepen their expertise. Please visit: http://people.uleth.ca/~woods/RedSeriesPromo_WP/PubSLPR.html - Compact modal logic reference - Computational approaches fully discussed - Contemporary applications of modal logic covered in depth.
This paper offers an epistemological discussion of self-validating belief systems and the recurrence of 'epistemic defense mechanisms' and 'immunizing strategies' across widely different domains of knowledge. We challenge the idea that typical 'weird' belief systems are inherently fragile, and we argue that, instead, they exhibit a surprising degree of resilience in the face of adverse evidence and criticism. Borrowing from the psychological research on belief perseverance, rationalization and motivated reasoning, we argue that the human mind is particularly susceptible to belief systems that are structurally self-validating. On this cognitive-psychological basis, we construct an epidemiology of beliefs, arguing that the apparent convenience of escape clauses and other defensive 'tactics' used by believers may well derive not from conscious deliberation on their part, but from more subtle mechanisms of cultural selection.
We give a condensed survey of recent research on generalized quantifiers in logic, linguistics and computer science, under the following headings: Logical definability and expressive power, Polyadic quantifiers and linguistic definability, Weak semantics and axiomatizability, Computational semantics, Quantifiers in dynamic settings, Quantifiers and modal logic, Proof theory of generalized quantifiers.
What are the consequences of evolutionary theory for the epistemic standing of our beliefs? Evolutionary considerations can be used to either justify or debunk a variety of beliefs. This paper argues that evolutionary approaches to human cognition must at least allow for approximately reliable cognitive capacities. Approaches that portray human cognition as so deeply biased and deficient that no knowledge is possible are internally incoherent and self-defeating. As evolutionary theory offers the current best hope for a naturalistic epistemology, evolutionary approaches to epistemic justification seem to be committed to the view that our sensory systems and belief-formation processes are at least approximately accurate. However, for that reason they are vulnerable to the charge of circularity, and their success seems to be limited to commonsense beliefs. This paper offers an extension of evolutionary arguments by considering the use of external media in human cognitive processes: we suggest that the way humans supplement their evolved cognitive capacities with external tools may provide an effective way to increase the reliability of their beliefs and to counter evolved cognitive biases.
Psychological evidence suggests that laypeople understand the world around them in terms of intuitive ontologies which describe broad categories of objects in the world, such as ‘person’, ‘artefact’ and ‘animal’. However, because intuitive ontologies are the result of natural selection, they only need to be adaptive; this does not guarantee that the knowledge they provide is a genuine reflection of causal mechanisms in the world. As a result, science has parted ways with intuitive ontologies. Nevertheless, since the brain evolved to understand objects in the world according to these categories, we can expect that they continue to play a role in scientific understanding. Taking the case of human evolution, we explore relationships between intuitive ontological and scientific understanding. We show that intuitive ontologies not only shape intuitions on human evolution, but also guide the direction and topics of interest in its research programmes. Elucidating the relationships between intuitive ontologies and science may help us gain a clearer insight into scientific understanding.
Epistemology and epistemic logic. At first sight, the modern agenda of epistemology has little to do with logic. Topics include different definitions of knowledge, its basic formal properties, debates between externalist and internalist positions, and above all: perennial encounters with sceptics lurking behind every street corner, especially in the US. The entry 'Epistemology' in the Routledge Encyclopedia of Philosophy (Klein 1993) and the anthology (Kim and Sosa 2000) give an up-to-date impression of the field. Now, epistemic logic started as a contribution to epistemology, or at least a tool in its modus operandi, with the seminal book Knowledge and Belief (Hintikka 1962, 2005). Formulas like Kiφ for "the agent i knows that φ" and Biφ for "the agent i believes that φ" provided logical forms for stating and analyzing philosophical propositions and arguments. And more than that, their model-theoretic semantics in terms of ranges of alternatives provided an appealing extensional way of thinking about what agents know or believe in a given situation. In particular, on Hintikka's view, an agent knows those propositions which are true in all situations compatible with what she knows about the actual world, i.e., her current range of uncertainty.
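Hintikka's clause, that an agent knows a proposition iff it is true in all situations compatible with her information, can be sketched directly as a check over a Kripke model. The worlds, accessibility relation, and valuation below are illustrative toys, not drawn from the text:

```python
# Minimal sketch of Hintikka-style epistemic semantics: K_i(p) holds at a
# world iff p is true in every world the agent considers possible there.
# The model (worlds, accessibility, valuation) is a made-up illustration.

def knows(agent, proposition, world, access, valuation):
    """K_agent(proposition) at `world`: true in all accessible worlds."""
    return all(proposition in valuation[w] for w in access[agent][world])

# Two worlds; agent 'i' cannot distinguish w1 from w2.
access = {'i': {'w1': {'w1', 'w2'}, 'w2': {'w1', 'w2'}}}
valuation = {'w1': {'p', 'q'}, 'w2': {'p'}}

print(knows('i', 'p', 'w1', access, valuation))  # True: p holds in w1 and w2
print(knows('i', 'q', 'w1', access, valuation))  # False: q fails in w2
```

The agent's "range of uncertainty" is exactly the set of accessible worlds; shrinking that set by new information is what the dynamic logics discussed later in this collection model.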
Issues about information spring up wherever one scratches the surface of logic. Here is a case that raises delicate issues of 'factual' versus 'procedural' information, or 'statics' versus 'dynamics'. What does intuitionistic logic, perhaps the earliest source of informational and procedural thinking in contemporary logic, really tell us about information? How does its view relate to its 'cousin' epistemic logic? We discuss connections between intuitionistic models and recent protocol models for dynamic-epistemic logic, as well as more general issues that emerge.
Any theory that analyses personal identity in terms of phenomenal continuity needs to deal with the ordinary interruptions of our consciousness that it is commonly thought that a person can survive. This is the bridge problem. The present paper offers a novel solution to the bridge problem based on the proposal that dreamless sleep need not interrupt phenomenal continuity. On this solution one can both hold that phenomenal continuity is necessary for personal identity and that persons can survive dreamless sleep.
Confronted with Adolf Eichmann, evildoer par excellence, Hannah Arendt sought in vain for any 'depth' to the evil he had wrought. How is the philosopher to approach evil? Is the celebrated criterion of impartiality ill-equipped to guide judgment when its object is evil, as exhibited, for instance, in the recent genocide in Bosnia? This essay questions the ability of the neutral 'third party' to respond adequately to evil from a standpoint of avowed impartiality. Discussing the different roles of perpetrator and victim, I argue that in any knowledge about evil the victim is the supremely privileged source; this being so, the non-party to the occurrence of evil must privilege the testimony of the victimized, even at the cost of strict impartiality of moral judgment. Key Words: Arendt, evil, genocide, Goldhagen, impartiality, judgment, Kant, Levinas.
This work seeks to develop a Kantian ethical theory in terms of a general ontology of values and norms together with a metaphysics of the person that makes sense of this ontology. It takes as its starting point Kant’s assertion that a good will is the only thing that has an unconditioned value and his accompanying view that the highest good consists in virtue and happiness in proportion to virtue. The soundness of Kant’s position on the value of the good will is defended against criticisms directed against it by G. E. Moore, and it is argued that there is an ambiguity in Moore’s notion of ‘intrinsic value’ that makes him unable to fully understand and appreciate the Kantian view. It is also argued that the special value of moral goodness has been unduly neglected in modern moral philosophy, even by those working in the Kantian tradition, and it is suggested that the possibility of a Kantian ethical theory centred on the notion of the highest good remains to be explored. In order to lay the ground for such a theory a Kantian approach to reasons for action and the metaphysics of the person is developed and defended, albeit in a way that, in contrast to Kant himself, emphasizes the social dimension inherent in being a person and acting on reasons. It is also argued that there exists what Henry Sidgwick has called a dualism of practical reason, which means that there are two systematic modes, the self-interested and the moral, of approaching action. These two modes correspond to the two components of the highest good as understood by Kant and it is argued that the highest good represents a reasonable way of unifying them. These two parts of the highest good are then considered, each in turn, and Kantian models for understanding them are elaborated and defended against main rivals.
On the matter of happiness, it is argued that standard philosophical theories fail to properly account for the way in which a subject’s own opinions about what constitutes her happiness are important in determining where her happiness actually lies. On the matter of morality, Kantianism is contrasted with consequentialism, the other leading theory that understands morality in terms of an ideal of impartiality, and it is argued that the Kantian ideal, which can be called ‘impartiality as universalizability’, is superior to the consequentialist one, which can be called ‘impartiality as impersonality’. A version of Kantian ethics that places its emphasis on the Formula of Universal Law is then elaborated, and it is argued that it is reasonable to understand maxims, or at least those maxims eligible for the universalizability test, as having to do with the basic general principles according to which we live. This kind of interpretation leaves considerable room for the exercise of judgment on the part of the agent, and it is suggested that the standards according to which such judgment is exercised are largely determined through our actual moral practices and discourses.
Information is a notion of wide use and great intuitive appeal, and hence, not surprisingly, different formal paradigms claim part of it, from Shannon channel theory to Kolmogorov complexity. Information is also a widely used term in logic, but a similar diversity repeats itself: there are several competing logical accounts of this notion, ranging from semantic to syntactic. In this chapter, we will discuss three major logical accounts of information.
The standard argument for the claim that rational preferences are transitive is the pragmatic money-pump argument. However, a money pump only exploits agents with cyclic strict preferences. In order to pump agents who violate transitivity but without a cycle of strict preferences, one needs to somehow induce such a cycle. Methods for inducing cycles of strict preferences from non-cyclic violations of transitivity have been proposed in the literature, based either on offering the agent small monetary transaction premiums or on multi-dimensional preferences. This paper argues that previous proposals have been flawed and presents a new approach based on the dominance principle.
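The exploitability of cyclic strict preferences that the money-pump argument turns on can be shown with a small sketch; the preference cycle, trade sequence, and per-trade fee below are hypothetical illustrations, not the paper's own construction:

```python
# Sketch of a money pump against cyclic strict preferences A > B > C > A:
# the agent accepts each offered trade (paying a small fee per swap) and
# ends up holding her original alternative, several fees poorer.

def money_pump(prefers, holding, offers, fee=1.0):
    """Offer trades in sequence; the agent pays `fee` per accepted swap."""
    wealth = 0.0
    for offered in offers:
        if prefers(offered, holding):   # agent strictly prefers the offer
            holding, wealth = offered, wealth - fee
    return holding, wealth

cycle = {('A', 'B'), ('B', 'C'), ('C', 'A')}  # cyclic strict preference
prefers = lambda x, y: (x, y) in cycle

final, wealth = money_pump(prefers, 'A', ['C', 'B', 'A'])
print(final, wealth)  # back to 'A', three fees poorer: A -3.0
```

As the abstract notes, an agent whose transitivity violation involves no such cycle of strict preferences accepts no sequence of trades like this, which is why a cycle must first be induced.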
Current dynamic-epistemic logics model different types of information change in multi-agent scenarios. We generalize these logics to a probabilistic setting, obtaining a calculus for multi-agent update with three natural slots: prior probability on states, occurrence probabilities in the relevant process taking place, and observation probabilities of events. To match this update mechanism, we present a complete dynamic logic of information change with a probabilistic character. The completeness proof follows a compositional methodology that applies to a much larger class of dynamic-probabilistic logics as well. Finally, we discuss how our basic update rule can be parameterized for different update policies, or learning methods.
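The three probability slots mentioned can be combined in a Bayes-style product. The sketch below is one illustrative reading of such an update rule, not the paper's exact calculus, and all numbers are made up:

```python
# Sketch, NOT the paper's exact rule: the posterior weight of a state s
# after event e multiplies the prior P(s), the occurrence probability of
# e in s, and the observation probability of e, then renormalizes.

def update(prior, occurrence, observation, event):
    """Bayes-style product update over states; all inputs are dicts."""
    weights = {s: prior[s] * occurrence[s].get(event, 0.0)
                  * observation.get(event, 1.0)
               for s in prior}
    total = sum(weights.values())
    return {s: w / total for s, w in weights.items()}

# Illustrative numbers: event 'e' occurs more easily in state s1.
prior = {'s1': 0.5, 's2': 0.5}
occurrence = {'s1': {'e': 0.8}, 's2': {'e': 0.2}}
observation = {'e': 1.0}

posterior = update(prior, occurrence, observation, 'e')
print(posterior)  # s1 becomes four times as likely as s2
```

Different "update policies" in the abstract's sense would correspond, on this reading, to different ways of weighting or replacing the factors in the product.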
Following the example of the many organizations in the United States that have a code of ethics, companies, trade organizations, (semi-)governmental organizations and professions in the Netherlands show an increasing interest in developing codes of ethics. We have been able to guide a variety of organizations through this process. The process that organizations must go through in order to attain a code involves a variety of difficult decisions. In this article we will, based on our experiences, describe twelve dilemmas which will have to be solved during the development of such a code. When one or more of these dilemmas is ignored or an ungrounded choice is made, the effectiveness of the code will be negatively affected. Furthermore, the twelve dilemmas could be used as twelve dimensions to characterize organizational codes of ethics. In this article we will also discuss a method to organize ethics within the organization. This will serve as a guide as to how, with respect to the dilemmas described, adequate considerations can be made. The article concludes with a description of our experiences at the Dutch Schiphol Airport. This case demonstrates how the aforementioned reasoning can be applied in practice.
Following John Rawls, writers like Bernard Williams and Christine Korsgaard have suggested that a transparency condition should be put on ethical theories. The exact nature of such a condition and its implications is, however, not a matter on which there is any consensus. It is argued here that the ultimate rationale of transparency conditions is epistemic rather than substantively moral, but also that it clearly connects to substantive concerns about moral psychology. Finally, it is argued that once a satisfactory form of the transparency condition is formulated, then, at least among the main contenders within ethical theory, it speaks in favor of a broadly Aristotelian approach to ethical theorizing.
This article develops a new measure of freedom of choice based on the proposal that a set offers more freedom of choice than another if, and only if, the expected degree of dissimilarity between a random alternative from the set of possible alternatives and the most similar offered alternative in the set is smaller. Furthermore, a version of this measure is developed, which is able to take into account the values of the possible options.
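The proposed measure, the expected dissimilarity between a random possible alternative and the most similar offered alternative, can be sketched as follows. The universe of alternatives, the menus, and the distance function are illustrative assumptions, not taken from the article:

```python
# Sketch of the dissimilarity-based measure: a menu offers more freedom of
# choice when the expected distance from a random possible alternative to
# its nearest offered alternative is smaller. Data here are made up.

def expected_dissimilarity(menu, universe, dist):
    """Mean distance from each possible alternative to the closest offered one."""
    return sum(min(dist(x, y) for y in menu) for x in universe) / len(universe)

universe = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]   # all possible alternatives
dist = lambda x, y: abs(x - y)              # a toy dissimilarity function

spread = expected_dissimilarity({0, 5, 9}, universe, dist)
clustered = expected_dissimilarity({4, 5, 6}, universe, dist)
print(spread < clustered)  # True: the spread-out menu offers more freedom
```

The comparison captures the intuition behind the measure: three well-separated options cover the space of possibilities better than three nearly identical ones, even though both menus have the same cardinality.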
What is science, and what do we count as scientific? These questions are worth a closer look before we accept that there is a clear divide between established conventional medicine and everything we call 'alternative medicine' or 'alternative treatment'. For what is it that actually makes one thing established and another an alternative? Is established medicine more scientific than the alternative, for instance by employing more scientific methods? Are the results of established treatment 'scientifically proven', while alternative treatment has no scientifically documented results to point to? And does the alternative tradition face an explanatory problem concerning why its treatments might work, while the established tradition can appeal to unambiguous causal models? What we are inclined to answer to such questions depends on what we take the concepts of science and scientificity to mean.
The evolutionary claim that the function of self-awareness lies, at least in part, in the benefits of theory of mind (TOM) has regained attention in light of current findings in cognitive neuroscience, including mirror neuron research. Although certain non-human primates most likely possess mirror self-recognition skills, we claim that they lack the introspective abilities that are crucial for human-like TOM. Primate research on TOM skills such as emotional recognition, seeing versus knowing, and ignorance versus knowing is discussed. Based upon current findings in cognitive neuroscience, we provide evidence in favor of an introspection-based simulation theory account of human mindreading.
This paper researches perceptions of the concept of price fairness in the Dutch coffee market. We distinguish four alternative standards of fair prices based on egalitarian, basic rights, capitalistic and libertarian approaches. We investigate which standards are guiding the perceptions of price fairness of citizens and coffee trade organizations. We find that there is a divergence in views between citizens and key players in the coffee market. Whereas citizens support the concept of fairness derived from the basic rights approach, holding that the price should provide coffee farmers with a minimum level of subsistence, representatives of Dutch coffee traders hold the capitalistic view that the free world market price is fair.
There is a popular view on depiction which holds that convincingly realistic paintings depict their subjects through evoking in the spectator the illusion of seeing these very subjects face to face. There is, as it were, an exact 'match' between the visual experience of seeing something in a picture and the corresponding visual experience one would entertain if one were to stand in front of the real thing. This view, which we shall call 'illusionism', supports the widespread assumption that some kinds of pictures -- notably post-Renaissance perspective paintings -- provide the correct, 'natural' way to depict physical space because they capture the way the visual system enables one to see the world. The most notable defence of illusionism has been offered by Ernst Gombrich. In his Art and Illusion, Gombrich (1960) argues that the development of Western art consists in a series of discoveries about the nature of visual perception that eventually lead to pictorial techniques that are able to elicit illusionary visual experiences on the part of the spectator and thus reach the perfection of naturalistic representation.
In order to account for non-traditional preference relations, the present paper develops a new, richer framework for preference relations. This new framework provides characterizations of non-traditional preference relations, such as incommensurateness and instability, that may hold when neither preference nor indifference does. The new framework models relations with swaps, which are conceived of as transfers from one alternative state to another. The traditional framework analyses dyadic preference relations in terms of a hypothetical choice between the two compared alternatives. The swap framework extends this approach by analysing dyadic preference relations in terms of two hypothetical choices: the choice between keeping the first of the compared alternatives or swapping it for the second; and the choice between keeping the second alternative or swapping it for the first.
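The two hypothetical swap choices can be rendered as a small classifier. The boolean encoding of the answers and the labels attached to the four combinations are illustrative assumptions, not the paper's own definitions:

```python
# Sketch of the swap framework: a dyadic relation between alternatives a
# and b is analysed via two hypothetical choices (keep a or swap for b;
# keep b or swap for a). The four answer patterns classify the relation.
# The labels below are an illustrative reading of the abstract's terms.

def classify(swap_a_for_b, swap_b_for_a):
    """Each argument: would the agent make that swap? (True/False)"""
    if swap_a_for_b and not swap_b_for_a:
        return 'b strictly preferred to a'
    if swap_b_for_a and not swap_a_for_b:
        return 'a strictly preferred to b'
    if not swap_a_for_b and not swap_b_for_a:
        return 'neither swap accepted: indifference or incommensurateness'
    return 'both swaps accepted: instability'

print(classify(True, True))   # the pattern a single hypothetical choice cannot express
```

The point of the richer framework shows up in the last two cases: a single hypothetical choice between a and b cannot distinguish them, while the pair of swap questions can.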
This article takes off from Johan van Benthem’s ruminations on the interface between logic and cognitive science in his position paper “Logic and reasoning: Do the facts matter?”. When trying to answer Van Benthem’s question whether logic can be fruitfully combined with psychological experiments, this article focuses on a specific domain of reasoning, namely higher-order social cognition, including attributions such as “Bob knows that Alice knows that he wrote a novel under a pseudonym”. For intelligent interaction, it is important that the participants recursively model the mental states of other agents. Otherwise, an international negotiation may fail, even when it has potential for a win-win solution, and in a time-critical rescue mission, a software agent may depend on a teammate’s action that never materializes. First a survey is presented of past and current research on higher-order social cognition, from the various viewpoints of logic, artificial intelligence, and psychology. Do people actually reason about each other’s knowledge in the way prescribed by epistemic logic? And if not, how can logic and cognitive science productively work together to construct more realistic models of human reasoning about other minds? The paper ends with a delineation of possible avenues for future research, aiming to provide a better understanding of higher-order social reasoning. The methodology is based on a combination of experimental research, logic, computational cognitive models, and agent-based evolutionary models.
In this paper I introduce a formalism for natural language understanding based on a computational implementation of Discourse Representation Theory. The formalism covers a wide variety of semantic phenomena (including scope and lexical ambiguities, anaphora and presupposition), is computationally attractive, and has a genuine inference component. It combines a well-established linguistic formalism (DRT) with advanced techniques to deal with ambiguity (underspecification), and is innovative in the use of first-order theorem proving techniques. The architecture of the formalism for natural language understanding that I advocate consists of three levels of processing: underspecification, resolution, and inference. Each of these levels has a distinct function and therefore employs a different kind of semantic representation. The mappings between these different representations define the interfaces between the levels.
Andy Egan argues that neither evidential nor causal decision theory gives the intuitively right recommendation in the cases The Smoking Lesion, The Psychopath Button, and The Three-Option Smoking Lesion. Furthermore, Egan argues that we cannot avoid these problems by any kind of ratificationism. This paper develops a new version of ratificationism that gives the right recommendations. Thus, the new proposal has an advantage over evidential and causal decision theory and standard ratificationist evidential decision theory.
In contemporary moral philosophy, the standard way of understanding the constituents of the human good is in terms of a fairly limited number of features that contribute to our happiness independently of how they are situated in our lives. Even when this approach is supplemented by Moorean ideas about organic wholes, it still cannot do justice to the deep importance of how things are situated, and even when meaning is seen as an important factor, it still tends to be treated as simply another item on the list of constituents. It is argued here that we should abandon this approach in favor of one that recognizes that our lives are best understood in terms of their narrative structure and that treats narrative meaning as a pervasive phenomenon that strongly influences the importance that different features have in making our lives go more or less well.
Dynamic update of information states is a new paradigm in logical semantics. But such updates are also a traditional hallmark of probabilistic reasoning. This note brings the two perspectives together in an update mechanism for probabilities which modifies state spaces.
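One minimal reading of an update mechanism that modifies state spaces is elimination of incompatible states followed by renormalization, i.e. Bayesian conditioning. The states and numbers below are illustrative, not taken from the note:

```python
# Sketch: incoming information (a set of states compatible with the
# evidence) shrinks the state space; surviving probabilities are
# renormalized, mirroring both dynamic-semantic update and conditioning.

def update_states(space, evidence):
    """Restrict the probabilistic state space to `evidence` and renormalize."""
    survivors = {s: p for s, p in space.items() if s in evidence}
    total = sum(survivors.values())
    return {s: p / total for s, p in survivors.items()}

space = {'rain': 0.2, 'snow': 0.1, 'sun': 0.7}
posterior = update_states(space, {'rain', 'snow'})
print(posterior)  # rain is now twice as likely as snow; sun is eliminated
```

The dynamic-semantic and probabilistic perspectives meet here: the elimination step is the update of an information state, the renormalization step is the probabilistic bookkeeping.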
An immunizing strategy is an argument brought forward in support of a belief system, though independent from that belief system, which makes it more or less invulnerable to rational argumentation and/or empirical evidence. By contrast, an epistemic defense mechanism is defined as a structural feature of a belief system which has the same effect of deflecting arguments and evidence. We discuss the remarkable recurrence of certain patterns of immunizing strategies and defense mechanisms in pseudoscience and other belief systems. Five different types will be distinguished and analyzed, with examples drawn from widely different domains. The difference between immunizing strategies and defense mechanisms is analyzed, and their epistemological status is discussed. Our classification sheds new light on the various ways in which belief systems may achieve invulnerability against empirical evidence and rational criticism, and we propose our analysis as part of an explanation of these belief systems’ enduring appeal and tenacity.
The political philosopher Hannah Arendt develops several arguments regarding why truthfulness cannot be counted among the political virtues. This article shows that similar arguments apply to lying in business. Based on Hannah Arendt's theory, we distinguish five reasons why lying is a structural temptation for businessmen: business is about action to change the world, and therefore businessmen need the capacity to deny current reality; commerce requires successful image-making, and liars have the advantage of being able to come up with plausible stories; business communication is more often about opinions than about facts, giving leeway to ignore uncomfortable signals; business increasingly makes use of plans and models, but these techniques foster inflexibility in acknowledging the real facts; and businessmen easily fall prey to self-deception, because one needs to act as if the vision were already materializing. The theory is illustrated by a case study of Landis, which grew from a relatively insignificant organization into a large one within a short period of time, but ended with outright lies and bankruptcy.
The purpose of this paper is to use an anaphoric notion of presupposition for solving the problem of zero argument anaphora. Since Shopen (1973) it has been known that many missing arguments have an anaphoric interpretation, but it has not been known how this interpretation arises. I argue that these arguments are involved in presuppositions. On an anaphoric account of presuppositions as in van der Sandt (1992) or Kamp and Roßdeutscher (1992), it can be shown that the zero arguments acquire an anaphoric interpretation through the presuppositions. The analysis rests on the principle that the Discourse Representation Structure for the presupposition is proper, so that the discourse referents for the zero arguments are in its universe and must be anchored to discourse referents in the context.
Some propositional attitude verbs require that the complement contain some “subjective predicate”. In terms of the theory proposed by Lasersohn, these verbs would seem to identify the “judge” of the embedded proposition with the matrix subject, and there have been suggestions in this direction. I show that it is possible to analyze these verbs as setting the judge and doing nothing more; then, according to whether a judge index or a judge argument is assumed, unless the complement contains a subjective predicate, the whole matrix is redundant or there is a type conflict. I further show that certain clear facts argue for assuming a judge argument which can be filled by a contextually salient entity, or by the subject of a subjective attitude verb.
Contemporary historians of logic tend to credit Bernard Bolzano with the invention of the semantic notion of consequence, a full century before Tarski. Nevertheless, Bolzano's work played no significant rôle in the genesis of modern logical semantics. The purpose of this paper is to point out three highly original, and still quite relevant, themes in Bolzano's work: a systematic study of possible types of inference, of consistency, and of their meta-theory. There are certain analogies with Tarski's concerns here, although the main thrust seems to be different, both philosophically and technically. Thus, if only obliquely, we also provide some additional historical perspective on Tarski's achievement.
The shift in the prevailing view of alcoholism from a moral paradigm towards a biomedical paradigm is often characterized as a form of biomedicalization. We will examine and critique three reasons offered for the claim that viewing alcoholism as a disease is morally problematic. The first is that the new conceptualization of alcoholism as a chronic brain disease will lead to individualization, e.g., a too narrow focus on the individual person, excluding cultural and social dimensions of alcoholism. The second claim is that biomedicalization will lead to stigmatization and discrimination for both alcoholics and people who are at risk of becoming alcoholics. The third claim is that as a result of the biomedical point of view, the autonomy and responsibility of alcoholics and possibly even persons at risk may be unjustly restricted. Our conclusion is that the claims against the biomedical conceptualization of alcoholism as a chronic brain disease are neither specific nor convincing. Some of these concerns also apply to the traditional moral model; moreover, they are not strong enough to justify rejecting the new biomedical model altogether. The focus in the scientific and public debate should not be on some massive “biomedicalization objection” but on the various concerns underlying what is framed in terms of the biomedicalization of alcoholism.
The notion of preference occurs across many areas, including the philosophy of action, decision theory, optimality theory, and game theory. In these settings, individual preferences between worlds or actions can be used to predict behavior by rational agents. In a more abstract sense, the notion of preference also occurs in conditional logic, non-monotonic logic and belief revision theory, whose semantics involve an ordering of the possible worlds in terms of relative similarity or plausibility, or other preference-like relations.
Modern logic is undergoing a cognitive turn, side-stepping Frege’s ‘antipsychologism’. Collaborations between logicians and colleagues in more empirical fields are growing, especially in research on reasoning and information update by intelligent agents. We place this border-crossing research in the context of long-standing contacts between logic and empirical facts, since pure normativity has never been a plausible stance. We also discuss what the fall of Frege’s Wall means for a new agenda of logic as a theory of rational agency, and what might then be a viable understanding of ‘psychologism’ as a friend rather than an enemy of logical theory.
A variety of logical frameworks have been developed to study rational agents interacting over time. This paper takes a closer look at one particular interface, between two systems that both address the dynamics of knowledge and information flow. The first is Epistemic Temporal Logic (ETL), which uses linear or branching time models with added epistemic structure induced by agents’ different capabilities for observing events. The second framework is Dynamic Epistemic Logic (DEL), which describes interactive processes in terms of epistemic event models which may occur inside modalities of the language. This paper systematically and rigorously relates the DEL framework with the ETL framework. The precise relationship between DEL and ETL is explored via a new representation theorem characterizing the largest class of ETL models corresponding to DEL protocols in terms of notions of Perfect Recall, No Miracles, and Bisimulation Invariance. We then focus on new issues of completeness. One contribution is an axiomatization for the dynamic logic of public announcements constrained by protocols, which has been an open problem for some years, as it does not fit the usual ‘reduction axiom’ format of DEL. Finally, we provide a number of examples that show how DEL suggests an interesting fine-structure inside ETL.
The article explores the limits of buck-passing analyses of value or goodness. It argues that buck-passers are unable to account for two important types of value or goodness, namely excellence and goodness as a means. The use of a delimiting strategy in buck-passing analysis to accommodate these types of goodness is also discussed.