In this study we investigate the influence of reason-relation readings of indicative conditionals and ‘and’/‘but’/‘therefore’ sentences on various cognitive assessments. According to the Frege-Grice tradition, a dissociation is expected. Specifically, differences in the reason-relation reading of these sentences should affect participants’ evaluations of their acceptability but not of their truth value. In two experiments we tested this assumption by introducing a relevance manipulation into the truth-table task as well as into other tasks assessing the participants’ acceptability and probability evaluations. Across the two experiments a strong dissociation was found. The reason-relation reading of all four sentences strongly affected their probability and acceptability evaluations, but hardly affected their respective truth evaluations. Implications of this result for recent work on indicative conditionals are discussed.
Gotthold Ephraim Lessing stands out among the thinkers of the 18th century for his refusal to synthesize theology and philosophy. But due to his notorious ambivalence about religious questions, even Lessing’s contemporaries remained uncertain whether he ultimately sided with the former or the latter. The short dialogue Hercules and Omphale is, to the detriment of research on this topic, largely unknown. I show that the dialogue offers in a nutshell Lessing’s comprehensive analysis of the intellectual and religious situation of his time. By calling on the mythical travesty of the Asian queen and the Greek hero, Lessing illustrates the mutual attraction that has led astray both Enlightenment philosophy and contemporary Lutheran orthodoxy. Implicitly, his diagnosis of the aberrations of philosophy and theology sheds light on Lessing’s own position. The twofold criticism is an attempt to restore theology and philosophy to their genuine forms and to reestablish their proper relationship. Through his twofold restitutio in integrum, Lessing is able to reopen the quarrel between orthodoxy and the Enlightenment and, thus, to radically renew the all but forgotten theologico-philosophical antagonism.
In the 1990s Estonia underwent a process of radical socio-political changes: a periphery of the Soviet conglomerate became a country with an independent political and economic life. The new situation also brought about a revision of cultural identity, which in the Soviet Union had been grounded primarily on the dichotomy between national and Soviet culture. Since these oppositions were rendered unimportant by the changed politico-economic conditions, a time of ideological vacuum followed. Estonia as an independent state and a cultural island between the East and the West turned its face toward Europe, searching for its new or true identity in the postmodernizing and globalizing society. In this article three productions of Estonian theatre will be analysed as examples of identity construction, investigating the rewriting of cultural heritage, intercultural relationships and implicit ideologies.
Recently, several scholars have argued that scientists can accept scientific claims in a collective process, and that the capacity of scientific groups to form joint acceptances is linked to a functional division of labor between the group members. However, these accounts reveal little about how the cognitive content of the jointly accepted claim is formed, and how group members depend on each other in this process. In this paper, I shall therefore argue that we need to link analyses of joint acceptance with analyses of distributed cognition. To sketch how this can be done, I shall present a detailed case study, and on the basis of the case, analyze the process through which a group of scientists jointly accept a new scientific claim and at a later stage jointly accept to revise previously accepted claims. I shall argue that joint acceptance in science can be established in situations where an overall conceptual structure is jointly accepted by a group of scientists while detailed parts of it are distributed among group members with different areas of expertise, a condition that I shall call a heterogeneous conceptual consensus. Finally, I shall show how a heterogeneous conceptual consensus can work as a constraint against scientific change and address the question of how changes may nevertheless occur.
In this book, Hannes Charen presents an alternative examination of kinship structures in political theory. Employing a radically transdisciplinary approach, On the Politics of Kinship is structured in a series of six theoretical vignettes or frames. Each chapter frames a figure, aspect, or relational context of the family or kinship. Some chapters are focused on a critique of the family as a state-sanctioned institution, while others cautiously attempt to recast kinship in a way that reimagines mutual obligation through the generation of kinship practices, understood as a perpetually evolving set of relational responses to finitude. In doing so, Charen considers the ways in which kinship is a plastic social response to embodied exposure, both concealed and made more evident in the bloated, feeble, and broken individualities and nationalities that seem to dominate our social and political landscape today. On the Politics of Kinship will be of interest to political theorists, feminists, anthropologists, and social scientists in general.
When making end-of-life decisions in intensive care units (ICUs), different staff groups have different roles in the decision-making process and may not always assess the situation in the same way. The aim of this study was to examine the challenges Danish nurses, intensivists, and primary physicians experience with end-of-life decisions in ICUs and how these challenges affect the decision-making process. Interviews with nurses, intensivists, and primary physicians were conducted, and the data are discussed from an ethical perspective. All three groups found that the main challenges were associated with interdisciplinary collaboration and future perspectives for the patient. Most of these challenges were connected with ethical issues. The challenges included different assessments of treatment potential, changes and postponements of withholding and withdrawing therapy orders, how and when to identify patients’ wishes, and suffering caused by the treatment. To improve end-of-life decision-making in the ICU, these challenges need to be addressed by interdisciplinary teams.
If Wittgenstein's later account of language is applied to music, what seems to follow is a version of musical formalism. This is to say that the meaning of music is constituted by the rules of a given system of music, and the understanding of music is the ability to follow these rules. I argue that, while this view may seem unattractive at the outset, Wittgenstein actually held this view. Moreover, his later notion of a rule gives us resources to answer some of the traditional criticisms directed against formalism.
In everyday life we either express our beliefs in all-or-nothing terms or we resort to numerical probabilities: I believe it's going to rain or my chance of winning is one in a million. The Stability of Belief develops a theory of rational belief that allows us to reason with all-or-nothing belief and numerical belief simultaneously.
Business scholars have recently proposed that the virtue of personal wisdom may predict leadership behaviors and the quality of leader–follower relationships. This study investigated relationships among leaders’ personal wisdom—defined as the integration of advanced cognitive, reflective, and affective personality characteristics (Ardelt, Hum Dev 47:257–285, 2004)—transformational leadership behaviors, and leader–member exchange (LMX) quality. It was hypothesized that leaders’ personal wisdom positively predicts LMX quality and that intellectual stimulation and individualized consideration, two dimensions of transformational leadership, mediate this relationship. Data came from 75 religious leaders and 1–3 employees of each leader (N = 158). Results showed that leaders’ personal wisdom had a positive indirect effect on follower ratings of LMX quality through individualized consideration, even after controlling for Big Five personality traits, emotional intelligence, and narcissism. In contrast, intellectual stimulation and the other two dimensions of transformational leadership (idealized influence and inspirational motivation) did not mediate the positive relationship between leaders’ personal wisdom and LMX quality. Implications for future research on personal wisdom and leadership are discussed, and some tentative suggestions for leadership development are outlined.
Several investigations have shown language impairments following electrode implantation surgery for Deep Brain Stimulation (DBS) in movement disorders. The impact of the actual stimulation, however, differs between DBS targets, with further deterioration in formal language tests induced by thalamic DBS in contrast to subtle improvement observed in subthalamic DBS. Here, we studied speech samples from interviews with participants treated with DBS of the thalamic ventral intermediate nucleus (VIM) for essential tremor (ET), or the subthalamic nucleus (STN) for Parkinson’s disease (PD), and healthy volunteers (each n = 13). We analyzed word frequency and the use of open and closed class words. Active DBS increased word frequency in the case of VIM, but not STN, stimulation. Further, relative to controls, both DBS groups produced fewer open class words. Whereas VIM DBS further decreased the proportion of open class words, it was increased by STN DBS. Thus, VIM DBS favors the use of relatively common words in spontaneous language, compatible with the idea of lexical simplification under thalamic stimulation. The absence or even partial reversal of these effects in patients receiving STN DBS is of interest with respect to biolinguistic concepts suggesting dichotomous thalamic vs. basal ganglia roles in language processing.
When do children acquire a propositional attitude folk psychology or theory of mind? The orthodox answer to this central question of developmental ToM research had long been that around age 4 children begin to apply “belief” and other propositional attitude concepts. This orthodoxy has recently come under serious attack, though, from two sides: Scoffers complain that it overestimates children’s early competence and claim that a proper understanding of propositional attitudes emerges only much later. Boosters criticize the orthodoxy for underestimating early competence and claim that even infants ascribe beliefs. In this paper, the orthodoxy is defended on empirical grounds against these two kinds of attacks. On the basis of new evidence, not only can the two attacks safely be countered, but the orthodox claim can actually be strengthened, corroborated and refined: what emerges around age 4 is an explicit, unified, flexibly conceptual capacity to ascribe propositional attitudes. This unified conceptual capacity contrasts with the less sophisticated, less unified implicit forms of tracking simpler mental states present in ontogeny long before. This refined version of the orthodoxy can thus most plausibly be spelled out in some form of two-systems account of theory of mind.
Thomas Reid's Geometry of Visibles, according to which the geometrical properties of an object's perspectival appearance equal the geometrical properties of its projection on the inside of a sphere with the eye in its centre, allows for two different interpretations. It may (1) be understood as a theory about phenomenal visual space – i.e. an account of how things appear to human observers from a certain point of view – or it may (2) be seen as a mathematical model of viewpoint-relative but mind-independent relational properties of objects. This paper makes a systematic and a historical claim. I shall argue, first, that given certain features of the human visual system, phenomenal visual space differs in several aspects from Reidean visual space. Secondly, I suggest that, since Reid was aware of some of these empirical facts, we should interpret Reid as endorsing the second interpretation of the Geometry of Visibles.
This essay develops a joint theory of rational (all-or-nothing) belief and degrees of belief. The theory is based on three assumptions: the logical closure of rational belief; the axioms of probability for rational degrees of belief; and the so-called Lockean thesis, in which the concepts of rational belief and rational degree of belief figure simultaneously. In spite of what is commonly believed, this essay will show that this combination of principles is satisfiable (and indeed nontrivially so) and that the principles are jointly satisfied if and only if rational belief is equivalent to the assignment of a stably high rational degree of belief. Although the logical closure of belief and the Lockean thesis are attractive postulates in themselves, initially this may seem like a formal “curiosity”; however, as will be argued in the rest of the essay, a very reasonable theory of rational belief can be built around these principles that is not ad hoc and that has various philosophical features that are plausible independently. In particular, this essay shows that the theory allows for a solution to the Lottery Paradox, and it has nice applications to formal epistemology. The price that is to be paid for this theory is a strong dependency of belief on the context, where a context involves both the agent's degree of belief function and the partitioning or individuation of the underlying possibilities. But as this essay argues, that price seems to be affordable.
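The Lockean thesis and the stability equivalence described in this abstract can be sketched as follows; the notation is assumed for illustration (r is the contextual threshold), not taken from the essay itself:

```latex
% Lockean thesis (sketch): rational belief in A iff the rational degree of
% belief in A reaches a context-dependent threshold r, with 1/2 < r \le 1:
\mathrm{Bel}(A) \;\Longleftrightarrow\; P(A) \ge r
% Stability (paraphrase of the essay's equivalence): Bel(A) holds iff P(A)
% is stably high, i.e. stays above 1/2 under conditionalization on any
% proposition B consistent with A that the context leaves open:
\forall B \,\bigl[\, B \cap A \neq \emptyset \;\Rightarrow\; P(A \mid B) > \tfrac{1}{2} \,\bigr]
```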
I will defend the claim that we need to differentiate between thinking and reasoning in order to make progress in understanding the intricate relation between language and mind. The distinction between thinking and reasoning will allow us to apply a structural equivalent of Ludwig Wittgenstein’s Private Language Argument to the domain of mind and language. This argumentative strategy enables us to show that, and how, a certain subcategory of cognitive processes, namely reasoning, is constitutively dependent on language. The final outcome and claim of this paper can be summarized as follows: We can think without language, but we cannot reason without language. While this still leaves several questions about the relation between mind and language unanswered, I hold that the insights defended in this paper provide the basis and proper framework for further investigation of the relationship between language and the mind.
Keywords: Private language argument, Wittgenstein, thought/mind and language, reasoning, linguistic relativity, non-linguistic cognition.
One of the fundamental problems of epistemology is to say when the evidence in an agent’s possession justifies the beliefs she holds. In this paper and its sequel, we defend the Bayesian solution to this problem by appealing to the following fundamental norm: Accuracy: An epistemic agent ought to minimize the inaccuracy of her partial beliefs. In this paper, we make this norm mathematically precise in various ways. We describe three epistemic dilemmas that an agent might face if she attempts to follow Accuracy, and we show that the only inaccuracy measures that do not give rise to such dilemmas are the quadratic inaccuracy measures. In the sequel, we derive the main tenets of Bayesianism from the relevant mathematical versions of Accuracy to which this characterization of the legitimate inaccuracy measures gives rise, but we also show that Jeffrey conditionalization has to be replaced by a different method of update in order for Accuracy to be satisfied.
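The quadratic inaccuracy measures singled out here are, up to positive scaling, Brier-style scores; as an illustrative sketch (the notation is assumed, not the paper's own):

```latex
% Quadratic (Brier-style) inaccuracy of a credence function b at world w,
% summed over a finite set F of propositions; v_w(X) = 1 if X is true at w,
% and v_w(X) = 0 otherwise:
I(b, w) \;=\; \sum_{X \in F} \bigl( v_w(X) - b(X) \bigr)^2
```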
In Epistemic Entitlement: The Right to Believe, Hannes Ole Matthiessen develops a social externalist account of epistemic entitlement and perceptual knowledge. The basic idea is that positive epistemic status should be understood as a specific kind of epistemic right, that is, a right to believe. Since rights have consequences for how others are required to treat the bearer of the right, they have to be publicly accessible. The author therefore suggests that epistemic entitlement can plausibly be conceptualized as a status that is grounded in a publicly observable perceptual situation, rather than in a perceptual experience, as current theories of epistemic entitlement state. It is then argued that such a social externalist account of entitlement, in which the perceiver's epistemic perspective becomes relevant only in the exceptional case where an entitlement is challenged, can nevertheless do justice to our central intuitions about first-personal epistemic phenomenology.
One of the fundamental problems of epistemology is to say when the evidence in an agent’s possession justifies the beliefs she holds. In this paper and its prequel, we defend the Bayesian solution to this problem by appealing to the following fundamental norm: Accuracy: An epistemic agent ought to minimize the inaccuracy of her partial beliefs. In the prequel, we made this norm mathematically precise; in this paper, we derive its consequences. We show that the two core tenets of Bayesianism follow from the norm, while the characteristic claim of the Objectivist Bayesian follows from the norm along with an extra assumption. Finally, we consider Richard Jeffrey’s proposed generalization of conditionalization. We show not only that his rule cannot be derived from the norm, unless the requirement of Rigidity is imposed from the start, but further that the norm reveals it to be illegitimate. We end by deriving an alternative updating rule for those cases in which Jeffrey’s is usually supposed to apply.
Drawing on an idea proposed by Darwin, it has recently been hypothesised that violent intergroup conflict might have played a substantial role in the evolution of human cooperativeness and altruism. The central notion of this argument, dubbed ‘parochial altruism’, is that the two genetic or cultural traits, aggressiveness against out-groups and cooperativeness towards the in-group, including self-sacrificial altruistic behaviour, might have coevolved in humans. This review assesses the explanatory power of current theories of ‘parochial altruism’. After a brief synopsis of the existing literature, two pitfalls in the interpretation of the most widely used models are discussed: potential direct benefits and high relatedness between group members implicitly induced by assumptions about conflict structure and frequency. Then, a number of simplifying assumptions made in the construction of these models are pointed out which currently limit their explanatory power. Next, relevant empirical evidence from several disciplines which could guide future theoretical extensions is reviewed. Finally, selected alternative accounts of evolutionary links between intergroup conflict and intragroup cooperation are briefly discussed which could be integrated with parochial altruism in the future.
This article introduces, studies, and applies a new system of logic which is called ‘HYPE’. In HYPE, formulas are evaluated at states that may exhibit truth value gaps and truth value gluts. Simple and natural semantic rules for negation and the conditional operator are formulated based on an incompatibility relation and a partial fusion operation on states. The semantics is worked out in formal and philosophical detail, and a sound and complete axiomatization is provided both for the propositional and the predicate logic of the system. The propositional logic of HYPE is shown to contain first-degree entailment, to have the Finite Model Property, to be decidable, to have the Disjunction Property, and to extend intuitionistic propositional logic conservatively when intuitionistic negation is defined appropriately by HYPE’s logical connectives. Furthermore, HYPE’s first-order logic is a conservative extension of intuitionistic logic with the Constant Domain Axiom, when intuitionistic negation is again defined appropriately. The system allows for simple model constructions and intuitive Euler-Venn-like diagrams, and its logical structure matches structures well-known from ordinary mathematics, such as from optimization theory, combinatorics, and graph theory. HYPE may also be used as a general logical framework in which different systems of logic can be studied, compared, and combined. In particular, HYPE is found to relate in interesting ways to classical logic and various systems of relevance and paraconsistent logic, many-valued logic, and truthmaker semantics. On the philosophical side, if used as a logic for theories of type-free truth, HYPE is shown to address semantic paradoxes such as the Liar Paradox by extending non-classical fixed-point interpretations of truth by a conditional as well-behaved as that of intuitionistic logic.
Finally, HYPE may be used as a background system for modal operators that create hyperintensional contexts, though the details of this application need to be left to follow-up work.
Hegel's philosophy is one of the last attempts to make all fields of knowledge intelligible in their interconnections. That Hegel thereby also regards theological knowledge as foundational has often been overlooked, because Kant's sharp critique of claims to knowledge in theology was influential for only a few decades. By way of the ontological proof of God's existence, however, Hannes Gustave Melichar shows that Hegel's thought is deeply interwoven with the questions of philosophical theology.
A perceptive and reflective state? Such a state might not exist yet, but it refers to an ideal for state-provided care which can protect both the caregiver and the recipient of care from emotional overload and indifference. Such an ideal introduces new lines of enquiry to contemporary theories of care and their sociological orientation. It is argued that theories of care need to be combined with insights from political science concerning power and dilemmas within care. Dilemmas within state-provided care might temporarily be overcome through an application of an interactive universalism, as more generally advocated by the Turkish-American feminist philosopher Seyla Benhabib. Interactive universalism implies a shift of perspective, a kind of hypothetical moral dialogue, which would bring a stronger element of reflection into caring practices in the welfare state. However, interactive universalism presupposes empathy, understood as a kind of emotional attention to the other. If implemented, this ideal will introduce a new form of authority: more compassionate and bodily oriented.
Building on and partially refining previous theoretical work, this paper presents an extended simulation model of ancestral warfare. This model (1) disentangles attack and defense, (2) tries to differentiate more strictly between selfish and altruistic efforts during war, (3) incorporates risk aversion and deterrence, and (4) pays special attention to the role of brutality. Modeling refinements and simulation results yield a differentiated picture of possible evolutionary dynamics. The main observations are: (i) Altruism in this model is more likely to evolve for defenses than for attacks. (ii) Risk aversion, deterrence and the interplay of migration levels and brutality can change evolutionary dynamics substantially. (iii) Unexpectedly, one occasional simulation outcome is a dynamically stable state of ‘tolerated intergroup theft’, raising the question of whether corresponding patterns also exist in real intergroup conflicts. Finally, possible implications for theories of the co-evolution of bellicosity and altruism in humans are discussed.
George Berkeley argues that vision is a language of God, that the immediate objects of vision are arbitrary signs for tactile objects and that there is no necessary connection between what we see and what we touch. Thomas Reid, on the other hand, aims to establish a geometrical connection between visible and tactile figures. Consequently, although Reid and Berkeley's theories of vision share important elements, Reid explicitly rejects Berkeley's idea that visible figures are merely arbitrary signs for tangible bodies. But is he right in doing so? I show that many passages in Berkeley's work on vision suggest that he acknowledges a geometrical connection between visibles and tangibles. So the opposition between the arbitrariness Berkeley defends and a geometrical connection cannot be as universal as Reid thinks. This paper seeks to offer a plausible reading of Berkeley's theory of vision in this regard and an explanation of why Reid interprets Berkeley differently.
Thomas Reid argued that the geometrical properties of visible figures equal the geometrical properties of their projections on the inside of a sphere centred around the eye. In recent scholarship there are only a few suggestions as to which sources might have inspired Reid. I point to a widely ignored body of early eighteenth-century literature – introductions to projective geometry, the use of celestial globes and astronomy – in which the model of the eye in the centre of a sphere was immensely popular. Moreover, I argue that Reid's account results naturally from some astronomical doctrines in conjunction with George Berkeley's theory of vision.
We give an overview of the role of equicontinuity of sequences of real-valued functions on [0,1] and related notions in classical mathematics, intuitionistic mathematics, Bishop’s constructive mathematics, and Russian recursive mathematics. We then study the logical strength of theorems concerning these notions within the programme of Constructive Reverse Mathematics. It appears that many of these theorems, like a version of Ascoli’s Lemma, are equivalent to fan-theoretic principles.
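For orientation, the central classical notion under discussion can be stated in its standard textbook form (a general reminder, not quoted from the paper):

```latex
% Equicontinuity of a sequence (f_n) of real-valued functions on [0,1]:
% one \delta works uniformly for all members of the sequence.
\forall \varepsilon > 0 \;\exists \delta > 0 \;\forall n \in \mathbb{N}\;
\forall x, y \in [0,1] :\;
|x - y| < \delta \;\Rightarrow\; |f_n(x) - f_n(y)| < \varepsilon
```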
In this article we try to give a philosophically reflected introductory overview of the current theoretical developments in the field of evolutionary aesthetics. Our aim is not completeness. Rather, we try to depict some of the central assumptions and explanatory tools frequently used in evolutionary accounts of human aesthetic preferences and address a number of currently debated, open research questions.
This article investigates the deep-rooted logical structures underlying our thinking about other animals with a particular focus on topics relevant for cognitive primate research. We begin with a philosophical propaedeutic that makes perspicuous how we are to differentiate ontological from epistemological considerations regarding primates, while also accounting for the many perplexities that will undoubtedly be encountered upon applying this difference to concrete phenomena. Following this, we give an account of what is to be understood by the assertion of a thesis of anthropological difference, identifying, inter alia, a property that fulfils the exclusivity, universality, and constitution criteria and demarcates the differentia specifica between humans and other animals. Also, we systematically develop how such theses can be formulated more moderately. Furthermore, we account for different theoretical frameworks, argumentative schemes, and sociological factors whose employment is associated with theses as such. This endeavor is carried out under the headings of anthropomorphism and anthropodenial. In doing so, we show that both are favored by the logic of cognitive primate research. Put briefly, concepts like cladistic parsimony and arguments by analogy favor anthropomorphism, whereas concepts like traditional parsimony and Morgan’s canon favor anthropodenial. We close by framing these topics in the light of the self-other category mistake that lies in ascribing exclusive self-properties to some other. Lastly, we probe this category mistake for potency and scope of implications and find it to be central to and unavoidably ingrained in our thinking about other animals.
This article demonstrates the ways in which Goldenberg raises questions of the body, transcendence and epistemology, and shows how they are all ultimately questions to do with authority. The author examines the source of authority, asks how it is ascertained and what it promises. The article outlines how these become complex questions when a female figure, for example Fatimah, is projected as paradigmatic.
Over the last decades, science has grown increasingly collaborative and interdisciplinary and has come to depart in important ways from the classical analyses of the development of science that were developed by historically inclined philosophers of science half a century ago. In this paper, I shall provide a new account of the structure and development of contemporary science based on analyses, first, of cognitive resources and their relations to domains, and, second, of the distribution of cognitive resources among collaborators and the epistemic dependence that this distribution implies. Against this background I shall describe different ideal types of research activities and analyze how they differ. Finally, analyzing values that drive science towards different kinds of research activities, I shall sketch the main mechanisms underlying the perceived tension between disciplines and interdisciplinarity and argue for a redefinition of accountability and quality control for interdisciplinary and collaborative science.
Thomas Kuhn's Structure of Scientific Revolutions became the most widely read book about science in the twentieth century. His terms 'paradigm' and 'scientific revolution' entered everyday speech, but they remain controversial. In the second half of the twentieth century, the new field of cognitive science combined empirical psychology, computer science, and neuroscience. In this book, the theories of concepts developed by cognitive scientists are used to evaluate and extend Kuhn's most influential ideas. Based on case studies of the Copernican revolution, the discovery of nuclear fission, and an elaboration of Kuhn's famous 'ducks and geese' example of concept learning, this volume, first published in 2006, offers accounts of the nature of normal and revolutionary science, the function of anomalies, and the nature of incommensurability.
Is it possible to give an explicit definition of belief in terms of subjective probability, such that believed propositions are guaranteed to have a sufficiently high probability, and yet it is neither the case that belief is stripped of any of its usual logical properties, nor is it the case that believed propositions are bound to have probability 1? We prove the answer is ‘yes’, and that given some plausible logical postulates on belief that involve a contextual “cautiousness” threshold, there is but one way of determining the extension of the concept of belief that does the job. The qualitative concept of belief is not to be eliminated from scientific or philosophical discourse, rather, by reducing qualitative belief to assignments of resiliently high degrees of belief and a “cautiousness” threshold, qualitative and quantitative belief turn out to be governed by one unified theory that offers the prospects of a huge range of applications. Within that theory, logic and probability theory are not opposed to each other but go hand in hand.