Encouraged by friends and colleagues, Ernst Cassirer produced this work in his American exile as a summation of his thought, one that carries forward the main ideas of his philosophy of symbolic forms in a way that is also accessible to a wider circle of interested readers. Cassirer poses the old question of the nature of man anew and answers it by modifying the classical answer with far-reaching consequences: he defines man as a being that creates symbols and communicates with its fellows and with the world through symbols. "The concept of reason is highly inadequate for grasping the forms of human culture in their fullness and variety [...]. All these forms are symbolic forms. Hence we should define man not as an animal rationale but as an animal symbolicum."
Back cover: This book develops a philosophical account that reveals the major characteristics that make an explanation in the life sciences reductive and distinguish it from non-reductive explanations. Understanding what reductive explanations are enables one to assess the conditions under which reductive explanations are adequate and thus enhances debates about explanatory reductionism. The account of reductive explanation presented in this book has three major characteristics. First, it emerges from a critical reconstruction of the explanatory practice of the life sciences itself. Second, the account is monistic since it specifies one set of criteria that apply to explanations in the life sciences in general. Finally, the account is ontic in that it traces the reductivity of an explanation back to certain relations that exist between objects in the world (such as part-whole relations and level relations), rather than to the logical relations between sentences. Beginning with a disclosure of the meta-philosophical assumptions that underlie the author's analysis of reductive explanation, the book leads into the debate about reduction(ism) in the philosophy of biology and continues with a discussion of the two perspectives on explanatory reduction that have been proposed in the philosophy of biology so far. The author scrutinizes how the issue of reduction becomes entangled with explanation and analyzes two concepts, the concept of a biological part and the concept of a level of organization. The results of these five chapters constitute the ground on which the author bases her final chapter, developing her ontic account of reductive explanation.
While the enormous influence of Martin Heidegger's thought in Japan and China is well documented, the influence on him from East Asian sources is much less well known. This remarkable study shows that Heidegger drew some of the major themes of his philosophy--on occasion almost word for word--from German translations of Chinese Daoist and Zen Buddhist classics.
Sustainable development (SD) – that is, "Development that meets the needs of current generations without compromising the ability of future generations to meet their needs and aspirations" – can be pursued in many different ways. Stakeholder relations management (SRM) is one such way, through which corporations are confronted with economic, social, and environmental stakeholder claims. This paper lays the groundwork for an empirical analysis of the question of how far SD can be achieved through SRM. It describes the so-called SD–SRM perspective as a distinctive research approach and shows how it relates to the wider body of stakeholder theory. Next, the concept of SD is operationalized for the microeconomic level with reference to important documents. Based on the ensuing SD framework, it is shown how SD and SRM relate to each other, and how the two concepts relate to other popular concepts such as Corporate Sustainability and Corporate Social Responsibility. The paper concludes that the significance of societal guiding models such as SD and of management approaches like CSR is strongly dependent on their footing in society.
The paper shows how ideas that explain the sense of an expression as a method or algorithm for finding its reference, foreshadowed in Frege's dictum that sense is the way in which a referent is given, can be formalized on the basis of the ideas in Thomason (1980). To this end, the function that sends propositions to truth values or sets of possible worlds in Thomason (1980) must be replaced by a relation, and the meaning postulates governing the behaviour of this relation must be given in the form of a logic program. The resulting system not only throws light on the properties of sense and their relation to computation, but also shows circular behaviour if some ingredients of the Liar Paradox are added. The connection is natural, as algorithms can be inherently circular and the Liar is explained as expressing one of those. Many ideas in the present paper are closely related to those in Moschovakis (1994), but receive a considerably lighter formalization.
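To make the idea of a sense as a possibly circular procedure a little more tangible, here is a rough Python sketch; it is my own illustration and far simpler than the logic-programming treatment the paper describes, with the world model and helper names (WORLD, sense_snow, sense_liar) invented for the example.

```python
# A rough sketch (my illustration, not the paper's formalization): a sense is a
# procedure that, given a world, tries to compute a reference; the Liar's sense
# is defined in terms of itself, so the computation cycles instead of yielding
# a truth value.
import sys

WORLD = {"snow_is_white": True}   # hypothetical toy world

def sense_snow(world):
    return world["snow_is_white"]          # an ordinary, terminating sense

def sense_liar(world):
    return not sense_liar(world)           # self-referential: never terminates

print(sense_snow(WORLD))                   # True

sys.setrecursionlimit(100)
try:
    print(sense_liar(WORLD))
except RecursionError:
    print("the Liar's sense is circular: no truth value is computed")
```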
Measurement instruments assessing multiple emotions during epistemic activities are largely lacking. We describe the construction and validation of the Epistemically-Related Emotion Scales, which measure surprise, curiosity, enjoyment, confusion, anxiety, frustration, and boredom occurring during epistemic cognitive activities. The instrument was tested in a multinational study of emotions during learning from conflicting texts. The findings document the reliability, internal validity, and external validity of the instrument. A seven-factor model best fit the data, suggesting that epistemically-related emotions should be conceptualised in terms of discrete emotion categories, and the scales showed metric invariance across the North American and German samples. Furthermore, emotion scores changed over time as a function of conflicting task information and related significantly to perceived task value and use of cognitive and metacognitive learning strategies.
This paper embeds the core part of Discourse Representation Theory in the classical theory of types plus a few simple axioms that allow the theory to express key facts about variables and assignments on the object level of the logic. It is shown how the embedding can be used to combine core analyses of natural language phenomena in Discourse Representation Theory with analyses that can be obtained in Montague Semantics.
In his introductory paper on first-order logic in the Handbook of Mathematical Logic, Jon Barwise writes: "[T]he informal notion of provable used in mathematics is made precise by the formal notion provable in first-order logic. Following a suggestion of Martin Davis, we refer to this view as Hilbert's Thesis." This paper reviews the discussion of Hilbert's Thesis in the literature. In addition to the question whether it is justifiable to use Hilbert's name here, the arguments for this thesis are compared with those for Church's Thesis concerning computability. This leads to the question whether one could provide an analogue for proofs of the concept of partial recursive function.
This paper analyzes what it means for philosophy of science to be normative. It argues that normativity is a multifaceted phenomenon rather than a general feature that a philosophical theory either has or lacks. It analyzes the normativity of philosophy of science by articulating three ways in which a philosophical theory can be normative. Methodological normativity arises from normative assumptions that philosophers make when they select, interpret, evaluate, and mutually adjust relevant empirical information, on which they base their philosophical theories. Object normativity emerges from the fact that the object of philosophical theorizing can itself be normative, such as when philosophers discuss epistemic norms in science. Metanormativity arises from the kind of claims that a philosophical theory contains, such as normative claims about science as it should be. Distinguishing these three kinds of normativity gives rise to a nuanced and illuminating view of how philosophy of science can be normative.
In the contemporary life sciences more and more researchers emphasize the "limits of reductionism" (e.g. Ahn et al. 2006a, 709; Mazzocchi 2008, 10) or they call for a move "beyond reductionism" (Gallagher/Appenzeller 1999, 79). However, it is far from clear what exactly they argue for and what the envisioned limits of reductionism are. In this paper I claim that the current discussions about reductionism in the life sciences, which focus on methodological and explanatory issues, leave the concepts of a reductive method and a reductive explanation too unspecified. In order to fill this gap and to clarify what the limits of reductionism are I identify three reductive methods that are crucial in the current practice of the life sciences: decomposition, focusing on internal factors, and studying parts in isolation. Furthermore, I argue that reductive explanations in the life sciences exhibit three characteristics: first, they refer only to factors at a lower level than the phenomenon at issue, second, they focus on internal factors and thus ignore or simplify the environment of a system, and, third, they cite only the parts of a system in isolation.
Taking the lead from orthodox quantum theory, I will introduce a handy generalization of the Boolean approach to propositions and questions: the orthoalgebraic framework. I will demonstrate that this formalism relates to a formal theory of questions (or 'observables' in the physicist's jargon). This theory allows the formulation of attitude questions, which are normally non-commuting, i.e., the ordering of the questions affects the answering behavior for attitude questions. Further, it allows the expression of conditional questions such as "If Mary reads the book, will she recommend it to Peter?", and thus gives the framework the semantic power of raising issues and being informative at the same time. In the case of commuting observables, there are close similarities between the orthoalgebraic approach to questions and the Jäger/Hulstijn approach to question semantics. However, there are also differences between the two approaches even in the case of commuting observables. The main difference is that the Jäger/Hulstijn approach relates to a partition theory of questions whereas the orthoalgebraic approach relates to a 'decorated' partition theory (i.e. the elements of the partition are decorated by certain semantic values). Surprisingly, the orthoalgebraic approach is able to overcome most of the difficulties of the Jäger/Hulstijn approach. Furthermore, the general approach is suitable for describing the different types of (non-commutative) attitude questions as investigated in modern survey research. In conclusion, I suggest that an active dialogue between the traditional model-theoretic approaches to semantics and the orthoalgebraic paradigm is mandatory.
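The order effects the abstract appeals to can be illustrated numerically. The following is a minimal sketch, not the paper's orthoalgebraic formalism: two hypothetical attitude questions are modeled as non-commuting projectors in a two-dimensional real Hilbert space, and the probability of a yes-yes answer sequence depends on the order in which the questions are posed.

```python
# A minimal sketch (an assumption, not the paper's framework): questions as
# projectors, answer probabilities via sequential (Lüders-style) projection.
import numpy as np

def projector(angle):
    """Projector onto the 1-d subspace spanned by (cos a, sin a)."""
    v = np.array([np.cos(angle), np.sin(angle)])
    return np.outer(v, v)

A = projector(0.0)          # hypothetical question A
B = projector(np.pi / 4)    # hypothetical question B, chosen not to commute with A

state = np.array([np.cos(0.3), np.sin(0.3)])  # toy belief state, unit vector

# Probability of answering "yes" to A and then "yes" to B, versus the reverse order.
p_A_then_B = np.linalg.norm(B @ A @ state) ** 2
p_B_then_A = np.linalg.norm(A @ B @ state) ** 2

print(p_A_then_B, p_B_then_A)       # the two values differ: an order effect
print(np.allclose(A @ B, B @ A))    # False: the two questions do not commute
```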
The neurosciences not only challenge assumptions about the mind's place in the natural world but also urge us to reconsider its role in the normative world. Based on mind-brain dualism, the law affords only one-sided protection: it systematically protects bodies and brains, but only fragmentarily minds and mental states. The fundamental question, in what ways people may legitimately change the mental states of others, is largely unexplored in legal thinking. With novel technologies both to intervene into minds and to detect mental activity, the law should, we suggest, introduce stand-alone protection for the inner sphere of persons. We shall address some metaphysical questions concerning physical and mental harm and demonstrate gaps in current doctrines, especially in regard to manipulative interferences with decision-making processes. We then outline some reasons for the law to recognize a human right to mental liberty and propose elements of a novel criminal offence proscribing severe interventions into other minds.
This book radically simplifies Montague Semantics and generalizes the theory by basing it on a partial higher order logic. The resulting theory is a synthesis of Montague Semantics and Situation Semantics. In the late sixties Richard Montague developed the revolutionary idea that we can understand the concept of meaning in ordinary languages much in the same way as we understand the semantics of logical languages. Unfortunately, however, he formalized his idea in an unnecessarily complex way - two outstanding researchers in the field even compared his work to a `Rube Goldberg machine.' Muskens' work does away with such unnecessary complexities, obtains a streamlined version of the theory, shows how partialising the theory automatically provides us with the most central concepts of Situation Semantics, and offers a simple logical treatment of propositional attitude verbs, perception verbs and proper names.
_Heidegger's Hidden Sources_ documents for the first time Heidegger's remarkable debt to East Asian philosophy. In this groundbreaking study, Reinhard May shows conclusively that Martin Heidegger borrowed some of the major ideas of his philosophy - on occasion almost word for word - from German translations of Chinese Daoist and Zen Buddhist classics. The discovery of this astonishing appropriation of non-Western sources will have important consequences for future interpretations of Heidegger's work. Moreover, it shows Heidegger as a pioneer of comparative philosophy and transcultural thinking.
In this paper I argue that it is finally time to move beyond the Nagelian framework and to break new ground in thinking about epistemic reduction in biology. I will do so not by simply repeating all the old objections that have been raised against Ernest Nagel's classical model of theory reduction. Rather, I grant that a proponent of Nagel's approach can handle several of these problems, but argue that, nevertheless, Nagel's general way of thinking about epistemic reduction in terms of theories and their logical relations is entirely inadequate with respect to what is going on in actual biological research practice.
Contents
1 Introduction – Points of Contact between Biology and History, Marie I. Kaiser and Daniel Plenge
Part I: General Issues on Explanation
2 The Ontic Account of Scientific Explanation, Carl F. Craver
Part II: Explanation in the Biological Sciences
3 Causal Graphs and Biological Mechanisms, Alexander Gebharter and Marie I. Kaiser
4 Semiotic Explanation in the Biological Sciences, Ulrich Krohs
5 Mechanisms, Pathomechanisms, and Disease in Scientific Clinical Medicine, Gerhard Müller-Strahl
6 The Generalizations of Biology: Historical and Contingent?, Alexander Reutlinger
7 Evolutionary Explanations and the Role of Mechanisms, Gerhard Schurz
Part III: Explanation in the Historical Sciences
8 Explaining Roman History – A Case Study, Stephan Berry
9 Causal Explanation and Historical Meaning: How to Solve the Problem of the Specific Historical Relation between Events, Doris Gerber
10 Do Historians Study the Mechanisms of History? A Sketch, Daniel Plenge
11 Philosophy of History – Metaphysics and Epistemology, Oliver R. Scholz
12 Causal Explanations of Historical Trends, Derek D. Turner
Part IV: Bridging the Two Disciplines
13 Aspects of Human Historiographic Explanation: A View from the Philosophy of Science, Stuart Glennan
14 History and the Sciences, Philip Kitcher and Daniel Immerwahr
15 Explanation and Intervention in Coupled Human and Natural Systems, Daniel Steel
16 Biology and Natural History: What Makes the Difference, Aviezer Tucker
We explore the different meanings of "quantum uncertainty" contained in Heisenberg's seminal paper from 1927, and also some of the precise definitions that were developed later. We recount the controversy about "Anschaulichkeit", visualizability of the theory, which Heisenberg claims to resolve. Moreover, we consider Heisenberg's programme of operational analysis of concepts, in which he sees himself as following Einstein. Heisenberg's work is marked by the tensions between semiclassical arguments and the emerging modern quantum theory, between intuition and rigour, and between shaky arguments and overarching claims. Nevertheless, the main message can be taken into the new quantum theory, and can be brought into the form of general theorems. They come in two kinds, not distinguished by Heisenberg. These are, on one hand, constraints on preparations, like the usual textbook uncertainty relation, and, on the other, constraints on joint measurability, including trade-offs between accuracy and disturbance.
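As an illustration of the first kind of theorem, a constraint on preparations, here is a small numerical check of the Robertson-style textbook uncertainty relation for two spin observables; the state psi is an arbitrary toy preparation chosen for the example, not anything from the paper.

```python
# A small numerical check (my illustration) of the preparation uncertainty
# relation  Delta(A) * Delta(B) >= |<[A, B]>| / 2  for two spin observables.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)     # Pauli X
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)  # Pauli Y

# Toy preparation: an arbitrary normalized qubit state (an assumption).
psi = np.array([np.cos(0.4), np.exp(1j * 0.7) * np.sin(0.4)], dtype=complex)

def expval(op, state):
    return np.vdot(state, op @ state).real

def spread(op, state):
    return np.sqrt(expval(op @ op, state) - expval(op, state) ** 2)

lhs = spread(sx, psi) * spread(sy, psi)
comm = sx @ sy - sy @ sx
rhs = 0.5 * abs(np.vdot(psi, comm @ psi))

print(lhs, rhs, lhs >= rhs - 1e-12)   # the relation holds for this preparation
```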
Assuming an essential difference between scientific data and phenomena, this paper argues for the view that we have to understand how empirical findings get transformed into scientific phenomena. The work of scientists is seen as largely consisting in constructing these phenomena which are then utilized in more abstract theories. It is claimed that these matters are of importance for discussions of theory choice and progress in science. A case study is presented as a starting point: paleomagnetism and the use of paleomagnetic data in early discussions of continental drift. Some general features of this study are presented in formalized language. It is suggested that the presentation given is particularly suited for a semantic conception of theories. Even though the construction of scientific phenomena is the main topic of this paper, the view presented here is more adapted to realism than social constructivism.
Vector models of language are based on the contextual aspects of words and how they co-occur in text. Truth conditional models focus on the logical aspects of language, the denotations of phrases, and their compositional properties. In the latter approach the denotation of a sentence determines its truth conditions and can be taken to be a truth value, a set of possible worlds, a context change potential, or similar. In this short paper, we develop a vector semantics for language based on the simply typed lambda calculus. Our semantics uses techniques familiar from the truth conditional tradition and is based on a form of dynamic interpretation inspired by Heim's context updates.
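A very rough sketch of the kind of setup the abstract gestures at, under the assumption that noun meanings are vectors, predicate meanings are linear maps, and composition mirrors function application in the simply typed lambda calculus; the lexical items, dimensions, and random values are invented for illustration and none of this is the paper's actual system.

```python
# A minimal sketch (an assumption, not the paper's semantics): type-e meanings
# are vectors, type e -> t meanings are linear maps, composition is application.
import numpy as np

dim = 4
rng = np.random.default_rng(0)

mary  = rng.normal(size=dim)        # hypothetical distributional noun vectors
peter = rng.normal(size=dim)

# A hypothetical intransitive verb as a linear map into a one-dimensional
# "sentence space" (a stand-in for a truth-value-like type).
runs = rng.normal(size=(1, dim))

def apply(fun, arg):
    """Function application: the sole mode of composition in this sketch."""
    return fun @ arg

print(apply(runs, mary))    # vector meaning of "Mary runs"
print(apply(runs, peter))   # vector meaning of "Peter runs"
```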
A logic is called higher order if it allows for quantification over higher order objects, such as functions of individuals, relations between individuals, functions of functions, relations between functions, etc. Higher order logic began with Frege, was formalized in Russell and in Whitehead and Russell early in the previous century, and received its canonical formulation in Church. While classical type theory has long since been overshadowed by set theory as a foundation of mathematics, recent decades have shown remarkable comebacks in the fields of mechanized reasoning (see, e.g., Benzmüller ...).
In this paper we define intensional models for the classical theory of types, thus arriving at an intensional type logic ITL. Intensional models generalize Henkin's general models and have a natural definition. As a class they do not validate the axiom of Extensionality. We give a cut-free sequent calculus for type theory and show completeness of this calculus with respect to the class of intensional models via a model existence theorem. After this we turn our attention to applications. Firstly, it is argued that, since ITL is truly intensional, it can be used to model ascriptions of propositional attitude without predicting logical omniscience. In order to illustrate this a small fragment of English is defined and provided with an ITL semantics. Secondly, it is shown that ITL models contain certain objects that can be identified with possible worlds. Essential elements of modal logic become available within classical type theory once the axiom of Extensionality is given up.
The influential Berkeley theoretical physicist Geoffrey Chew renounced the reigning approach to the study of subatomic particles in the early 1960s. The standard approach relied on a rigid division between elementary and composite particles. Partly on the basis of his new interpretation of Feynman diagrams, Chew called instead for a "nuclear democracy" that would erase this division, treating all nuclear particles on an equal footing. In developing his rival approach, which came to dominate studies of the strong nuclear force throughout the 1960s, Chew drew on intellectual resources culled from his own political activities and his attempts to reform how graduate students in physics would be trained.
In this paper we consider the theory of predicate logics in which the principle of Bivalence or the principle of Non-Contradiction or both fail. Such logics are partial or paraconsistent or both. We consider sequent calculi for these logics and prove Model Existence. For L4, the most general logic under consideration, we also prove a version of the Craig-Lyndon Interpolation Theorem. The paper shows that many techniques used for classical predicate logic generalise to partial and paraconsistent logics once the right set-up is chosen. Our logic L4 has a semantics that also underlies Belnap's four-valued logic and is related to the logic of bilattices. L4 is in focus most of the time, but it is also shown how results obtained for L4 can be transferred to several variants.
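The four truth values that such partial and paraconsistent logics build on can be tabulated directly. The following sketch is my own illustration of the standard Belnap-style values, not the sequent calculi of the paper: each value records separately whether there is evidence for truth and evidence for falsity.

```python
# T = (True, False), F = (False, True), N = (False, False), B = (True, True):
# told true only, told false only, told neither, told both.
VALUES = {"T": (True, False), "F": (False, True),
          "N": (False, False), "B": (True, True)}
NAMES = {v: k for k, v in VALUES.items()}

def neg(a):
    t, f = VALUES[a]
    return NAMES[(f, t)]                 # negation swaps evidence for and against

def conj(a, b):
    (t1, f1), (t2, f2) = VALUES[a], VALUES[b]
    return NAMES[(t1 and t2, f1 or f2)]  # true if both true, false if either false

for x in VALUES:
    for y in VALUES:
        print(x, "and", y, "=", conj(x, y), "   not", x, "=", neg(x))
```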
This paper develops a relational---as opposed to a functional---theory of types. The theory is based on Hilbert and Bernays' eta operator plus the identity symbol, from which Church's lambda and the other usual operators are then defined. The logic is intended for use in the semantics of natural language.
In this paper we discuss a new perspective on the syntax-semantics interface. Semantics, in this new set-up, is not 'read off' from Logical Forms as in mainstream approaches to generative grammar. Nor is it assigned to syntactic proofs using a Curry-Howard correspondence as in versions of the Lambek Calculus, or read off from f-structures using Linear Logic as in Lexical-Functional Grammar (LFG, Kaplan & Bresnan). All such approaches are based on the idea that syntactic objects (trees, proofs, f-structures) are somehow prior and that semantics must be parasitic on those syntactic objects. We challenge this idea and develop a grammar in which syntax and semantics are treated in a strictly parallel fashion. The grammar will have many ideas in common with the (converging) frameworks of categorial grammar and LFG, but its treatment of the syntax-semantics interface is radically different. Also, although the meaning component of the grammar is a version of Montague semantics and although there are obvious affinities between Montague's conception of grammar and the work presented here, the grammar is not compositional, in the sense that composition of meaning need not follow surface structure.
This paper shows how the dynamic interpretation of natural language introduced in work by Hans Kamp and Irene Heim can be modeled in classical type logic. This provides a synthesis between Richard Montague's theory of natural language semantics and the work by Kamp and Heim.
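A minimal sketch of the dynamic idea itself, assuming the usual toy donkey-sentence lexicon; this illustrates Kamp/Heim-style context change in plain Python and is not the paper's type-logical encoding: a context is a set of assignments, an indefinite extends each assignment with a new discourse referent, and a predicate filters the assignments that survive.

```python
# Toy domain and extensions (assumptions made up for the example).
DOMAIN = {"pedro", "chiquita", "maria"}
FARMER = {("pedro",)}
DONKEY = {("chiquita",)}
OWNS   = {("pedro", "chiquita")}

def introduce(ref):
    """'a <noun>': extend every assignment with every possible value for ref."""
    def update(context):
        return [dict(g, **{ref: d}) for g in context for d in DOMAIN]
    return update

def predicate(ext, *refs):
    """Keep only the assignments that satisfy the condition."""
    def update(context):
        return [g for g in context if tuple(g[r] for r in refs) in ext]
    return update

def run(updates, context=[{}]):
    for u in updates:
        context = u(context)
    return context

# "A farmer owns a donkey": the surviving assignments witness the indefinites.
out = run([introduce("x"), predicate(FARMER, "x"),
           introduce("y"), predicate(DONKEY, "y"),
           predicate(OWNS, "x", "y")])
print(out)   # [{'x': 'pedro', 'y': 'chiquita'}]
```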
We present Logical Description Grammar (LDG), a model of grammar and the syntax-semantics interface based on descriptions in elementary logic. A description may simultaneously describe the syntactic structure and the semantics of a natural language expression, i.e., the describing logic talks about the trees and about the truth-conditions of the language described. Logical Description Grammars offer a natural way of dealing with underspecification in natural language syntax and semantics. If a logical description (up to isomorphism) has exactly one tree plus truth-conditions as a model, it completely specifies that grammatical object. More common is the situation, corresponding to underspecification, in which there is more than one model. A situation in which there are no models corresponds to an ungrammatical input.
The paper develops Lambda Grammars, a form of categorial grammar that, unlike other categorial formalisms, is non-directional. Linguistic signs are represented as sequences of lambda terms and are combined with the help of linear combinators.
Cultural forces such as film create and reinforce rigidly defined images of a doctor's identity for both the public and for medical students. The authoritarian and hierarchical institution of medical school also encourages students to adopt rigidly defined professional identities. This restrictive identity helps to perpetuate the power of the patriarchy, limits uniqueness, squelches inquisitiveness, and damages one's self-confidence. This paper explores the construction of a physician's identity using cultural theorists' psychoanalytic analyses of gender and race as a framework of analysis. Cultural theorists' politically motivated work provides an excellent point of departure for destabilizing parts of the authoritarian medical hierarchy that can damage a student's professional development. Drawing on such discourse, this paper examines the processes by which a doctor's identity becomes rigidly defined and fixed by daily training. It finally proposes a way for a medical student to extricate himself from the current definitions of this identity and create a broader, more malleable concept of professional identity by defining himself from outside of, rather than through, difference.
Modeling mechanisms is central to the biological sciences – for purposes of explanation, prediction, extrapolation, and manipulation. A closer look at the philosophical literature reveals that mechanisms are predominantly modeled in a purely qualitative way. That is, mechanistic models are conceived of as representing how certain entities and activities are spatially and temporally organized so that they bring about the behavior of the mechanism in question. Although this adequately characterizes how mechanisms are represented in biology textbooks, contemporary biological research practice shows the need for quantitative, probabilistic models of mechanisms, too. In this paper we argue that the formal framework of causal graph theory is well-suited to provide us with models of biological mechanisms that incorporate quantitative and probabilistic information. On the basis of an example from contemporary biological practice, namely feedback regulation of fatty acid biosynthesis in Brassica napus, we show that causal graph theoretical models can account for feedback as well as for the multi-level character of mechanisms. However, we do not claim that causal graph theoretical representations of mechanisms are advantageous in all respects and should replace common qualitative models. Rather, we endorse the more balanced view that causal graph theoretical models of mechanisms are useful for some purposes, while being insufficient for others.
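To indicate what a quantitative, time-indexed rendering of a feedback mechanism can look like, here is a toy structural-equation simulation of a negative feedback loop; the variables, coefficients, and noise terms are invented for illustration and this is not the Brassica napus model discussed in the paper.

```python
# A toy sketch (my illustration): a negative feedback loop rendered as a
# time-indexed causal structure, i.e. structural equations in which enzyme
# activity at t raises the product at t, and the accumulated product lowers
# enzyme activity at t+1, plus small Gaussian noise.
import random

random.seed(1)

def simulate(steps=20, enzyme=1.0, product=0.0):
    trajectory = []
    for _ in range(steps):
        # product is produced in proportion to current enzyme activity
        product = product + 0.5 * enzyme + random.gauss(0, 0.01)
        # accumulated product represses enzyme activity (negative feedback)
        enzyme = max(0.0, 1.0 - 0.3 * product) + random.gauss(0, 0.01)
        trajectory.append((round(enzyme, 3), round(product, 3)))
    return trajectory

for t, (e, p) in enumerate(simulate()):
    print(t, e, p)   # enzyme activity declines as the product accumulates
```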
The term “vagueness” describes a property of natural concepts, which normally have fuzzy boundaries, admit borderline cases, and are susceptible to Zeno's sorites paradox. We will discuss the psychology of vagueness, especially experiments investigating the judgment of borderline cases and contradictions. In the theoretical part, we will propose a probabilistic model that describes the quantitative characteristics of the experimental findings and extends Alxatib and Pelletier's theoretical analysis. The model is based on a Hopfield network for predicting truth values. Powerful as this classical perspective is, we show that it falls short of providing an adequate coverage of the relevant empirical results. In the final part, we will argue that a substantial modification of the analysis put forward by Alxatib and Pelletier and its probabilistic pendant is needed. The proposed modification replaces the standard notion of probabilities by quantum probabilities. The crucial phenomenon of borderline contradictions can be explained then as a quantum interference phenomenon.
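The role of interference can be indicated with a small numerical sketch; this is my own illustration of the general quantum-probabilistic idea, not the Hopfield-based model of the paper: "tall" and "not tall" are modeled as non-commuting projectors, so a borderline state can assent to the conjunction with nonzero probability, whereas the classical probability of a contradiction is zero.

```python
# A minimal sketch (an assumption, not the paper's model): borderline
# contradictions via sequential projection onto non-commuting "tall" and
# "not tall" directions in a 2-d real Hilbert space.
import numpy as np

def projector(angle):
    v = np.array([np.cos(angle), np.sin(angle)])
    return np.outer(v, v)

TALL     = projector(0.0)            # hypothetical "tall" direction
NOT_TALL = projector(np.pi * 0.45)   # hypothetical "not tall" direction,
                                     # deliberately not the orthocomplement

borderline = np.array([np.cos(np.pi * 0.225), np.sin(np.pi * 0.225)])

# Sequential probability of affirming "tall" and then "not tall".
p_contradiction = np.linalg.norm(NOT_TALL @ TALL @ borderline) ** 2
print(p_contradiction)   # clearly > 0, unlike the classical value of 0
```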
This paper introduces λ-grammar, a form of categorial grammar that has much in common with LFG. Like other forms of categorial grammar, λ-grammars are multi-dimensional and their components are combined in a strictly parallel fashion. Grammatical representations are combined with the help of linear combinators, closed pure λ-terms in which each abstractor binds exactly one variable. Mathematically this is equivalent to employing linear logic, in use in LFG for semantic composition, but the method seems more practicable.
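The linearity requirement on the combinators is easy to make concrete. Here is a short sketch (my own illustration, not part of the formalism itself) that represents lambda terms as nested tuples and checks that every abstractor binds exactly one occurrence of its variable, the defining property of the linear combinators used to put grammatical representations together.

```python
def occurrences(term, var):
    """Count free occurrences of var in a term."""
    kind = term[0]
    if kind == "var":
        return 1 if term[1] == var else 0
    if kind == "app":
        return occurrences(term[1], var) + occurrences(term[2], var)
    if kind == "lam":
        return 0 if term[1] == var else occurrences(term[2], var)
    raise ValueError(kind)

def is_linear(term):
    """A term is linear if each abstractor binds exactly one occurrence."""
    kind = term[0]
    if kind == "var":
        return True
    if kind == "app":
        return is_linear(term[1]) and is_linear(term[2])
    if kind == "lam":
        return occurrences(term[2], term[1]) == 1 and is_linear(term[2])
    raise ValueError(kind)

# lambda f. lambda x. lambda y. (f x) y  -- a linear combinator
C = ("lam", "f", ("lam", "x", ("lam", "y",
        ("app", ("app", ("var", "f"), ("var", "x")), ("var", "y")))))
# lambda x. x x  -- not linear: x is bound to two occurrences
W = ("lam", "x", ("app", ("var", "x"), ("var", "x")))

print(is_linear(C), is_linear(W))   # True False
```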