Over the last decade the modeling and storage of biological data have been topics of wide interest for scientists engaged in biological and biomedical research. Currently most data is still stored in text files, which leads to data redundancies and file chaos. In this paper we show how to use relational modeling techniques and relational database technology for modeling and storing biological sequence data, i.e. for data maintained in collections like EMBL or SWISS-PROT, to better serve the needs of these application domains.
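As a rough illustration of the approach the abstract describes, a minimal relational schema for sequence records might look as follows. This is a sketch only; the table and column names are hypothetical and not taken from the paper.

```python
import sqlite3

# A minimal, hypothetical relational schema for sequence records of the
# kind held in flat-file collections such as EMBL or SWISS-PROT.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE sequence_entry (
        accession   TEXT PRIMARY KEY,   -- unique identifier, avoids redundant copies
        organism    TEXT NOT NULL,
        description TEXT,
        residues    TEXT NOT NULL       -- the raw sequence itself
    )
""")
conn.execute(
    "INSERT INTO sequence_entry VALUES (?, ?, ?, ?)",
    ("P12345", "Homo sapiens", "example protein", "MKTAYIAKQR"),
)

# Declarative retrieval replaces ad hoc text-file parsing.
row = conn.execute(
    "SELECT organism, length(residues) FROM sequence_entry WHERE accession = 'P12345'"
).fetchone()
print(row)  # ('Homo sapiens', 10)
```

The point of the relational design is that integrity constraints (here, the primary key) and declarative queries replace the manual bookkeeping that flat text files require.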
The elucidations and regimentations of grounding offered in the literature standardly take it to be a necessary connection. In particular, authors often assert, or at least assume, that if some facts ground another fact, then the obtaining of the former necessitates the latter; and moreover, that grounding is an internal relation, in the sense of being necessitated by the existence of the relata. In this article, I challenge the necessitarian orthodoxy about grounding by offering two prima facie counterexamples. First, some physical facts may ground a certain phenomenal fact without necessitating it; and they may co-exist with the latter without grounding it. Second, some instantiations of categorical properties may ground the instantiation of a dispositional one without necessitating it; and they may co-exist without grounding it. After arguing that these may be genuine counterexamples, I ask whether there are modal constraints on grounding that are not threatened by them. I propose two: that grounding supervenes on what facts there are, and that every grounded fact supervenes on what grounds there are. Finally, I attempt to provide a rigorous formulation of the latter supervenience claim and discuss some technical questions that arise if we allow descending grounding chains of transfinite length.
It is often claimed that the greatest value of the Bayesian framework in cognitive science consists in its unifying power. Several Bayesian cognitive scientists assume that unification is obviously linked to explanatory power. But this link is not obvious, as unification in science is a heterogeneous notion, which may have little to do with explanation. While a crucial feature of most adequate explanations in cognitive science is that they reveal aspects of the causal mechanism that produces the phenomenon to be explained, the kind of unification afforded by the Bayesian framework to cognitive science does not necessarily reveal aspects of a mechanism. Bayesian unification, nonetheless, can place fruitful constraints on causal–mechanical explanation. Contents: 1 Introduction; 2 What a Great Many Phenomena Bayesian Decision Theory Can Model; 3 The Case of Information Integration; 4 How Do Bayesian Models Unify?; 5 Bayesian Unification: What Constraints Are There on Mechanistic Explanation? (5.1 Unification constrains mechanism discovery; 5.2 Unification constrains the identification of relevant mechanistic factors; 5.3 Unification constrains confirmation of competitive mechanistic models); 6 Conclusion; Appendix.
The emerging consensus in the philosophy of cognition is that cognition is situated, i.e., dependent upon or co-constituted by the body, the environment, and/or the embodied interaction with it. But what about emotions? If the brain alone cannot do much thinking, can the brain alone do some emoting? If not, what else is needed? Do (some) emotions (sometimes) cross an individual's boundary? If so, what kinds of supra-individual systems can be bearers of affective states, and why? And does that make emotions 'embedded' or 'extended' in the sense cognition is said to be embedded and extended? Section 2 shows why it is important to understand in which sense body, environment, and our embodied interaction with the world contribute to our affective life. Section 3 introduces some key concepts of the debate about situated cognition. Section 4 draws attention to an important disanalogy between cognition and emotion with regard to the role of the body. Section 5 shows under which conditions a contribution by the environment results in non-trivial cases of 'embedded' emotions. Section 6 is concerned with affective phenomena that seem to cross the organismic boundaries of an individual, in particular with the idea that emotions are 'extended' or 'distributed'.
This comprehensive new book introduces the core history of phenomenology and assesses its relevance to contemporary psychology, philosophy of mind, and cognitive science. From critiques of artificial intelligence research programs to ongoing work on embodiment and enactivism, the authors trace how phenomenology has produced a valuable framework for analyzing cognition and perception, whose impact on contemporary psychological and scientific research and on philosophical debates continues to grow. The first part of _An Introduction to Phenomenology_ is an extended overview of the history and development of phenomenology, looking at its key thinkers, focusing particularly on Husserl, Heidegger and Merleau-Ponty, as well as its cultural and intellectual precursors. In the second half Chemero and Käufer turn their attention to the contemporary interpretations and uses of phenomenology in cognitive science, showing that phenomenology is a living source of inspiration in contemporary interdisciplinary studies of the mind. Käufer and Chemero have written a clear, jargon-free account of phenomenology, providing abundant examples and anecdotes to illustrate and to entertain. This book is an ideal introduction to phenomenology and cognitive science for the uninitiated, as well as for philosophy and psychology students keen to deepen their knowledge.
I suggest a way of extending Stalnaker’s account of assertion to allow for centered content. In formulating his account, Stalnaker takes the content of assertion to be uncentered propositions: entities that are evaluated for truth at a possible world. I argue that the content of assertion is sometimes centered: the content is evaluated for truth at something within a possible world. I consider Andy Egan’s proposal for extending Stalnaker’s account to allow for assertions with centered content. I argue that Egan’s account does not succeed. Instead, I propose an account on which the contents of assertion are identified with sets of multi-centered worlds. I argue that such a view not only provides a plausible account of how assertions can have centered content, but also preserves Stalnaker’s original insight that successful assertion involves the reduction of shared possibilities.
We define a notion of difference-making for partial grounds of a fact in rough analogy to existing notions of difference-making for causes of an event. Using orthodox assumptions about ground, we show that it induces a non-trivial division with examples of partial grounds on both sides. We then demonstrate the theoretical fruitfulness of the notion by applying it to the analysis of a certain kind of putative counter-example to the transitivity of ground recently described by Jonathan Schaffer. First, we show that our conceptual apparatus of difference-making enables us to give a much clearer description than Schaffer does of what makes the relevant instances of transitivity appear problematic. Second, we suggest that difference-making is best seen as a mark of good grounding-based explanations rather than a necessary condition on grounding, and argue that this enables us to deal with the counter-example in a satisfactory way. Along the way, we show that Schaffer's own proposal for salvaging a form of transitivity by moving to a contrastive conception of ground is unsuccessful. We conclude by sketching some natural strategies for extending our proposal to a more comprehensive account of grounding-based explanations.
Metaphysical grounding is standardly taken to be irreflexive: nothing grounds itself. Kit Fine has presented some puzzles that appear to contradict this principle. I construct a particularly simple variant of those puzzles that is independent of several of the assumptions required by Fine, instead employing quantification into sentence position. Various possible responses to Fine's puzzles thus turn out to apply only in a restricted range of cases.
Analytic philosophy is once again in a methodological frame of mind. Nowhere is this more evident than in metaphysics, whose practitioners and historians are actively reflecting on the nature of ontological questions, the status of their answers, and the relevance of contributions both from other areas within philosophy and beyond. Such reflections are hardly new: the debate between Willard van Orman Quine and Rudolf Carnap about how to understand and resolve ontological questions is widely seen as a turning point in twentieth-century analytic philosophy. And indeed, this volume is occasioned by the fact that the deflationary approach to metaphysics advocated by Carnap in that debate is once again attracting considerable interest and support. Eleven original essays by many of today's leading voices in metametaphysics aim to deepen our understanding of Carnap's contributions to metaontology and to explore how this legacy might be mined for insights into the contemporary debate.
There is currently disagreement about whether the phenomenon of first-person, or de se, thought motivates a move towards special kinds of contents. Some take the conclusion that traditional propositions are unable to serve as the content of de se belief to be old news, successfully argued for in a number of influential works several decades ago. Recently, some philosophers have challenged the view that there exist uniquely de se contents, claiming that most of the philosophical community has been under the grip of an attractive but unmotivated myth. At the very least, this latter group has brought into question the arguments in favor of positing special kinds of content for de se belief; I think they have successfully shown that these arguments are not as conclusive, or fully articulated, as many have taken them to be. In this paper I will address these challenges directly and I will present and defend an argument for the conclusion that the phenomenon of de se thought does indeed motivate the move to a special kind of content, content that is uniquely de se. First, I characterize a notion of de se belief that is neutral with respect to friends and foes of uniquely de se content. I then argue for a determination thesis relating de se belief to belief content: that there is no difference in de se belief without a difference in belief content. I argue that various proposals for rejecting this determination thesis are unsuccessful. In the last part of the paper, I employ this determination thesis to argue for the existence of a type of belief content that is uniquely de se.
In his 2010 paper ‘Grounding and Truth-Functions’, Fabrice Correia has developed the first and so far only proposal for a logic of ground based on a worldly conception of facts. In this paper, we show that the logic allows the derivation of implausible grounding claims. We then generalize these results and draw some conclusions concerning the structural features of ground and its associated notion of relevance, which has so far not received the attention it deserves.
Simulation techniques, especially those implemented on a computer, are frequently employed in natural as well as in social sciences with considerable success. There is mounting evidence that the "model-building era" (J. Niehans) that dominated the theoretical activities of the sciences for a long time is about to be succeeded or at least lastingly supplemented by the "simulation era". But what exactly are models? What is a simulation and what is the difference and the relation between a model and a simulation? These are some of the questions addressed in this article. I maintain that the most significant feature of a simulation is that it allows scientists to imitate one process by another process. "Process" here refers solely to a temporal sequence of states of a system. Given the observation that processes are dealt with by all sorts of scientists, it is apparent that simulations prove to be a powerful interdisciplinarily acknowledged tool. Accordingly, simulations are best suited to investigate the various research strategies in different sciences more carefully. To this end, I focus on the function of simulations in the research process. Finally, a somewhat detailed case-study from nuclear physics is presented which, in my view, illustrates elements of a typical simulation in physics.
Bayesian epistemology addresses epistemological problems with the help of the mathematical theory of probability. It turns out that the probability calculus is especially suited to represent degrees of belief (credences) and to deal with questions of belief change, confirmation, evidence, justification, and coherence. Compared to the informal discussions in traditional epistemology, Bayesian epistemology allows for a more precise and fine-grained analysis which takes the gradual aspects of these central epistemological notions into account. Bayesian epistemology therefore complements traditional epistemology; it does not replace it or aim at replacing it.
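The core formal move the abstract alludes to, representing belief change via conditionalization on evidence, can be sketched as follows. The numbers are purely illustrative, not taken from the text.

```python
# Bayesian conditionalization: the agent's new credence in hypothesis H
# after learning evidence E is given by Bayes' theorem.
def conditionalize(prior_h, lik_e_given_h, lik_e_given_not_h):
    """Return P(H | E) = P(E | H) P(H) / P(E) via the law of total probability."""
    p_e = lik_e_given_h * prior_h + lik_e_given_not_h * (1 - prior_h)
    return lik_e_given_h * prior_h / p_e

# Illustrative degrees of belief: prior 0.3, evidence likely under H (0.9),
# unlikely otherwise (0.2).
posterior = conditionalize(0.3, 0.9, 0.2)
print(round(posterior, 3))  # 0.659
```

This is the sense in which the probability calculus makes confirmation gradual: the evidence raises the credence in H from 0.3 to roughly 0.66 rather than settling the question outright.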
The concept of supervenience and a regimented concept of grounding are often taken to provide rival explications of pre-theoretical concepts of dependence and determination. Friends of grounding typically point out that supervenience claims do not entail corresponding grounding claims. Every fact supervenes on itself, but is not grounded in itself, and the fact that a thing exists supervenes on the fact that its singleton exists, but is not grounded in it. Common lore has it, though, that grounding claims do entail corresponding supervenience claims. In this article, I show that this assumption is problematic. On one way of understanding it, the corresponding supervenience claim is just an entailment claim under a different name. On another way of understanding it, the corresponding claim is a distinctive supervenience claim, but its specification gives rise to what I call the "reference type problem": to associate the classes of facts that are the relata of grounding with the types of facts that are the relata of supervenience. However it is understood, supervenience rules out prima facie possibilities: alien realizers, blockers, heterogeneous realizers, floaters, and heterogeneous blockers. Instead of being rival explications of one and the same pre-theoretical concept, grounding and supervenience may be complementary concepts capturing different aspects of determination and dependence.
In the past decade well-designed research studies have shown that the practice of collaborative philosophical inquiry in schools can have marked cognitive and social benefits. Student academic performance improves, and so too does the social dimension of schooling. These findings are timely, as many countries in Asia and the Pacific are now contemplating introducing Philosophy into their curricula. This paper gives a brief history of collaborative philosophical inquiry before surveying the evidence as to its effectiveness. The evidence is canvassed under two categories: schooling and thinking skills; and schooling, socialisation and values. In both categories there is clear evidence that even short-term teaching of collaborative philosophical inquiry has marked positive effects on students. The paper concludes with suggestions for further research and a final claim that the presently-available research evidence is strong enough to warrant implementing collaborative philosophical inquiry as part of a long-term policy.
The view known as animalism asserts that we are human animals—that each of us is an instance of the Homo sapiens species. The standard argument for this view is known as the thinking animal argument. But this argument has recently come under attack. So, here, a new argument for animalism is introduced. The animal ancestors argument illustrates how the case for animalism can be seen to piggyback on the credibility of evolutionary theory. Two objections are then considered and answered.
The goal of this paper is to examine moods, mostly in comparison to emotions. Nearly all of the features that allegedly distinguish moods from emotions are disputed, though. In a first section I comment on duration, intentionality, and cause in more detail, and develop intentionality as the most promising distinguishing characteristic. In a second section I will consider the huge variety of moods, ranging from shallow environmentally triggered transient moods to deep existential moods that last much longer. I will explore what their sources are, and how they impact one another, other affective processes, and our being in the world. I follow several eminent emotion researchers and try to carve out their insights, many of which seem mutually exclusive. As it turns out, most of them do not, in fact, exclude each other, but contribute to a layered picture of moods that fits well in between emotions and personality traits. Finally, I will briefly discuss what we can do with our moods.
This paper focuses on the question of how to resolve disagreement and uses the Lehrer-Wagner model as a formal tool for investigating consensual decision-making. The main result consists in a general definition of when agents treat each other as epistemic peers (Kelly 2005; Elga 2007), and a theorem vindicating the “equal weight view” to resolve disagreement among epistemic peers. We apply our findings to an analysis of the impact of social network structures on group deliberation processes, and we demonstrate their stability with the help of numerical simulations.
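The Lehrer-Wagner model the abstract refers to works by iterated weighted averaging: each agent repeatedly replaces her credence with a weighted average of everyone's credences. A minimal sketch, with purely illustrative weights and credences, might look like this:

```python
# Lehrer-Wagner consensus: each agent i updates her credence to the
# weighted average sum_j w_ij * credence_j of everyone's credences.
# Weights and starting credences below are illustrative only.
def lw_round(weights, credences):
    """Perform one round of Lehrer-Wagner updating."""
    return [sum(w * c for w, c in zip(row, credences)) for row in weights]

weights = [
    [0.6, 0.2, 0.2],   # row i: how much weight agent i gives to each colleague
    [0.2, 0.6, 0.2],
    [0.2, 0.2, 0.6],
]
credences = [0.9, 0.5, 0.1]
for _ in range(50):
    credences = lw_round(weights, credences)
print([round(c, 3) for c in credences])  # all three agents converge to 0.5
```

Because the weight matrix here is symmetric (each agent gives every peer the same weight she receives from them), the consensus is the straight arithmetic mean of the initial credences, which is the kind of situation the "equal weight view" for epistemic peers describes.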
Effective field theories have been a very popular tool in quantum physics for almost two decades. And there are good reasons for this. I will argue that effective field theories share many of the advantages of both fundamental theories and phenomenological models, while avoiding their respective shortcomings. They are, for example, flexible enough to cover a wide range of phenomena, and concrete enough to provide a detailed story of the specific mechanisms at work at a given energy scale. So will all of physics eventually converge on effective field theories? This paper argues that good scientific research can be characterised by a fruitful interaction between fundamental theories, phenomenological models and effective field theories. All of them have their appropriate functions in the research process, and all of them are indispensable. They complement each other and hang together in a coherent way which I shall characterise in some detail. To illustrate all this I will present a case study from nuclear and particle physics. The resulting view about scientific theorising is inherently pluralistic, and has implications for the debates about reductionism and scientific explanation.
The German linguist and mythologist Heymann Steinthal taught at the University of Berlin and was especially engaged with Wilhelm von Humboldt and his linguistic works. He was a co-founder of the Berliner Gesellschaft für Anthropologie, Ethnologie und Urgeschichte. This innovatory volume, published in 1855, draws a connection between the disciplines of linguistics and psychology, and further relates them to the issue of logic. The three parts of the book deal with the nature of grammar, its relation to logic and the connection of grammar and linguistics to cognitive behaviour. Finally Steinthal discusses the idea of linguistics as ethnopsychology. Pursuing this concept, he, with his brother-in-law Moritz Lazarus, co-founded the journal Zeitschrift für Völkerpsychologie und Sprachwissenschaft in 1860, thus laying the foundations for a promising new area of research.
Discussions about a replicability crisis in science have been driven by the normative claim that all of science should be replicable and the empirical claim that most of it isn’t. Recently, such crisis talk has been challenged by a new localism, which argues a) that serious problems with replicability are not a general occurrence in science and b) that replicability itself should not be treated as a universal standard. The goal of this article is to introduce this emerging strand of the debate and to discuss some of its implications and limitations. I will in particular highlight the issue of demarcation that localist accounts have to address, i.e. the question of how we can distinguish replicable science from disciplines where replicability does not apply.
Several theories of emergence will be distinguished. In particular, these are synchronic, diachronic, and weak versions of emergence. While the weaker theories are compatible with property reductionism, synchronic emergentism and strong versions of diachronic emergentism are not. Synchronic emergentism is of particular interest for the discussion of downward causation. For such a theory, a system's property is taken to be emergent if it is irreducible, i.e., if it is not reductively explainable. Furthermore, we have to distinguish two different types of irreducibility with quite different consequences: If, on the one hand, a system's property is irreducible because of the irreducibility of the behavior of the system's parts on which the property supervenes, we seem to have a case of "downward causation". This kind of downward causation does not violate the principle of the causal closure of the physical domain. If, on the other hand, a systemic property is irreducible because it is not exhaustively analyzable in terms of its causal role, downward causation is not implied. Rather, it is doubtful whether unanalyzable properties might play any causal role at all. Thus, epiphenomenalism seems to be implied. The failure to keep apart the two kinds of irreducibility has muddled recent debate about the emergence of properties considerably.
Causal queries about singular cases are ubiquitous, yet the question of how we assess whether a particular outcome was actually caused by a specific potential cause turns out to be difficult to answer. Relying on the causal power framework, Cheng and Novick () proposed a model of causal attribution intended to help answer this question. We challenge this model, both conceptually and empirically. We argue that the central problem of this model is that it treats causal powers that are probabilistically sufficient to generate the effect on a particular occasion as actual causes of the effect, and thus neglects that sufficient causal powers can be preempted in their efficacy. Also, the model does not take into account that reasoners incorporate uncertainty about the underlying general causal structure and strength of causes when making causal inferences. We propose a new measure of causal attribution and embed it into the structure induction model of singular causation. Two experiments support the model.
Among the questions to be raised under the heading of “personal identity” are these: “What are we?” (fundamental nature question) and “Under what conditions do we persist through time?” (persistence question). Against the dominant neo-Lockean approach to these questions, the view known as animalism answers that each of us is an organism of the species Homo sapiens and that the conditions of our persistence are those of animals. Beyond describing the content and historical background of animalism and its rivals, this entry explores some of the arguments for and objections to this controversial account of our nature and persistence.
Fundamental theories are hard to come by. But even if we had them, they would be too complicated to apply. Quantum chromodynamics is a case in point. This theory is supposed to govern all strong interactions, but it is extremely hard to apply and test at energies where protons, neutrons and pions are the effective degrees of freedom. Instead, scientists typically use highly idealized models such as the MIT Bag Model or the Nambu–Jona-Lasinio Model to account for phenomena in this domain, to explain them and to gain understanding. Based on these models, which typically isolate a single feature of QCD and disregard many others, scientists attempt to get a better understanding of the physics of strong interactions. But does this practice make sense? Is it justified to use these models for the purposes at hand? Interestingly, these models do not even provide an accurate description of the mass spectrum of protons, neutrons and pions and their lowest-lying excitations, despite several adjustable parameters. And yet, the models are heavily used. I'll argue that a qualitative story, which establishes an explanatory link between the fundamental theory and a model, plays an important role in model acceptance in these cases.
The aggregation of consistent individual judgments on logically interconnected propositions into a collective judgment on those propositions has recently drawn much attention. Seemingly reasonable aggregation procedures, such as propositionwise majority voting, cannot ensure an equally consistent collective conclusion. The literature on judgment aggregation refers to this problem as the discursive dilemma. In this paper, we argue that many groups not only want to reach a factually right conclusion, but also want to correctly evaluate the reasons for that conclusion. In other words, we address the problem of tracking the true situation instead of merely selecting the right outcome. We set up a probabilistic model analogous to Bovens and Rabinowicz (2006) and compare several aggregation procedures by means of theoretical results, numerical simulations and practical considerations. Among them are the premise-based, the situation-based and the distance-based procedures. Our findings confirm the conjecture in Hartmann, Pigozzi and Sprenger (2008) that the premise-based procedure is a crude, but reliable and sometimes even optimal form of judgment aggregation.
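The discursive dilemma mentioned in the abstract can be reproduced in a few lines. This is the standard textbook configuration (three judges, two premises, one conclusion), not an example taken from the paper itself:

```python
# The discursive dilemma: propositionwise majority voting over individually
# consistent judgment sets can yield an inconsistent collective judgment set.
# Three judges vote on premises p, q and on the conclusion p AND q.
judges = [
    {"p": True,  "q": True,  "p_and_q": True},   # individually consistent
    {"p": True,  "q": False, "p_and_q": False},  # individually consistent
    {"p": False, "q": True,  "p_and_q": False},  # individually consistent
]

def majority(prop):
    """True iff a strict majority of judges accepts the proposition."""
    return sum(j[prop] for j in judges) > len(judges) / 2

collective = {prop: majority(prop) for prop in ("p", "q", "p_and_q")}
print(collective)  # {'p': True, 'q': True, 'p_and_q': False} -- inconsistent!

# The premise-based procedure instead votes only on the premises and
# derives the conclusion logically, which restores consistency.
premise_based_conclusion = collective["p"] and collective["q"]
print(premise_based_conclusion)  # True
```

The collective set accepts both premises yet rejects their conjunction, which no individual judge does; the premise-based procedure avoids this by letting the majority verdicts on p and q settle the conclusion.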
Theoretical models are an important tool for many aspects of scientific activity. They are used, among other things, to structure data, to apply theories or even to construct new theories. But what exactly is a model? It turns out that there is no proper definition of the term "model" that covers all these aspects. Thus, I restrict myself here to evaluating the function of models in the research process, while using "model" in the loose way physicists do. To this end, I distinguish four kinds of models: (1) models as special theories, (2) models as a substitute for a theory, (3) toy models and (4) developmental models. I argue that models of types (3) and (4) are particularly useful in the process of theory construction. This will be demonstrated in an extended case study from high-energy physics.
Recently, some philosophers have argued that we should take quantification of any order to be a legitimate and irreducible, sui generis kind of quantification. In particular, they hold that a semantic theory for higher-order quantification must itself be couched in higher-order terms. Øystein Linnebo has criticized such views on the grounds that they are committed to general claims about the semantic values of expressions that are by their own lights inexpressible. I show that Linnebo's objection rests on the assumption of a notion of semantic value or contribution which both applies to expressions of any order, and picks out, for each expression, an extra-linguistic correlate of that expression. I go on to argue that higher-orderists can plausibly reject this assumption, by means of a hierarchy of notions they can use to describe the extra-linguistic correlates of expressions of different orders.
The concept of emergence is widely used both in the philosophy of mind and in cognitive science. In the philosophy of mind it serves to refer to seemingly irreducible phenomena; in cognitive science it is often used to refer to phenomena not explicitly programmed. There is no unique concept of emergence available that serves both purposes.
Many historians of philosophy, with all their intended praise, let the philosophers speak mere nonsense. They do not guess the purpose of the philosophers.… They cannot see beyond what the philosophers actually said, to what they really meant to say. Mou Zongsan (1909–1995) is one of the key figures of contemporary New Confucianism (當代新儒家) who to this day remains largely unknown and grossly understudied in the West. This neglect by the Western academy contrasts sharply with the ever-growing output of literature by Chinese and Taiwanese scholars in which Mou Zongsan emerges as one of the most discussed and most controversial Chinese philosophers of the twentieth century. Given this unfortunate East-West divide—as ..
In 2015 scientists called for a partial ban on genome editing in human germline cells. This call was a response to the rapid development of the CRISPR–Cas9 system, a molecular tool that allows researchers to modify genomic DNA in living organisms with high precision and ease of use. Importantly, the ban was meant to be a trust-building exercise that promises a ‘prudent’ way forward. The goal of this paper is to analyse whether the ban can deliver on this promise. To do so the focus will be put on the precedent on which the current ban is modelled, namely the Asilomar ban on recombinant DNA technology. The analysis of this case will show that the Asilomar ban was successful because of a specific two-step containment strategy it employed and that this two-step approach is also key to making the current ban work. It will be argued, however, that the Asilomar strategy cannot be transferred to human genome editing and that the current ban therefore fails to deliver on its promise. The paper will close with a reflection on the reasons for this failure and on what can be learned from it about the regulation of novel molecular tools.
A lot of research has recently been done on the topic of ground, and in particular on the logic of ground. According to a broad consensus in that debate, ground is hyperintensional in the sense that even logically equivalent truths may differ with respect to what grounds them, and what they ground. This renders pressing the question of what we may take to be the ground-theoretic content of a true statement, i.e. that aspect of the statement’s overall content to which ground is sensitive. I propose a novel answer to this question, namely that ground tracks how, rather than just by what, a statement is made true. I develop that answer in the form of a formal theory of ground-theoretic content and show how the resulting framework may be used to articulate plausible theories of ground, including in particular a popular account of the grounds of truth-functionally complex truths that has proved difficult to accommodate on alternative views of content.