In this paper I will argue that a profile of the pseudo-sciences can be gained from the scientific pretensions of the pseudo-scientist. These pretensions provide two yardsticks which together take care of the charge of scientific prejudice that any suggested demarcation of pseudo-science has to face. To demonstrate that my analysis has teeth I will apply it to Freud and modern-day Bach-kabbalists. Against Laudan I will argue that the problem of demarcation is not a pseudo-problem, though the discussion will bear out that Laudan's replacement question, namely the question whether someone's theory is well-confirmed, is not, as Lugg claimed, independent of the question as to whether that person is a pseudo-scientist. I further argue that my prototype pseudo-scientists do not have the shortcomings highlighted in Thagard's recent analysis of pseudo-science.
Dennett's "Consciousness Explained" (1991) is an inspiring but also a highly frustrating book. The line of the argument seems to be clear, but then at second sight it fades away. It turns out that Dennett uses six of the seven strategies which I discuss in my 'The Seven Strategies of the Sophisticated Pseudo-Scientist: A Look into Freud's Rhetorical Tool Box' (J. Gen. Phil. Sci., 2001). Discussing important examples of these strategies I show why "Consciousness Explained" is such a frustrating book. As the examples used do not reflect minor problems but go to the heart of the matter and concern the book's main areas of contention, it turns out that, in spite of the valuable and insightful details, Dennett's materialistic view of consciousness is supported mainly by rhetorical sleights of hand.
In my ‘Seven Sins of Pseudo-Science’ (Journal for General Philosophy of Science 1993) I argued against Grünbaum that Freud commits all Seven Sins of Pseudo-Science. Yet how does Freud manage to fool many people, including such a sophisticated person as Grünbaum? My answer is that Freud is a sophisticated pseudo-scientist, using all Seven Strategies of the Sophisticated Pseudo-Scientist to keep up appearances, to wit, (1) the Humble Empiricist, (2) the Severe Self-Criticism, (3) the Unbiased Me, (4) the Striking but Irrelevant Example, (5) the Proof Given Elsewhere, (6) the Favorable Compromise, and (7) the Display of Methodological Sophistication. One should note that not all strategies are disreputable in themselves. But all are used very cunningly so as to hide weaknesses in Freud's arguments. To be fair, quite a few of his methodological remarks are sophisticated enough. As Freud combines these sophisticated remarks with an appalling methodology in practice, I call him a sophisticated pseudo-scientist. I do not claim that these rhetorical strategies are specific to him.
John Hyman has used the objective character of occlusion shapes and of relative occlusion sizes to develop a more objective approach both in the analysis of linear perspective and in the theory of depiction. To this end Hyman develops two Occlusion Principles, plus an Aperture Colour Principle (which I do not discuss), which, together with our knowledge of appearances, are supposed to tell us what a picture depicts. I argue that Hyman underestimates the crucial role of the psychological element in the work that the objective occlusion shape and relative occlusion sizes are assigned to do. Two pictures may have different contents in spite of the same occlusion shapes and the same (relative) occlusion sizes. It is the operation of constancy scaling in pictorial space which frustrates Hyman’s objectivism both in the domain of linear perspective and in the domain of depiction.
In this note I test a specific thesis about the dependence of philosophy of science on science that Laudan presents in his Science and Hypothesis; namely, that the sciences were justificationally prior to the philosophy of science. I argue that Laudan's historical case studies show a justificational priority that goes the other way. I also argue that the justificational role that in Progress and Its Problems the history of science is alleged to play vis-à-vis competing conceptions of scientific rationality is not apparent in Laudan's argumentation in favor of his suggested analysis in terms of problem-solving effectiveness.
In his The Foundations of Psychoanalysis (1984) Grünbaum compliments Freud on the development of the Tally Argument as an answer to a number of serious methodological criticisms: "The epistemological considerations that prompted Freud to enunciate (this argument) make him a sophisticated methodologist" (p. 128). In contrast to this position I argue that the Tally Argument and the considerations for it are hardly sophisticated: They would equally well go to demonstrate the methodological sophistication of modern-day evangelists. Furthermore, I argue that the Tally Argument does not play the crucial role that Grünbaum assigns to it. It is one of many arguments Freud used to counter criticism, but not one to which Freud gave pride of place.
This book gives a comprehensive overview of central themes of finite model theory: expressive power, descriptive complexity, and zero-one laws, together with selected applications relating to database theory and artificial intelligence, especially constraint databases and constraint satisfaction problems. The final chapter provides a concise modern introduction to modal logic, emphasizing the continuity in spirit and technique with finite model theory. This underlying spirit involves the use of various fragments of and hierarchies within first-order, second-order, fixed-point, and infinitary logics to gain insight into phenomena in complexity theory and combinatorics. The book emphasizes the use of combinatorial games, such as extensions and refinements of the Ehrenfeucht-Fraïssé pebble game, as a powerful way to analyze the expressive power of such logics, and illustrates how deep notions from model theory and combinatorics, such as o-minimality and treewidth, arise naturally in the application of finite model theory to database theory and AI. Students of logic and computer science will find here the tools necessary to embark on research into finite model theory, and all readers will experience the excitement of a vibrant area of the application of logic to computer science.
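The combinatorial games mentioned above can be made concrete in miniature. The following sketch decides, by brute force, whether Duplicator wins the k-round Ehrenfeucht-Fraïssé game on two finite linear orders; this is a standard textbook case chosen for illustration (the book's pebble games generalize it), and the function name is our own:

```python
# Brute-force solver for the k-round Ehrenfeucht-Fraisse game on the
# linear orders {0..n-1} and {0..m-1}. Illustrative sketch only.
def duplicator_wins(n, m, k, pairs=()):
    """Return True iff Duplicator can survive k more rounds from `pairs`."""
    # The pebbled pairs must form a partial isomorphism of the two orders.
    for a1, b1 in pairs:
        for a2, b2 in pairs:
            if (a1 < a2) != (b1 < b2) or (a1 == a2) != (b1 == b2):
                return False
    if k == 0:
        return True
    # Spoiler picks any element of either order; Duplicator needs an answer.
    for a in range(n):
        if not any(duplicator_wins(n, m, k - 1, pairs + ((a, b),))
                   for b in range(m)):
            return False
    for b in range(m):
        if not any(duplicator_wins(n, m, k - 1, pairs + ((a, b),))
                   for a in range(n)):
            return False
    return True
```

For k rounds, Duplicator wins on any two linear orders of size at least 2^k − 1, which is why no first-order sentence of quantifier rank k can distinguish them.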
I respond to two criticisms levelled by A. A. Derksen in a recent issue of this journal against characterizing pseudoscience as structurally flawed practice: I argue that he surreptitiously invokes this conception, his official view that we should concentrate on pseudoscientists' pretensions rather than their practices notwithstanding; and I critically examine his contention that judgements of scientificity (and pseudoscientificity) cannot properly be made independently of a consideration of whether the relevant theories and practices are empirically well-confirmed.
Basic concepts. A formal model for the development of art is a structure ⟨T, <, K, C, d, p, q, s, B⟩, where T is a set of “moments in time”, < (“is earlier than”) is a relation on T, K is a set of “possible artworks”, C (“comments on”) is a relation on K, d, p, q and s are functions from K to the set of all subsets of K, and B is a function from T to the set of all subsets of K. d(x) is the discipline to which artwork x belongs, p(x) is the procédé (technique) by which x was made, q(x) is the quality of x, s(x) is the style of x, and B(t) is the set of artworks that exist at time t.
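The structure above can be rendered concretely as follows. This is a hypothetical illustration: the class and field names are our own, and C stands in for the "comments on" relation, whose symbol is missing from the text.

```python
from dataclasses import dataclass
from typing import Dict, Set, Tuple

@dataclass
class ArtModel:
    T: Set[int]                      # "moments in time"
    earlier: Set[Tuple[int, int]]    # the relation < on T
    K: Set[str]                      # "possible artworks"
    C: Set[Tuple[str, str]]          # (x, y): x comments on y
    d: Dict[str, Set[str]]           # d(x), a subset of K: the discipline of x
    p: Dict[str, Set[str]]           # p(x), a subset of K: the procede of x
    q: Dict[str, Set[str]]           # q(x), a subset of K: the quality of x
    s: Dict[str, Set[str]]           # s(x), a subset of K: the style of x
    B: Dict[int, Set[str]]           # B(t), a subset of K: works existing at t

    def is_well_formed(self) -> bool:
        """Every component ranges over K (resp. T), as the definition requires."""
        functions_ok = all(
            set(f) == self.K and all(v <= self.K for v in f.values())
            for f in (self.d, self.p, self.q, self.s))
        return (all(x in self.K and y in self.K for x, y in self.C)
                and all(x in self.T and y in self.T for x, y in self.earlier)
                and set(self.B) == self.T
                and all(ws <= self.K for ws in self.B.values())
                and functions_ok)

# A tiny two-work, two-moment instance.
example = ArtModel(
    T={0, 1}, earlier={(0, 1)},
    K={"x", "y"}, C={("y", "x")},          # y comments on x
    d={"x": {"x", "y"}, "y": {"x", "y"}},
    p={"x": {"x"}, "y": {"y"}},
    q={"x": {"x"}, "y": {"y"}},
    s={"x": {"x"}, "y": {"y"}},
    B={0: {"x"}, 1: {"x", "y"}},
)
```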
Machine generated contents note:
Notes on Contributors.
1. Introduction: Hatred of Democracy... and of the Public Role of Education? (Maarten Simons and Jan Masschelein).
2. The Public Role of Teaching: To Keep the Door Closed (Goele Cornelissen).
3. Learner, Student, Speaker: Why It Matters How We Call Those We Teach (Gert Biesta).
4. Ignorance and Translation, 'Artifacts' for Practices of Equality (Marc Derycke).
5. Democratic Education: An (im)possibility That Yet Remains to Come (Daniel Friedrich, Bryn Jaastad and Thomas S. Popkewitz).
6. Governmental, Political and Pedagogic Subjectivation: Foucault with Rancière (Maarten Simons and Jan Masschelein).
7. The Immigrant Has No Proper Name: The Disease of Consensual Democracy Within the Myth of Schooling (Carl Anders Safstrom).
8. Queer Politics in Schools: A Rancièrean Reading (Claudia W. Ruitenberg).
9. Paulo Freire's Last Laugh: Rethinking Critical Pedagogy's Funny Bone Through Jacques Rancière (Tyson Edward Lewis).
10. Settling no Conflict in the Public Place: Truth in Education, and in Rancièrean Scholarship (Charles Bingham).
11. The Hatred of Public Schooling: The School as the Mark of Democracy (Jan Masschelein and Maarten Simons).
12. Endgame: Reading, Writing, Talking (and Perhaps Thinking) in a Faculty of Education (Jorge Larrosa).
In this challenging essay, Maarten Doorman argues that in art, belief in progress is still relevant, if not essential. The radical freedoms of postmodernism, he claims, have had a crippling effect on art, leaving it in danger of becoming meaningless. Art can only acquire meaning through context; the concept of progress, then, is ideal as the primary criterion for establishing that context. The history of art, in fact, can be seen as a process of constant accumulation, works of art commenting on one another and enriching one another's meanings. It is these complex interrelationships and the progress they create in both art and its observers that Doorman, in a display of great philosophical erudition, defends.
Genes are often described by biologists using metaphors derived from computational science: they are thought of as carriers of information, as being the equivalent of "blueprints" for the construction of organisms. Likewise, cells are often characterized as "factories" and organisms themselves become analogous to machines. Accordingly, when the human genome project was initially announced, the promise was that we would soon know how a human being is made, just as we know how to make airplanes and buildings. Importantly, modern proponents of Intelligent Design, the latest version of creationism, have exploited biologists' use of the language of information and blueprints to make their spurious case, based on pseudoscientific concepts such as "irreducible complexity" and on flawed analogies between living cells and mechanical factories. However, the living organism = machine analogy was criticized already by David Hume in his Dialogues Concerning Natural Religion. In line with Hume's criticism, over the past several years a more nuanced and accurate understanding of what genes are and how they operate has emerged, ironically in part from the work of computational scientists who take biology, and in particular developmental biology, more seriously than some biologists seem to do. In this article we connect Hume's original criticism of the living organism = machine analogy with the modern ID movement, and illustrate how the use of misleading and outdated metaphors in science can play into the hands of pseudoscientists. Thus, we argue that dropping the blueprint and similar metaphors will improve both the science of biology and its understanding by the general public.
In recent controversies about Intelligent Design Creationism (IDC), the principle of methodological naturalism (MN) has played an important role. In this paper, an often neglected distinction is made between two different conceptions of MN, each with its respective rationale and with a different view on the proper role of MN in science. According to one popular conception, MN is a self-imposed or intrinsic limitation of science, which means that science is simply not equipped to deal with claims of the supernatural (Intrinsic MN or IMN). Alternatively, we will defend MN as a provisory and empirically grounded attitude of scientists, which is justified in virtue of the consistent success of naturalistic explanations and the lack of success of supernatural explanations in the history of science (Provisory MN or PMN). Science does have a bearing on supernatural hypotheses, and its verdict is uniformly negative. We will discuss five arguments that have been proposed in support of IMN: the argument from the definition of science, the argument from lawful regularity, the science stopper argument, the argument from procedural necessity, and the testability argument. We conclude that IMN, because of its philosophical flaws, proves to be an ill-advised strategy to counter the claims of IDC. Evolutionary scientists are on firmer ground if they discard supernatural explanations on purely evidential grounds, instead of ruling them out by philosophical fiat.
The leading Intelligent Design theorist William Dembski (Rowman & Littlefield, Lanham MD, 2002) argued that the first No Free Lunch theorem, first formulated by Wolpert and Macready (IEEE Trans Evol Comput 1: 67–82, 1997), renders Darwinian evolution impossible. In response, Dembski's critics pointed out that the theorem is irrelevant to biological evolution. Meester (Biol Phil 24: 461–472, 2009) agrees with this conclusion, but still thinks that the theorem does apply to simulations of evolutionary processes. According to Meester, the theorem shows that simulations of Darwinian evolution, as these are typically set in advance by the programmer, are teleological and therefore non-Darwinian. Therefore, Meester argues, they are useless in showing how complex adaptations arise in the universe. Meester uses the term teleological inconsistently, however, and we argue that, no matter how we interpret the term, a Darwinian algorithm does not become non-Darwinian by simulation. We show that the NFL theorem is entirely irrelevant to this argument, and conclude that it does not pose a threat to the relevance of simulations of biological evolution.
This paper offers an epistemological discussion of self-validating belief systems and the recurrence of 'epistemic defense mechanisms' and 'immunizing strategies' across widely different domains of knowledge. We challenge the idea that typical 'weird' belief systems are inherently fragile, and we argue that, instead, they exhibit a surprising degree of resilience in the face of adverse evidence and criticism. Borrowing from the psychological research on belief perseverance, rationalization and motivated reasoning, we argue that the human mind is particularly susceptible to belief systems that are structurally self-validating. On this cognitive-psychological basis, we construct an epidemiology of beliefs, arguing that the apparent convenience of escape clauses and other defensive 'tactics' used by believers may well derive not from conscious deliberation on their part, but from more subtle mechanisms of cultural selection.
It is argued that, contrary to prevailing opinion, Bas van Fraassen nowhere uses the argument from underdetermination in his argument for constructive empiricism. It is explained that van Fraassen’s use of the notion of empirical equivalence in The Scientific Image has been widely misunderstood. A reconstruction of the main arguments for constructive empiricism is offered, showing how the passages that have been taken to be part of an appeal to the argument from underdetermination should actually be interpreted.
What are the consequences of evolutionary theory for the epistemic standing of our beliefs? Evolutionary considerations can be used to either justify or debunk a variety of beliefs. This paper argues that evolutionary approaches to human cognition must at least allow for approximately reliable cognitive capacities. Approaches that portray human cognition as so deeply biased and deficient that no knowledge is possible are internally incoherent and self-defeating. As evolutionary theory offers the current best hope for a naturalistic epistemology, evolutionary approaches to epistemic justification seem to be committed to the view that our sensory systems and belief-formation processes are at least approximately accurate. However, for that reason they are vulnerable to the charge of circularity, and their success seems to be limited to commonsense beliefs. This paper offers an extension of evolutionary arguments by considering the use of external media in human cognitive processes: we suggest that the way humans supplement their evolved cognitive capacities with external tools may provide an effective way to increase the reliability of their beliefs and to counter evolved cognitive biases.
I show why Michael Friedman’s idea that we should view new constitutive frameworks introduced in paradigm change as members of a convergent series introduces an uncomfortable tension in his views. It cannot be justified on realist grounds, as this would compromise his Kantian perspective, but his own appeal to a Kantian regulative ideal of reason cannot do the job either. I then explain a way to make better sense of the rationality of paradigm change on what I take to be Friedman’s own terms.
Starting with a discussion of what I call Koyré’s paradox of conceptual novelty, I introduce the ideas of Damerow et al. on the establishment of classical mechanics in Galileo’s work. I then argue that although the view of Damerow et al. on the nature of Galileo’s conceptual innovation is convincing, it misses an essential element: Galileo’s use of the experiments described in the first day of the Two New Sciences. I describe these experiments and analyze their function. Central to my analysis is the idea that Galileo’s pendulum experiments serve to secure the reference of his theoretical models in actually occurring cases of free fall. In this way Galileo’s experiments constitute an essential part of the meaning of the new concepts of classical mechanics.
Recent years have seen the rise of interest in the roles and significance of thought experiments in different areas of human thinking. Heisenberg's gamma ray microscope is no doubt one of the most famous examples of a thought experiment in physics. Nevertheless, this particular thought experiment has not received much detailed attention in the philosophical literature on thought experiments to date, perhaps because of its often claimed inadequacies. In this paper, I try to do two things: to provide an interesting interpretation of the roles played by Heisenberg's gamma ray microscope in interpreting quantum mechanics – partly based on Thomas Kuhn’s views on the function of thought experiments – and to contribute to the ongoing discussions on the roles and significance of thought experiments in physics.
Social constructivist approaches to science have often been dismissed as inaccurate accounts of scientific knowledge. In this article, we take the claims of robust social constructivism (SC) seriously and attempt to find a theory which does instantiate the epistemic predicament as described by SC. We argue that Freudian psychoanalysis, in virtue of some of its well-known epistemic complications and conceptual confusions, provides a perfect illustration of what SC claims is actually going on in science. In other words, the features SC mistakenly ascribes to science in general correctly characterize the epistemic status of Freudian psychoanalysis. This sheds some light on the internal disputes in the field of psychoanalysis, on the sociology of the psychoanalytic movement, and on the “war” that has been waged over Freud's legacy with his critics. In addition, our analysis offers an indirect and independent argument against SC as an account of bona fide science, by illustrating what science would look like if it were to function as SC claims it does.
In this paper I challenge Paolo Palmieri’s reading of the Mach-Vailati debate on Archimedes’s proof of the law of the lever. I argue that the actual import of the debate concerns the possible epistemic (as opposed to merely pragmatic) role of mathematical arguments in empirical physics, and that, construed in this light, Vailati has the upper hand. This claim is defended by showing that Archimedes’s proof of the law of the lever is not a way of appealing to a non-empirical source of information, but a way of explicating the mathematical structure that can represent the empirical information at our disposal in the most general way.
We study several modal languages in which some (sets of) generalized quantifiers can be represented; the main language we consider is suitable for defining any first order definable quantifier, but we also consider a sublanguage thereof, as well as a language for dealing with the modal counterparts of some higher order quantifiers. These languages are studied both from a modal logic perspective and from a quantifier perspective. Thus the issues addressed include normal forms, expressive power, completeness both of modal systems and of systems in the quantifier tradition, complexity as well as syntactic characterizations of special semantic constraints. Throughout the paper several techniques current in the theory of generalized quantifiers are used to obtain results in modal logic, and conversely.
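One elementary illustration of how quantifiers reappear as modalities is graded modal logic, where a formula "at least n successors satisfy φ" mirrors a counting quantifier. The evaluator below is a minimal sketch under that reading (our own encoding, not the languages of the paper):

```python
# Formulas: ('p',) for an atomic letter, ('not', f), ('and', f, g), and
# ('dia', n, f) meaning "at least n R-successors satisfy f" -- the modal
# counterpart of a counting quantifier.
def holds(model, w, phi):
    worlds, R, V = model          # V maps letters to the worlds where they hold
    op = phi[0]
    if op == 'not':
        return not holds(model, w, phi[1])
    if op == 'and':
        return holds(model, w, phi[1]) and holds(model, w, phi[2])
    if op == 'dia':
        n, f = phi[1], phi[2]
        return sum(holds(model, v, f) for v in R.get(w, ())) >= n
    return w in V[op]             # atomic case

# World 0 sees 1, 2, 3; letter p holds at 1 and 2.
model = ({0, 1, 2, 3}, {0: {1, 2, 3}}, {'p': {1, 2}})
```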
What does it mean to say that the Bible has authority? The author introduces and develops J. M. Bochenski's philosophical theory about the nature of authority. On this basis, he distinguishes between different kinds of authority, which he applies to the authority of the Bible. Subsequently, he shows that the theory of Bochenski should be improved by reworking it from the perspective of speech-act theory. This leads to the presentation of an overall theory of authority that matches authority in general as well as the authority of the Bible.
In this paper, we show that Arrow’s well-known impossibility theorem is instrumental in bringing the ongoing discussion about verisimilitude to a more general level of abstraction. After some preparatory technical steps, we show that Arrow’s requirements for voting procedures in social choice are also natural desiderata for a general verisimilitude definition that places content and likeness considerations on the same footing. Our main result states that no qualitative unifying procedure of a functional form can simultaneously satisfy the requirements of Unanimity, Independence of irrelevant alternatives and Non-dictatorship at the level of sentence variables. By giving a formal account of the incompatibility of the considerations of content and likeness, our impossibility result makes it possible to systematize the discussion about verisimilitude, and to understand it in more general terms.
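The flavour of the impossibility can be seen in the most elementary aggregation failure, the Condorcet cycle, which Arrow's theorem generalizes. This is an illustration only; the verisimilitude framework itself is not reproduced here.

```python
# Pairwise-majority aggregation of three rankings over {a, b, c}
# yields an intransitive "ordering": a beats b, b beats c, yet c beats a.
def majority_prefers(profile, x, y):
    """True iff a strict majority of the rankings place x above y."""
    return sum(r.index(x) < r.index(y) for r in profile) * 2 > len(profile)

# Three voters (or, analogously, three criteria being aggregated):
profile = [('a', 'b', 'c'), ('b', 'c', 'a'), ('c', 'a', 'b')]
```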
The present paper presents a philosophical analysis of earth science, a discipline that has received relatively little attention from philosophers of science. We focus on the question of whether earth science can be reduced to allegedly more fundamental sciences, such as chemistry or physics. In order to answer this question, we investigate the aims and methods of earth science, the laws and theories used by earth scientists, and the nature of earth-scientific explanation. Our analysis leads to the tentative conclusion that there are emergent phenomena in earth science but that these may be reducible to physics. However, earth science does not have irreducible laws, and the theories of earth science are typically hypotheses about unobservable (past) events or generalised, but not universally valid, descriptions of contingent processes. Unlike more fundamental sciences, earth science is characterised by explanatory pluralism: earth scientists employ various forms of narrative explanations in combination with causal explanations. The main reason is that earth-scientific explanations are typically hampered by local underdetermination by the data to such an extent that complete causal explanations are impossible in practice, if not in principle.
This paper begins an analysis of the real line using an inconsistency-tolerant (paraconsistent) logic. We show that basic field and compactness properties hold, by way of novel proofs that make no use of consistency-reliant inferences; some techniques from constructive analysis are used instead. While no inconsistencies are found in the algebraic operations on the real number field, prospects for other non-trivializing contradictions are left open.
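For readers unfamiliar with inconsistency-tolerant logics, the following sketch of Priest's three-valued Logic of Paradox (LP), one standard paraconsistent system (used here purely for illustration; the paper's own logic may differ), shows how a contradiction can hold without trivializing everything:

```python
# LP truth values: 1 = true, 0.5 = both true and false, 0 = false.
# A value is designated (counts as "holding") when it is at least 0.5.
def neg(a): return 1 - a
def conj(a, b): return min(a, b)
def disj(a, b): return max(a, b)
def designated(a): return a >= 0.5

# Explosion fails: A and not-A can both hold while an arbitrary B does not.
A, B = 0.5, 0.0
contradiction = conj(A, neg(A))
```

Because the inference from a designated contradiction to an arbitrary sentence is blocked, reasoning can continue past a local inconsistency, which is the feature the paper exploits for the real line.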
An immunizing strategy is an argument brought forward in support of a belief system, though independent of that belief system, which makes it more or less invulnerable to rational argumentation and/or empirical evidence. By contrast, an epistemic defense mechanism is defined as a structural feature of a belief system which has the same effect of deflecting arguments and evidence. We discuss the remarkable recurrence of certain patterns of immunizing strategies and defense mechanisms in pseudoscience and other belief systems. Five different types will be distinguished and analyzed, with examples drawn from widely different domains. The difference between immunizing strategies and defense mechanisms is analyzed, and their epistemological status is discussed. Our classification sheds new light on the various ways in which belief systems may achieve invulnerability against empirical evidence and rational criticism, and we propose our analysis as part of an explanation of these belief systems’ enduring appeal and tenacity.
In this article we criticize two recent articles that examine the relation between explanation and unification. Halonen and Hintikka (1999), on the one hand, claim that no unification is explanation. Schurz (1999), on the other hand, claims that all explanation is unification. We give counterexamples to both claims. We propose a pluralistic approach to the problem: explanation sometimes consists in unification, but in other cases different kinds of explanation (e.g., causal explanation) are required; and none of these kinds is more fundamental.
[H. de Regt is ‘co-supervisor’ of the current UvT PhD project ‘Consciousness: Science Says It All?’ (drs. A. Frantzen; supervisor: prof. em. dr. A. A. Derksen). This project (in which the problem of phenomenal consciousness is approached via the work of the American pragmatist John Dewey) is absorbed in the programme Pragmatism: Living versus Paper Doubt. In order to realize the project described below he has provisionally planned (a) further collaboration with prof. dr. C.J.M. Schuyt (University of Amsterdam) to realize a Pragmatism Center at Tilburg University; (b) international contacts with prof. dr. Nathan Houser and prof. dr. André De Tienne at the Peirce Project Research Center, and with dr. Timothy Lyons (Faculty of Philosophy), all at Indiana University/Purdue University, Indianapolis, United States; he will also visit the Center for Peirce Studies at Texas Tech University, Lubbock, where the well-known Peirce scholar professor Ken Ketner hosts the complete, electronically available Peirce Archives, and prof. Ketner's support will be sought for the establishment of a Pragmatism Center in Tilburg; and (c) the organization of two international congresses on Peircean pragmatism and contemporary philosophy of science and mind (including proceedings).]
In this paper we raise the question whether technological artifacts can properly speaking be trusted or said to be trustworthy. First, we set out some prevalent accounts of trust and trustworthiness and explain how they compare with the engineer’s notion of reliability. We distinguish between pure rational-choice accounts of trust, which do not differ in principle from mere judgments of reliability, and what we call “motivation-attributing” accounts of trust, which attribute specific motivations to trustworthy entities. Then we consider some examples of technological entities that are, at first glance, best suited to serve as the objects of trust: intelligent systems that interact with users, and complex socio-technical systems. We conclude that the motivation-attributing concept of trustworthiness cannot be straightforwardly applied to these entities. Any applicable notion of trustworthy technology would have to depart significantly from the full-blown notion of trustworthiness associated with interpersonal trust.
Peirce algebras combine sets, relations and various operations linking the two in a unifying setting. This paper offers a modal perspective on Peirce algebras. Using modal logic a characterization of the full Peirce algebras is given, as well as a finite axiomatization of their equational theory that uses so-called unorthodox derivation rules. In addition, the expressive power of Peirce algebras is analyzed through their connection with first-order logic, and the fragment of first-order logic corresponding to Peirce algebras is described in terms of bisimulations.
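On finite carriers, the kind of set/relation operations a Peirce algebra combines can be sketched directly. The function names below are our own; the Peirce product R:A is the set of points R-related to some member of A.

```python
def compose(R, S):
    """Relational composition: x (R;S) z iff x R y and y S z for some y."""
    return {(x, z) for (x, y) in R for (y2, z) in S if y == y2}

def converse(R):
    """Relational converse: y R~ x iff x R y."""
    return {(y, x) for (x, y) in R}

def peirce_product(R, A):
    """Peirce product R:A -- the set of points R-related to some member of A."""
    return {x for (x, y) in R if y in A}

R = {(1, 2), (2, 3)}
```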
Gauthier's argument for constrained maximization, presented in Morals by Agreement, is perfected by taking into account the possibility of accidental exploitation and by discussing the limitations on the values of the parameters which measure the translucency of the actors. Gauthier's argument is nevertheless shown to be defective concerning the rationality of constrained maximization as a strategic choice. It can be argued that it applies only to a single actor entering a population of individuals who are themselves not rational actors but simple rule-followers. A proper analysis of the strategic choice situation involving two rational actors who confront each other shows that constrained maximization as the choice of both actors can only result under very demanding assumptions.
The article presents an introduction to the Special Issue on the French philosopher Jacques Rancière who raises a provocative voice in the current public debate on democracy, equality and education. Instead of merely criticizing current practices and discourses, the attractiveness of Rancière's work is that he does try to formulate in a positive way what democracy is about, how equality can be a pedagogic or educational (instead of policy) concern, and what the public and democratic role of education is. His work opens up a space to rethink and to study, as well as to ‘re-practice’, what democracy and equality in education are about. He questions the current neutralisation of politics that is motivated by a hatred of democracy. This questioning is for Rancière also a struggle over words. Against the old philosophical dream of defining the meaning of words, Rancière underlines the need for the struggle over their meaning. The aim of the article is to clarify how and why education, equality, and democracy are a major concern throughout his work and to offer an introduction to the articles collected in the Special Issue.
This comment makes four related points. First, explaining coordination is different from explaining cooperation. Second, solving the coordination problem is more important for the theory of games than solving the cooperation problem. Third, a version of the Principle of Coordination can be rationalized on individualistic grounds. Finally, psychological game theory should consider how players perceive their gaming situation.
We define bisimulations for temporal logic with Since and Until. This new notion is compared to existing notions of bisimulations, and then used to develop the basic model theory of temporal logic with Since and Until. Our results concern both invariance and definability. We conclude with a brief discussion of the wider applicability of our ideas.
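The paper's Since/Until bisimulations refine the standard modal notion. As background, the standard notion (atomic harmony plus the forth and back clauses) can be sketched as a checker over finite Kripke models; the models, valuations, and relation below are illustrative toy data, not taken from the paper.

```python
# Minimal check that a relation Z is a bisimulation between two Kripke
# models, using the standard clauses: atomic harmony, forth, back.
# R1/R2 map worlds to successor sets; V1/V2 map worlds to the
# propositions true there.

def is_bisimulation(Z, R1, V1, R2, V2):
    for (w, v) in Z:
        # Atomic harmony: related worlds satisfy the same propositions.
        if V1.get(w, frozenset()) != V2.get(v, frozenset()):
            return False
        # Forth: every successor of w is matched by a successor of v.
        for w2 in R1.get(w, set()):
            if not any((w2, v2) in Z for v2 in R2.get(v, set())):
                return False
        # Back: every successor of v is matched by a successor of w.
        for v2 in R2.get(v, set()):
            if not any((w2, v2) in Z for w2 in R1.get(w, set())):
                return False
    return True

# Model 1: a -> b.  Model 2: x -> y1 and x -> y2 (two copies of b).
R1 = {"a": {"b"}}
V1 = {"a": frozenset({"p"}), "b": frozenset({"q"})}
R2 = {"x": {"y1", "y2"}}
V2 = {"x": frozenset({"p"}), "y1": frozenset({"q"}), "y2": frozenset({"q"})}
Z = {("a", "x"), ("b", "y1"), ("b", "y2")}
print(is_bisimulation(Z, R1, V1, R2, V2))  # True
```

Duplicating a successor, as in the second model, preserves basic modal truth; the paper's point is that clauses of this shape must be strengthened before they do the same job for Since and Until.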
In recent years, a new type of Neo-Augustinian theology has received extensive attention: Radical Orthodoxy. Leading figures behind Radical Orthodoxy such as John Milbank, Catherine Pickstock, and Graham Ward assert that they reclaim Augustine's theology over and against almost every major type of modern theology. Their leading claim is that an Augustinian participationist theological ontology overcomes Enlightenment-sourced secularism. In this essay, the Augustinian character of Radical Orthodox theology is put to the test in terms of a comparison and confrontation between Radical Orthodoxy and Augustine's Christology. It is shown that Radical Orthodoxy's sole concern with regard to Christology is the manifestation or expression of the ontological relationship or unity of God and the world. Thus, Radical Orthodoxy has its roots in a post-Hegelian rethinking of unity in difference rather than being a rediscovery of Augustine's theology. Subsequently, it is shown that Radical Orthodoxy's reading of Augustine denies his understanding of the manifestation of the being of God in Christ; furthermore, it does not account for Augustine's doctrine of atonement, in which we recover our original justice and happiness through the substitutionary life and death of Christ, an atonement which prepares us for the vision of God in the Eschaton.
Focal points seem to be important in helping players coordinate their strategies in coordination problems. Game theory lacks, however, a formal theory of focal points. This paper proposes a theory of focal points that is based on individual rationality considerations. The two principles upon which the theory rests are the Principle of Insufficient Reason (IR) and a Principle of Individual Team Member Rationality. The way IR is modelled combines the classic notion of description symmetry with a new notion of pay-off symmetry, which yields different predictions in a variety of games. The theory can explain why people do better than pure randomization in matching games.
Starting from a Foucaultian perspective, the article draws attention to current developments that neutralise democracy through the ‘governmentalisation of democracy’ and processes of ‘governmental subjectivation’. Ideas of Rancière are then introduced in order to clarify how democracy takes place through the paradoxical process of ‘political subjectivation’, that is, a disengagement from governmental subjectivation through the verification of one's equality in demonstrating a wrong. We argue that today's consensus society tends to depoliticize all such processes of subjectivation. A final step in the argumentation is to introduce the concept of ‘pedagogic subjectivation’, to be understood as the experience of potentiality, which is to be distinguished both from governmental subjectivation and from political subjectivation. The concept of ‘pedagogic subjectivation’ is proposed as a way of thinking of the school as a public place.
The failure to recognize a correlation as spurious can lead people, in trying to bring about a specific outcome, to adopt strategies that manipulate something other than a cause of that outcome. However, in a 2008 paper in the journal Analysis, Bert Leuridan, Erik Weber and Maarten Van Dyck suggest that knowledge of spurious correlations can, at least sometimes, justify adopting a strategy aimed at bringing about some change. This claim is surprising and, if true, throws into question the claim of Nancy Cartwright and others that knowledge of laws of association is insufficient for distinguishing effective from ineffective strategies. This paper examines the nature of spurious correlations and their value in crafting strategies for change. The conclusion of the paper is that while knowledge of a spurious correlation may have practical value, that value depends either on knowledge of the causal structure underlying the correlation or on the use of ‘causal criteria’.
In a survey of Internet resources available to philosophers of religion, the authors critically discuss philosophy sites, e-journals, virtual libraries, and other resources relevant to the philosophy of religion. They conclude that the Internet is increasingly becoming a helpful and even indispensable source of information.
Schools and classrooms, as well as the workplace and the Internet, are considered today as learning environments. People are regarded as learners, and the main target of school education has become teaching pupils and students how to learn. The roles of teachers and lecturers are redefined as instructors, designers of (powerful) learning environments and facilitators or coaches of learning processes. The aim of this paper is to argue that the current self-understanding in terms of learning environments is not merely a renewal of our vocabulary, but an indication of a far more general transformation of the world of education. It is argued that the current self-understanding in terms of 'learning environments' and 'learners' indicates a shift in our experience of time and place: a shift from (modern) historical self-understanding towards (post-modern) environmental self-understanding. The essay draws upon Foucauldian concepts in order to map the modern organisation of time and space in 'schools'. This past organisation is confronted with the current organisation of time and space in 'learning environments'. By contrasting both maps the paper focuses on the main characteristics of the current experience of time and space, that is, 'environmental self-understanding', and explores in the final section the dark side of this self-understanding.
In this note we show that the classical modal technology of Sahlqvist formulas gives quick proofs of the completeness theorems in D. Gregory's 'Completeness and decidability results for some propositional modal logics containing actually operators' (Journal of Philosophical Logic 30(1): 57–78, 2001) and vastly generalizes them. Moreover, as a corollary, interpolation theorems for the logics considered there are obtained. We then compare Gregory's modal language enriched with an actually operator with the work of Arthur Prior now known under the name of hybrid logic. This analysis relates the actually axioms to standard hybrid axioms, yields the decidability results of Gregory's paper, and provides a number of complexity results. Finally, we use a bisimulation argument to show that the hybrid language is strictly more expressive than Gregory's language.
Patients with a life-threatening illness can be confronted with various types of loneliness, one of which is existential loneliness (EL). Since the experience of EL is extremely disruptive, the issue of EL is relevant for the practice of end-of-life care. Still, the literature on EL has generated little discussion and empirical substantiation and has never been systematically reviewed. In order to systematically review the literature, we (1) identified the existential loneliness literature; (2) established an organising framework for the review; (3) conducted a conceptual analysis of existential loneliness; and (4) discussed its relevance for end-of-life care. We found that the EL concept is profoundly unclear. Distinguishing between three dimensions of EL—as a condition, as an experience, and as a process of inner growth—leads to some conceptual clarification. Analysis of these dimensions on the basis of their respective key notions—everpresent, feeling, defence; death, awareness, difficult communication; and inner growth, giving meaning, authenticity—further clarifies the concept. Although none of the key notions are unambiguous, they may function as a starting point for the development of care strategies on EL at the end of life.
This article takes up a text that Rancière published shortly after The Ignorant Schoolmaster appeared in French, ‘École, production, égalité’ [School, Production, Equality] (1988), in which he sketched the school as preeminently the place of equality. In this vein, and opposed to the story of the school as the place where inequality is reproduced and therefore in need of reform, the article wants to recount the story of the school as the invention of a site of equality and as primordially a public space. Inspired by Rancière, we first indicate how the actual (international and national) policy story about the school, and the organizational technologies that accompany it, install and legitimate profound inequalities, which consequently can no longer be questioned (and become ‘invisible’). Second, the article recasts and rethinks different manifestations of equality and of ‘public-ness’ in school education and, finally, indicates various ways in which these manifestations are neutralized or immunized in actual discourses and educational technologies.
In the past 25 years, many philosophers have endorsed the view that the practical value of causal knowledge lies in the fact that manipulation of causes is a good way to bring about a desired change in the effect. This view is intuitively very plausible. For instance, we can predict a storm on the basis of a barometer reading, but we cannot avoid the storm by manipulating the state of the barometer (barometer status and storm are effects of a common cause, viz. atmospheric conditions). In Section 1 we present textual evidence which shows that this view is very popular. In Section 2 we show that this standard view is too restrictive: the practical value of causal knowledge is wider. In Section 3 we introduce the distinction between ‘manipulative policy’ and ‘selective policy’ as a theoretical framework to account for this wider practical value.
Craig's interpolation lemma (if φ → ψ is valid, then φ → θ and θ → ψ are valid, for θ a formula constructed using only primitive symbols which occur in both φ and ψ) fails for many propositional and first-order modal logics. The interpolation property is often regarded as a sign of well-matched syntax and semantics. Hybrid logicians claim that modal logic is missing important syntactic machinery, namely tools for referring to worlds, and that adding such machinery solves many technical problems. The paper presents strong evidence for this claim by defining interpolation algorithms for both propositional and first-order hybrid logic. These algorithms produce interpolants for the hybrid logic of every elementary class of frames satisfying the property that a frame is in the class if and only if all its point-generated subframes are in the class. In addition, on the class of all frames, the basic algorithm is conservative: on purely modal input it computes interpolants in which the hybrid syntactic machinery does not occur.
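The lemma as stated can be made concrete with a small propositional instance (our own illustration, not drawn from the paper):

```latex
% The implication (p \land q) \to (p \lor r) is valid, and its
% antecedent and consequent share only the primitive symbol p.
% An interpolant \theta may therefore use only p; \theta = p works:
\[
  (p \land q) \to p
  \qquad \text{and} \qquad
  p \to (p \lor r)
\]
% Both implications are valid, so p is a Craig interpolant for
% (p \land q) \to (p \lor r).
```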
In many logics dealing with information one needs to make statements not only about cognitive states, but also about transitions between them. In this paper we analyze a dynamic modal logic that has been designed with this purpose in mind. On top of an abstract information ordering on states it has instructions to move forward or backward along this ordering, to states where a certain assertion holds or fails, while it also allows combinations of such instructions by means of operations from relation algebra. In addition, the logic has devices for expressing whether in a given state a certain instruction can be carried out, and whether that state can be arrived at by carrying out a certain instruction. This paper deals mainly with technical aspects of our dynamic modal logic. It gives an exact description of the expressive power of this language; it also contains results on decidability for the language with arbitrary structures and for the special case with a restricted class of admissible structures. In addition, a complete axiomatization is given. The paper concludes with a remark about the modal algebras appropriate for our dynamic modal logic, and some questions for further work.
On many occasions, individuals are able to coordinate their actions. The first empirical evidence to this effect was described by Schelling (1960) in an informal experiment. His results were corroborated many years later by Mehta et al. (1994a,b) and Bacharach and Bernasconi (1997). From the point of view of mainstream game theory, the success of individuals in coordinating their actions is something of a mystery. If there are two or more strict Nash equilibria, mainstream game theory has no means of explaining why people tend to choose their parts of one and the same equilibrium. Textbooks (see, e.g., Rasmusen, 1989 and Kreps, 1990) refer to the fact that players may use focal points (Schelling, 1960) or social conventions (Lewis, 1969). Neither notion can easily be incorporated into mainstream game theory, however. The notion of social conventions has recently been studied extensively in the context of evolutionary game theory, where the agents in a population interact with each other. The central focus of this paper, however, is on situations where a few players play a game only once, and I study how they may coordinate their actions.
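The multiplicity that makes coordination mysterious is easy to exhibit. The following sketch, with illustrative payoffs of our own choosing, enumerates the pure-strategy Nash equilibria of a two-player matching game and finds two strict equilibria, between which the formal theory alone cannot select.

```python
# Enumerate pure-strategy Nash equilibria of a two-player matching game.
# Illustrative payoffs: both players get 1 if they choose the same
# label, 0 otherwise.

LABELS = ["heads", "tails"]

def payoff(a, b):
    """Payoffs (player 1, player 2) for the profile (a, b)."""
    return (1, 1) if a == b else (0, 0)

def pure_nash_equilibria():
    eqs = []
    for a in LABELS:
        for b in LABELS:
            u1, u2 = payoff(a, b)
            # (a, b) is an equilibrium if neither player gains by
            # unilaterally deviating.
            if all(payoff(a2, b)[0] <= u1 for a2 in LABELS) and \
               all(payoff(a, b2)[1] <= u2 for b2 in LABELS):
                eqs.append((a, b))
    return eqs

print(pure_nash_equilibria())  # [('heads', 'heads'), ('tails', 'tails')]
```

Both equilibria are strict, and each is the mirror of the other; nothing inside the payoff matrix privileges one, which is exactly the gap that focal points and conventions are invoked to fill.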
In the 1970s Codd introduced the relational algebra, with operators selection, projection, union, difference and product, and showed that it is equivalent to first-order logic. In this paper, we show that if we replace in Codd’s relational algebra the product operator by the “semijoin” operator, then the resulting “semijoin algebra” is equivalent to the guarded fragment of first-order logic. We also define a fixed point extension of the semijoin algebra that corresponds to μGF.
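The semijoin operator that replaces Codd's product can be sketched on toy relations: R semijoin S keeps each tuple of R that agrees with at least one tuple of S on their shared attributes, without importing any attributes of S. The relation names and data below are our own illustration, assuming each relation has a uniform schema.

```python
# Semijoin on relations represented as lists of dicts (attribute -> value).
# Each relation is assumed to have a uniform schema across its tuples.

def semijoin(R, S):
    """Return the tuples of R that join with at least one tuple of S
    on the attributes the two schemas share."""
    if not R or not S:
        return []
    shared = set(R[0]) & set(S[0])
    return [r for r in R
            if any(all(r[a] == s[a] for a in shared) for s in S)]

# Illustrative data: employees whose department actually exists.
employees = [{"name": "ada", "dept": "logic"},
             {"name": "bob", "dept": "sales"}]
departments = [{"dept": "logic", "floor": 3}]

print(semijoin(employees, departments))
# [{'name': 'ada', 'dept': 'logic'}]
```

Unlike the full product, the result never grows beyond R and mentions only R's attributes; this locality is what aligns the semijoin algebra with the guarded fragment rather than with full first-order logic.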