Adina Roskies has argued that worries that recent developments in the neurosciences challenge our ideas of free will and responsibility are misguided. Her argument focuses on the idea that we are able to act differently than we do. However, according to a dominant view in contemporary philosophy, the ability to do otherwise is irrelevant to our judgments of responsibility and free will. Rather, it is our ability to act for reasons that is crucial. We argue that this view is most significantly challenged by the recent discoveries. Those discoveries show that it is not as obvious and uncontroversial that we act for reasons as it seems. Hence, we have to rethink our concept of reasons-responsiveness.
This paper reviews the debate in philosophy on the notion of biological function and on functional explanation. It describes the different perspectives, issues, intuitions, theories, and arguments that have emerged. The author shows that the debate has been too heavily influenced by the concerns of a naturalistic philosophy of mind and argues that, in order to improve our understanding of biology, attention should be shifted from the study of intuitions to the study of the actual practice of biological inquiry.
Many philosophers ignore developments in the behavioral, cognitive, and neurosciences that purport to challenge our ideas of free will and responsibility. The reason for this is that the challenge is often framed as a denial of the idea that we are able to act differently than we do. However, most philosophers think that the ability to do otherwise is irrelevant to responsibility and free will. Rather, it is our ability to act for reasons that is crucial. We argue that the scientific findings indicate that it is not so obvious that our views of free will and responsibility can be grounded in the ability to act for reasons without introducing metaphysical obscurities. This poses a challenge to philosophers. We draw the conclusion that philosophers are wrong not to address the recent scientific developments and that scientists are mistaken in formulating their challenge in terms of the freedom to do otherwise.
I argue that there are at least four different ways in which the term ‘function’ is used in connection with the study of living organisms, namely: (1) function as (mere) activity, (2) function as biological role, (3) function as biological advantage, and (4) function as selected effect. Notion (1) refers to what an item does by itself; (2) refers to the contribution of an item or activity to a complex activity or capacity of an organism; (3) refers to the value for the organism of an item having a certain character rather than another; (4) refers to the way in which a trait has acquired and maintained its current share in the population. The recognition of a separate notion of function as biological advantage solves the problem of the indeterminate reference situation that has been raised against a counterfactual analysis of function, and emphasizes the importance of counterfactual comparison in the explanatory practice of organismal biology. This reveals a neglected problem in the philosophy of biology, namely that of accounting for the insights provided by counterfactual comparison.
This paper is concerned with reasonings that purport to explain why certain organisms have certain traits by showing that their actual design is better than contrasting designs. Biologists call such reasonings ‘functional explanations’. To avoid confusion with other uses of that phrase, I call them ‘design explanations’. This paper discusses the structure of design explanations and how they contribute to scientific understanding. Design explanations are contrastive and often compare real organisms to hypothetical organisms that cannot possibly exist. They are not causal but appeal to functional dependencies between an organism’s different traits. These explanations point out that, because an organism has certain traits (e.g., it lives on land), it could not be alive if the trait to be explained (e.g., having lungs) were replaced by a specified alternative (e.g., having gills). They can be understood from a mechanistic point of view as revealing the constraints on what mechanisms can be alive.
This article presents a new interpretation of Marx's dialectical method. Marx conceived dialectics as a method for constructing a model of society. The way this model is developed is analogous to the way organisms develop according to the German embryologist Karl Ernst von Baer, and, indeed, Marx's theory of capitalism hinges on the same concept of Organisation that is found in teleomechanical biology. The strong analogy between pre-Darwinian biology and Marx's structure of argument shows that the analogy often supposed to exist between Darwin and Marx is not relevant to Marx's theory of capitalism.
This paper evaluates Kuipers' account of functional explanation in biology in view of an example of such an explanation taken from real biology. The example is the explanation of why electric fishes swim backwards (Lannoo and Lannoo 1993). Kuipers' account depicts the answer to a request for functional explanation as consisting only of statements that articulate a certain kind of consequence. It is argued that such an account fails to do justice to the main insight provided by the example explanation, namely the insight into why backwards swimming is needed by fishes that locate their food by means of an electric radar. The paper sketches an improved account that does justice to this kind of insight. It is argued that this account is consistent with and complementary to Kuipers' insight that function attributions are established by means of a process of hypothetico-deductive reasoning guided by a heuristic principle.
Following Mayr (1961), evolutionary biologists often maintain that the hallmark of biology is its evolutionary perspective. In this view, biologists distinguish themselves from other natural scientists by their emphasis on why-questions. Why-questions are legitimate in biology but not in other natural sciences because of the selective character of the process by means of which living objects acquire their characteristics. For that reason, why-questions should be answered in terms of natural selection. Functional biology is seen as a reductionist science that applies physics and chemistry to answer how-questions but lacks a biological point of view of its own. In this paper I dispute this image of functional biology. A close look at the kinds of issues studied in biology and at the way in which these issues are studied shows that functional biology employs a distinctive biological perspective that is not rooted in selection. This functional perspective is characterized by its concern with the requirements of the life-state and the way in which these are met.
This article deals with a type of functional explanation, viability explanation, that has been overlooked in recent philosophy of science. Viability explanations relate traits of organisms and their environments in terms of what an individual needs to survive and reproduce. I show that viability explanations are neither causal nor historical and that, therefore, they should be accounted for as a distinct type of explanation.
Darwin’s insight that species are mutable and that they descend and originate by means of natural selection underlies one of the most widely acknowledged strategies for the origin of species and their survival in nature. In his famous contribution, however, Darwin also writes that he is convinced that “... Natural Selection has been the main but not exclusive means of modification” (Darwin in The Origin of Species, Oxford University Press, Oxford, p. 7, 1996). This research suggests robustness as another fundamental strategy for survival in nature. The paper does not contradict the popular view, which usually sees robustness as a feature making systems fault-tolerant, thereby focusing on the identification of strategies and techniques for making systems robust (i.e., how to achieve robustness). The paper rather extends this view with an interpretation resting on the question of why robustness is omnipresent in the world around us. From this point of view, robustness is interpreted as a fundamental mechanism that is in place because of another fundamental feature of nature: the design and use of sub-optimal systems. The paper argues that, in a sense, nature under-specifies systems but compensates for this by providing systems with various degrees of robustness. We believe that this interpretation may lead to fundamentally new design approaches and insights in several fields.
The thought determinations of Hegel's Logic are tentatively considered as three mutually intersecting systems giving rise to a set of general ordering relations. Each main category is given its place in a main sequence with triadic structure and is associated with two sequences of dependent partial determinations. In this way a number of clear distinctions can be introduced and some significant but hitherto largely neglected correspondences between categories and their partial determinations, in particular the so-called moments, be studied.
Technology assessment (TA) is an important instrument for the regulation of innovation. From the perspective of sociology of knowledge, the regulatory process can be understood as a complex interplay between different forms of knowledge. The prevailing instruments of TA, expertise and participation, are both facing difficulties in dealing with the limits and impasses of regulatory knowledge in the realm of innovation. Nevertheless, as is argued in this article, reflexive forms of TA offer a good, if not the only, answer to the question of how we can deal with the contradictions and paradoxes involved in the regulation of innovation.
In practice, the relationship between business and ethics is not well-settled. In the past, organisations have developed an interest in setting value charts, but this has been approached from a purely managerial perspective, following the momentum and interest aroused by research on organisational cultures. Although interest in managing organisational cultures has slowly died down, for both theoretical and practical reasons we argue that there are feasible ways to explore values as part of an organisational culture. Indeed, it is our claim that it is feasible and productive to discuss values within organisations. However, rather than developing sophisticated theoretical frameworks, more effort should be put into thinking about the conditions under which participants can enter into productive dialogue. It is our claim that if processes are carefully examined, people within organisations can make better sense of their work and discover their own perspective to account for what they actually do and to project themselves into what they think they should be doing. Thus, values identified within the organisation can eventually reach a point where they become an expression of a shared commitment. The experience we describe aims to illustrate one example of a concrete application of this approach.
In Section I, different characterizations of the theoretical status, systematic importance, and possible applications of cybernetics in the human sciences are sketched, according to viewpoints currently developing in Soviet and Eastern science. Significant differences from the Western scientific approaches are pointed out. The connection of this field with work on heuristics and systems theory is briefly dealt with. Section II gives a critical appraisal of the ideas of B. V. Birjukov on the humanization of logic. The question of the possibility of a special logic of creativity according to this and similar methods in cybernetics is outlined, followed by a short critical analysis. Section III gives a short general introduction to some problems of the comparison of formal with dialectical logic, criticizes the work of J. Erpenbeck and H. Hörz on lawlike sentences, and proposes a new scheme for dialectical hypothesis formation. A critical comparison with some recent developments in Soviet philosophy forms the conclusion.
This text is a selection of fragments attributed to Jan Potocki. Jean Potocki (1761–1815), a master of irony firmly settled in the seemingly elusive space between the Enlightenment and Romanticism, remains the most frequently plagiarized author in Europe. He no doubt posthumously served the cause of copyright, but it is also as if the originality and fecundity of his thought were so different that he could not be presented otherwise than by erasing his name, if not simply destroying it. Although the history of his French and American plagiarisms is exhaustively documented, there is still no definitive edition of his masterpiece, The Manuscript Found in Saragossa (Rękopis znaleziony w Saragossie), partly translated into German as early as 1809 and, in 1847, also in Leipzig, probably with the involvement of the Societas Jablonoviana, translated in its entirety into Polish (quotations follow the fullest edition to date, René Radrizziani's French edition of 1989). In the 1950s, the effort of restoring him to his rightful place was undertaken by R. Caillois, L. Kukulski, W. Has, Emanuel Rostworowski, and Maria-Ewelina Żółtowska-Weintraub, whose still unpublished doctoral dissertation (Yale, 1973) remains the fullest biography of Count Jan, linked to the career of the hero of the Manuscript. That modest officer of the Walloon Guard, Alphonse van Worden (not only from Habsburg Wallonia, not to say "Gallonia," but also from "Words"), after many terrible trials, which were in fact nothing other than terrible, theatrically staged stories, was promoted to governor of Saragossa, where he led a quiet life caring for his little daughter Fatima. […]
This paper aims to contribute to our understanding of the notion of coherence by explicating in probabilistic terms, step by step, what seem to be our most basic intuitions about that notion, to wit, that coherence is a matter of hanging or fitting together, and that coherence is a matter of degree. A qualitative theory of coherence will serve as a stepping stone to formulate a set of quantitative measures of coherence, each of which seems to capture well the aforementioned intuitions. Subsequently it will be argued that one of those measures does better than the others in light of some more specific intuitions about coherence. This measure will be defended against two seemingly obvious objections.
Research into the so-called “philosophical” Hermetica has long been dominated by the foundational scholarship of André-Jean Festugière, who strongly emphasized their Greek and philosophical elements. Since the late 1970s, this perspective has given way to a new and more complex one, due to the work of another French scholar, Jean-Pierre Mahé, who could profit from the discovery of new textual sources, and called much more attention to the Egyptian and religious dimensions of the hermetic writings. This article addresses the question of how, on these foundations, we should evaluate and understand the frequent hermetic references to profound but wholly ineffable revelatory and salvational insights received during “ecstatic” states. Festugière dismissed them as “literary fictions”, whereas Mahé took them much more seriously as possibly reflecting ritual practices that took place in hermetic communities. Based upon close reading of three central texts (CH I, CH XIII, NH VI6), and challenging existing translations and interpretations, this article argues that the authors of the hermetic corpus assumed a sequential hierarchy of “levels of knowledge”, in which the highest and most profound knowledge (gnōsis) is attained only during ecstatic or “altered” states of consciousness that transcend rationality. While the hermetic teachings have often been described as unsystematic, inconsistent, incoherent or confused, in fact they are grounded in a precise and carefully formulated doctrine of how the hermetic initiate may move from the domain of mere rational discourse to the attainment of several “trans-rational” stages of direct experiential knowledge, and thereby from the limited and temporal domain of material reality to the unlimited and eternal one of Mind.
Legal Realism Regained presents a comparison between the legal realists, a group of pragmatic legal theorists from the 1920s and 1930s, and critical legal studies, a movement of postmodern legal theory from the end of the twentieth century. The book argues for a return to legal realism and the classical pragmatism of John Dewey and William James, and for a rejection of the postmodern critique of critical legal studies. It discusses the two movements with respect to three topics: their view of history, their view of social science, and their view of language. Rejecting the claim that critical legal studies can be seen as the heir of legal realism, Legal Realism Regained argues that, with respect to each of these three topics, the realists still present a stronger argument than their more radical descendants.
Along with the exploding attention to globalization, issues of global justice have become central elements in political philosophy. After decades in which debates were dominated by a state-centric paradigm, current debates in political philosophy also address issues of global inequality, global poverty, and the moral foundations of international law. As recent events have demonstrated, these issues also play an important role in the practice of international law. In fields such as peace and security, economic integration, environmental law, and human rights, international lawyers are constantly confronted with questions of global justice and international legitimacy. This special issue contains four papers, each addressing an important element of this emerging debate on cosmopolitan global justice, with much relevance for international law: the principle of sovereign equality, global economic inequality, and environmental law.
In a famous experiment by Tversky and Kahneman (Psychol Rev 90:293–315, 1983), featuring Linda the bank teller, the participants assign a higher probability to a conjunction of propositions than to one of the conjuncts, thereby seemingly committing a probabilistic fallacy. In this paper, we discuss a slightly different example featuring someone named Walter, who also happens to work at a bank, and argue that, in this example, it is rational to assign a higher probability to the conjunction of suitably chosen propositions than to one of the conjuncts. By pointing out the similarities between Tversky and Kahneman’s experiment and our example, we argue that the participants in the experiment may assign probabilities to the propositions in question in such a way that it is also rational for them to give the conjunction a higher probability than one of the conjuncts.
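The probabilistic rule at issue here, the conjunction rule, says that a coherent credence function can never assign a conjunction a higher probability than either conjunct. This can be checked numerically; the sketch below is purely illustrative (the propositions and numbers are hypothetical, not data from the experiment):

```python
# Hypothetical joint credence distribution over two propositions,
# A ("is a bank teller") and B ("is a feminist"); the numbers are
# invented for illustration, not taken from the experiment.
joint = {
    (True, True): 0.05,   # A and B
    (True, False): 0.10,  # A and not-B
    (False, True): 0.45,  # not-A and B
    (False, False): 0.40, # neither
}

# Marginal probabilities by summing over the joint distribution.
p_a = sum(p for (a, b), p in joint.items() if a)
p_b = sum(p for (a, b), p in joint.items() if b)
p_ab = joint[(True, True)]

# Conjunction rule: P(A & B) <= min(P(A), P(B)) on any coherent distribution.
assert p_ab <= min(p_a, p_b)
print(round(p_a, 3), round(p_b, 3), p_ab)
```

Ranking the conjunction above a conjunct therefore violates this rule relative to any single distribution; the abstract's claim is that participants may be assigning probabilities to suitably reinterpreted propositions, so that their ranking need not be irrational after all.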
If coherence is to have justificatory status, as some analytical philosophers think it has, it must be truth-conducive, if perhaps only under certain specific conditions. This paper is a critical discussion of some recent arguments that seek to show that under no reasonable conditions can coherence be truth-conducive. More specifically, it considers Bovens and Hartmann’s and Olsson’s “impossibility results,” which attempt to show that coherence cannot possibly be a truth-conducive property. We point to various ways in which the advocates of a coherence theory of justification may attempt to divert the threat of these results.
Is it possible and desirable to translate the basic principles underlying cosmopolitanism as a moral standard into effective global institutions? Will the ideals of inclusiveness and equal moral concern for all survive the marriage between cosmopolitanism and institutional power? What are the effects of such bureaucratization of cosmopolitan ideals? This book examines the strained relationship between cosmopolitanism as a moral standard and the legal institutions in which cosmopolitan norms and principles are to be implemented. Five areas of global concern are analyzed: environmental protection; economic regulation; peace and security; the fight against international crimes; and migration.
Glymour’s theory of bootstrap confirmation is a purely qualitative account of confirmation; it allows us to say that the evidence confirms a given theory, but not that it confirms the theory to a certain degree. The present paper extends Glymour’s theory to a quantitative account and investigates the resulting theory in some detail. It also considers the question of how bootstrap confirmation relates to justification.
In this paper I consider whether there is a measure of coherence that could be rightly claimed to generalize the notion of logical equivalence. I show that Fitelson’s (2003) proposal to that effect encounters some serious difficulties. Furthermore, there is reason to believe that no mutual-support measure could ever be suitable for the formalization of coherence as generalized logical equivalence. Instead, it appears that the only plausible candidate for such a measure is one of relative overlap. The measure I propose in this paper is quite similar to Olsson’s (2002) proposal but differs from it by not being susceptible to the type of counterexample that Bovens and Hartmann (2003) have devised against it.
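For concreteness, Olsson's (2002) proposal referred to here is standardly rendered as the relative overlap P(A ∧ B)/P(A ∨ B). The following sketch implements that baseline overlap measure with hypothetical numbers; it illustrates Olsson's proposal, not the amended measure this paper itself defends:

```python
def overlap_coherence(p_a, p_b, p_ab):
    """Relative-overlap measure of coherence for two propositions:
    P(A & B) / P(A or B). Equals 1.0 for equivalent propositions
    (maximal overlap) and 0.0 for mutually exclusive ones."""
    p_union = p_a + p_b - p_ab  # inclusion-exclusion
    if p_union <= 0:
        raise ValueError("P(A or B) must be positive")
    return p_ab / p_union

# Hypothetical credences for two substantially overlapping propositions.
print(round(overlap_coherence(0.5, 0.4, 0.3), 6))  # 0.3 / 0.6 = 0.5

# Limiting cases behave as the measure intends.
assert overlap_coherence(0.4, 0.4, 0.4) == 1.0  # logical equivalence
assert overlap_coherence(0.4, 0.3, 0.0) == 0.0  # mutual exclusion
```

On this measure, coherence grows with the proportion of the propositions' combined probability mass that they share, which is what makes it a natural candidate for generalizing logical equivalence.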
Institutional theory of law (ITL) reflects both continuity and change of Kelsen's legal positivism. The main alteration results from the way ITL extends Hart's linguistic turn towards ordinary language philosophy (OLP). Hart holds – like Kelsen – that law can be reduced neither to brute fact nor to morality, but because of its attempt to reconstruct social practices his theory is more inclusive. By introducing the notion of law as an extra-linguistic institution, ITL takes a next step in legal positivism and accounts for the relationship between action and validity within the legal system. There are, however, some problems yet unresolved by ITL. One of them is its theory of meaning. Another is the way it accounts for change and development. Answers may be based on the pragmatic philosophy of Charles Sanders Peirce, who emphasises the intrinsic relation between the meaning of speech acts and the process of habit formation.
Paul Otlet (1868–1944) was a Belgian intellectual, a utopian internationalist, and a visionary theorist of the field of information science. His work is a milestone in the history of information science, since he launched the concept of "documentation," a field that evolved out of bibliography and developed into information science. Otlet defined documentation as the whole of the proper means of passing on, communicating, and distributing information. Otlet was a convinced apostle of the idea of universalism, as the title of one of his seminal books, Monde. Essai d'Universalisme, illustrates. This was the outcome of a course of fifteen lessons entitled "L'universalisme, doctrine philosophique et économie mondiale."
Investigations into inter-level relations in computer science, biology and psychology call for an *empirical* turn in the philosophy of mind. Rather than concentrate on *a priori* discussions of inter-level relations between 'completed' sciences, a case is made for the actual study of the way inter-level relations grow out of the developing sciences. Thus, philosophical inquiries will be made more relevant to the sciences, and, more importantly, philosophical accounts of inter-level relations will be testable by confronting them with what really happens in science. Hence, close observation of the ever-changing reduction relations in the developing sciences, and revision of philosophical positions based on these empirical observations, may, in the long run, be more conducive to an adequate understanding of inter-level relations than a traditional *a priori* approach.
Synthese 156 (3) (2007). Special issue ed. with Luc Bovens. With contributions by Max Albert, Branden Fitelson, Dennis Dieks, Igor Douven and Wouter Meijs, Alan Hájek, Colin Howson, James Joyce, and Patrick Suppes.
Character education considers teachers to be role models, but it is unclear what this means in practice. Do teachers model admirable character traits? And do they do so effectively? In this article the relevant pedagogical and psychological literature is reviewed in order to shed light on these questions. First, the use of role modelling as a teaching method in secondary education is assessed. Second, adolescents' role models and their moral qualities are identified. Third, the psychology of moral learners is critically examined, using Bandura's social learning theory as point of departure. It turns out that role modelling is rarely used as an explicit teaching method and that only a very small percentage of adolescents recognises teachers as role models. If role modelling is to contribute to children's moral education, teachers are recommended to explain why the modelled traits are morally significant and how students can acquire these qualities for themselves.
Bovens and Hartmann (Bayesian Epistemology, Oxford: Oxford University Press, 2003) propose to analyze coherence as a confidence-boosting property. On the basis of this idea, they construct a new probabilistic theory of coherence. In this paper, I will attempt to show that the resulting measure of coherence clashes with some of the intuitions that motivate it. Also, I will try to show that this clash is not due to the view on coherence as a confidence-boosting property or to the general features of the model that Bovens and Hartmann use to analyze coherence. It will turn out that there is at least one other measure that is similarly based on the concept of a confidence-boosting property, but does not have the same counterintuitive results.
The medieval doctrine of God as first known presents a privileged moment in a tradition of classical metaphysics that runs from Plato to Levinas. The present contribution analyzes two versions of this doctrine formulated by Bonaventure († 1274) and Henry of Ghent († 1293). In reaction to the preceding discussion in Paris, they advance a doctrine of God as first known that distinguishes the relative priority of God within the first known transcendental concepts from the absolute priority of God over these. Although their two-stage doctrines of God as first known structurally agree, they vary in their strategic embedding. Underlying this variation is a transformation of the concept of reality that abstracts actuality as a standard and criterion for the determination of the first known. As such, this concept of reality gives rise to the very idea of neutral existence against which Levinas objects.
In this paper, reduction and its pragmatics are discussed in light of the development in computer science of languages to describe processes. The design of higher-level description languages within computer science has had the aim of allowing for description of the dynamics of processes in the (physical) world on a higher level avoiding all (physical) details of these processes. The higher description levels developed have dramatically increased the complexity of applications that came within reach. The pragmatic attitude of a (scientific) practitioner in this area has become inherently anti-reductionist, but based on well-established reduction relations. The paper discusses how this perspective can be related to reduction in general, and to other domains where description of dynamics plays a main role, in particular, biological and cognitive domains.
A social scientific survey on visions of human/nature relationships in western Europe shows that the public clearly distinguishes not only between anthropocentrism and ecocentrism, but also between two nonanthropocentric types of thought, which may be called “partnership with nature” and “participation in nature.” In addition, the respondents distinguish a form of human/nature relationship that is allied to traditional stewardship but has a more ecocentric content, labeled here as “guardianship of nature.” Further analysis shows that the general public does not subscribe to an ethic of “mastery over nature.” Instead, practically all respondents embrace the image of guardianship, while the more radical relationships of partnership and participation also received significant levels of adherence. The results imply that ethicists should no longer assume that the ethics of the public are merely anthropocentric. Finally, they call into question the idea of a single form of ecocentrism and favor a hermeneutic virtue ethics approach to the study of the interface between ethics and action.
According to moral error theory, moral discourse is error-ridden. Establishing error theory requires establishing two claims. These are that moral discourse carries a non-negotiable commitment to there being a moral reality and that there is no such reality. This paper concerns the first and so-called non-negotiable commitment claim. It starts by identifying the two existing argumentative strategies for settling that claim. The standard strategy is to argue for a relation of conceptual entailment between the moral statements that comprise moral discourse and the statement that there is a moral reality. The non-standard strategy is to argue for a presupposition relation instead. Error theorists have so far failed to consider a third strategy, which uses a general entailment relation that doesn’t require intricate relations between concepts. The paper argues that both entailment claims struggle to meet a new explanatory challenge and that since the presupposition option doesn’t we have prima facie reason to prefer it over the entailment options. The paper then argues that suitably amending the entailment claims enables them to meet this challenge. With all three options back on the table the paper closes by arguing that error theorists should consider developing the currently unrecognised, non-conceptual entailment claim.
Considerable attention has been given to the accessibility of legal documents, such as legislation and case law, both in legal information retrieval (query formulation, search algorithms), in legal information dissemination practice (numerous examples of on-line access to formal sources of law), and in legal knowledge-based systems (by translating the contents of those documents to ready-to-use rule and case-based systems). However, within AI & law, hardly any attempt has been made to make the contents of sources of law, and the relations among them, more accessible to those without a legal education. This article presents a theory about translating sources of law into information accessible to persons without a legal education. It illustrates the theory by providing two elaborated examples of such translation ventures. In the first example, formal sources of law in the domain of exchanging police information are translated into rules of thumb useful for policemen. In the second example, the goal of providing non-legal professionals with insight into legislative procedures is translated into a framework for making available sources of law through an integrated legislative calendar. Although the theory itself does not support automating the several stages described, in this article some hints are given as to what such automation would have to look like.
The communicative effect of a collective message from Wouter Bos, the former Dutch minister of finance, informing all his contacts about his new email address is completely different from that of a set of individual messages to the same list. The talk will explain how differences of this kind can be modelled in epistemic logic (the logic of knowledge). A central notion here is common knowledge. We will explain the general framework for describing the update effects of messages as mappings on epistemic models (“knowledge models”), and we will give a sketch of some recent work in this area.
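The idea of a message update as a mapping on epistemic models can be sketched in a few lines of Python. The toy model below is illustrative only (it is not code from the talk, and all names are assumptions): a public announcement of a fact maps a Kripke-style model to its restriction to the worlds where the fact holds, after which every agent knows the fact.

```python
# Toy epistemic ("Kripke") model: a set of worlds, per-agent
# accessibility relations (pairs of worlds), and a valuation mapping
# each world to the set of facts true there. Illustrative sketch only.

def knows(agent, fact, world, rel, val):
    """Agent knows `fact` at `world` iff it holds in every world
    the agent considers possible from `world`."""
    return all(fact in val[v] for (u, v) in rel[agent] if u == world)

def announce(fact, worlds, rel, val):
    """Public announcement of `fact`: the update is a mapping on models
    that restricts the model to the worlds where `fact` holds."""
    keep = {w for w in worlds if fact in val[w]}
    new_rel = {a: {(u, v) for (u, v) in r if u in keep and v in keep}
               for a, r in rel.items()}
    new_val = {w: val[w] for w in keep}
    return keep, new_rel, new_val

# Two worlds: in w1 the fact "p" (say, the new address) is true, in w2 not.
worlds = {"w1", "w2"}
val = {"w1": {"p"}, "w2": set()}
# Both recipients initially consider both worlds possible.
rel = {"ann": {(u, v) for u in worlds for v in worlds},
       "bob": {(u, v) for u in worlds for v in worlds}}

print(knows("ann", "p", "w1", rel, val))      # → False (before the message)
worlds, rel, val = announce("p", worlds, rel, val)
print(knows("ann", "p", "w1", rel, val))      # → True (after the announcement)
```

A collective message acts like the public announcement above, making the fact (eventually common) knowledge among all recipients at once; a set of individual messages would instead update each agent's relation separately, so no recipient would know that the others know.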
The communicative effect of a collective email from Wouter Bos to all his contacts is completely different from that of the same message sent to each addressee individually. The talk addresses the question of how differences of this kind can be modelled in epistemic logic. A central notion here is ‘common knowledge’, or collective knowing. This concept will be illustrated by means of a number of logical puzzles, and of everyday protocols intended to establish common knowledge. It will be explained why, in cases where an economic interest is at stake, it is not rational ‘to agree to disagree’ (to know of each other that we assess the value of an economic good differently). Time permitting, we will also give a demonstration of epistemic modelling.
In previous research, Toms, Morris, and Ward (1993) have shown that conditional reasoning is impaired by a concurrent task calling on executive functions but not by concurrent tasks that load on the slave systems of the working memory system as conceptualised by Baddeley and Hitch (1974). The present article replicates and extends this previous work by studying problems based on spatial as well as nonspatial relations. In the study, 42 participants solved 16 types of spatial or nonspatial problems, both in a single-task condition and under concurrent matrix tapping, a task loading the visuo-spatial sketch pad. The findings were consistent with those of Toms et al. (1993) for problems with a nonspatial content. However, when the content was spatial, and only then, a dual-task impairment was observed: processing time of the first premise was lengthened, especially for problems with negations in the antecedent term, the consequent term, or both; moreover, the number of correctly solved problems with negations in both terms was smaller. The implications of these findings for the mental models theory and the mental logic theory are discussed.
We elaborate on Israel Scheffler's claim that principles of rationality can be rationally evaluated, focusing on foundational development, by which we mean the evolution of principles that are constitutive of our conceptualization of a certain domain of rationality. How can claims that some such principles are better than prior ones be justified? We argue that Scheffler's metacriterion of overall systematic credibility is insufficient here. Two very different types of rational development are jointly involved, namely the development of general principles that are strictly constitutive of rationality as such, and the development of specific principles determinative of our conceptualization of particular domains. For the first type a transcendental argument applies. As to the second, we show how foundational development is itself a condition of the possibility of its justification. In both cases only principles typical of the later stage yield the second-order criterion in terms of which the evaluative comparison with former stages can be made and defended. In a discussion of the problems involved, we indicate to what extent Scheffler's idea of rationally justifiable rational development may be realized here, avoiding the pitfalls of both foundationalism and relativism.
"Absolute Beginners" is a multi-approach study of the founding role of the Absolute as the very beginning of knowledge in medieval philosophy (Henry of Ghent, Richard Conington), the subject being addressed from historical, methodological, ...
This paper introduces and describes new protocols for proving knowledge of secrets without giving them away: if the verifier does not know the secret, he does not learn it. All of this can be done using only one-way hash functions. If the use of encryption is also allowed, these goals can be reached more efficiently. We extend and use the GNY authentication logic to prove the correctness of these protocols.
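The abstract does not spell the protocols out, so the following is only a hedged illustration of the general idea, not the paper's actual constructions: using a one-way hash, a prover can answer a fresh challenge in a way that a verifier who already knows the secret can check, while a verifier who does not know it learns nothing usable (by preimage resistance). The function names and the challenge-response shape are assumptions for the sketch.

```python
# Hedged sketch of a hash-based proof of knowledge (illustration only,
# not the paper's protocols): the prover hashes the secret together
# with a fresh nonce chosen by the verifier. A verifier who knows the
# secret can recompute and check the response; one who does not cannot
# recover the secret from the one-way hash.
import hashlib
import os

def respond(secret: bytes, nonce: bytes) -> bytes:
    """Prover's response: a one-way hash binding secret and challenge."""
    return hashlib.sha256(secret + nonce).digest()

def check(secret: bytes, nonce: bytes, response: bytes) -> bool:
    """Only a verifier who already knows the secret can verify."""
    return hashlib.sha256(secret + nonce).digest() == response

nonce = os.urandom(16)                       # verifier's fresh challenge
proof = respond(b"my-secret", nonce)         # prover's reply
print(check(b"my-secret", nonce, proof))     # knowing verifier → True
print(check(b"wrong-guess", nonce, proof))   # ignorant verifier → False
```

The fresh nonce prevents replay of an old response; without it, anyone who had seen one proof could reuse it. The paper's protocols are verified with the GNY authentication logic rather than with tests like this.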