Is it possible, and in the first place is it even desirable, to define what "development" means and to determine the scope of the field called "developmental biology"? Though these questions appeared crucial to the founders of "developmental biology" in the 1950s, there seems to be no consensus today about the need to address them. Here, in a combined biological, philosophical, and historical approach, we ask whether it is possible and useful to define biological development and, if such a definition is indeed possible and useful, which definition can be considered the most satisfactory.
This paper presents experimental data relevant to understanding the modal free choice effect (Kamp, 1973) when there are more than two disjuncts under the relevant modal operator. The results suggest that speakers' willingness to draw free choice inferences is correlated with whether the embedded disjuncts are *modally separable*, in a sense brought into focus by considering cases in which the relevant propositions fail to be pairwise redundant but are redundant as a set.
This paper addresses a little puzzle with a surprisingly long pedigree and a surprisingly large wake: the puzzle of Free Choice Permission. I begin by presenting a popular sketch of a pragmatic solution to the puzzle, due to Kratzer and Shimoyama, which has received a good deal of discussion, endorsement and elaboration in recent work (…:535–590, 2006; Fox, in: Sauerland and Stateva (eds.), Presupposition and Implicature in Compositional Semantics, 2007; Geurts, Mind Lang 24:51–79, 2009; von Fintel, Central APA session on Deontic Modals, 2012). I then explain why the general form of the Kratzer and Shimoyama explanation is not extensionally adequate. This leaves us with two possibilities with regard to the original solution-sketch: either the suggested pragmatic route fails, or it succeeds in a particularly strange way, rendering Free Choice Permission a kind of pragmatic illusion on the part of both speakers and hearers. Finally, I discuss some ramifications.
I propose a unified solution to two puzzles: Ross's puzzle and free choice permission. I begin with a pair of cases from the decision theory literature illustrating the phenomenon of act dependence, where what an agent ought to do depends on what she does. The notion of permissibility distilled from these cases forms the basis for my analysis of 'may' and 'ought'. This framework is then combined with a generalization of the classical semantics for disjunction — equivalent to Boolean disjunction on the diagonal, but with a different two-dimensional character — that explains the puzzling facts in terms of semantic consequence.
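The two puzzles can be stated schematically; a minimal rendering in standard deontic-logic notation (not the paper's own formalism):

```latex
% Ross's puzzle: disjunction introduction under 'ought' is valid in
% standard deontic logic, yet intuitively bad: "You ought to post the
% letter" should not entail "You ought to post the letter or burn it".
O\,p \models O(p \lor q)

% Free Choice Permission: the mirror-image trouble for 'may'. The
% inference is intuitively compelling ("You may have coffee or tea"
% suggests you may have coffee and you may have tea) but invalid in
% standard deontic logic.
\Diamond(p \lor q) \not\models \Diamond p \land \Diamond q
```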
The Free Choice effect—whereby ◇(p ∨ q) seems to entail both ◇p and ◇q—has traditionally been characterized as a phenomenon affecting the deontic modal ‘may’. This paper presents an extension of the semantic account of free choice defended by Fusco to the agentive modal ‘can’, the ‘can’ which, intuitively, describes an agent’s powers. On this account, free choice is a nonspecific de re phenomenon that—unlike typical cases—affects disjunction. I begin by sketching a model of inexact ability, which grounds a modal approach to agency in a Williamson-style margin of error. A classical propositional semantics combined with this framework can reflect the intuitions highlighted by Kenny’s dartboard cases, as well as the counterexamples to simple conditional views recently discussed by Mandelkern et al. In Section 3, I turn to an independently motivated actual-world-sensitive account of disjunction, and show how it extends free choice inferences into an object language for propositional modal logic.
Recently the complexity of discursive practices has been widely acknowledged by the humanities and social sciences. In fact, to know anything is to know in terms of one or more discourses. The "discursive turn" in psychology may be considered a new paradigm oriented to a proper study of (wo)man only if it is able to grasp the semiotic ground of psychic experience both as an "effort after meaning" and as a "struggle over meaning." In this sense the notion of "diatext" has been proposed as a contribution to working out a psychosemiotic approach to understanding how discursive practices assign subject-positions to the agents of each interlocution scenario.
Two-dimensional semantics, which can represent the distinction between apriority and necessity, has wielded considerable influence in the philosophy of language. In this paper, I axiomatize the dagger (†) operator of Stalnaker’s “Assertion” in the formal context of two-dimensional modal logic. The language contains modalities of actuality, necessity, and apriority, but is also able to represent diagonalization, a conceptually important operation in a variety of contexts, including models of the relative a priori and a posteriori often appealed to in Bayesian and Gricean contexts. Finally, I sketch the prospects for extending this two-dimensional upgrade to other kinds of modal logics for natural language.
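For orientation, the diagonalization the dagger performs can be stated in the usual pair notation, where the first coordinate is the world considered as actual and the second is the world of evaluation (a standard formulation, not a quotation from the paper):

```latex
% Stalnaker's dagger replaces each row of a two-dimensional matrix
% with the diagonal:
(x, y) \models \dagger A \quad \Longleftrightarrow \quad (y, y) \models A
```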
Giuseppe Galli's consistent striving for an adequate consideration of both poles, the subject pole as well as the object pole, in research as in all areas of human life comes to bear in every contribution to the present collected volume. Galli thereby also opens up new fields for Gestalt-theoretical research and applied practice. He develops themes that are central not least to medical, philosophical, psychological, and psychotherapeutic work. He not only argues generally for a dialogical approach to interpersonal encounter; in a series of essays in this volume he also demonstrates, very concretely, the fruitfulness of such a dialogue with the thought of Norbert Elias, Szvetan Todorovic, and Paul Ricoeur for enriching and further developing Gestalt theory.
Artefacts do not always do what they are supposed to, due to a variety of reasons, including manufacturing problems, poor maintenance, and normal wear-and-tear. Since software is an artefact, it should be subject to malfunctioning in the same sense in which other artefacts can malfunction. Yet, whether software is on a par with other artefacts when it comes to malfunctioning crucially depends on the abstraction used in the analysis. We distinguish between “negative” and “positive” notions of malfunction. A negative malfunction, or dysfunction, occurs when an artefact token either does not or cannot do what it is supposed to. A positive malfunction, or misfunction, occurs when an artefact token may do what it is supposed to but, at least occasionally, also yields some unintended and undesirable effects. We argue that software, understood as type, may misfunction in some limited sense, but cannot dysfunction. Accordingly, one should distinguish software from other technical artefacts, in view of a design that makes dysfunction impossible for the former but possible for the latter.
The dependence of both present and future dynamics of life on history is a common intuition in biology and in the humanities. Historicity will be understood in terms of changes of the space of possibilities as well as by the role of diversity in life’s structural stability and of rare events in history formation. We point toward a rigorous analysis of “path dependence” in terms of invariants and invariance-preserving transformations, as may also be found in physics, while departing from the physico-mathematical analyses. The idea is that the invariant traces of the past under organismal or ecosystemic transformations contribute to the understanding of present and future states of affairs. This yields a peculiar form of unpredictability in biology, at the core of novelty formation: the changes of observables and pertinent parameters may depend also on past events. In particular, in relation to the properties of synchronic measurement in physics, the relevance of diachronic measurement in biology is highlighted. This analysis may a fortiori apply to cognitive and historical human dynamics, while allowing us to investigate some general properties of historicity in biology.
This paper proposes a critical analysis of the interpretation of the Nāgārjunian doctrine of the two truths summarized—by both Mark Siderits and Jay L. Garfield—in the formula: “the ultimate truth is that there is no ultimate truth”. This ‘semantic reading’ of Nāgārjuna’s theory, despite its importance as a criticism of the ‘metaphysical interpretations’, would in itself be defective and improbable. Indeed, firstly, the semantic interpretation presents a formal defect: it fails to clearly and explicitly express what it contains logically; the previously mentioned formula must necessarily be completed by: “the conventional truth is that nothing is conventional truth”. Secondly, once what Siderits’ and Garfield’s analyses contain implicitly has been recognized, other logical and philological defects in their position emerge: the existence of the ‘conventional’ would appear—despite the efforts of semantic interpreters to demonstrate quite the contrary—definitively inconceivable without the presupposition of something ‘real’; moreover, the number of verses in Nāgārjuna that are in opposition to the semantic interpretation (even if we grant semantic interpreters that these verses do not justify a metaphysical reconstruction of Nāgārjuna’s doctrine) would seem too great and significant to be ignored.
Open futurism is the indeterministic position according to which the future is ‘open’, i.e., there is now no fact of the matter as to what future contingent events will actually obtain. Many open futurists hold a branching conception of time, in which a variety of possible futures exist. This paper introduces two challenges to branching-time open futurism, which are similar in spirit to a challenge posed by Fine to tense realism. The paper argues that, to address the new challenges, open futurists must adopt an objective, non-perspectival notion of actuality and subscribe to an A-theoretic, dynamic conception of reality. Moreover, given a natural understanding of “actual future”, it is perfectly sensible for open futurists to hold that a unique, objectively actual future exists, contrary to a common assumption in the current debate. The paper also contends that recognising the existence of a unique actual future helps open futurists to avoid potential misconceptions.
In this paper, we axiomatize the deontic logic in Fusco 2015, which uses a Stalnaker-inspired account of diagonal acceptance and a two-dimensional account of disjunction to treat Ross’s Paradox and the Puzzle of Free Choice Permission. On this account, disjunction-involving validities are a priori rather than necessary. We show how to axiomatize two-dimensional disjunction so that the introduction/elimination rules for Boolean disjunction can be viewed as one-dimensional projections of more general two-dimensional rules. These completeness results help make explicit the restrictions Fusco’s account must place on free-choice inferences. They are also of independent interest, as they raise difficult questions about how to ‘lift’ a Kripke frame for a one-dimensional modal logic into two dimensions.
Computing, today more than ever before, is a multi-faceted discipline which collates several methodologies, areas of interest, and approaches: mathematics, engineering, programming, and applications. Given its enormous impact on everyday life, it is essential that its debated origins are understood, and that its different foundations are explained. On the Foundations of Computing offers a comprehensive and critical overview of the birth and evolution of computing, and it presents some of the most important technical results and philosophical problems of the discipline, combining both historical and systematic analyses.

The debates this text surveys are among the latest and most urgent ones: the crisis of foundations in mathematics and the birth of the decision problem, the nature of algorithms, the debates on computational artefacts and malfunctioning, and the analysis of computational experiments. By covering these topics, On the Foundations of Computing provides a much-needed resource to contextualize these foundational issues.

For practitioners, researchers, and students alike, a historical and philosophical approach such as the one this volume offers is essential to understand the past of the discipline and to figure out the challenges of its future.
Accidents involving autonomous vehicles (AVs) raise difficult ethical dilemmas and legal issues. It has been argued that self-driving cars should be programmed to kill, that is, that they should be equipped with pre-programmed approaches to the choice of which lives to sacrifice when losses are inevitable. Here we explore a different approach, namely, giving the user/passenger the task of deciding what ethical approach should be taken by AVs in unavoidable accident scenarios. We thus assume that AVs are equipped with what we call an “Ethical Knob”, a device enabling passengers to ethically customise their AVs, namely, to choose between different settings corresponding to different moral approaches or principles. Accordingly, AVs would be entrusted with implementing users’ ethical choices, while manufacturers/programmers would be tasked with enabling the user’s choice and ensuring implementation by the AV.
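As a minimal illustration of the proposal, the settings of such a knob and a toy decision rule might be sketched as follows; the setting names and the rule itself are hypothetical illustrations, not the paper's specification:

```python
from enum import Enum

class KnobSetting(Enum):
    """Hypothetical settings for an 'Ethical Knob'."""
    ALTRUIST = "altruist"    # prioritise third parties over the passengers
    IMPARTIAL = "impartial"  # weigh all lives equally
    EGOIST = "egoist"        # prioritise the passengers

def weigh(setting: KnobSetting, passengers: int, third_parties: int) -> str:
    """Toy decision rule: which group an AV spares under each setting.

    Purely illustrative; the paper proposes the knob, not this rule.
    """
    if setting is KnobSetting.ALTRUIST:
        return "third_parties"
    if setting is KnobSetting.EGOIST:
        return "passengers"
    # impartial: spare the larger group; ties go to third parties
    return "passengers" if passengers > third_parties else "third_parties"

print(weigh(KnobSetting.IMPARTIAL, passengers=1, third_parties=2))  # → third_parties
```

The point of the sketch is only that, once the knob's settings are fixed, the AV's choice in an unavoidable-accident scenario becomes a deterministic function of the user's selection.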
So-called Locke's thesis is the view that no two things of the same kind may coincide, that is, may be completely in the same place at the same time. A number of counter-examples to this view have been proposed. In this paper, some new and arguably more convincing counter-examples to Locke's thesis are presented. In these counter-examples, a particular entity (a string, a rope, a net, or similar) is interwoven to obtain what appears to be a distinct, thicker entity of the same kind. It is argued that anyone who subscribes to certain standard metaphysical arguments, which are generally taken for granted in the debate about Locke's thesis, is virtually compelled to accept the counter-examples.
We provide a full characterization of computational error states for information systems. The class of errors considered is general enough to include human rational processes, logical reasoning, scientific progress, and data processing in some functional programming languages. The aim is to reach a full taxonomy of error states by analysing the recovery and processing of data. We conclude by presenting machine-readable checking and resolution algorithms.
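A minimal sketch of what a machine-readable check over such a taxonomy could look like; the stage labels and the recoverability rule below are hypothetical, chosen only to illustrate classifying error states by whether their data can be recovered:

```python
from dataclasses import dataclass

@dataclass
class ErrorState:
    """Minimal stand-in for an information-system error state."""
    stage: str          # where the error arose: "syntax", "runtime", "semantic"
    data_intact: bool   # whether the input data can still be re-processed

def classify(err: ErrorState) -> str:
    """Toy checking rule: call an error state 'recoverable' when the data
    survive and the failure is not semantic (hypothetical criterion)."""
    if err.data_intact and err.stage != "semantic":
        return "recoverable"
    return "unrecoverable"
```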
The time marked by the clock hands, the so-called “objective time,” is deeply different from the time perceived by the individual. Starting from this hypothesis, directly connected to the subjective modality of “living” time and defined as time perspective, we try to understand how much it affects the various domains of people's lives, attitudes, and experiences. The research therefore investigates whether all our decisions can be influenced by one or more time perspectives beyond our awareness. Last but not least, we try to understand whether some time perspectives are more functional and adaptive than others in specific contexts.
We analyze the results from three different risk-attitude elicitation methods: first, the widely used test by Holt and Laury (HL); second, the lottery-panel task by Sabater-Grande and Georgantzis (SG); and third, responses to a survey question on self-assessment of general attitude towards risk. The first and second tasks are implemented with real monetary incentives, while the third concerns all domains of life in general. As in previous studies, the correlation of decisions across tasks is low and usually statistically non-significant. However, when we consider only subjects whose behavior across the panels of the SG task is compatible with constant relative risk aversion (CRRA), the correlation between HL and self-assessed risk attitude becomes significant. Furthermore, the correlation between HL and SG also increases for CRRA-compatible subjects, although it remains statistically non-significant.
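The constant relative risk aversion (CRRA) family used to screen SG subjects can be illustrated with its standard functional form; this is the textbook definition, not code from the study:

```python
import math

def crra_utility(x: float, r: float) -> float:
    """Standard CRRA utility: u(x) = x**(1 - r) / (1 - r), with the
    logarithmic limit u(x) = ln(x) at r = 1."""
    if abs(r - 1.0) < 1e-12:
        return math.log(x)
    return x ** (1.0 - r) / (1.0 - r)

def relative_risk_aversion(x: float, r: float, h: float = 1e-4) -> float:
    """Numerically estimate the Arrow-Pratt measure -x * u''(x) / u'(x).
    For CRRA utility it equals r at every wealth level x, which is what
    'constant relative risk aversion' means."""
    u1 = (crra_utility(x + h, r) - crra_utility(x - h, r)) / (2 * h)
    u2 = (crra_utility(x + h, r) - 2 * crra_utility(x, r)
          + crra_utility(x - h, r)) / h ** 2
    return -x * u2 / u1
```

Checking that this measure is (approximately) flat in x is one way to see why a single parameter r can summarise a subject's behavior across all panels of a task.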
The Free Choice effect---whereby <>(p or q) seems to entail both <>p and <>q---has traditionally been characterized as a phenomenon affecting the deontic modal "may". This paper presents an extension of the semantic account of free choice defended in Fusco (2015) to the agentive modal "can", the "can" which, intuitively, describes an agent's powers.

I begin by sketching a model of inexact ability, which grounds a modal approach to agency (Belnap & Perloff 1998; Belnap, Perloff, and Xu 2001) in a Williamson (1992, 2014)-style margin of error. A classical propositional semantics combined with this framework can reflect the intuitions highlighted by Kenny's (1976) much-discussed dartboard cases, as well as the counterexamples to simple conditional views recently discussed by Mandelkern, Schultheis, and Boylan (2017). In Section 3, I turn to an actual-world-sensitive account of disjunction, and show how it extends free choice inferences into an object language for propositional modal logic.
This paper proposes an interpretation of Nāgārjuna’s doctrine of the two truths that considers saṃvṛti-satya and paramārtha-satya two visions of reality on which the Buddhas, for soteriological and pedagogical reasons, build teachings of two types: respectively in agreement with (for example, the teaching of the Four Noble Truths) or in contrast to (for example, the teaching of emptiness) the category of svabhāva. The early sections of the article show to what extent the various current interpretations of the Nāgārjunian doctrine of the dve satye—despite their sometimes even macroscopic differences—share a tendency to consider the notion of śūnyatā as a teaching not based on, but equivalent to, supreme truth. This equivalence—philologically questionable—leads to interpretative paths that prove inevitably aporetic: indeed, according to whether the interpretation of śūnyatā is ‘metaphysical’ or ‘anti-metaphysical’, it gives rise to readings of Nāgārjuna’s thought that are incompatible, respectively, with anti-metaphysical and realistic types of verses traceable in the works of the author of the Mūla-madhyamaka-kārikā (MMK). By contrast, giving more emphasis to the expression samupāśritya (“based on”), which recurs in MMK 24.8, and thereby epistemologically separating the notion of śūnyatā from the notion of paramārtha-satya (and from some of its conceptual equivalents such as nirvāṇa, tattva and dharmatā), we may obtain an interpretation—at once realistic and anti-metaphysical—of the theory of the two truths compatible with the vast majority (or even the totality) of Nāgārjuna’s verses.
This paper presents a systematic discussion, mainly for non-economists, of economic approaches to the concept of sustainable development. As a first step, the concept of sustainability is extensively discussed. As a second step, the argument that it is not possible to consider sustainability only from an economic or ecological point of view is defended; issues such as economic-ecological integration and inter-generational and intra-generational equity are considered of fundamental importance. Two different economic approaches to environmental issues, i.e. neo-classical environmental economics and ecological economics, are compared. Some key differences are pointed out, such as weak versus strong sustainability, commensurability versus incommensurability, and ethical neutrality versus the acceptance of different values.
This article reflects on a survey carried out at a non-profit organization that provides health care for the terminally ill in oncology, in order to determine, for those involved in this project, each worker's time perspective and well-being class. The survey identified each team member's time perspective and well-being class, and made it possible to build a pedagogical path for work orientation involving the same team members.
Symmetries play a major role in physics, in particular since the work of E. Noether and H. Weyl in the first half of the last century. Herein, we briefly review their role by recalling how symmetry changes allow us to move conceptually from classical to relativistic and quantum physics. We then introduce our ongoing theoretical analysis in biology and show that symmetries play a radically different role in this discipline when compared to their role in current physics. By this comparison, we stress that symmetries must be understood in relation to conservation and stability properties, as represented in the theories. We posit that the dynamics of biological organisms, at their various levels of organization, are not just processes but permanent (extended, in our terminology) critical transitions and, thus, symmetry changes. Within the limits of a relative structural stability (or interval of viability), variability is at the core of these transitions.
The usual representation of quantum algorithms, limited to the process of solving the problem, is physically incomplete. We complete it in three steps: extending the representation to the process of setting the problem, relativizing the extended representation to the problem solver, to whom the problem setting must be concealed, and symmetrizing the relativized representation for time reversal, to represent the reversibility of the underlying physical process. The third step projects the input state of the representation, where the problem solver is completely ignorant of the setting and thus of the solution of the problem, onto one where she knows half of the solution. Completing the physical representation shows that the number of computation steps required to solve any oracle problem in an optimal quantum way should be that of a classical algorithm endowed with advance knowledge of half of the solution.
Software-intensive science (SIS) challenges our current scientific methods in many ways. This significantly affects our notion of science and our scientific interpretation of the world, driving the philosophical debate at the same time. We consider some issues prompted by SIS in the light of the philosophical categories of ontology and epistemology.
The epistemology of computer simulations has become a mainstream topic in the philosophy of technology. Within this large area, significant differences hold between the various types of models and simulation technologies. Agent-based and multi-agent systems simulations introduce a specific constraint on the types of agents and systems modelled. We argue that this difference is crucial and that simulation for the artificial sciences requires the formulation of its own specific epistemological principles. We present a minimally committed epistemology which relies on the methodological principles of the Philosophy of Information and requires weak assumptions on the usability of the simulation and the controllability of the model. We use these principles to provide a new definition of simulation for the context of interest.
Malware has been around since the 1980s and is a large and expensive security concern today, one that has grown constantly over the past years. As our social, professional and financial lives become more digitalised, they present larger and more profitable targets for malware. The problem of classifying and preventing malware is therefore urgent, and it is complicated by the existence of several specific approaches. In this paper, we use an existing malware taxonomy to formulate a general, language-independent functional description of malware as transformers between states of the host system, described by a trust relation with its components. This description is then further generalised in terms of mechanisms, thereby contributing to a general understanding of malware. The aim is to use the latter to present an improved classification method for malware.
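On the idea of malware as transformers between host-system states described by a trust relation, a toy formalisation can be sketched; all names and the breach criterion below are hypothetical illustrations, not the paper's definitions:

```python
from typing import Callable, Dict

# A host state: which components are currently trusted.
State = Dict[str, bool]
Transformer = Callable[[State], State]

def breaches_trust(t: Transformer, s: State) -> bool:
    """Flag a transformer as malware-like if it demotes any component
    that was trusted in the initial state (toy criterion)."""
    after = t(dict(s))  # run on a copy so the original state is preserved
    return any(s[c] and not after.get(c, False) for c in s)

def benign_update(s: State) -> State:
    return s  # leaves the trust relation untouched

def trojan_like(s: State) -> State:
    s["kernel"] = False  # demotes a trusted component
    return s
```

The classification question then becomes one about properties of the transformer, rather than about any particular binary — which is the sense in which such a description is language-independent.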
Notwithstanding the recent prominence of the term "problem" in the humanities, few scholars have analysed its history. This essay tries to partially fill that gap, principally covering the period from late modernity through to the 1960s, in order to understand the role the term plays in "Continental" philosophy, with special emphasis on the writings of Gilles Deleuze. The analysis focuses on the strategies employed by different agents to define "philosophical" problems, or "philosophical" ways of posing problems. The term, originally used in antiquity by knowledge-producers located in an autonomous position, implied an idea of cognition oscillating between production and reproduction. Once the term escaped the context of geometry, it was involved in symbolic struggles that radicalized during modernity. The Kantians placed "philosophy" in a supposedly neutral position, as the science treating "the problem of all the problems," and invented a new genre, the "history of philosophy," focusing on the analysis of "philosophical problems." This approach had great institutional success in the German and French universities and clashed, during the twentieth century, with another usage of philosophy, as a practice of "dissolution of problems," developed in the United Kingdom and the Austro-Habsburg Empire.
Coherent-ambiguity aversion is defined within the smooth-ambiguity (KMM) model as the combination of choice-ambiguity and value-ambiguity aversion. Five ambiguous decision tasks are analyzed theoretically, in which an individual faces two-stage lotteries with binomial, uniform, or unknown second-order probabilities. Theoretical predictions are then tested through a 10-task experiment. In tasks 1–5, risk aversion is elicited through both a portfolio choice method and a BDM mechanism. In tasks 6–10, choice-ambiguity aversion is elicited through the portfolio choice method, while value-ambiguity aversion comes about through the BDM mechanism. The behavior of over 75% of classified subjects is in line with the KMM model in all tasks 6–10, independent of their degree of risk aversion. Furthermore, the percentage of coherent-ambiguity-averse subjects is lower in the binomial than in the uniform and unknown treatments, with only the latter difference being significant. Most coherent-ambiguity-loving subjects show high risk aversion.
This personal, yet scientific, letter to Alan Turing reflects on Turing's personality in order to better understand his scientific quest. It then focuses on the impact of his work today. By joining a human attitude with a particular scientific method, Turing is able to “immerse himself” in the phenomena on which he works. This peculiar blend justifies the epistolary style. Turing makes himself a “human computer”; he lives the dramatic quest for an undetectable imitation of a man, a woman, a machine. He makes us see the continuous deformations of a material action/reaction/diffusion dynamics of hardware with no software. Each of these investigations opens the way to new scientific paths with major consequences for contemporary life and for knowledge. The uses and effects of these investigations are discussed: the passage from classical AI to today's neural nets and the relevance of non-linearity in biological dynamics, but also their abuses, such as the myth of a computational world, from a Turing-machine-like universe to an encoded homunculus in the DNA. It is shown that these latter ideas, sometimes even advanced in Turing's name, contradict his views.
Cass Sunstein and Richard Thaler have been arguing for what they call libertarian paternalism (henceforth LP). Their proposal has generated extensive debate as to how and whether LP might lead down a full-blown paternalistic slippery slope. LP has the indubitable merit of having hardwired the best of the empirical psychological and sociological evidence into public and private policy making. It is unclear, though, to what extent the implementation of policies so constructed could enhance the capability for the exercise of an autonomous citizenship. Sunstein and Thaler submit that in most of the cases in which one is confronted with a set of choices, some default option must be picked out. In those cases, whoever devises the features of the set of options ought to rank them according to the moral principle of non-maleficence and possibly to that of beneficence. In this paper we argue that LP can be better implemented if there is a preliminary deliberative debate among the stakeholders that elicits their preferences and makes it possible to rationally defend them.
We offer a formal treatment of the semantics of both complete and incomplete mistrustful or distrustful information transmissions. The semantics of such relations is analysed in view of rules that define the behaviour of a receiving agent. We justify this approach in view of human agent communications and secure system design. We further specify some properties of such relations.