Since the 1960s, theory change has been a main concern of philosophy of science. Two of the best known formal accounts of theory change are the post-Popperian theories of verisimilitude (PPV for short) and the AGM theory of belief change (AGM for short). In this paper, we will investigate the conceptual relations between PPV and AGM and, in particular, we will ask whether the AGM rules for theory change are effective means for approaching the truth, i.e., for achieving the cognitive aim of science pointed out by PPV. First, the key ideas of PPV and AGM and their application to a particular kind of propositional theories - the so-called "conjunctive propositions" - will be illustrated. Afterwards, we will prove that, as far as conjunctive propositions are concerned, AGM belief change is an effective tool for approaching the truth.
This article presents Francesco Patrizi's criticisms of the Aristotelian conception of time in his Physics, that is, Patrizi's critique of the principle that time is infinite in the sense of mathematical infinity. Patrizi's main thesis is that the "possible infinity" of mathematics leads to contradictions when applied to natural substances and to natural science in general.
We present and defend the Australian Plan semantics for negation. This is a comprehensive account, suitable for a variety of different logics. It is based on two ideas. The first is that negation is an exclusion-expressing device: we utter negations to express incompatibilities. The second is that, because incompatibility is modal, negation is a modal operator as well. It can, then, be modelled as a quantifier over points in frames, restricted by accessibility relations representing compatibilities and incompatibilities between such points. We defuse a number of objections to this Plan, raised by supporters of the American Plan for negation, in which negation is handled via a many-valued semantics. We show that the Australian Plan has substantial advantages over the American Plan.
A counterpossible conditional is a counterfactual with an impossible antecedent. Common sense delivers the view that some such conditionals are true, and some are false. In recent publications, Timothy Williamson has defended the view that all are true. In this paper we defend the common sense view against Williamson’s objections.
I want to model a finite, fallible cognitive agent who imagines that p in the sense of mentally representing a scenario—a configuration of objects and properties—correctly described by p. I propose to capture imagination, so understood, via variably strict world quantifiers, in a modal framework including both possible and so-called impossible worlds. The latter secure lack of classical logical closure for the relevant mental states, while the variability of strictness captures how the agent imports information from actuality in the imagined non-actual scenarios. Imagination turns out to be highly hyperintensional, but not logically anarchic. Section 1 sets the stage, and impossible worlds are quickly introduced in Section 2. Section 3 proposes to model imagination via variably strict world quantifiers. Section 4 introduces the formal semantics. Section 5 argues that imagination has a minimal mereological structure validating some logical inferences. Section 6 deals with how imagination under-determines the represented contents. Section 7 proposes additional constraints on the semantics, validating further inferences. Section 8 describes some welcome invalidities. Section 9 examines the effects of importing false beliefs into the imagined scenarios. Finally, Section 10 hints at possible developments of the theory in the direction of two-dimensional semantics.
We present a theory of truth in fiction that improves on Lewis's ‘Analysis 2’ in two ways. First, we expand Lewis's possible worlds apparatus by adding non-normal or impossible worlds. Second, we model truth in fiction as belief revision via ideas from dynamic epistemic logic. We explain the major objections raised against Lewis's original view and show that our theory overcomes them.
The Humean view that conceivability entails possibility can be criticized via input from cognitive psychology. A mainstream view here has it that there are two candidate codings for mental representations (one of them being, according to some, reducible to the other): the linguistic and the pictorial, the difference between the two consisting in the degree of arbitrariness of the representation relation. If the conceivability of P at issue for Humeans involves the having of a linguistic mental representation, then it is easy to show that we can conceive the impossible, for impossibilities can be represented by meaningful bits of language. If the conceivability of P amounts to the pictorial imaginability of a situation verifying P, then the question is whether the imagination at issue works purely qualitatively, that is, only by phenomenological resemblance with the imagined scenario. If so, the range of situations imaginable in this way is too limited to have a significant role in modal epistemology. If not, imagination will involve some arbitrary labeling component, which turns out to be sufficient for imagining the impossible. And if the relevant imagination is neither linguistic nor pictorial, Humeans will appear to resort to some representational magic, until they come up with a theory of a ‘third code’ for mental representations.
Strong Reciprocity theorists claim that cooperation in social dilemma games can be sustained by costly punishment mechanisms that eliminate incentives to free ride, even in one-shot and finitely repeated games. There is little doubt that costly punishment raises cooperation in laboratory conditions. Its efficacy in the field however is controversial. I distinguish two interpretations of experimental results, and show that the wide interpretation endorsed by Strong Reciprocity theorists is unsupported by ethnographic evidence on decentralised punishment and by historical evidence on common pool institutions. The institutions that spontaneously evolve to solve dilemmas of cooperation typically exploit low-cost mechanisms, turning finite games into indefinitely repeated ones and eliminating the cost of sanctioning.
The experimental approach in economics is a driving force behind some of the most exciting developments in the field. The 'experimental revolution' was based on a series of bold philosophical premises which have remained until now mostly unexplored. This book provides the first comprehensive analysis and critical discussion of the methodology of experimental economics, written by a philosopher of science with expertise in the field. It outlines the fundamental principles of experimental inference in order to investigate their power, scope and limitations. The author demonstrates that experimental economists have a lot to gain by discussing openly the philosophical principles that guide their work, and that philosophers of science have a lot to learn from the ingenious techniques devised by experimenters in order to tackle difficult scientific problems.
Current debates in social ontology are dominated by approaches that view institutions either as rules or as equilibria of strategic games. We argue that these two approaches can be unified within an encompassing theory based on the notion of correlated equilibrium. We show that in a correlated equilibrium each player follows a regulative rule of the form ‘if X then do Y’. We then criticize Searle's claim that constitutive rules of the form ‘X counts as Y in C’ are fundamental building blocks for institutions, showing that such rules can be derived from regulative rules by introducing new institutional terms. Institutional terms are introduced for economy of thought, but are not necessary for the creation of social reality.
This study considers the contribution of Francesco Patrizi da Cherso to the development of the concepts of void space and an infinite universe. Patrizi plays a greater role in the development of these concepts than any other single figure in the sixteenth century, and yet his work has been almost totally overlooked. I have outlined his views on space in terms of two major aspects of his philosophical attitude: on the one hand, he was a devoted Platonist and sought always to establish Platonism, albeit his own version of it, as the only correct philosophy; and on the other hand, he was more determinedly anti-Aristotelian than any other philosopher at that time. Patrizi's concept of space has its beginnings in Platonic notions, but is extended and refined in the light of a vigorous critique of Aristotle's position. Finally, I consider the influence of Patrizi's ideas in the seventeenth century, when various thinkers were seeking to overthrow the Aristotelian concept of place and the equivalence of dimensionality with corporeality. Pierre Gassendi, for example, needed a coherent concept of void space in which his atoms could move, while Henry More sought to demonstrate the reality of incorporeal entities by reference to an incorporeal space. Both men could find the arguments they needed in Patrizi's comprehensive treatment of the subject.
It is well-known that versions of the lottery paradox and of the preface paradox show that the following three principles are jointly inconsistent: (Sufficiency) very probable propositions are justifiably believable; (Conjunction Closure) justified believability is closed under conjunction introduction; (No Contradictions) propositions known to be contradictory are not justifiably believable. This paper shows that there is a hybrid of the lottery and preface paradoxes that does not require Sufficiency to arise, but only Conjunction Closure and No Contradictions; and it argues that, given any plausible solution to this paradox, if one is not ready to deny Conjunction Closure (and analogous consistency principles), then one must endorse the thesis that justified believability is factive.
Recent debates on the nature of preferences in economics have typically assumed that they are to be interpreted either as behavioural regularities or as mental states. In this paper I challenge this dichotomy and argue that neither interpretation is consistent with scientific practice in choice theory and behavioural economics. Preferences are belief-dependent dispositions with a multiply realizable causal basis, which explains why economists are reluctant to make a commitment about their interpretation.
Drawing on different suggestions from the literature, we outline a unified metaphysical framework, labeled as Modal Meinongian Metaphysics (MMM), combining Meinongian themes with a non-standard modal ontology. The MMM approach is based on (1) a comprehension principle (CP) for objects in unrestricted, but qualified form, and (2) the employment of an ontology of impossible worlds, besides possible ones. In §§1–2, we introduce the classical Meinongian metaphysics and consider two famous Russellian criticisms, namely (a) the charge of inconsistency and (b) the claim that naïve Meinongianism allows one to prove that anything exists. In §3, we have impossible worlds enter the stage and provide independent justification for their use. In §4, we introduce our revised comprehension principle: our CP has no restriction on the (sets of) properties that can characterize objects, but parameterizes them to worlds, therefore having modality explicitly built into it. In §5, we propose an application of the MMM apparatus to fictional objects and defend the naturalness of our treatment against alternative approaches. Finally, in §6, we consider David Lewis’ notorious objection to impossibilia, and provide a reply to it by resorting to an ersatz account of worlds.
The argument from convention contends that the regular use of definite descriptions as referential devices strongly implies that a referential semantic convention underlies such usage. On the presumption that definite descriptions also participate in a quantificational semantic convention, the argument from convention has served as an argument for the thesis that the English definite article is ambiguous. Here, I revisit this relatively new argument. First, I address two recurring criticisms of the argument from convention: its alleged tendency to overgenerate and its apparent evidential inadequacy. These criticisms are found wanting. Second, following Zacharska, I argue that while the argument from convention does alter the landscape of logical possibilities insofar as it provides good grounds for treating Donnellan’s referential–attributive distinction as having truth-conditional consequences, the argument from convention nonetheless fails to demonstrate that ‘the’ requires two lexical entries.
We propose a reconstruction of the constellation of problems and philosophical positions on the nature and number of the primitives of logic in four authors of the nineteenth-century logical scene: Peano, Padoa, Frege and Peirce. We argue that the proposed reconstruction forces us to recognize that it is in at least four different senses that a notation can be said to be simpler than another, and we trace the origins of these four senses in the writings of these authors. We conclude that Frege, and even more so Peirce, developed new notations not to make drawing logical conclusions easier but in order to answer the needs of logical analysis.
It is customary in current philosophy of time to distinguish between an A- (or tensed) and a B- (or tenseless) theory of time. It is also customary to distinguish between an old B-theory of time, and a new B-theory of time. We may say that the former holds both semantic atensionalism and ontological atensionalism, whereas the latter gives up semantic atensionalism and retains ontological atensionalism. It is typically assumed that the B-theorists have been induced by advances in the philosophy of language and related A-theorists’ criticisms to acknowledge that semantic atensionalism can hardly stand, but have also maintained that what is essential for the B-theory is ontological atensionalism, which can be independently defended. Here it is argued that the B-theorists have been too quick in abandoning semantic atensionalism: they can still cling to it.
Manufacturing and industry practices are undergoing an unprecedented revolution as a consequence of the convergence of emerging technologies such as artificial intelligence, robotics, cloud computing, virtual and augmented reality, among others. This fourth industrial revolution is similarly changing the practices and capabilities of operators in their industrial environments. This paper introduces and explores the notion of the Operator 4.0 as well as how this novel way of conceptualizing the human operator necessarily implicates human values in the technologies that constitute it. The design approach known as value sensitive design (VSD) is used to explore how these Operator 4.0 technologies can be designed for human values. Expert elicitation surveys were used to determine the values of industry stakeholders, and examples of how the VSD methodology can be adopted by engineers to design for these values are illustrated. The results provide preliminary strategies that industrial teams can adopt to design Operator 4.0 technology for human values.
The Risk of Freedom presents an in-depth analysis of the philosophy of Jan Patočka, one of the most influential Central European thinkers of the twentieth century, examining both the phenomenological and ethical-political aspects of his work. In particular, Francesco Tava takes an original approach to the problem of freedom, which represents a recurring theme in Patočka’s work, both in his early and later writings. Freedom is conceived of as a difficult and dangerous experience. In his deep analysis of this particular problem, Tava identifies the authentic ethical content of Patočka’s work and clarifies its connections with phenomenology, history of philosophy, politics and dissidence. The Risk of Freedom retraces Patočka’s philosophical journey and elucidates its more problematic and less evident traits, such as his original ethical conception, his political ideals and his direct commitment as a dissident.
While corporate social responsibility (CSR) is becoming a mainstream issue for many organizations, most of the research to date addresses CSR in large businesses rather than in small- and medium-sized enterprises (SMEs), because it is too often considered a prerogative of large businesses only. The role of SMEs in an increasingly dynamic context is now being questioned, including what factors might affect their socially responsible behaviour. The goal of this paper is to compare SME and large-firm CSR strategies. Furthermore, firm size is analyzed as a factor that influences specific choices in the CSR field, and studied by means of a sample of 3,680 Italian firms. Based on a multi-stakeholder framework, the analysis provides evidence that large firms are more likely to identify relevant stakeholders and meet their requirements through specific and formal CSR strategies.
Research indicates that religious values and ethical behavior are closely associated, yet, at a firm level, the processes by which this association occurs are poorly understood. Family firms are known to exhibit values-based behavior, which in turn can lead to specific firm-level outcomes. It is also known that one’s family is an important incubator, enabler, and perpetuator of religious values across successive generations. Our study examines the experiences of a single, multigenerational business family that successfully enacted their religious values in their business. Drawing upon intergenerational solidarity and values-based leadership theory, and by way of an interpretive, qualitative analysis, we find that the family’s religious values enhanced their cohesion and were manifested in their leadership style, which, in turn, led to outcomes for the business. Our findings highlight the processes that underlie the relationship between religious values and organizational outcomes in family firms and offer insights into the role of solidarity in values-based leadership.
This article investigates the effects of perceived supervisor support on ethical and unethical employee behavior using a multi-method approach. Specifically, we test the mediating mechanism and a boundary condition that moderates the relationship between support and ethical employee behaviors. We find that supervisor-based self-esteem fully mediates the relationship between supervisor support and ethical employee behavior and that employee task satisfaction intensifies the relationship between supervisor support and supervisor-based self-esteem.
An interpretation of Wittgenstein’s much criticized remarks on Gödel’s First Incompleteness Theorem is provided in the light of paraconsistent arithmetic: in taking Gödel’s proof as a paradoxical derivation, Wittgenstein was drawing the consequences of his deliberate rejection of the standard distinction between theory and metatheory. The reasoning behind the proof of the truth of the Gödel sentence is then performed within the formal system itself, which turns out to be inconsistent. It is shown that the features of paraconsistent arithmetics match some intuitions underlying Wittgenstein’s philosophy of mathematics, such as its strict finitism and the insistence on the decidability of any mathematical question.
There is a view on consciousness that has strong intuitive appeal and empirical support: the intermediate-level theory of consciousness, proposed mainly by Ray Jackendoff and by Jesse Prinz. This theory identifies a specific “intermediate” level of representation as the basis of human phenomenal consciousness, which sits between high-level non-perspectival thought processes and low-level disjointed feature-detection processes in the perceptual and cognitive processing hierarchy. In this article, we show that the claim that consciousness arises at an intermediate level is true of some cognitive systems, but only in virtue of specific constraints on their active interactions with the environment. We provide ecological reasons for why certain processing levels in a cognitive hierarchy are privileged with respect to consciousness. We do this from the perspective of a prediction-error minimization model of perception and cognition, relying especially on the notion of active inference: the privileged level for consciousness depends on the specific dispositions of an organism concerned with inferring its policies for action. Such a level is indeed intermediate for humans, but this depends on the spatiotemporal resolution of the typical actions that a human organism can normally perform. Thus, intermediateness is not an essential feature of consciousness. In organisms with different action dispositions the privileged level or levels may differ as well.
The COVID-19 pandemic has placed an enormous burden on health systems, and guidelines have been developed to help healthcare practitioners when resource shortage imposes the choice on who to treat. However, little is known on the public perception of these guidelines and the underlying moral principles. Here, we assess the moral views of a sample of 1,033 American citizens and their agreement with the proposed guidelines. We find substantial heterogeneity in citizens’ moral principles, often not in line with the guidelines’ recommendations. As the guidelines are likely to directly affect a considerable number of citizens, our results call for policy interventions to inform people of the ethical rationale behind physicians’ or triage committees’ decisions, in order to avoid resentment and feelings of unfairness.
Thaler and Sunstein justify nudge policies from welfaristic premises: nudges are acceptable because they benefit the individuals who are nudged. A tacit assumption behind this strategy is that we can identify the true preferences of decision-makers. We argue that this assumption is often unwarranted, and that as a consequence nudge policies must be justified in a different way. A possible strategy is to abandon welfarism and endorse genuine paternalism. Another one is to argue that the decision biases that choice architects attempt to eliminate create externalities. For example, in the case of intertemporal discounting, the costs of preference reversals are not always paid by the discounters, because they are transferred onto other individuals. But if this is the case, then nudges are best justified from a political rather than welfaristic standpoint.
Experimental “localism” stresses the importance of context‐specific knowledge, and the limitations of universal theories in science. I illustrate Latour's radical approach to localism and show that it has some unpalatable consequences, in particular the suggestion that problems of external validity (or how to generalize experimental results to nonlaboratory circumstances) cannot be solved. In the last part of the paper I try to sketch a solution to the problem of external validity by extending Mayo's error‐probabilistic approach.
Corporate social responsibility (CSR) has acquired an unquestionably high degree of relevance for a large number of different actors. Among others, academics and practitioners are developing a wide range of knowledge and best practices to further improve socially responsible competences. Within this context, one frequent question is which theory should guide the development of general knowledge of CSR, in particular concerning the relationship between CSR and small and medium-sized enterprises (SMEs). This paper suggests that research on large firms should be based on stakeholder theory, while research on CSR among SMEs should be based on the concept of social capital. This paper first provides a theoretical and practical perspective on CSR today; the focus then shifts to the specific literature on CSR and SMEs; some data and information follow on SMEs in Europe and Italy; finally, some conclusions and questions for future research are suggested.
On the one hand, after Matteo d'Acquasparta's distinction between the three types of eternity and the temporal necessity of the past, Meyronnes radicalized Scotus's dynamic vision of duration, conceiving modality as a relation of implication between predicate and existing subject, and time as a relationship between Creator and creature. On the other hand, after Ockham denied the real simultaneity of opposed potencies, the Ockhamist extension of temporal necessity to the present was denied by Gregory of Rimini, who was favourable, together with Wodeham, to the mutability of the past in a divided sense. Mirecourt, well versed in the English subtleties, appears to follow Gregory and tries to find a solution to the interaction between the two contingencies, from top to bottom, which had been formalized by Gregory: if I, performing or not performing X, can act as if God, as the supreme intellect from eternity, could have known or not known X to come, and if God as agent, absolutely willing, omnipotent and unimpedable from eternity, can act as if X happened or did not happen, then can I act as if X, which is from eternity, did not happen from eternity?
In 1898 C. S. Peirce declares that the medieval doctrine of consequences had been the starting point of his logical investigations in the 1860s. This paper shows that Peirce studied the scholastic theory of consequentiae as early as 1866–67, that he adopted the scholastics’ terminology, and that that theory constituted a source of logical doctrine that sustained Peirce for a lifetime of creative and original work.
What is it for a car, a piece of art or a person to be good, bad or better than another? In this first book-length introduction to value theory, Francesco Orsi explores the nature of evaluative concepts used in everyday thinking and speech and in contemporary philosophical discourse. The various dimensions, structures and connections that value concepts express are interrogated with clarity and incision.

Orsi provides a systematic survey of both classic texts, including Plato, Aristotle, Kant, Moore and Ross, and an array of contemporary theorists. The reader is guided through the moral maze of value theory with everyday examples and thought experiments. Rare stamps, Napoleon's hat, evil demons, and Kant's good will are all considered in order to probe our intuitions, question our own and philosophers' assumptions about value, and, ultimately, understand better what we want to say when we talk about value.

Contents:
1. Value and Normativity (1.1 Introduction; 1.2 Which Evaluations?; 1.3 The Idea of Value Theory; 1.4 Value and Normativity; 1.5 Overview; 1.6 Meta-ethical Neutrality; 1.7 Value Theory: The Questions)
2. Meet the Values: Intrinsic, Final & Co. (2.1 Introduction; 2.2 Final and Unconditional Value: Some Philosophical Examples; 2.3 Intrinsic Value and Final Value; 2.4 The Reduction to Facts; 2.5 Intrinsic and Conditional Value; 2.6 Elimination of Extrinsic Value?; 2.7 Summary)
3. The Challenge against Absolute Value (3.1 Introduction; 3.2 Geach and Attributive Goodness; 3.3 Foot and the Virtues; 3.4 Thomson and Goodness in a Way; 3.5 Zimmerman's Ethical Goodness; 3.6 A Better Reply: Absolute Value and Fitting Attitudes; 3.7 Summary)
4. Personal Value (4.1 Introduction; 4.2 Moore on Good and Good For; 4.3 Good For and Fitting Attitudes; 4.4 Moore Strikes Back?; 4.5 Agent-relative Value; 4.6 Impersonal/Personal and Agent-neutral/Agent-relative; 4.7 Summary)
5. The Chemistry of Value (5.1 Introduction; 5.2 Supervenience and Other Relations; 5.3 Organic Unities; 5.4 Alternatives to Organic Unities: Virtual Value; 5.5 Alternatives to Organic Unities: Conditional Value; 5.6 Holism and Particularism; 5.7 Summary)
6. Value Relations (6.1 Introduction; 6.2 The Trichotomy Thesis and Incomparability; 6.3 A Fitting Attitude Argument for Incomparability; 6.4 Against Incomparability: Epistemic Limitations; 6.5 Against Incomparability: Parity; 6.6 Parity and Choice; 6.7 Parity and Incomparability; 6.8 Summary)
7. How Do I Favour Thee? (7.1 Introduction; 7.2 Three Dimensions of Favouring; 7.3 Responses to Value: Maximizing; 7.4 Two Concepts of Intrinsic Value?; 7.5 Summary)
8. Value and the Wrong Kind of Reasons (8.1 Introduction; 8.2 The Fitting Attitude Account and its Rivals; 8.3 The Wrong Kind of Reasons Problem; 8.4 The Structure of the Problem and an Initial Response; 8.5 Reasons for What?; 8.6 Characteristic Concerns and Shared Reasons; 8.7 Circular Path: No-Priority; 8.8 Summary)