Abstract We consider the classical concept of time of permanence and observe that its quantum equivalent is described by a bona fide self-adjoint operator. Its interpretation, by means of the spectral theorem, reveals that we have to abandon not only the idea that quantum entities are characterizable in terms of spatial trajectories but, more generally, that they possess the very attribute of spatiality. Consequently, a permanence time should not be interpreted as a "time" in quantum mechanics, but as a measure of the total availability of a quantum entity in participating in a process of creation of a spatial localization. Content Type: Journal Article. Pages: 1-22. DOI: 10.1007/s10699-011-9233-z. Author: Massimiliano Sassoli de Bianchi, Laboratorio di Autoricerca di Base, 6914 Carona, Switzerland. Journal: Foundations of Science. Online ISSN 1572-8471. Print ISSN 1233-1821.
The aim of this paper is not only to deal with the concept of infinity, but also to develop some considerations about the epistemological status of cosmology. These problems are connected because, from an epistemological point of view, cosmology, meant as the study of the universe as a whole, is not merely a physical (or empirical) science. On the contrary, it has an unavoidable metaphysical character which can be found in questions like "why is there this universe (or a universe at all)?". As a consequence, questions concerning the infinity of the universe in space and time can correctly arise only by taking into account this metaphysical character of cosmology. Accordingly, in the following paper it will be shown that two different concepts of physical infinity of the universe (the relativistic one and the inflationary one) rely on two different ways of solving a metaphysical problem. The difference between these concepts cannot be analysed using the classical distinctions between actual/potential infinity or numerable/continuum infinity, but the introduction of a new "modal" distinction will be necessary. Finally, the role of a philosophical concept of infinity of the universe will be illustrated.
Sometimes mereologists have problems with counting. We often don't want to count the parts of maximally connected objects as full-fledged objects themselves, and we don't want to count discontinuous objects as parts of further, full-fledged objects. But whatever one takes "full-fledged object" to mean, the axioms and theorems of classical, extensional mereology commit us to the existence both of parts and of wholes – all on a par, included in the domain of quantification – and this makes mereology look counterintuitive to various philosophers. In recent years, a proposal has been advanced to solve the tension between mereology and familiar ways of counting objects, under the label of Minimalist View. The Minimalist View may be summarized in the slogan: "Count x as an object iff it does not overlap with any y you have already counted as an object". The motto seems prima facie very promising but, we shall argue, when one looks at it more closely, it is not. On the contrary, the Minimalist View involves an ambiguity that can be solved in quite different directions. We argue that one resolution of the ambiguity makes it incompatible with mereology. This way, the Minimalist View can lend no support to mereology at all. We suggest that the Minimalist View can become compatible with mereology once its ambiguity is solved by interpreting it in what we call an epistemic or conceptual fashion: whereas mereology has full metaphysical import, the Minimalist View may account for our ways of selecting "conceptually salient" entities. But even once it is so disambiguated, it is doubtful that the Minimalist View can help to make mereology more palatable, for it cannot make it any more compatible with commonsensical ways of counting objects.
This article distinguishes three archetypal ways of articulating spatial cognition: (1) via metric representation of objective geometry, (2) via somatosensory constitution of the peripersonal environment, and (3) via pragmatic comprehension of the finalistic sense of action. The last one is documented by neuroscientific studies concerning mirror neurons. Bio-robotic experiments implementing mirror functions confirm the constitutive role of goal-oriented actions in spatial processes.
Some forms of analytic reconstructivism take natural language (and common sense at large) to be ontologically opaque: ordinary sentences must be suitably rewritten or paraphrased before questions of ontological commitment may be raised. Other forms of reconstructivism take the commitment of ordinary language at face value, but regard it as metaphysically misleading: common-sense objects exist, but they are not what we normally think they are. This paper is an attempt to clarify and critically assess some common limits of these two reconstructivist strategies.
The Knowability Paradox is a logical argument to the effect that, if there are truths not actually known, then there are unknowable truths. Recently, Alexander Paseau and Bernard Linsky have independently suggested a possible way to counter this argument by typing knowledge. In this article, we argue against their proposal: if one abstracts from other possible independent reasons for typing knowledge and considers the motivation for a type-theoretic approach with respect to the Knowability Paradox alone, there is no substantive philosophical motivation to type knowledge except that of solving the paradox. Every attempt to independently justify the typing of knowledge is doomed to failure.
Stephen Schiffer holds that propositions are pleonastic entities. I will argue that there is a substantial difference between propositions and fictional characters, which Schiffer presents as typical pleonastic entities. My conclusion will be that if fictional characters are typical pleonastic entities, then Schiffer fails to show that propositions are pleonastic entities.
P.T. Geach has maintained (see, e.g., Geach (1967/1968)) that identity (as well as dissimilarity) is always relative to a general term. According to him, the notion of absolute identity has to be abandoned and replaced by a multiplicity of relative identity relations for which Leibniz’s Law – which says that if two objects are identical they have the same properties – does not hold. For Geach relative identity is at least as good as Frege’s cardinality thesis – which he takes to be strictly connected with relative identity – according to which an ascription of cardinality is always relative to a concept which specifies what, in any particular case, counts as a unit. The idea that there is a close connection between relative identity and Frege’s cardinality thesis has been raised again quite recently by Alston and Bennett (1984). In their opinion, Frege’s cardinality thesis is not only similar to relative identity – as Geach maintains – but it implies it. Moreover, they agree with Geach in claiming that a commitment to Frege’s cardinality thesis forces a parallel commitment to relative identity. Against Geach, Alston and Bennett we will claim that (T1): «Frege’s cardinality thesis is similar to relative identity» is false and that therefore (T2): «Frege’s cardinality thesis implies relative identity» is false as well.
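The contrast between the two notions can be stated schematically (our sketch, in standard notation, where K is an arbitrary sortal term):

```latex
% Leibniz's Law (absolute identity): identicals share all properties
x = y \;\rightarrow\; \forall F\,(Fx \leftrightarrow Fy)

% Geach's relative identity: x and y may be the same K without being
% the same K', so Leibniz's Law fails for the relation =_K
x =_{K} y \;\wedge\; \neg\,(x =_{K'} y)
```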
This paper analyzes the epistemological significance of the problem of induction. In the first section, the foundation of this problem is identified in the thesis of gnoseological dualism: we only know our representations as separate from ‘the world itself’. This thesis will be countered by the thesis of gnoseological monism. In the second section, the implications of Hume’s skeptical thesis will be highlighted and it will be demonstrated how the point of view of gnoseological monism can offer a way out that I call the hermeneutic theory of induction. In the third section, a formal approach is proposed in agreement with this theory. Using tools from information theory, this defines the conditions of acceptance or refusal of a hypothesis starting from an experiment. In the fourth section, the epistemological consequences of this approach are analyzed.
In “Mathematics is megethology,” Lewis reconstructs set theory using mereology and plural quantification (MPQ). In his reconstruction he assumes from the beginning that there is an infinite plurality of atoms, whose size is equivalent to that of the set-theoretical universe. Since this assumption goes far beyond the basic axioms of mereology, it might seem that MPQ play no role in guaranteeing the existence of a large infinity of objects. However, we intend to demonstrate that mereology and plural quantification are, in some ways, particularly relevant to a certain conception of the infinite. More precisely, though the principles of mereology and plural quantification do not guarantee the existence of an infinite number of objects, nevertheless, once the existence of any infinite object is admitted, they are able to assure the existence of an uncountable infinity of objects. So, if—as Lewis maintains—MPQ were parts of logic, the implausible consequence would follow that, given a countable infinity of individuals, logic would be able to guarantee an uncountable infinity of objects.
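The step from one infinite object to uncountably many can be sketched as follows (our reconstruction of the standard argument, not the paper's own notation): if an object has countably many atoms as parts, unrestricted composition yields a fusion for every non-empty plurality of those atoms, and distinct pluralities of atoms have distinct fusions; hence

```latex
|\mathrm{Atoms}| = \aleph_0
\;\Longrightarrow\;
|\{\text{fusions of atoms}\}|
\;=\; \bigl|\mathcal{P}(\mathrm{Atoms}) \setminus \{\varnothing\}\bigr|
\;=\; 2^{\aleph_0}
```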
The aim of the paper is to revise Boolos’ reinterpretation of second-order monadic logic in terms of plural quantification and expand it to full second-order logic. Introducing the idealization of plural acts of choice, performed by a suitable team of agents, we will develop a notion of plural reference. Plural quantification will then be explained in terms of plural reference. As an application, we will sketch a structuralist reconstruction of second-order arithmetic based on the axiom of infinity à la Dedekind, as the unique non-logical axiom. We will also sketch a virtual interpretation of the classical continuum involving no other infinity than a countable plurality of individuals.
Founding our analysis on the Geneva-Brussels approach to quantum mechanics, we use conventional macroscopic objects as guiding examples to clarify the content of two important results of the beginning of the twentieth century: Einstein–Podolsky–Rosen’s reality criterion and Heisenberg’s uncertainty principle. We then use them in combination to show that our widespread belief in the existence of microscopic particles is only the result of a cognitive illusion, as microscopic particles are not particles, but are instead the ephemeral spatial and local manifestations of non-spatial and non-local entities.
In this paper we consider the emerging position in metaphysics that artifact functions characterize real kinds of artifacts. We analyze how it can circumvent an objection by David Wiggins (Sameness and substance renewed, 2001, 87) and then argue that this position, in comparison to expert judgments, amounts to an interesting fine-grained metaphysics: taking artifact functions as (part of the) essences of artifacts leads to distinctions between principles of activity of artifacts that experts in technology have not yet made. We show, moreover, that our argument holds not only in the artifactual realm but also in biology: taking biological functions as (part of the) essences of organs leads to distinctions between principles of activity of organs that biological experts have not yet made. We run our argument on the basis of analyses of artifact and biological functions as developed in philosophy of technology and of biology, thus importing results obtained outside of metaphysics into the debate on ontological realism. In return, our argument shows that a position in metaphysics provides experts reason for trying to detect differences between principles of activities of artifacts and organs that have not been detected so far.
Boltzmann’s equilibrium theory has not received from scholars the attention it deserves. It was always interpreted as a mere generalization of Maxwell’s work or, in the most favorable case, a sketch of some ideas more consistently developed in the 1872 memoir. In this paper, I try to prove that this view is ungenerous. My claim is that in the theory developed during the period 1866-1871 the generalization of Maxwell’s distribution was mainly a means to a more general end: a theory of the equilibrium of a system of mechanical points from a general point of view. To face this issue Boltzmann analyzed and discussed probabilistic assumptions, so that his equilibrium theory cannot be considered a purely mechanical theory. I claim also that the special perspective adopted by Boltzmann and his view about probabilistic requirements played a role in the transition to the non-equilibrium theory of 1872.
Nicholas Rescher, in The Limits of Science (1984), argued that: «perfected science is a mirage; complete knowledge a chimera». He reached the above conclusion from a logical argument known as Fitch’s Paradox of Knowability. The argument, starting from the assumption that every truth is knowable, proves that every truth is also actually known and, given that some true propositions are not actually known, it concludes, by modus tollens, that there are unknowable truths. Prima facie, this argument seems to seriously narrow our epistemic possibilities and to constitute a limit for knowledge (including scientific knowledge). Rescher’s above-quoted conclusion follows the same sort of reasoning. Recently, Bernard Linsky explored a possible way to block the argument employing a type-distinction of knowledge. If the Knowability Paradox is blocked, then Rescher’s conclusion cannot be drawn. After an introduction to the paradox, we suggest, in our paper, a possible way of justifying a type-solution for it in the scientific field. A noteworthy point is that the effectiveness of this solution depends on the degree of reductionism adopted in science: the given solution is available only if we do not adopt a complete reductionism in science, so that there is just one kind of scientific knowledge and, consequently, of scientific justification. Otherwise Rescher's argument still works.
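The core derivation behind the paradox can be sketched as follows (a standard reconstruction, added here for clarity, where K is "it is known that" and the diamond is "possibly"):

```latex
\text{Knowability: } \forall p\,(p \rightarrow \Diamond K p)
\qquad
\text{Non-omniscience: } \exists q\,(q \wedge \neg K q)

% Instantiate knowability with p := (q \wedge \neg K q):
(q \wedge \neg K q) \rightarrow \Diamond K (q \wedge \neg K q)

% K distributes over conjunction and is factive, so
K(q \wedge \neg K q) \vdash K q \wedge K \neg K q \vdash K q \wedge \neg K q,
% a contradiction. Hence \neg \Diamond K (q \wedge \neg K q), and by
% modus tollens \neg (q \wedge \neg K q), i.e. q \rightarrow K q:
% every truth is known.
```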
There are two basic approaches to the problem of induction: the empirical one, which deems that the possibility of induction depends on how the world was made (and how it works), and the logical one, which considers the formation (and function) of language. The first is closer to being useful for induction, while the second is more rigorous and clearer. The purpose of this paper is to create an empirical approach to induction that contains the same formal exactitude as the logical approach. This requires: (a) that the empirical conditions for induction are enunciated, and (b) that the most important results already obtained from inductive logic are again demonstrated to be valid. Here we will be dealing only with induction by elimination, namely the analysis of the experimental confutation of a theory. The result will be a rule of refutation that takes into consideration all of the empirical aspects of the experiment and has each of the asymptotic properties which inductive logic has shown to be characteristic of induction.
In our paper, we propose a relativistic and metaphysically neutral identity criterion for biological entities. We start from the criterion of genidentity proposed by K. Lewin and H. Reichenbach. Then we enrich it to render it more philosophically powerful and so capable of dealing with the real transformations that occur in the extremely variegated biological world.
The topic of this paper is the notion of technical (as opposed to biological) malfunction. It is shown how to form the property being a malfunctioning F from the property F and the property modifier malfunctioning (a mapping taking a property to a property). We present two interpretations of malfunctioning. Both interpretations agree that a malfunctioning F lacks the dispositional property of functioning as an F. However, its subsective interpretation entails that malfunctioning Fs are Fs, whereas its privative interpretation entails that malfunctioning Fs are not Fs. We chart various logical consequences of each interpretation and discuss some of the philosophical implications of both.
The goal of the paper is to analyse some specific features of a very central concept for top-level ontologies for information systems: the concept of artefact. Specifically, we analyse the relation to be a copy of, which is strongly linked to the notion of artefact and, as we will demonstrate, could be useful to distinguish artefacts from objects of other kinds. Firstly, we outline some intuitive and commonsensical reasons for the need of a clarification of the notion of artefact in ontologies for information systems, and we analyse some characterisations of the notion given by two top-level ontologies (Cyc and WordNet). Secondly, we introduce and critically analyse Tzouvaras' notion of copy. Thirdly, we try to complete an analysis of copy by distinguishing three kinds of copies: replicas (Tzouvaras' notion of copy), rigid copies, and functional copies. With the help of these three notions we outline a first and preliminary distinction between artefacts, objects of art and natural objects.
Relativists maintain that identity is always relative to a general term (RI). According to them, the notion of absolute identity has to be abandoned and replaced by a multiplicity of relative identity relations for which Leibniz’s Law does not hold. For relativists RI is at least as good as the Fregean cardinality thesis (FC), which contends that an ascription of cardinality is always relative to a concept specifying what, in any specific case, counts as a unit. The same train of thought on cardinality and identity is apparent among those – Artifactualists – who take relative identity sentences for artifacts as the norm. The aim of this paper is (i) to criticize the thesis (T1) that from FC it is possible to derive RI, and (ii) to explain why Artifactualists mistakenly believe that RI can be derived from FC. The misunderstanding derives from their assumption that the concept of artifact – like the concept of object – is not a sortal concept.
The foundation of statistical mechanics and the explanation of the success of its methods rest on the fact that the theoretical values of physical quantities (phase averages) may be compared with the results of experimental measurements (infinite time averages). In the 1930s, this problem, called the ergodic problem, was dealt with by ergodic theory, which tried to resolve the problem by making reference above all to considerations of a dynamic nature. In the present paper, this solution will be analyzed first, highlighting the fact that its very general nature does not duly consider the specificities of the systems of statistical mechanics. Second, Khinchin’s approach will be presented, which, starting with more specific assumptions about the nature of systems, achieves an asymptotic version of the result obtained with ergodic theory. Third, the statistical meaning of Khinchin’s approach will be analyzed and a comparison between this and the point of view of ergodic theory is proposed. It will be demonstrated that the difference consists principally of two different perspectives on the ergodic problem: that of ergodic theory puts the state of equilibrium at the center, while Khinchin’s attempts to generalize the result to non-equilibrium states.
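The comparison at the heart of the ergodic problem can be written schematically (standard notation, our addition): the infinite-time average of a phase function f along a trajectory x(t) should equal its phase average over the phase space Γ with respect to the invariant measure μ:

```latex
\lim_{T \to \infty} \frac{1}{T} \int_0^T f\bigl(x(t)\bigr)\,dt
\;=\;
\int_{\Gamma} f \, d\mu
```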
The purpose of this article is threefold. Firstly, it aims to present, in an educational and non-technical fashion, the main ideas at the basis of Aerts’ creation-discovery view and hidden measurement approach: a fundamental explanatory framework whose importance, in this author’s view, has been seriously underappreciated by the physics community, despite its success in clarifying many conceptual challenges of quantum physics. Secondly, it aims to introduce a new quantum machine—that we call the δ quantum machine—which is able to reproduce the transmission and reflection probabilities of a one-dimensional quantum scattering process by a Dirac delta-function potential. The machine is used not only to demonstrate the pertinence of the above-mentioned explanatory framework, in the general description of physical systems, but also to illustrate (in the spirit of Aerts’ ∊-model) the origin of classical and quantum structures, by revealing the existence of processes which are neither classical nor quantum, but irreducibly intermediate. We do this by explicitly introducing what we call the k-model and by proving that its processes cannot be modelled by a classical or quantum scattering system. The third purpose of this work is to exploit the powerful metaphor provided by our quantum machine, to investigate the intimate relation between the concept of potentiality and the notion of non-spatiality, which we characterize in precise terms, introducing for this the new concept of process-actuality.
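For reference, the scattering probabilities that the δ quantum machine is meant to reproduce have a standard closed form; the sketch below (our addition, the textbook result for a potential V(x) = λ δ(x), not the paper's machine itself) computes them and checks that they sum to one:

```python
def delta_scattering(m: float, lam: float, E: float, hbar: float = 1.0):
    """Transmission and reflection probabilities for a particle of mass m
    and energy E > 0 scattering off the potential V(x) = lam * delta(x).
    Standard 1-D result: T = 1 / (1 + m*lam**2 / (2*hbar**2*E)), R = 1 - T."""
    T = 1.0 / (1.0 + m * lam**2 / (2.0 * hbar**2 * E))
    return T, 1.0 - T

# Natural units (hbar = m = lam = 1): transmission grows with energy.
for E in (0.5, 2.0, 8.0):
    T, R = delta_scattering(m=1.0, lam=1.0, E=E)
    assert abs(T + R - 1.0) < 1e-12  # probabilities sum to 1
```

At E = 0.5 in these units the barrier transmits and reflects with equal probability (T = R = 1/2), while at higher energies transmission dominates.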
I will argue that the standard formulation of non-factualism in terms of a denial of truth-aptness is consistent with a version of deflationism. My line of argument assumes the use conception of meaning. This brings out an interesting consequence, since most philosophers who endorse the use conception of meaning, e.g. Paul Horwich, hold that deflationism is inconsistent with the strategy of implementing non-factualism in terms of a denial of truth-aptness and thereby urge a reformulation of non-factualism.
I argue that deflationism about truth does not imply minimalism about truth-aptness. The condition for truth-aptness can be strengthened and the disquotational schema restricted without resorting to any inflationary conception of truth-theoretic notions.
Using web standards, such as uniform resource identifiers (URIs), XML and HTTP, for naming and describing resources which are not information objects is the key difference between the Web as we know it today and the Semantic Web. Naming and interlinking this type of resources by HTTP URIs (instead of individual constants in a formal language) is the key feature which distinguishes traditional knowledge representation from web-scale knowledge representation. However, this use of URIs brought back attention to the old philosophical problem of identity and reference in a new form. In this paper, we analyze the new version of the problem, provide a formal model for dealing with it when interlinking knowledge on the Web, and argue for the need of a distinction between the use of URIs for describing and accessing resources, and the use of URIs for fixing the reference. We show that in the current practice of linking data these roles are not clearly distinguished, and that this fact may cause unwanted effects and prevent some basic forms of data integration. We also discuss the role of an entity name system as a potential piece of infrastructure for fixing the reference in the Semantic Web.
This volume provides analyses of the logic-reality relationship from different approaches and perspectives. The point of convergence lies in the exploration of the connections between reality – social, natural or ideal – and logical structures employed in describing or discovering it. Moreover, the book connects logical theory with more concrete issues of rationality, normativity and understanding, thus pointing to a wide range of potential applications. The papers collected in this volume address cutting-edge topics in contemporary discussions amongst specialists. Some essays focus on the role of indispensability considerations in the justification of logical competence, and the wide range of challenges within the philosophy of mathematics. Others present advances in dynamic logical analysis, such as the extension of game semantics to the non-logical part of the vocabulary and the development of models of contractive speech acts. Table of Contents: Introduction: Majda Trobok, Nenad Miščević and Berislav Žarnić.- I. Logical and Mathematical Structures.- Life on the Ship of Neurath: Mathematics in the Philosophy of Mathematics: Stewart Shapiro.- Applied Mathematics in the Sciences: Dale Jacquette.- The Philosophical Impact of the Löwenheim-Skolem Theorem: Miloš Arsenijević.- Debating (Neo)logicism: Frege and the neo-Fregeans: Majda Trobok.- II. Epistemology and Logic.- Informal Logic and Informal Consequence: Danilo Šuster.- Logical Consequence and Rationality: Nenad Smokrović.- Logic, Indispensability and Aposteriority: Nenad Miščević.- III. Dynamic Logical Models of Meaning.- Extended Game-Theoretical Semantics: Manuel Rebuschi.- Dynamic Logic of Propositional Commitments: Tomoyuki Yamada.- Is Unsaying Polite?: Berislav Žarnić.- IV. Logical Methods in Ontological and Linguistic Analyses.- Towards a Formal Account of Identity Criteria: Massimiliano Carrara and Silvia Gaio.- A Mereology for the Change of Parts: Pierdaniele Giaretta and Giuseppe Spolaore.- Russell versus Frege: Imre Rusza.- Goodman's Only World: Vladan Djordjević.
I argue that the contextualist account of the referential/attributive interpretation of definite descriptions, presented by Recanati and Bezuidenhout and based on the idea that definite descriptions are semantically underdetermined and in need of completion through optional top-down pragmatic processes, suffers from an explanatory gap. I defend the contextualist view but hold that the determination of the content of definite descriptions is a mandatory, linguistically driven process based on saturation rather than on optional pragmatic processes.
Studies in economics and the humanities generally face intrinsic problems that this work illustrates, along with innovations for overcoming them. The main limitations and weak points of orthodox theory call for the use, in its stead, of other multi-disciplinary approaches, such as complexity science, agent-based simulations and artificial life simulations. An example of an artificial life simulation applied in the economics field, concerning the exchange process, shows the benefits of such new conceptual and methodological instruments.
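As an illustration of the kind of agent-based exchange simulation discussed (a minimal sketch of our own, not the paper's actual model): agents start with equal endowments, random pairs repeatedly exchange a random share of the poorer partner's holdings, and total wealth is conserved while the distribution grows unequal.

```python
import random

def exchange_simulation(n_agents=100, steps=10_000, seed=0):
    """Minimal random-exchange model: at each step two random agents trade
    a random fraction of the poorer one's wealth; total wealth is conserved."""
    rng = random.Random(seed)
    wealth = [1.0] * n_agents                  # equal initial endowments
    for _ in range(steps):
        i, j = rng.sample(range(n_agents), 2)  # pick two distinct agents
        amount = rng.random() * min(wealth[i], wealth[j])
        wealth[i] -= amount                    # i pays j
        wealth[j] += amount
    return wealth

wealth = exchange_simulation()
assert abs(sum(wealth) - 100.0) < 1e-9         # conservation check
```

Even this toy model exhibits the emergent inequality that motivates replacing closed-form orthodox analysis with simulation: no agent ever goes negative, yet after enough trades the endowments spread out far from the initial uniform state.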
We speak of products in two senses: in one, we speak of types of products; in the other we speak of the particular objects that are instances of those types. I argue that types of products have the same ontological status as that of material stuffs, like water and gold, which have a non-particular level of existence. I also argue that the relationship between types of products and their instances is logically similar to the relation of constitution, which holds between, say, gold and a ring made of gold. In my approach, types of products are concrete entities, having spatiotemporal properties. This picture fits our commonplace conception of types of products better than alternative approaches according to which types of products are universal, abstract, or mereological entities.
An intricate, long, and occasionally heated debate surrounds Boltzmann’s H-theorem (1872) and his combinatorial interpretation of the second law (1877). After almost a century of devoted and knowledgeable scholarship, there is still no agreement as to whether Boltzmann changed his view of the second law after Loschmidt’s 1876 reversibility argument or whether he had already been holding a probabilistic conception for some years at that point. In this paper, I argue that there was no abrupt statistical turn. In the first part, I discuss the development of Boltzmann’s research from 1868 to the formulation of the H-theorem. This reconstruction shows that Boltzmann adopted a pluralistic strategy based on the interplay between a kinetic and a combinatorial approach. Moreover, it shows that the extensive use of asymptotic conditions allowed Boltzmann to bracket the problem of exceptions. In the second part I suggest that both Loschmidt’s challenge and Boltzmann’s response to it did not concern the H-theorem. The close relation between the theorem and the reversibility argument is a consequence of later investigations on the subject.