The concept system around 'quantity' and 'quantity value' is fundamental for measurement science, but some very basic issues are still open on such concepts and their relation. This paper argues that quantity values are in fact individual quantities, and that a complementarity exists between measurands and quantity values. This proposal is grounded on the analysis of three basic 'equality' relations: (i) between quantities, (ii) between quantity values and (iii) between quantities and quantity values. A consistent characterization of such concepts is obtained, which is then generalized to 'property' and 'property value'. This analysis also throws some light on the elusive concept of magnitude.
A discussion of Suarez's views on continuous quantity in the context of his place in the history of philosophy. The paper raises issues about conceptual change in intellectual history. It advances original interpretations of Aristotle and Suarez on continuous quantity.
A formal theory of quantity T_Q is presented which is realist, Platonist, and syntactically second-order (while logically elementary), in contrast with the existing formal theories of quantity developed within the theory of measurement, which are empiricist, nominalist, and syntactically first-order (while logically non-elementary). T_Q is shown to be formally and empirically adequate as a theory of quantity, and is argued to be scientifically superior to the existing first-order theories of quantity in that it does not depend upon empirically unsupported assumptions concerning the existence of physical objects (e.g. that any two actual objects have an actual sum). The theory T_Q supports and illustrates a form of naturalistic Platonism, for which claims concerning the existence and properties of universals form part of natural science, and the distinction between accidental generalizations and laws of nature has a basis in the second-order structure of the world.
Immanuel Kant's Metaphysical Foundations of Natural Science (1786) provides metaphysical foundations for the application of mathematics to empirically given nature. The application that Kant primarily has in mind is that achieved in Isaac Newton's Principia (1687). Thus, Kant's first chapter, the Phoronomy, concerns the mathematization of speed or velocity, and his fourth chapter, the Phenomenology, concerns the empirical application of the Newtonian notions of true or absolute space, time, and motion. This paper concentrates on Kant's second and third chapters—the Dynamics and the Mechanics, respectively—and argues that they are best read as providing a transcendental explanation of the conditions for the possibility of applying the (mathematical) concept of quantity of matter to experience. Kant again has in mind the empirical measures of this quantity that Newton fashions in the Principia, and he aims to make clear, in particular, how Newton achieves a universal measure for all bodies whatsoever by projecting the static quantity of terrestrial weight into the heavens by means of the theory of universal gravitation. Kant is not attempting to prove a priori what Newton has established empirically but, rather, to clarify the character of Newton's mathematization by building Newton's empirical measures into the very concept of matter that is articulated in the Metaphysical Foundations.
Advocates of the conserved quantity (CQ) theory of causation have their own peculiar problem with conservation laws. Since they analyze causal process and interaction in terms of conserved quantities that are in turn defined as physical quantities governed by conservation laws, they must formulate conservation laws in a way that does not invoke causation, or else circularity threatens. In this paper I will propose an adequate formulation of a conservation law that serves CQ theorists' purpose.
A well-known paragraph in Mill's 'Utilitarianism' has standardly been misread. Mill does not claim that if some pleasure is of 'higher quality', then it will be (or ought to be) chosen over the pleasure of lower quality regardless of their respective quantities. Instead he says that if some pleasure will be chosen over another available in larger quantity, then we are justified in saying that the pleasure so chosen is of higher quality than the other. This assertion is unproblematic.
Recent research has suggested that the Pirahã, an Amazonian tribe with a number-less language, are able to match quantities > 3 if the matching task does not require recall or spatial transposition. This finding contravenes previous work among the Pirahã. In this study, we re-tested the Pirahãs' performance in the crucial one-to-one matching task utilized in the two previous studies on their numerical cognition, as well as in control tasks requiring recall and mental transposition. We also conducted a novel quantity recognition task. Speakers were unable to consistently match quantities > 3, even when no recall or transposition was involved. We provide a plausible motivation for the disparate results previously obtained among the Pirahã. Our findings are consistent with the suggestion that the exact recognition of quantities > 3 requires number terminology.
Grice's Quantity maxims have been widely misinterpreted as enjoining a speaker to make the strongest claim that she can, while respecting the other conversational maxims. Although many writers on the topic of conversational implicature interpret the Quantity maxims as enjoining such volubility, so construed the Quantity maxims are unreasonable norms for conversation. Appreciating this calls for attending more closely to the notion of what a conversation requires. When we do so, we see that eschewing an injunction to maximal informativeness need not deprive us of any ability to predict or explain genuine cases of implicature. Crucial to this explanation is an appreciation of how what a conversation, or a given stage of a conversation, requires, depends upon what kind of conversation is taking place. I close with an outline of this dependence relation that distinguishes among three importantly distinct types of conversation.
Empirical evidence, including a recent field study in Northwest Indiana, indicates that supermarkets and other retail merchants frequently incorporate quantity surcharges in their product pricing strategy. Retailers impose surcharges by charging higher unit prices for products packaged in a larger quantity than in a smaller quantity of the same goods and brand. The purpose of this article is to examine the business ethics of such a pricing strategy in light of empirical findings, existing government regulations, factors that motivate quantity surcharges and prevailing consumer perceptions.
Developing some suggestions of Ramsey (1925), elementary logic is formulated with respect to an arbitrary categorial system rather than the categorial system of Logical Atomism which is retained in standard elementary logic. Among the many types of non-standard categorial systems allowed by this formalism, it is argued that elementary logic with predicates of variable degree occupies a distinguished position, both for formal reasons and because of its potential value for application of formal logic to natural language and natural science. This is illustrated by use of such a logic to construct a theory of quantity which is argued to be scientifically superior to existing theories of quantity based on standard categorial systems, since it yields real-valued scales without the need for unrealistic existence assumptions. This provides empirical evidence for the hypothesis that the categorial structure of the physical world itself is non-standard in this sense.
The conserved quantity theory of causation aims to analyze causal processes and interactions in terms of conserved quantities. In order to be successful, the theory must correctly distinguish between causal processes and interactions, on the one hand, and pseudoprocesses and mere intersections on the other. Moreover, it must do this while satisfying two further criteria: it must avoid circularity, and the appeal to conserved quantities must not be redundant. I argue that the theory is not successful in meeting these criteria.
Compared to more familiar varieties of Swedish, the dialects spoken in Finland have rather diverse syllable structures. The distribution of distinctive syllable weight is determined by grammatical factors, and by varying effects of final consonant weightlessness. In turn it constrains several gemination processes which create derived superheavy syllables, in an unexpected way which provides evidence for an anti-neutralization constraint. Stratal OT, which integrates OT with Lexical Phonology, sheds light on these complex quantity systems.
I defend the conserved quantity theory of causation against two objections: firstly, that to tie the notion of "cause" to conservation laws is impossible, circular or metaphysically counterintuitive; and secondly, that the conserved quantity theory entails an undesired notion of identity through time. My defence makes use of an important meta-philosophical distinction between empirical analysis and conceptual analysis. My claim is that the conserved quantity theory of causation must be understood primarily as an empirical, not a conceptual, analysis of causation.
This paper presents an empirical analysis of the determinants of the quantity of health insurance in the context of employer-based health insurance, using micro-level data from the 1987 National Medical Expenditure Survey (NMES). It extends previous research by including additional factors in the analysis which significantly affect health insurance offers by employers. This paper emphasizes two determinants of employers' insurance offer decisions that are particularly relevant: union membership and self-insured versus not self-insured health plans. The empirical analysis reported in this paper reveals the following predictors of higher health insurance coverage: union membership, not self-insured health plan(s), union membership in the Midwest or South, as well as self-insured union membership. Further, other factors, such as age, being male, income, for-profit and other employer organizational forms, and firm size, determine a higher level of health insurance.
Friedman's 1956 essay, 'The Quantity Theory of Money: A Restatement', in his Studies in the Quantity Theory of Money should be read in the context of the prevailing Keynesian consensus of the time. His primary task had to be to convince economists to reconsider this theory. This required an ecumenical presentation that would not drive off potential readers. At the same time it required making some strong claims for the quantity theory to induce readers to reconsider it. A combination of 'sweet reason' and shock tactics was needed. Friedman accomplished this rhetorical task brilliantly.
We give derivations of two formal models of Gricean Quantity implicature and strong exhaustivity in bidirectional optimality theory and in a signalling games framework. We show that, under a unifying model based on signalling games, these interpretative strategies are game-theoretic equilibria when the speaker is known to be respectively minimally and maximally expert in the matter at hand. That is, in this framework the optimal strategy for communication depends on the degree of knowledge the speaker is known to have concerning the question she is answering. In addition, and most importantly, we give a game-theoretic characterisation of the interpretation rule Grice (formalising Quantity implicature), showing that under natural conditions this interpretation rule occurs in the unique equilibrium play of the signalling game.
This paper examines Wesley Salmon's "process" theory of causality, arguing in particular that there are four areas of inadequacy. These are that the theory is circular, that it is too vague at a crucial point, that statistical forks do not serve their intended purpose, and that Salmon has not adequately demonstrated that the theory avoids Hume's strictures about "hidden powers". A new theory is suggested, based on "conserved quantities", which fulfills Salmon's broad objectives, and which avoids the problems discussed.
The article evaluates the Domain Postulate of the Classical Model of Science and the related Aristotelian prohibition rule on kind-crossing as interpretative tools in the history of the development of mathematics into a general science of quantities. Special reference is made to Proclus' commentary on the first book of Euclid's Elements, to the sixteenth-century translations of Euclid's work into Latin and to the works of Stevin, Wallis, Viète and Descartes. The prohibition rule on kind-crossing formulated by Aristotle in the Posterior Analytics is used to distinguish between conceptions that share the same name but are substantively different: for example, the search for a broader genus including all mathematical objects; the search for a common character of different species of mathematical objects; and the effort to treat magnitudes as numbers.
Phil Dowe has argued persuasively for a reductivist theory of causality. Drawing on Wesley Salmon's mark transmission theory and David Fair's transference theory, Dowe proposes to reduce causality to the exchange of conserved quantities. Dowe's account has the virtue of being simple and offering a definite "visible" idea of causation. According to Dowe and Salmon, it is also virtuous in being localist. That a theory of causation is localist means that it does not need the aid of counterfactuals and/or laws to work. Moreover, it can become the means by which we explain counterfactuals and laws. In this paper, I will argue that the theory is not localist (and hence, that it is less simple than it seems). As far as I can see, the theory needs the aid of laws.
If the import of a book can be assessed by the problem it takes on, how that problem unfolds, and the extent of the problem's fruitfulness for further exploration and experimentation, then Duffy has produced a text worthy of much close attention. Duffy constructs an encounter between Deleuze's creation of a concept of difference in Difference and Repetition (DR) and Deleuze's reading of Spinoza in Expressionism in Philosophy: Spinoza (EP). It is surprising that such an encounter has not already been explored, at least not to this extent and in this much detail. Since the two works were written simultaneously, as Deleuze's primary and secondary dissertations, it is to be expected that there is much to learn from their interaction. Duffy proceeds by explicating, in terms of the differential calculus, a logic of what Deleuze in DR calls different/ciation, and then maps this onto Deleuze's account of modal expression in EP.
The paper discusses some changes in Bolzano's definition of mathematics attested in several quotations from the Beyträge, Wissenschaftslehre and Grössenlehre: is mathematics a theory of forms or a theory of quantities? Several issues that are maintained throughout Bolzano's works are distinguished from others that were accepted in the Beyträge and abandoned in the Grössenlehre. Changes are interpreted as a consequence of the new logical theory of truth introduced in the Wissenschaftslehre, but also as a consequence of the overcoming of Kant's terminology, and of the radicalization of Bolzano's anti-Kantianism. Bolzano's evolution is understood as a coherent move, once the criticism expressed in the Beyträge on the notion of quantity is compared with a different and larger notion of quantity that Bolzano developed already in 1816. This discussion is enriched by the discovery that two unknown texts mentioned by Bolzano in the Beyträge can be identified with works by von Spaun and Vieth respectively. Bolzano's evolution is interpreted as a radicalization of the criticism of the Kantian definition of mathematics and as an effect of Bolzano's unaltered interest in the Leibnizian notion of mathesis universalis. As a conclusion, the author claims that Bolzano never abandoned his original idea of considering mathematics as a scientia universalis, i.e. as the science of quantities in general, and suggests that the question of ideal elements in mathematics, apart from being a main reason for the development of a new logical theory, can also be considered as a main reason for developing a different definition of quantity.
If a brain is duplicated so that there are two brains in identical states, are there then two numerically distinct phenomenal experiences or only one? There are two, I argue, and given computationalism, this has implications for what it is to implement a computation. I then consider what happens when a computation is implemented in a system that either uses unreliable components or possesses varying degrees of parallelism. I show that in some of these cases there can be, in a deep and intriguing sense, a fractional (non-integer) number of qualitatively identical phenomenal experiences. This, in turn, has implications for what lessons one should draw from neural replacement scenarios such as Chalmers's.
An examination of Deleuze's reading of Spinoza, focusing on how Spinoza becomes a significant figure in Deleuze's project of tracing an alternative lineage in the history of philosophy, which, by distancing itself from Hegelian idealism, culminates in the construction of a philosophy of difference. By exploiting the implication of the differential point of view of the infinitesimal calculus in his reading of Spinoza, Deleuze presents Spinoza's metaphysics as determined according to a 'logic of expression'. This logic is offered as an alternative to the Hegelian dialectical logic. The main argument of the book is that Deleuze redeploys Spinoza, or the Spinozist concepts that he extracts from Spinoza's philosophy, to mobilise his philosophy of difference as an alternative to the dialectical philosophy determined by the Hegelian dialectic logic.
We show that the contemporary debate surrounding the question "What is the norm of assertion?" presupposes what we call the quantitative view, i.e. the view that this question is best answered by determining how much epistemic support is required to warrant assertion. We consider what Jennifer Lackey (2010) has called cases of isolated second-hand knowledge and show—beyond what Lackey has suggested herself—that these cases are best understood as ones where a certain type of understanding, rather than knowledge, constitutes the required epistemic credential to warrant assertion. If we are right that understanding (and not just knowledge) is the epistemic norm for a restricted class of assertions, then this straightforwardly undercuts not only the widely supposed quantitative view, but also a more general presupposition concerning the universalisability of some norm governing assertion—the presumption (almost entirely unchallenged since Williamson's 1996 paper) that any epistemic norm that governs some assertions should govern assertions—as a class of speech act—uniformly.
In this paper I offer an 'integrating account' of singular causation, where the term 'integrating' refers to the following program for analysing causation. There are two intuitions about causation, both of which face serious counterexamples when used as the basis for an analysis of causation. The 'process' intuition, which says that causes and effects are linked by concrete processes, runs into trouble with cases of 'misconnections', where an event which serves to prevent another fails to do so on a particular occasion and yet the two events are linked by causal processes. The chance-raising intuition, according to which causes raise the chance of their effects, easily accounts for misconnections but faces the problem of chance-lowering causes, a problem easily accounted for by the process approach. The integrating program attempts to provide an analysis of singular causation by synthesising the two insights, so as to solve both problems. In this paper I show that extant versions of the integrating program due to Eells, Lewis, and Menzies fail to account for the chance-lowering counterexample. I offer a new diagnosis of the chance-lowering case, and use that as a basis for an integrating account of causation which does solve both cases. In doing so, I accept various assumptions of the integrating program, in particular that there are no other problems with these two approaches. As an example of the process account, I focus on the recent CQ theory of Wesley Salmon (1997).
Modern philosophy of mathematics has been dominated by Platonism and nominalism, to the neglect of the Aristotelian realist option. Aristotelianism holds that mathematics studies certain real properties of the world – mathematics is neither about a disembodied world of "abstract objects", as Platonism holds, nor is it merely a language of science, as nominalism holds. Aristotle's theory that mathematics is the "science of quantity" is a good account of at least elementary mathematics: the ratio of two heights, for example, is a perceivable and measurable real relation between properties of physical things, a relation that can be shared by the ratio of two weights or two time intervals. Ratios are an example of continuous quantity; discrete quantities, such as whole numbers, are also realised as relations between a heap and a unit-making universal. For example, the relation between foliage and being-a-leaf is the number of leaves on a tree, a relation that may equal the relation between a heap of shoes and being-a-shoe. Modern higher mathematics, however, deals with some real properties that are not naturally seen as quantity, so that the "science of quantity" theory of mathematics needs supplementation. Symmetry, topology and similar structural properties are studied by mathematics, but are about pattern, structure or arrangement rather than quantity.
A description is given of the quantitative-qualitative distinction for terms in theories of measurable attributes, and, adjoined to that account, a suggestion is made concerning the sense in which empirical relational systems have an empirical attribute as their topic or focus. Since this characterization of quantitative terms, relative to a partition, makes no explicit reference to numbers, concatenation operations, or ordering relations, we show how our results are related to some standard theorems in the literature. Analogs of representation and uniqueness theorems are proved, and the notions of exact quantitative term and the underlying attribute of a quantitative term are described and studied.
The paper deals with credible and relevant information flow in dialogs: How useful is it for a receiver to get some information, how useful is it for a sender to give this information, and how much credible information can we expect to flow between sender and receiver? What is the relation between semantics and pragmatics? These Gricean questions will be addressed from a decision- and game-theoretical point of view.
Gricean pragmatics. Saying vs. implicating ; Discourse and cooperation ; Conversational implicatures ; Generalised vs. particularised ; Cancellability ; Gricean reasoning and the pragmatics of what is said -- The standard recipe for Q-implicatures. The standard recipe ; Inference to the best explanation ; Weak implicatures and competence ; Relevance ; Conclusion -- Scalar implicatures. Horn scales and the generative view ; Implicatures and downward entailing environments ; Disjunction : exclusivity and ignorance ; Conclusion -- Psychological plausibility. Charges of psychological inadequacy ; Logical complexity ; Abduction ; Incremental processing ; The intentional stance ; Alternatives ; Conclusion -- Nonce inference or defaults?. True defaults ; Strong defaultism ; Weak defaultism ; Contextualism ; Conclusion -- Intentions, alternatives, and free choice. Free choice ; Problems with the standard recipe ; Intentions first ; Free choice explained ; Comparing alternatives ; Two flavours of Q-implicature ; Conclusion -- Embedded implicatures : the problems. The problems ; Varieties of conventionalism ; Against conventionalism ; Conclusion -- Embedded implicatures : a Gricean approach. Disjunction ; Belief reports ; Factives and other presupposition inducers ; Indefinites ; Contrastive construals and lexical pragmatics ; Conclusion.
This paper locates Kierkegaard within the philosophical tradition and as the co-founder with Nietzsche of existential-postmodern philosophy. With his analysis of the quantitative build-up of human motion Kierkegaard follows the pre-Socratics and their tradition in wanting to know the truth about the becoming of all things. But in his analysis of the qualitative leap with hints from Leibniz he founds postmodern philosophy. His double movement leap as first quantitative and then qualitative is here explained in terms of (1) sin and faith, (2) despair and truth, (3) anxiety and freedom, (4) offence and love, (5) madness and earnestness. Finally, an explanation of his concept of repetition shows how there can be a new quality and more of that same quality.
This paper deals with problems that vagueness raises for choices involving evaluative tradeoffs. I focus on a species of such choices, which I call 'qualitative barrier cases.' These are cases in which a qualitatively significant tradeoff in one evaluative dimension for a given improvement in another dimension could not make an option better all things considered, but a merely quantitative tradeoff for the given improvement might. Trouble arises, however, when one of the options constitutes a borderline case of an evaluative kind. I argue that in such cases we can neither affirm nor deny that trading off losses in one evaluative dimension for gains in another yields a better outcome. Theoretically, this result provides a way to defuse an argument that has been presented by both Larry Temkin and Stuart Rachels that purports to show that the 'better than' relation is intransitive. Practically, it allows us to undermine the claim that rational agents are better off withholding their contribution to a public good in certain instances of the free-rider problem, and thus to take an important step towards solving these problems.
In this paper we seek to account for scalar implicatures and Horn's division of pragmatic labor in game-theoretical terms by making use mainly of refinements of the standard solution concept of signaling games. Scalar implicatures are accounted for in terms of Farrell's (1993) notion of a 'neologism-proof' equilibrium together with Grice's maxim of Quality. Horn's division of pragmatic labor is accounted for in terms of Cho and Kreps' (1987) notion of 'equilibrium domination' and their 'Intuitive Criterion'.
In this paper we motivate and develop the analytic theory of measurement, in which autonomously specified algebras of quantities (together with the resources of mathematical analysis) are used as a unified mathematical framework for modeling (a) the time-dependent behavior of natural systems, (b) interactions between natural systems and measuring instruments, (c) error and uncertainty in measurement, and (d) the formal propositional language for describing and reasoning about measurement results. We also discuss how a celebrated theorem in analysis, known as Gelfand representation, guarantees that autonomously specified algebras of quantities can be interpreted as algebras of observables on a suitable state space. Such an interpretation is then used to support (i) a realist conception of quantities as objective characteristics of natural systems, and (ii) a realist conception of measurement results (evaluations of quantities) as determined by and descriptive of the states of a target natural system. As a way of motivating the analytic approach to measurement, we begin with a discussion of some serious philosophical and theoretical problems facing the well-known representational theory of measurement. We then explain why we consider the analytic approach, which avoids all these problems, to be far more attractive on both philosophical and theoretical grounds.
In a recent paper (1994) Wesley Salmon has replied to criticisms (e.g., Dowe 1992c, Kitcher 1989) of his (1984) theory of causality, and has offered a revised theory which, he argues, is not open to those criticisms. The key change concerns the characterization of causal processes, where Salmon has traded "the capacity for mark transmission" for "the transmission of an invariant quantity." Salmon argues against the view presented in Dowe (1992c), namely that the concept of "possession of a conserved quantity" is sufficient to account for the difference between causal and pseudo processes. Here that view is defended, and important questions are raised about the notion of transmission and about gerrymandered aggregates.
Quantities are naturally viewed as functions, whose arguments may be construed as situations, events, objects, etc. We explore the question of the range of these functions: should it be construed as the real numbers (or some subset thereof)? This is Carnap's view. It has attractive features, specifically, what Carnap views as ontological economy. Or should the range of a quantity be a set of magnitudes? This may have been Helmholtz's view, and it, too, has attractive features. It reveals the close connection between measurement and natural law, it makes dimensional analysis intelligible, and explains the concern of scientists and engineers with units in equations. It leaves the philosophical problem of the relation between the structure of magnitudes and the structure of the reals. What explains it? And is it always the same? We will argue that on the whole, construing the values of quantities as magnitudes has some advantages, and that (as Helmholtz seems to suggest in "Numbering and Measuring from an Epistemological Viewpoint") the relation between magnitudes and real numbers can be based on foundational similarities of structure.
The philosophy of mathematics has been accused of paying insufficient attention to mathematical practice: one way to cope with the problem, the one we will follow in this paper on extensive magnitudes, is to combine the `history of ideas' and the `philosophy of models' in a logical and epistemological perspective. The history of ideas allows the reconstruction of the theory of extensive magnitudes as a theory of ordered algebraic structures; the philosophy of models allows an investigation into the way epistemology might affect relevant mathematical notions. The article takes two historical examples as a starting point for the investigation of the role of numerical models in the construction of a system of non-Archimedean magnitudes. A brief exposition of the theories developed by Giuseppe Veronese and by Rodolfo Bettazzi at the end of the 19th century will throw new light on the role played by magnitudes and numbers in the development of the concept of a non-Archimedean order. Different ways of introducing non-Archimedean models will be compared and the influence of epistemological models will be evaluated. Particular attention will be devoted to the comparison between the models that oriented Veronese's and Bettazzi's works and the mathematical theories they developed, but also to the analysis of the way epistemological beliefs affected the concepts of continuity and measurement.
This article aims first at showing that Russell's general doctrine according to which all mathematics is deducible 'by logical principles from logical principles' does not require a preliminary reduction of all mathematics to arithmetic. In the Principles, mechanics (part VII), geometry (part VI), analysis (parts IV-V) and magnitude theory (part III) are all to be directly derived from the theory of relations, without being first reduced to arithmetic (part II). The epistemological importance of this point cannot be overestimated: Russell's logicism does not only contain the claim that mathematics is no more than logic, it also contains the claim that the differences between the various mathematical sciences can be logically justified, and thus that, contrary to the arithmetization stance, analysis, geometry and mechanics are not merely outgrowths of arithmetic. The second aim of this article is to set out the neglected Russellian theory of quantity. The topic is obviously linked with the first, since the mere existence of a doctrine of magnitude, in a work dating from 1903, is a sign of distrust vis-à-vis the arithmetization programme. After having shown that, despite the works of Cantor, Dedekind and Weierstrass, many mathematicians at the end of the 19th century elaborated various axiomatic theories of magnitude, I will try to define the peculiarity of the Russellian approach. I will lay stress on the continuity of the logicists' thought on this point: Whitehead, in the Principia, deepens and generalizes Russell's first theory of 1903.
The problem of the failure of value definiteness (VD) for the idea of quantity in quantum mechanics is stated, and what VD is and how it fails is explained. An account of quantity, called BP, is outlined and used as a basis for discussing the problem. Several proposals are canvassed in view of, respectively, Forrest's indeterminate particle speculation, the "standard" interpretation of quantum mechanics and Bub's modal interpretation.
This paper discusses and develops an important distinction drawn by Jevons, viz. that between natural and fictitious quantities. This distinction provides a basis for a theory of economic concept formation that aims at picking out families of models that are simultaneously phenomenally adequate, explanatory and exact. Essentially, the theory demands, for an economic quantity to be natural, that (1) it is explained by a causal model, (2) it is measurable and (3) the measurement procedure is justified. The proposed theory is tested against two case studies, one historical and one contemporary.
This paper argues against neo-Fregeans that Frege was right to conclude that we cannot obtain the concept of number from Hume's Principle. Neo-Fregeans have claimed that Hume's Principle is analytic since it can be viewed as an implicit definition of the concept of cardinal number. But it will be shown that if taken as an implicit definition, Hume's Principle is satisfied not just by the concept of number but also by the concept of discrete quantity, and hence it cannot be viewed as an implicit definition of the concept of cardinal number as distinct from the concept of discrete quantity.
Starting from a recent paper by S. Kaufmann, we introduce a notion of conjunction of two conditional events and then we analyze it in the setting of coherence. We give a representation of the conjoined conditional and we show that this new object is a conditional random quantity, whose set of possible values normally contains the probabilities assessed for the two conditional events. We examine some cases of logical dependencies, where the conjunction is a conditional event; moreover, we give the lower and upper bounds on the conjunction. We also examine an apparent paradox concerning stochastic independence which can actually be explained in terms of uncorrelation. We briefly introduce the notions of disjunction and iterated conditioning and we show that the usual probabilistic properties still hold.
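The lower and upper bounds this abstract mentions can be illustrated concretely, on the assumption that they are the Fréchet-Hoeffding bounds constraining the prevision of the conjoined conditional given the assessments on the two conditional events. A minimal sketch (the function name is mine, not from the paper):

```python
def frechet_bounds(x, y):
    """Fréchet-Hoeffding bounds on the prevision z of a conjunction,
    given assessments x = P(A|H) and y = P(B|K):
    max(x + y - 1, 0) <= z <= min(x, y)."""
    return max(x + y - 1.0, 0.0), min(x, y)

lo, hi = frechet_bounds(0.75, 0.5)  # (0.25, 0.5)
```

Note that any z in the interval is coherent, so the assessments alone leave the conjunction underdetermined, which is part of what makes it a random quantity rather than a single number.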
This paper expounds the relations between continuous symmetries and conserved quantities, i.e. Noether's "first theorem", in both the Lagrangian and Hamiltonian frameworks for classical mechanics. This illustrates one of mechanics' grand themes: exploiting a symmetry so as to reduce the number of variables needed to treat a problem. I emphasise that, for both frameworks, the theorem is underpinned by the idea of cyclic coordinates; and that the Hamiltonian theorem is more powerful. The Lagrangian theorem's main "ingredient", apart from cyclic coordinates, is the rectification of vector fields afforded by the local existence and uniqueness of solutions to ordinary differential equations. For the Hamiltonian theorem, the main extra ingredients are the asymmetry of the Poisson bracket, and the fact that a vector field generates canonical transformations iff it is Hamiltonian.
In this paper, I define and study an abstract algebraic structure, the dimensive algebra, which embodies the most general features of the algebra of dimensional physical quantities. I prove some elementary results about dimensive algebras and suggest some directions for future work.
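The paper's "dimensive algebra" is not defined in this abstract; as a generic illustration of the familiar algebra of dimensional quantities it generalizes, here is a sketch in which each quantity carries a vector of dimension exponents, multiplication adds exponent vectors, and addition is defined only between dimensionally homogeneous terms (all names are hypothetical, not the author's):

```python
class Quantity:
    """A value tagged with dimension exponents (length, mass, time)."""

    def __init__(self, value, dims):
        self.value, self.dims = value, tuple(dims)

    def __mul__(self, other):
        # Multiplication adds exponent vectors: m * m -> m^2.
        return Quantity(self.value * other.value,
                        [a + b for a, b in zip(self.dims, other.dims)])

    def __add__(self, other):
        # Addition requires dimensional homogeneity.
        if self.dims != other.dims:
            raise TypeError("dimensionally inhomogeneous sum")
        return Quantity(self.value + other.value, self.dims)

area = Quantity(2.0, (1, 0, 0)) * Quantity(3.0, (1, 0, 0))  # 6.0, dims (2, 0, 0)
```

The design choice worth noting is that dimensions live in the type-like tag, not the number: the multiplicative structure is a free abelian group on the base dimensions, while addition is only partial.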
Defining the real numbers by abstraction as ratios of quantities gives prominence to their applications in just the way that Frege thought we should. But if all the reals are to be obtained in this way, it is necessary to presuppose a rich domain of quantities of a kind we cannot reasonably assume to be exemplified by any physical or other empirically measurable quantities. In consequence, an explanation of the applications of the reals, defined in this way, must proceed indirectly. This paper explains the main complications involved and answers the main objections advanced in Batitsky's paper in this issue.
Within the traditional Hilbert space formalism of quantum mechanics, it is not possible to describe a particle as possessing, simultaneously, a sharp position value and a sharp momentum value. Is it possible, though, to describe a particle as possessing just a sharp position value (or just a sharp momentum value)? Some, such as Teller, have thought that the answer to this question is No – that the status of individual continuous quantities is very different in quantum mechanics than in classical mechanics. On the contrary, I shall show that the same subtle issues arise with respect to continuous quantities in classical and quantum mechanics; and that it is, after all, possible to describe a particle as possessing a sharp position value without altering the standard formalism of quantum mechanics.
Quantum mechanics, and apparently its successors, claim that there are minimum quantities by which objects can differ, at least in some situations: electrons can have various "energy levels" in an atom, but to move from one to another they must jump rather than move via continuous variation. An electron in a hydrogen atom going from -13.6 eV of energy to -3.4 eV does not pass through states of -10 eV or -5.1 eV, let alone -11.1111115637 eV or -4.89712384 eV.
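The specific figures quoted follow the Bohr formula for hydrogen, E_n = -13.6 eV / n², so the allowed levels form a discrete set with nothing in between; this can be checked directly (the helper name is mine):

```python
def hydrogen_level_eV(n):
    """Bohr-model energy of the n-th hydrogen level, in eV."""
    return -13.6 / n**2

# The discrete ladder discussed above: n=1 gives -13.6, n=2 gives -3.4;
# values such as -10 eV or -5.1 eV correspond to no integer n at all.
levels = [hydrogen_level_eV(n) for n in (1, 2, 3)]
```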
To state an important fact about the photon, physicists use such expressions as (1) "the photon has zero (null, vanishing) mass" and (2) "the photon is (a) massless (particle)" interchangeably. Both (1) and (2) express the fact that the photon has no non-zero mass. However, statements (1) and (2) disagree about a further fact: (1) attributes to the photon the property of zero-masshood whereas (2) denies that the photon has any mass at all. But is there really a difference between saying that something has zero mass (charge, spin, etc.) and saying that it has no mass (charge, spin, etc.)? Does the distinction cut any physical or philosophical ice? I argue that the answer to these questions is yes. Put briefly, the claim of this paper is that some zero-value physical quantities are not mere "privations", "absences" or "holes in being". They are respectable properties in the same sense in which their non-zero partners are. This, I will show, has implications for the debate between two rival views of the nature of property, dispositionalism and categoricalism.
On the ghosts of departed quantities. Book review by Fred Ablondi (Department of Philosophy, Hendrix College, Conway, AR, USA). Metascience (Online ISSN 1467-9981, Print ISSN 0815-0796), pp. 1-3, DOI 10.1007/s11016-011-9606-5.
How many people should there be? Can there be overpopulation: too many people living? I shall present a puzzling argument about these questions, show how this argument can be strengthened, then sketch a possible reply.1 1. QUALITY AND QUANTITY Consider the outcomes that might be produced, in some part of the world, by two rates of population growth. Suppose that, if there is faster growth, there would later be more people, who would all be worse off. These outcomes are shown in Fig. I.
[p. 45] I wish to represent a certain subclass of nonconventional implicatures, which I shall call CONVERSATIONAL implicatures, as being essentially connected with certain general features of discourse; so my next step is to try to say what these features are. The following may provide a first approximation to a general principle. Our talk exchanges do not normally consist of a succession of disconnected remarks, and would not be rational if they did. They are characteristically, to some degree at least, cooperative efforts; and each participant recognizes in them, to some extent, a common purpose or set of purposes, or at least a mutually accepted direction. This purpose or direction may be fixed from the start (e.g., by an initial proposal of a question for discussion), or it may evolve during the exchange; it may be fairly definite, or it may be so indefinite as to leave very considerable latitude to the participants (as in a casual conversation). But at each stage, SOME possible conversational moves would be excluded as conversationally unsuitable. We might then formulate a rough general principle which participants will be expected (ceteris paribus) to observe, namely: Make your conversational contribution such as is required, at the stage at which it occurs, by the accepted purpose or direction of the talk exchange in which you are engaged. One might label this the COOPERATIVE PRINCIPLE. On the assumption that some such general principle as this is acceptable, one may perhaps distinguish four categories under one or another of which will fall certain more specific maxims and submaxims, the following of which will, in general, yield results in accordance with the Cooperative Principle. Echoing Kant, I call these categories Quantity, Quality, Relation, and Manner. The category of QUANTITY relates to the quantity of information to be provided, and under it fall the following maxims.
Consider this situation: Here are two envelopes. You have one of them. Each envelope contains some quantity of money, which can be of any positive real magnitude. One contains twice the amount of money that the other contains, but you do not know which one. You can keep the money in your envelope, whose numerical value you do not know at this stage, or you can exchange envelopes and have the money in the other. You wish to maximise your money. What should you do?1 Here are three forms of reasoning about this situation, which we shall call..
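The symmetric setup described above can be sketched as a simulation: over many trials, keeping and switching have the same long-run average, which is precisely what makes the naive "switching gains 25%" reasoning puzzling. A minimal sketch, assuming the two amounts are x and 2x with your envelope assigned at random (all names mine):

```python
import random

def simulate(trials=100_000, base=10.0):
    """Average payoff of 'keep' vs 'switch' when envelopes hold base and 2*base."""
    keep_total = switch_total = 0.0
    for _ in range(trials):
        envelopes = [base, 2 * base]
        random.shuffle(envelopes)     # you receive one envelope at random
        mine, other = envelopes
        keep_total += mine            # policy 1: keep your envelope
        switch_total += other         # policy 2: always switch
    return keep_total / trials, switch_total / trials
```

With base = 10, both averages converge to 15: the symmetry of the assignment, not any expected-value computation conditional on your (unknown) amount, settles the question.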