The philosophy of measurement studies the conceptual, ontological, epistemic, and technological conditions that make measurement possible and reliable. A new wave of philosophical scholarship has emerged in the last decade that emphasizes the material and historical dimensions of measurement and the relationships between measurement and theoretical modeling. This essay surveys these developments and contrasts them with earlier work on the semantics of quantity terms and the representational character of measurement. The conclusions highlight four characteristics of the emerging research program in philosophy of measurement: it is epistemological, coherentist, practice oriented, and model based.
Contrary to the claim that measurement standards are absolutely accurate by definition, I argue that unit definitions do not completely fix the referents of unit terms. Instead, idealized models play a crucial semantic role in coordinating the theoretical definition of a unit with its multiple concrete realizations. The accuracy of realizations is evaluated by comparing them to each other in light of their respective models. The epistemic credentials of this method are examined and illustrated through an analysis of the contemporary standardization of time. I distinguish among five senses of ‘measurement accuracy’ and clarify how idealizations enable the assessment of accuracy in each sense.
This article develops a model-based account of the standardization of physical measurement, taking the contemporary standardization of time as its central case-study. To standardize the measurement of a quantity, I argue, is to legislate the mode of application of a quantity-concept to a collection of exemplary artefacts. Legislation involves an iterative exchange between top-down adjustments to theoretical and statistical models regulating the application of a concept, and bottom-up adjustments to material artefacts in light of remaining gaps. The model-based account clarifies the cognitive role of ad hoc corrections, arbitrary rules and seemingly circular inferences involved in contemporary timekeeping, and explains the stability of networks of standards better than its conventionalist and constructivist counterparts.
This work develops an epistemology of measurement, that is, an account of the conditions under which measurement and standardization methods produce knowledge as well as the nature, scope, and limits of this knowledge. I focus on three questions: (i) how is it possible to tell whether an instrument measures the quantity it is intended to? (ii) what do claims to measurement accuracy amount to, and how might such claims be justified? (iii) when is disagreement among instruments a sign of error, and when does it imply that instruments measure different quantities? Based on a series of case studies conducted in collaboration with the US National Institute of Standards and Technology (NIST), I argue for a model-based approach to the epistemology of physical measurement. To measure a physical quantity, I argue, is to estimate the value of a parameter in an idealized model of a physical process. Such estimation involves inference from the final state (‘indication’) of a process to the value range of a parameter (‘outcome’) in light of theoretical and statistical assumptions. Contrary to contemporary philosophical views, measurement outcomes cannot be obtained by mapping the structure of indications. Instead, measurement outcomes as well as claims to accuracy, error and quantity individuation can only be adjudicated relative to a choice of idealized modelling assumptions.
Can a heritability value tell us something about the weight of genetic versus environmental causes that have acted in the development of a particular individual? Two possible questions arise. Q1: what portion of the phenotype of X is due to its genes and what portion to its environment? Q2: what portion of X’s phenotypic deviation from the mean is a result of its genetic deviation and what portion a result of its environmental deviation? An answer to Q1 provides the full information about X’s development, while an answer to Q2 leaves out a large portion unexplained, namely that portion which corresponds to the phenotypic mean. Q1 is unanswerable, but I show it is nevertheless legitimate under certain quantitative genetics models. With regard to Q2, opinions in the philosophical and biological literature differ as to its legitimacy. I argue that not only is it legitimate, but in particular, under a few simplifying assumptions, it allows for a quantitative probabilistic answer: for normally distributed quantitative traits with no G-E correlation or statistical G × E interaction, we can assess the probability that X’s genes had a greater effect than its environment on its deviation from the mean population value. This probability is expressed as a function of the heritability and the individual’s phenotypic value; we can also provide a quantitative probabilistic answer to Q2 for an arbitrary individual, where the probability is a function only of heritability.
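The closing claim admits a compact illustration. Under the simplest version of the stated assumptions (phenotypic deviation P = G + E with independent, zero-mean normal G and E, heritability h² = Var(G)/Var(P), no G-E correlation or interaction), the probability that an arbitrary individual's genetic deviation exceeds its environmental deviation in magnitude depends only on heritability. The toy model and closed form below are a reconstruction for this simplest case, not the paper's own derivation:

```python
import numpy as np

rng = np.random.default_rng(0)

def p_genes_dominate(h2, n=1_000_000):
    """Monte Carlo estimate of P(|G| > |E|) for an arbitrary individual,
    assuming phenotypic deviation P = G + E with independent
    G ~ N(0, h2) and E ~ N(0, 1 - h2) (total variance normalized to 1)."""
    g = rng.normal(0.0, np.sqrt(h2), n)
    e = rng.normal(0.0, np.sqrt(1.0 - h2), n)
    return float(np.mean(np.abs(g) > np.abs(e)))

def p_closed_form(h2):
    """Exact value for the same toy model: G/E is a scaled Cauchy
    variable, so P(|G| > |E|) = (2/pi) * arctan(sqrt(h2 / (1 - h2)))."""
    return (2.0 / np.pi) * float(np.arctan(np.sqrt(h2 / (1.0 - h2))))
```

For h2 = 0.5 both functions give 0.5, as symmetry requires, and the probability is a function of heritability alone, in line with the abstract's claim about an arbitrary individual.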
This paper draws attention to an increasingly common method of using computer simulations to establish evidential standards in physics. By simulating an actual detection procedure on a computer, physicists produce patterns of data (‘signatures’) that are expected to be observed if a sought-after phenomenon is present. Claims to detect the phenomenon are evaluated by comparing such simulated signatures with actual data. Here I provide a justification for this practice by showing how computer simulations establish the reliability of detection procedures. I argue that this use of computer simulation undermines two fundamental tenets of the Bogen–Woodward account of evidential reasoning. Contrary to Bogen and Woodward’s view, computer-simulated signatures rely on ‘downward’ inferences from phenomena to data. Furthermore, these simulations establish the reliability of experimental setups without physically interacting with the apparatus. I illustrate my claims with a study of the recent detection of the superfluid-to-Mott-insulator phase transition in ultracold atomic gases.
We consider a question of T. Jech and K. Prikry asking whether the existence of a precipitous filter implies the existence of a normal precipitous filter. The aim of this paper is to improve a result of Gitik (Israel J Math, 175:191–219, 2010) and to show that measurable cardinals of a higher order, rather than just measurable cardinals, are necessary in order to have a model with a precipitous filter but without a normal one.
The presence of gene–environment statistical interaction and correlation in biological development has led both practitioners and philosophers of science to question the legitimacy of heritability estimates. The paper offers a novel approach to assess the impact of GxE and rGE on the way genetic and environmental causation can be partitioned. A probabilistic framework is developed, based on a quantitative genetic model that incorporates GxE and rGE, offering a rigorous way of interpreting heritability estimates. Specifically, given an estimate of heritability and the variance components associated with estimates of GxE and rGE, I arrive at a probabilistic account of the relative effect of genes and environment.
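A sketch of how such a probabilistic reading might be computed: the bivariate-normal toy model below, which enters rGE as a covariance term between the genetic and environmental deviations, is an illustration of the general idea only, not the framework developed in the paper (the function name and parameters are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

def p_genes_dominate_rge(var_g, var_e, cov_ge, n=1_000_000):
    """Monte Carlo sketch: probability that the genetic deviation G
    exceeds the environmental deviation E in magnitude, under a toy
    model in which (G, E) are jointly normal with Cov(G, E) = cov_ge
    (a crude stand-in for gene-environment correlation, rGE)."""
    cov = np.array([[var_g, cov_ge], [cov_ge, var_e]])
    g, e = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    return float(np.mean(np.abs(g) > np.abs(e)))
```

With cov_ge = 0 this reduces to the independent case; note that when var_g = var_e the probability stays at 0.5 regardless of the covariance, since (G, E) is then exchangeable, so in this toy model rGE matters only in combination with unequal variance components.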
Livestock production in both industrial systems, where livestock are packed tightly together, and in highly traditional systems, where a shepherd follows her herd in dispersed rangelands, is cited as a key contributor to some of the most acute environmental problems around the globe. Israel is one of the few countries where both of these systems exist, with surprisingly little contact between them. The environmental impacts of the two sectors were examined along with Israel’s public policies in the field. While historically much attention has been placed on the contribution of the Bedouin pastoralists to desertification and erosion, this may be linked to historic misapprehension about the actual impacts of goats on local rangelands, as well as to political motivations and concerns about losing national sovereignty over large areas of rangelands. The true environmental effects appear to be minor. A far more critical concern is water pollution caused by the industrial sector of livestock production, an issue that has recently attracted considerable government attention and investment in a successful dairy infrastructure initiative. The disparities in governmental support for the Jewish and Arab livestock sectors are inconsistent with efficient environmental management. Policies should be designed to encourage Bedouin to find ways to sustainably continue their traditional livestock husbandry practices, which today are largely associated with ecological benefits and constitute a unique cultural asset for Israel and the world.
R. Feldman defends a general principle about evidence whose slogan form says that ‘evidence of evidence is evidence’. B. Fitelson considers three renditions of this principle and contends that they are all falsified by counterexamples. Against both Feldman and Fitelson, J. Comesaña and E. Tal show that the third rendition (the one actually endorsed by Feldman) is not affected by Fitelson’s counterexamples, but only because it is trivially true and thus uninteresting. Tal and Comesaña defend a fourth version of Feldman’s principle, which, they claim, has not yet been shown false. Against Tal and Comesaña, I show that this new version of Feldman’s principle is false.
Comesaña and Tal have used a contentious account of evidence possession to claim that an ‘evidence of evidence is evidence’ principle of R. Feldman (EEE3) is (true but) trivial. We demonstrate to the contrary that, on the Comesaña-Tal account of evidence possession, EEE3 is false.
We offer a critical evaluation of a recent proposal of E. Tal and J. Comesaña on the topic of when evidence of evidence constitutes evidence. After establishing that attempts of L. Moretti and W. Roche to discredit the proposal miss their mark, we fashion another, which does not.
This article addresses the topic of death and immortality in Leibniz and Diderot. The plaisanterie by Diderot that perception continues after death is compared to Leibniz’ position regarding monad incessancy. This will lead to an analysis of Leibniz’ reasons to defend not only the incessancy, but even the immortality of certain monads.