Against the tradition that has considered measurement capable of producing pure data on physical systems, the unavoidable role played by modeling activity in measurement is increasingly acknowledged, particularly with respect to the evaluation of measurement uncertainty. This paper characterizes measurement as a knowledge-based process and proposes a framework to understand the function of models in measurement and to systematically analyze their influence on the production of measurement results and their interpretation. To this aim, a general model of measurement is sketched, which gives the context to highlight the unavoidable, although sometimes implicit, presence of models in measurement and, finally, to propose some remarks on the relations between models and measurement uncertainty, complementarily classified as due to the idealization implied in the models and to their realization in the experimental setup.
The paper introduces what is deemed the general epistemological problem of measurement: what characterizes measurement with respect to generic evaluation? It also analyzes the fundamental positions that have been maintained on this issue, thus presenting some sketches for a conceptual history of measurement. This characterization, in which three distinct standpoints are recognized, corresponding to a metaphysical, an anti-metaphysical, and a relativistic period, allows us to introduce and briefly discuss some general issues on the current epistemological status of measurement science.
Given the common assumption that measurement plays an important role in the foundation of science, the paper analyzes the possibility that Measurement Science, and therefore measurement itself, can be properly founded. The realist and the representational positions are analyzed in this regard; the conclusion, that such positions unavoidably lead to paradoxical situations, opens the discussion for a new epistemology of measurement, whose characteristics and interpretation are sketched here but are still largely a matter of investigation.
The paper introduces and formally defines a functional concept of a measuring system, on this basis characterizing measurement as an evaluation performed by means of a calibrated measuring system. The distinction between exact and uncertain measurement is formalized in terms of the properties of the traceability chain joining the measuring system to the primary standard. The consequence is drawn that uncertain measurements lose the property of relation preservation, on which the very concept of measurement is founded according to the representational viewpoint. Finally, from the analysis of the inter-relations between calibration and measurement, the fundamental reasons for the claimed objectivity and intersubjectivity of measurement are highlighted, a valuable epistemological result for characterizing measurement as a particular kind of evaluation.
Measurement is a special type of evaluation that is more exact than either opinion or estimation. In the social sciences, in particular, most evaluations are not measures, but rather mixtures of opinion and estimation. Over-measurement represents anchoring to evaluations which are not measures. For an over-measured characteristic, single measures are used when instead a portfolio of possible measures should be used. There are three implications. First, measurements of characteristics which depend on the over-measured characteristic are biased. Secondly, decisions which depend on the over-measured characteristic are biased. Thirdly, over-measurement biases the measurement of uncertainty.
This paper discusses a relational modeling of measurement which is complementary to the standard representational point of view: by focusing on the experimental character of the measurand-related comparison between objects, this modeling emphasizes the role of measuring systems as the devices which operatively perform such a comparison. The non-idealities of the operation are formalized in terms of the non-transitivity of the substitutability relation between measured objects, due to the uncertainty about the measurand value that remains after measurement. The metrological structure of traceability is shown to be an effective solution for coping with the general non-transitivity of measurement results. A preliminary theory is introduced as a possible formalization of the presented model.
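For illustration (this sketch is mine, not the paper's formalization), the non-transitivity in question arises as soon as a comparison can resolve the measurand only up to an uncertainty u:

```latex
% Substitutability within uncertainty u (illustrative definition):
% a ~ b iff the measurand values of objects a and b differ by at most u.
\[
  a \sim b \;\iff\; \lvert x_a - x_b \rvert \le u .
\]
% With u = 1 and x_a = 0, x_b = 1, x_c = 2: a ~ b and b ~ c, yet
% |x_a - x_c| = 2 > u, so a is not substitutable for c. Chains of pairwise
% substitutable objects can drift, which is the problem that metrological
% traceability is introduced to control.
```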
Measurement in soft systems generally cannot exploit physical sensors as data acquisition devices. The emphasis in this case is instead on how to choose appropriate indicators and combine their values so as to obtain an overall result, interpreted as the value of a property, i.e., the measurand, for the system under analysis. This paper discusses the epistemological conditions for the claim that such a process is a measurement, with performance evaluation as the case introduced to support the analysis, performed in systematic comparison with the paradigm of measurement of physical quantities. Some background questions arising here are:
– Are the chosen indicators appropriate performance indicators?
– Do such indicators convey complete and non-redundant information on performance?
– Does the chosen combination rule generate results suitably interpretable as performance values?
And, enlarging the focus:
– Does the obtained value convey information specifically about the system under analysis, rather than about some different entity (typically including the subject who is evaluating)? Operatively: would different subjects evaluate the same system in the same way? That is, is the obtained information objective?
– Does the obtained value convey information that is interpretable in the same way by different subjects? Operatively: would different subjects who have agreed on a decision procedure make the same decision from the same performance information? That is, is the obtained information intersubjective?
Well-founded positive answers to these questions significantly support a structural interpretation of measurement encompassing both physical and soft measurement.
The philosophy of measurement studies the conceptual, ontological, epistemic, and technological conditions that make measurement possible and reliable. A new wave of philosophical scholarship has emerged in the last decade that emphasizes the material and historical dimensions of measurement and the relationships between measurement and theoretical modeling. This essay surveys these developments and contrasts them with earlier work on the semantics of quantity terms and the representational character of measurement. The conclusions highlight four characteristics of the emerging research program in philosophy of measurement: it is epistemological, coherentist, practice oriented, and model based.
The need for quantitative measurement represents a unifying bond that links all the physical, biological, and social sciences. Measurements of such disparate phenomena as subatomic masses, uncertainty, information, and human values share common features whose explication is central to the achievement of foundational work in any particular mathematical science as well as for the development of a coherent philosophy of science. This book presents a theory of measurement, one that is "abstract" in that it is concerned with highly general axiomatizations of empirical and qualitative settings and how these can be represented quantitatively. It was inspired by, and represents a generalization and extension of, the last major research work in this field, Foundations of Measurement Vol. I, by Krantz, Luce, Suppes, and Tversky, published in 1971.
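For orientation, the representational core axiomatized in this tradition can be summarized in the standard form (a textbook statement, not a quotation from the book):

```latex
% Representation: a homomorphism \phi from the empirical structure to the reals,
\[
  a \succsim b \;\iff\; \phi(a) \ge \phi(b) .
\]
% Uniqueness: any other representation is g \circ \phi for an admissible g,
%   g strictly increasing                  (ordinal scale)
%   g(x) = \alpha x + \beta, \alpha > 0    (interval scale)
%   g(x) = \alpha x,         \alpha > 0    (ratio scale)
```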
One of the major roadblocks in conducting Environmental Corporate Social Responsibility (ECSR) research is operationalization of the construct. Existing ECSR measurement tools either require primary data gathering or special subscriptions to proprietary databases that have limited replicability. We address this deficiency by developing a transparent ECSR measure, with an explicit coding scheme, that strictly relies on publicly available data. Our ECSR measure tests favorably for internal consistency and inter-rater reliability, as well as convergent and discriminant validity.
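As a hedged sketch of the kind of reliability checks reported above, the standard statistics can be computed as follows; the data, item count, and coding categories are hypothetical stand-ins, not the authors' actual ECSR scheme.

```python
# Illustrative reliability statistics (toy data, not the paper's dataset):
# Cronbach's alpha for internal consistency, Cohen's kappa for inter-rater
# reliability of a categorical coding scheme.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: (n_firms, n_items) matrix of item scores."""
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_var / total_var)

def cohen_kappa(r1: np.ndarray, r2: np.ndarray) -> float:
    """Chance-corrected agreement between two raters' categorical codes."""
    p_o = np.mean(r1 == r2)
    p_e = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in np.union1d(r1, r2))
    return (p_o - p_e) / (1.0 - p_e)

rng = np.random.default_rng(0)
common = rng.normal(size=(30, 1))                # shared ECSR signal (toy)
items = common + 0.5 * rng.normal(size=(30, 6))  # 30 firms, 6 items
rater1 = rng.integers(0, 2, size=50)             # binary codes, rater 1
rater2 = np.where(rng.random(50) < 0.85, rater1, 1 - rater1)  # ~85% agreement
print(f"alpha = {cronbach_alpha(items):.2f}, kappa = {cohen_kappa(rater1, rater2):.2f}")
```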
This research examines business and psychology students' attitude toward unethical behavior (measured at Time 1) and their propensity to engage in unethical behavior (measured at Time 1 and at Time 2, 4 weeks later) using a 15-item Unethical Behavior measure with five factors: Abuse Resources, Not Whistle Blowing, Theft, Corruption, and Deception. Results suggested that male students had stronger unethical attitudes and a higher propensity to engage in unethical behavior than female students. Attitude at Time 1 predicted Propensity at Time 1 accurately for all five factors (concurrent validity): if students consider a behavior to be unethical, then they are less likely to engage in it. Attitude at Time 1 predicted only the Abuse Resources factor of Propensity at Time 2. Propensity at Time 1 was significantly related to Propensity at Time 2. Attitude at Time 1, Propensity at Time 1, and Propensity at Time 2 achieved configural and metric measurement invariance across major (business vs. psychology). Thus, researchers may have confidence in using these measures in future research.
Measurement is fundamental to all the sciences, the behavioural and social as well as the physical, and in the latter its results provide our paradigms of 'objective fact'. But the basis and justification of measurement is not well understood and is often simply taken for granted. Henry Kyburg Jr proposes here an original, carefully worked out theory of the foundations of measurement, to show how quantities can be defined, why certain mathematical structures are appropriate to them and what meaning attaches to the results generated. Crucial to his approach is the notion of error: it cannot be eliminated entirely, and from its introduction and control, he argues, arises the very possibility of measurement. Professor Kyburg's approach emphasises the empirical process of making measurements. In developing it he discusses vital questions concerning the general connection between a scientific theory and the results which support it (or fail to).
This paper distinguishes between two arguments based on measurement robustness and defends the epistemic value of robustness for the assessment of measurement reliability. I argue that the appeal to measurement robustness in the assessment of measurement is based on a different inferential pattern and is not exposed to the same objections as the no-coincidence argument which is commonly associated with the use of robustness to corroborate individual results. This investigation sheds light on the precise meaning of reliability that emerges from measurement assessment practice. In addition, by arguing that the measurement assessment robustness argument has similar characteristics across the physical, social and behavioural sciences, I defend the idea that there is continuity in the notion of measurement reliability across sciences.
Measurement is widely applied because its results are assumed to be more reliable than opinions and guesses, but this reliability is sometimes justified in a stereotyped way. After a critical analysis of such stereotypes, a structural characterization of measurement is proposed, as a partly empirical and partly theoretical process, by showing that it is in fact the structure of the process that guarantees the reliability of its results. On this basis the role and the structure of background knowledge in measurement and the justification of the conditions of object-relatedness ("objectivity") and subject-independence ("intersubjectivity") of measurement are specifically discussed.
This article develops a model-based account of the standardization of physical measurement, taking the contemporary standardization of time as its central case-study. To standardize the measurement of a quantity, I argue, is to legislate the mode of application of a quantity-concept to a collection of exemplary artefacts. Legislation involves an iterative exchange between top-down adjustments to theoretical and statistical models regulating the application of a concept, and bottom-up adjustments to material artefacts in light of remaining gaps. The model-based account clarifies the cognitive role of ad hoc corrections, arbitrary rules and seemingly circular inferences involved in contemporary timekeeping, and explains the stability of networks of standards better than its conventionalist and constructivist counterparts.
The science of metrology characterizes the concept of precision in exceptionally loose and open terms. That is because the details of the concept must be filled in—what I call narrowing of the concept—in ways that are sensitive to the details of a particular measurement or measurement system and its use. Since these details can never be filled in completely, the concept of the actual precision of an instrument system must always retain some of the openness of its general characterization. The idea that there is something that counts as the actual precision of a measurement system must therefore always remain an idealization, a conclusion that would appear to hold very broadly for terms and the concepts they express.
This book provides an introduction to measurement theory for non-specialists and puts measurement in the social and behavioural sciences on a firm mathematical foundation. Results are applied to such topics as measurement of utility, psychophysical scaling and decision-making about pollution, energy, transportation and health. The results and questions presented should be of interest to both students and practising mathematicians, since the author sets forth an area of mathematics unfamiliar to most mathematicians, but which has many potentially significant applications.
In the last few decades the role played by models and modeling activities has become a central topic in the scientific enterprise. In particular, it has been highlighted both that the development of models constitutes a crucial step for understanding the world and that the developed models operate as mediators between theories and the world. Such a perspective is exploited here to address the issue of whether error-based and uncertainty-based modeling of measurement are incompatible, and thus alternatives to one another, as is sometimes claimed nowadays. The crucial problem is whether assuming this standpoint implies definitively renouncing any role for truth and the related concepts, particularly accuracy, in measurement. It is argued here that the well-known objections against true values in measurement, which would lead to rejecting the concept of accuracy as non-operational, or to maintaining it as only qualitative, derive from an unclear distinction between three distinct processes: the metrological characterization of measuring systems, their calibration, and finally measurement. Under the hypotheses that (1) the concept of true value is related to the model of a measurement process, (2) the concept of uncertainty is related to the connection between such a model and the world, and (3) accuracy is a property of measuring systems (and not of measurement results) and uncertainty is a property of measurement results (and not of measuring systems), not only the compatibility but actually the conjoint need of error-based and uncertainty-based modeling emerges.
Comparativism is the view that comparative beliefs (e.g., believing p to be more likely than q) are more fundamental than partial beliefs (e.g., believing p to some degree x), with the latter explicable as theoretical constructs designed to facilitate reasoning about patterns within systems of comparative beliefs that exist under special conditions. In this paper, I first outline several varieties of comparativism, including two 'Ramseyan' varieties which generalise the standard 'probabilistic' approaches. I then provide a general critique that applies to any and all comparativist views. Ultimately, there are too many things that we ought to be able to say about partial beliefs that comparativism renders unintelligible. Moreover, there are alternative ways to account for the measurement of belief that need not face the same expressive limitations.
Measurement is a process aimed at acquiring and codifying information about properties of empirical entities. In this paper we provide an interpretation of such a process, comparing it with what is nowadays considered the standard measurement theory, i.e., the representational theory of measurement. It is maintained here that this theory has its own merits but is incomplete and too abstract, its main weakness being the scant attention it pays to the empirical side of measurement, i.e., to measurement systems and to the ways in which the interactions of such systems with the entities under measurement provide a structure to an empirical domain. In particular it is claimed that (1) it is on the ground of the interaction with a measurement system that a partition can be induced on the domain of entities under measurement and that relations among such entities can be established, and that (2) it is the usage of measurement systems that guarantees a degree of objectivity and intersubjectivity to measurement results. As modeled in this paper, measurement systems link the abstract theory of measuring, as developed in representational terms, and the practice of measuring, as coded in standard documents such as the International Vocabulary of Metrology.
The social welfare functional approach to social choice theory fails to distinguish a genuine change in individual well-beings from a merely representational change due to the use of different measurement scales. A generalization of the concept of a social welfare functional is introduced that explicitly takes account of the scales that are used to measure well-beings so as to distinguish between these two kinds of changes. This generalization of the standard theoretical framework results in a more satisfactory formulation of welfarism, the doctrine that social alternatives are evaluated and socially ranked solely in terms of the well-beings of the relevant individuals. This scale-dependent form of welfarism is axiomatized using this framework. The implications of this approach for characterizing classes of social welfare orderings are also considered.
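The underlying invariance requirement can be stated in its standard textbook form (given here as background; the paper's contribution is precisely to generalize this framework by making scales explicit):

```latex
% Information invariance of a social welfare functional F: for every profile
% U = (u_1, ..., u_n) and every admissible rescaling (\phi_1, ..., \phi_n),
\[
  F(\phi_1 \circ u_1, \dots, \phi_n \circ u_n) = F(u_1, \dots, u_n),
\]
% e.g. cardinal, unit-comparable well-being: \phi_i(x) = \alpha x + \beta_i
% with a common \alpha > 0. Making the scales explicit arguments of F, as the
% paper proposes, is what lets genuine and representational changes come apart.
```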
The first two sections of this paper investigate what Newton could have meant in a now famous passage from "De Gravitatione" (hereafter "DeGrav") that "space is as it were an emanative effect of God." First it offers a careful examination of the four key passages within DeGrav that bear on this. The paper shows that the internal logic of Newton's argument permits several interpretations; in doing so, it calls attention to a Spinozistic strain in Newton's thought. Second it sketches four interpretive options: (i) one approach is generic neo-Platonic; (ii) another approach is associated with the Cambridge Platonist, Henry More; a variant on this (ii*) emphasizes that Newton mixes Platonist and Epicurean themes; (iii) a necessitarian approach; (iv) an approach connected with Bacon's efforts to reformulate a useful notion of form and laws of nature. Hitherto only the second and third options have received attention in scholarship on DeGrav. The paper offers new arguments to treat Newtonian emanation as a species of Baconian formal causation as articulated, especially, in the first few aphorisms of part two of Bacon's New Organon. If we treat Newtonian emanation as a species of formal causation, then the necessitarian reading can be combined with most of the Platonist elements that others have discerned in DeGrav, especially Newton's commitment to doctrines of different degrees of reality as well as the manner in which the first existing being 'transfers' its qualities to space (as a kind of causa-sui). This can clarify the conceptual relationship between space and its formal cause in Newton as well as Newton's commitment to the spatial extended-ness of all existing beings. While the first two sections of this paper engage with existing scholarly controversies, in the final section the paper argues that the recent focus on emanation has obscured the importance of Newton's very interesting claims about existence and measurement in DeGrav. The paper argues that according to Newton God and other entities have the same kind of quantities of existence; Newton is concerned with how measurement clarifies the way of being of entities. Newton is not claiming that measurement reveals all aspects of an entity. But if we measure something then it exists as a magnitude in space and as a magnitude in time. This is why in DeGrav Newton's conception of existence really helps to "lay truer foundations of the mechanical sciences."
Although research on the corporate social responsibility (CSR) dimension of corporate image has notably increased in recent years, the definition and measurement of the concept for academic purposes still concern researchers. In this article, literature regarding the measurement of CSR image from a customer viewpoint is reviewed and areas of improvement are identified. A multistage method is implemented to develop and validate a reliable scale based on stakeholder theory. Results demonstrate the reliability and validity of this new scale for measuring customer perceptions of the CSR performance of their service providers. In this regard, CSR includes corporate responsibilities towards customers, shareholders, employees and society. The scale is consistent among diverse customer cohorts of different gender, age and level of education. Furthermore, results also confirm the applicability of this new scale to structural equation modelling.
According to orthodox quantum mechanics, state vectors change in two incompatible ways: "deterministically" in accordance with Schroedinger's time-dependent equation, and probabilistically if and only if a measurement is made. It is argued here that the problem of measurement arises because the precise mutually exclusive conditions for these two types of transitions to occur are not specified within orthodox quantum mechanics. Fundamentally, this is due to an inevitable ambiguity in the notion of "measurement" itself. Hence, if the problem of measurement is to be resolved, a new, fully objective version of quantum mechanics needs to be developed which does not incorporate the notion of measurement in its basic postulates at all.
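For reference, the two incompatible transition rules contrasted above are standardly written as follows.

```latex
% Deterministic evolution (no measurement):
\[
  i\hbar \,\frac{\partial}{\partial t}\,\lvert\psi\rangle = H\,\lvert\psi\rangle .
\]
% Probabilistic collapse (on measurement, outcome k with projector P_k):
\[
  \lvert\psi\rangle \;\longmapsto\;
  \frac{P_k\lvert\psi\rangle}{\lVert P_k\lvert\psi\rangle\rVert},
  \qquad
  \Pr(k) = \lVert P_k\lvert\psi\rangle\rVert^{2} .
\]
% The measurement problem as posed above: the theory gives no precise,
% mutually exclusive conditions selecting which rule applies.
```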
The quantum theory of de Broglie and Bohm solves the measurement problem, but the hypothetical corpuscles play no role in the argument. The solution finds a more natural home in the Everett interpretation.
Corporate environmental performance (CEP) has been of fundamental interest in scholarly research during the last few decades. However, there is a great deal of disagreement pertaining to the definition, conceptualization, and adequate measurement of CEP. Our study addresses these issues and provides a methodologically rigorous and comprehensive examination of content validity and construct validity. By integrating the available literature on CEP, we derive a parsimonious definition and theoretically sound framework of the focal construct. Drawing on non-aggregated and publicly available data for a sample of 706 firm-years, we test the construct validity of this framework by means of factor analysis. Our results provide evidence for the multidimensional nature of the focal construct. By contrasting our findings with existing measurement approaches in empirical research, we emphasize several deficiencies with regard to the inferences and conclusions yielded in prior research. Future empirical and practically oriented studies can build on our findings and thus provide more stringent results.
This article analyzes the implications of protective measurement for the meaning of the wave function. According to protective measurement, a charged quantum system has mass and charge density proportional to the modulus square of its wave function. It is shown that the mass and charge density is not real but effective, formed by the ergodic motion of a localized particle with the total mass and charge of the system. Moreover, it is argued that the ergodic motion is not continuous but discontinuous and random. This result suggests a new interpretation of the wave function, according to which the wave function is a description of random discontinuous motion of particles, and the modulus square of the wave function gives the probability density of the particles being in certain locations. It is shown that the suggested interpretation of the wave function disfavors the de Broglie-Bohm theory and the many-worlds interpretation but favors the dynamical collapse theories, and the random discontinuous motion of particles may provide an appropriate random source to collapse the wave function.
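The proportionality claim at the start of the abstract can be written out explicitly (a standard rendering, with m and Q the system's total mass and charge):

```latex
% Effective mass and charge densities of a quantum system with wave function \psi:
\[
  \rho_m(\mathbf{x}) = m\,\lvert\psi(\mathbf{x})\rvert^{2},
  \qquad
  \rho_Q(\mathbf{x}) = Q\,\lvert\psi(\mathbf{x})\rvert^{2} .
\]
% The paper's claim: these densities are effective, formed by ergodic (indeed
% random and discontinuous) motion of a localized particle, with |\psi|^2
% giving the probability density of the particle's location.
```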
In a recent paper in this Journal (San Pedro) I formulated a conjecture relating Measurement Independence and Parameter Independence, in the context of common cause explanations of EPR correlations. My conjecture suggested that a violation of Measurement Independence would entail a violation of Parameter Independence as well. Leszek Wroński has shown that conjecture to be false. In this note, I review Wroński's arguments and agree with him on the fate of the conjecture. I argue that what is interesting about the conjecture, however, is not whether it is true or false in itself, but the reasons for the actual verdict, and their implications regarding locality.
The aim of this paper is to give a systematic account of the so-called "measurement problem" in the frame of the standard interpretation of quantum mechanics. It is argued that there is not one but five distinct formulations of this problem. Each of them depends on what is assumed to be a "satisfactory" description of the measurement process in the frame of the standard interpretation. Moreover, the paper points out that each of these formulations refers not to a unique problem, but to a set of sub-problems.
It is wholly uncontroversial that measurements (or, more properly, propositions that are measurement reports) are often paradigmatically good cases of propositions that serve the function of evidence. In normal cases it is also obvious that stating such a report is an utterly pedestrian case of successful assertion. So, for example, there is nothing controversial about the following claims: (1) that a proposition to the effect that a particular thermometer reads 104 °F when properly used to determine the temperature of a particular patient is evidence that the patient in question has a fever, and (2) that there is nothing wrong with asserting the proposition that a particular thermometer reads 104 °F for appropriate reasons of communication, etc., when the thermometer has been properly used to determine the temperature of a particular patient. Here it will be shown that Timothy Williamson's commitments to a number of principles about knowledge and assertion imply that a whole class of utterly ordinary statements like these that are used as evidence are not really evidence, because they are not knowledge, and so are (perversely) unassertable according to his principled commitments. This paper deals primarily with the second of these two problems, and an alternative account of the norms of assertion is introduced which allows for the assertability of such measurement reports.
Neoliberal precepts of the governance of academic science (deregulation, reification of markets, emphasis on competitive allocation processes) have been conflated with those of performance management—if you cannot measure it, you cannot manage it—into a single analytical and, consequently, a single programmatic worldview. As applied to the United States' system of research universities, this conflation leads to two major divergences from relationships hypothesized in the governance of science literature. (1) The governance and financial structures supporting academic science in the United States' system of higher education are sufficiently different from those found in many other OECD countries where these policies have been adopted to produce political pressures for an increase rather than a decrease in governmental control over university affairs. (2) The major impact upon academic science of performance measurement systems has come not externally from new government requirements but internally from the independent adoption of these techniques by universities, initially in the name of rational management and increasingly as devices to foster reputational enhancement. The overall thrust of the two trends in the U.S. has been less a shift, as experienced elsewhere, from bureaucratic to market modes of governance than the displacement of professional-collegial control by internal bureaucratic control.
This paper introduces the reader to Meinong's work on the metaphysics of magnitudes and measurement in his Über die Bedeutung des Weber'schen Gesetzes. According to Russell himself, who wrote a review of Meinong's work on Weber's law for Mind, Meinong's theory of magnitudes deeply influenced Russell's theory of quantities in the Principles of Mathematics. The first and longest part of the paper discusses Meinong's analysis of magnitudes. According to Meinong, we must distinguish between divisible and indivisible magnitudes. He argues that relations of distance, or dissimilarity, are indivisible magnitudes that coincide with divisible magnitudes called "stretches". The second part of the paper is concerned with Meinong's account of measurement as a comparison of parts. According to Meinong, since measuring consists in comparing parts, only divisible magnitudes are directly measurable. Indivisible magnitudes can only be measured indirectly, by measuring the divisible stretches that coincide with them.
Measurement is said to be the basis of the exact sciences as the process of assigning numbers to matter (things or their attributes), thus making it possible to apply the mathematically formulated laws of nature to the empirical world. Mathematics and empiria are best accorded to each other in laboratory experiments, which function as what Nancy Cartwright calls a nomological machine: an arrangement generating (mathematical) regularities. On the basis of accounts of measurement errors and uncertainties, I will argue for two claims: 1) Both fundamental laws of physics, corresponding to an ideal nomological machine, and phenomenological laws, corresponding to a material nomological machine, lie, being highly idealised relative to the empirical reality; and laboratory measurement data also do not describe properties inherent to the world independently of human understanding of it. 2) Therefore the naive, representational view of measurement and experimentation should be replaced with a more pragmatic or practice-based view.
The use of real clocks and measuring rods in quantum mechanics implies a natural loss of unitarity in the description of the theory. We briefly review this point and then discuss the implications it has for the measurement problem in quantum mechanics. The intrinsic loss of coherence allows one to circumvent some of the usual objections to treating the measurement process as due to environmental decoherence.
In this paper, we argue that calls for widespread implementation of ethics measurement systems would be better informed by institutional economic analysis. Specifically, we assert that proponents of such systems must first recognize and understand the institutions that potentially impede such efforts. We identify two potential institutional impediments to measuring ethics and social responsibility. First, we suggest that neoclassical economics, supported by traditional business education and legal precedent, serves to reinforce the notion that shareholders are the primary corporate constituency group. Such an emphasis on the needs of shareholders severely hinders implementation of measurement systems that address the needs of multiple stakeholder groups. Second, we argue that the threat of litigation may constrain corporate managers from measuring and considering ethics and corporate social responsibility matters. In particular, managers may be reluctant to quantify various ethical concerns if the resulting measurements could be used as evidence against the corporation in a lawsuit.
Entanglement has been called the most important new feature of the quantum world. It is expressed in the quantum formalism by the joint measurement formula. We prove the formula for projection valued observables from a plausible assumption, which for spacelike separated measurements is an expression of relativistic causality. The state reduction formula is simply a way to express the joint measurement formula after one measurement has been made, and its result known.
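The formula in question, written here in its standard textbook form for projection-valued observables on a bipartite system (the paper's point is that this form can be derived rather than postulated):

```latex
% Joint measurement formula: outcome a for observable A (projector P_a) on one
% subsystem and outcome b for B (projector Q_b) on the other, in joint state \psi:
\[
  p(a, b) = \langle \psi \rvert\, P_a \otimes Q_b \,\lvert \psi \rangle .
\]
% The state reduction formula then re-expresses this as a conditional
% probability once one of the two outcomes is known.
```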
Psychologists debate whether mental attributes can be quantified or whether they admit only qualitative comparisons of more and less. Their disagreement is not merely terminological, for it bears upon the permissibility of various statistical techniques. This article contributes to the discussion in two stages. First it explains how temperature, which was originally a qualitative concept, came to occupy its position as an unquestionably quantitative concept (§§1–4). Specifically, it lays out the circumstances in which thermometers, which register quantitative (or cardinal) differences, became distinguishable from thermoscopes, which register merely qualitative (or ordinal) differences. I argue that this distinction became possible thanks to the work of Joseph Black, ca. 1760. Second, the article contends that the model implicit in temperature's quantitative status offers a better way for thinking about the quantitative status of mental attributes than models from measurement theory (§§5–6).
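As a gloss on the cardinal/ordinal contrast (mine, not the article's), the distinction maps onto standard scale types:

```latex
% Thermoscope: readings t are meaningful only up to order, i.e. any strictly
% increasing g yields an equally good representation t' = g(t)   (ordinal).
% Thermometer: readings are meaningful up to positive affine transformation,
%   t' = \alpha t + \beta, \alpha > 0   (interval scale),
% so that ratios of temperature differences are invariant:
\[
  \frac{t(a) - t(b)}{t(c) - t(d)}
  \;=\;
  \frac{t'(a) - t'(b)}{t'(c) - t'(d)} .
\]
```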
We investigate the implications of protective measurement for de Broglie-Bohm theory, mainly focusing on the interpretation of the wave function. It has been argued that the de Broglie-Bohm theory gives the same predictions as quantum mechanics by means of the quantum equilibrium hypothesis. However, this equivalence is based on the premise that the wave function, regarded as a Ψ-field, has no mass and charge density distributions. But this premise turns out to be wrong according to protective measurement; a charged quantum system has effective mass and charge densities distributed in space, proportional to the square of the absolute value of its wave function. Then in the de Broglie-Bohm theory both the Ψ-field and the Bohmian particle will have charge density distributions for a charged quantum system. This will result in the existence of an electrostatic self-interaction of the field and an electromagnetic interaction between the field and the Bohmian particle, which not only violates the superposition principle of quantum mechanics but also contradicts experimental observations. Therefore, the de Broglie-Bohm theory as a realistic interpretation of quantum mechanics is problematic according to protective measurement. Lastly, we briefly discuss the possibility that the wave function is not a physical field but a description of some sort of ergodic motion (e.g. random discontinuous motion) of particles.
This paper proposes a new theory of quantum measurement: a state reduction theory in which reduction is to the elements of the number operator basis of a system, triggered by the occurrence of annihilation or creation (or lowering or raising) operators in the time evolution of a system. It is from these operator types that the acronym 'LARC' is derived. Reduction does not occur immediately after the trigger event; it occurs at some later time with probability P_t per unit time, where P_t is very small. Localisation of macroscopic objects occurs in the natural way: photons from an illumination field are reflected off a body and later absorbed by another body. Each possible absorption of a photon by a molecule in the second body generates annihilation and raising operators, which in turn trigger a probability per unit time P_t of a state reduction into the number operator basis for the photon field and the number operator basis of the electron orbitals of the molecule. Since all photons in the illumination field have come from the location of the first body, wherever that is, a single reduction leads to a reduction of the position state of the first body relative to the second, with a total probability of mP_t, where m is the number of photon absorption events. Unusually for a reduction theory, the LARC theory is naturally relativistic.
In this article we discuss the ethical dilemmas facing performance evaluators and the "evaluatees" whose performances are measured in a business context. The concepts of role morality and common morality are used to develop a framework of behaviors that are normally seen as the moral responsibilities of these actors. This framework is used to analyze, based on four empirical situations, why the implementation of a performance measurement system has not been as effective as expected. It was concluded that, in these four cases, unethical behavior (i.e. deviations from the ethical behaviors identified in the framework) provided, at least to some extent, an explanation for the lower than expected effectiveness of the performance measurement procedures. At the end of the paper we present an agenda for further research through which the framework could be further developed and systematically applied to a broader set of cases.
Community Development Finance Institutions (CDFIs) are publicly funded organisations that provide small loans to people in financially underserved areas of the UK. Policy makers have repeatedly sought to understand and measure the performance of CDFIs to ensure the efficient use of public funds, but have struggled to identify an appropriate way of doing so. In this article, we empirically derive a framework that measures the performance of CDFIs through an analysis of their stakeholder relationships. Based on qualitative data from 20 English CDFIs, we develop a typology of CDFIs according to three dimensions: organisational structure, type of lending and type of market served. Following on from this, we derive several propositions that consider how these dimensions relate to the financial and social performance of CDFIs, and provide the basis for a performance measurement framework.
There is a consistent and simple interpretation of the quantum theory of isolated systems. The interpretation suffers no measurement problem and provides a quantum explanation of state reduction, which is usually postulated. Quantum entanglement plays an essential role in the construction of the interpretation.
Signal causality, the prohibition of superluminal information transmission, is the fundamental property shared by quantum measurement theory and relativity, and it is the key to understanding the connection between nonlocal measurement effects and elementary interactions. To prevent those effects from transmitting information between the generating and observing process, they must be induced by the kinds of entangling interactions that constitute measurements, as implied in the Projection Postulate. They must also be nondeterministic as reflected in the Born Probability Rule. The nondeterminism of entanglement-generating processes explains why the relevant types of information cannot be instantiated in elementary systems, and why the sequencing of nonlocal effects is, in principle, unobservable. This perspective suggests a simple hypothesis about nonlocal transfers of amplitude during entangling interactions, which yields straightforward experimental consequences.
This work develops an epistemology of measurement, that is, an account of the conditions under which measurement and standardization methods produce knowledge as well as the nature, scope, and limits of this knowledge. I focus on three questions: (i) how is it possible to tell whether an instrument measures the quantity it is intended to? (ii) what do claims to measurement accuracy amount to, and how might such claims be justified? (iii) when is disagreement among instruments a sign of error, and when does it imply that instruments measure different quantities? Based on a series of case studies conducted in collaboration with the US National Institute of Standards and Technology (NIST), I argue for a model-based approach to the epistemology of physical measurement. To measure a physical quantity, I argue, is to estimate the value of a parameter in an idealized model of a physical process. Such estimation involves inference from the final state ('indication') of a process to the value range of a parameter ('outcome') in light of theoretical and statistical assumptions. Contrary to contemporary philosophical views, measurement outcomes cannot be obtained by mapping the structure of indications. Instead, measurement outcomes as well as claims to accuracy, error and quantity individuation can only be adjudicated relative to a choice of idealized modelling assumptions.
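A minimal single-parameter schematic of this model-based view (illustrative only; the models used in the NIST case studies are far richer):

```latex
% Idealized model of the measurement process: indication I produced from the
% parameter \theta (the quantity intended to be measured) plus disturbance:
\[
  I = f(\theta) + \varepsilon,
  \qquad
  \hat{\theta} = f^{-1}(I),
  \qquad
  u(\hat{\theta}) \approx \bigl\lvert (f^{-1})'(I) \bigr\rvert\, u(I) .
\]
% The outcome is the value range \hat{\theta} \pm u(\hat{\theta}), defensible
% only relative to the idealizations built into f and the statistics of \varepsilon.
```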
This paper challenges "traditional measurement-accuracy realism", according to which there are in nature quantities of which concrete systems have definite values. An accurate measurement outcome is one that is close to the value for the quantity measured. For a measurement of the temperature of some water to be accurate in this sense requires that there be this temperature. But there isn't. Not because there are no quantities "out there in nature" but because the term 'the temperature of this water' fails to refer, owing to idealization and failure of specificity in picking out concrete cases. The problems can be seen as an artifact of vagueness, and doing so facilitates applying Eran Tal's robustness account of measurement accuracy to suggest an attractive way of understanding vagueness in terms of the function of idealization, a way that sidesteps the problems of higher-order vagueness and that shows how idealization provides a natural generalization of what it is to be vague.
Peirce earned his keep making measurements, mainly of gravity but also astronomical, and he made several contributions to the science of measurement. It has been said that his experience measuring had philosophical consequences: his adoption of fallibilism, his argument against necessitarianism, and his conception of inquiry as converging on the truth have all been mentioned. But not much attention has been paid to the curious episode of his making "the study of great men" part of a course in logic: students were asked to rank a long list of men by order of greatness. That was at Johns Hopkins in 1883. I shall argue that that study, together with his reflections on pre-instrumental estimates of stars' brightness, bears directly on the method of phaneroscopy formulated nearly two decades later. In each case, the problem is to show how objectivity is possible under conditions in which it must seem that objectivity is impossible.
The polemical term “interaction-free measurement” (IFM) is analyzed in its interpretative nature. Two seminal works proposing the term are revisited and their underlying interpretations are assessed. The role played by nonlocal quantum correlations (entanglement) is formally discussed and some controversial conceptions in the original treatments are identified. As a result the term IFM is shown to be consistent neither with the standard interpretation of quantum mechanics nor with the lessons provided by the EPR debate.
Ordinary measurement using a standard scale, such as a ruler or a standard set of weights, has two fundamental properties. First, the results are approximate, for example, within 0.1 g. Second, the resulting indistinguishability is transitive, rather than nontransitive, as in the standard psychological comparative judgments without a scale. Qualitative axioms are given for structures having the two properties mentioned. A representation theorem is then proved in terms of upper and lower measures.
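A hedged sketch of the contrast (resolution, function names, and numbers are illustrative, not Suppes's axioms): reporting against a fixed scale induces a transitive indistinguishability, with the reporting interval playing the role of the upper and lower measures, whereas direct threshold comparison is non-transitive.

```python
# Illustrative contrast between scale-based and direct comparison (toy values).
RES = 0.1  # reporting resolution of the standard scale, e.g. 0.1 g

def lower(x: float) -> float:
    """Lower measure: the largest scale point not exceeding x."""
    return RES * int(x / RES)

def upper(x: float) -> float:
    """Upper measure: one resolution step above the lower measure."""
    return lower(x) + RES

def indistinct_scale(x: float, y: float) -> bool:
    # Same reported interval [lower, upper): an equivalence relation, so transitive.
    return lower(x) == lower(y)

def indistinct_threshold(x: float, y: float) -> bool:
    # Direct comparative judgment without a scale: can fail transitivity.
    return abs(x - y) <= RES

a, b, c = 1.00, 1.08, 1.16
print(indistinct_threshold(a, b), indistinct_threshold(b, c),
      indistinct_threshold(a, c))      # True True False -> non-transitive
print(indistinct_scale(a, b), indistinct_scale(b, c),
      indistinct_scale(a, c))          # True False False -> no violation
```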
Van Fraassen has presented in Scientific Representation an attractive notion of measurement as an important part of the empiricist structuralism that he endorses. However, he has been criticized on the grounds that both his notion of measurement and his empiricist structuralism force him to do the very thing he objects to in other philosophical projects—to endorse a controversial metaphysics. This paper proposes a defense of van Fraassen by arguing that his project is indeed a 'metaphysical' project, but one which is very similar to Strawson's 'descriptive metaphysics'; if this is the case, van Fraassen's project may be taken, following recent suggestions made by Ney and Paul, as a form of metaphysics that can potentially make a crucial contribution to scientific inquiry.