Prägnanz was suggested by Max Wertheimer in the 1920s as subsuming all “Laws of Gestalt” as they apply to visual awareness. Thus, it assumes a prominent position in any account of Gestalt phenomena. From a phenomenological perspective, some visual stimuli evidently “have more Prägnanz” than others, so Prägnanz seems to be an intensive quality. Here, we investigate the intricacies that need to be faced on the way to a definition of formal scales. Such measures naturally depend both upon the stimulus and upon the observer. Structural complexity bottlenecks of visual systems play a role, as well as the relevance to biological fitness, that is, the affinity to the optical user interface. This positions the notion of Prägnanz squarely within the realm of biology. Indeed, the familiar “releasers” of ethology are singular cases of extremely high Prägnanz.
The need for quantitative measurement represents a unifying bond that links all the physical, biological, and social sciences. Measurements of such disparate phenomena as subatomic masses, uncertainty, information, and human values share common features whose explication is central to the achievement of foundational work in any particular mathematical science as well as for the development of a coherent philosophy of science. This book presents a theory of measurement, one that is "abstract" in that it is concerned with highly general axiomatizations of empirical and qualitative settings and how these can be represented quantitatively. It was inspired by, and represents a generalization and extension of, the last major research work in this field, Foundations of Measurement Vol. I, by Krantz, Luce, Suppes, and Tversky, published in 1971.
In this paper I assess the adequacy of no-conspiracy conditions employed in the usual derivations of the Bell inequality in the context of EPR correlations. First, I look at the EPR correlations from a purely phenomenological point of view and claim that common cause explanations of these cannot be ruled out. I argue that an appropriate common cause explanation requires that no-conspiracy conditions are re-interpreted as mere common cause-measurement independence conditions. In the right circumstances then, violations of measurement independence need not entail any kind of conspiracy (nor backwards in time causation). To the contrary, if measurement operations in the EPR context are taken to be causally relevant in a specific way to the experiment outcomes, their explicit causal role provides the grounds for a common cause explanation of the corresponding correlations.
Measures of student ethical sensitivity and their increases help to answer questions such as whether accounting ethics should be taught at all. We investigate different sensitivity measures and alternatives to the well-established Defining Issues Test (DIT-2; Rest, J. R. et al.: 1999, Postconventional Moral Thinking: A Neo-Kohlbergian Approach (Lawrence Erlbaum Associates, Mahwah, NJ)), frequently used to measure the effects of undergraduate accounting ethics education. Because the DIT measures cognitive development, which increases with age, the DIT scores for younger accounting students are typically lower, have limited range, and are not likely to vary sufficiently with corresponding choices in ethical dilemmas. Since the DIT measures only the moral judgment component of ethical decision-making, we consider the multidimensional ethical scale (MES) to allow respondents to provide explanations for their moral and other judgments. The MES has been used to measure attitudes related to justice, utility, contractualism, egoism, and relativism. Unfortunately, the MES is not comparable in one dimension to the DIT, and unlike the DIT, the MES has no theoretical or objective base. Therefore, we construct a comparable one-dimensional relative measure, a Composite MES Score, obtained from previous research on practicing accountants. We compare the reliability of this measure to the DIT in explaining the ethical choices of 54 specially chosen, somewhat homogeneous students, whose ages range from 18 to 19, and who are taking a second-semester freshman accounting course at a private, religion-affiliated university. These particular students are relatively untrained in the formal use of questionable accounting choices. These students are less likely to recognize the dilemmas of the MES and are also less likely to demonstrate sufficient variation in their DIT scores, traditionally low for freshmen.
As freshmen, they are recent graduates of high school and more likely guided by other ethical influences, including friends, family, or contractual obligations (some of the MES constructs), than by higher cognitive development. This study confirms these suspicions. We find the DIT scores do not vary sufficiently to explain the moral reasoning of freshmen. For eight dilemmas and 24 choices, we find the DIT score correlates with only three choices, whereas the MES regression models have at least one significant construct for 23 out of 24 ethical choices. The Composite MES Score (a relative measure) also explains 23 out of 24 choices and is statistically related to the DIT in only one of the choices. Unlike the DIT, the Composite MES permits pretesting and retesting with different dilemmas to evaluate changes in ethical sensitivity. These results argue for relative rather than absolute measures of sensitivity and for guides beyond cognitive development (the DIT score) to explain undergraduate student sensitivity.
A ubiquitous argument against mental-state accounts of well-being is based on the notion that mental states like happiness and satisfaction simply cannot be measured. The purpose of this paper is to articulate and to assess this “argument from measurability.” My main thesis is that the argument fails: on the most charitable interpretation, it relies on the false proposition that measurement requires the existence of an observable ordering satisfying conditions like transitivity. The failure of the argument from measurability, however, does not translate into a defense of mental-state accounts as accounts of well-being or of measures of happiness and satisfaction as measures of well-being. Indeed, I argue, the ubiquity of the argument from measurability may have obscured other, very real problems associated with mental-state accounts of well-being – above all, that happiness and satisfaction fail to track well-being – and with measures of happiness and satisfaction – above all, the tendency toward reification. I conclude that the central problem associated with the measurement of, e.g., happiness as a subjectively experienced mental state is not that it is too hard to measure, but rather that it is too easy to measure.
This book provides an introduction to measurement theory for non-specialists and puts measurement in the social and behavioural sciences on a firm mathematical foundation. Results are applied to such topics as measurement of utility, psychophysical scaling and decision-making about pollution, energy, transportation and health. The results and questions presented should be of interest to both students and practising mathematicians since the author sets forth an area of mathematics unfamiliar to most mathematicians, but which has many potentially significant applications.
The debate on probabilistic measures of coherence has been flourishing for about 15 years now. Since it was initiated by papers published around the turn of the millennium, many different proposals have been put forward. This contribution is partly devoted to a reassessment of extant coherence measures. Focusing on a small number of reasonable adequacy constraints, I show that (i) there can be no coherence measure that satisfies all constraints, and that (ii) subsets of these adequacy constraints motivate two different classes of coherence measures. These classes do not coincide with the common distinction between coherence as mutual support and coherence as relative set-theoretic overlap. Finally, I put forward arguments to the effect that for each such class of coherence measures there is an outstanding measure that outperforms all other extant proposals. One of these measures has recently been put forward in the literature, while the other is based on a novel probabilistic measure of confirmation.
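The distinction between coherence as mutual support and coherence as relative set-theoretic overlap can be made concrete with two classic proposals from this literature: Shogenji's ratio measure and the Olsson–Glass overlap measure. The sketch below is illustrative only (the probabilities are invented) and does not reproduce the paper's own preferred measures.

```python
def shogenji(p_a, p_b, p_ab):
    """Shogenji's ratio measure: coherence as mutual support.

    Returns 1 for probabilistically independent propositions,
    > 1 when they support each other, < 1 when they undermine each other.
    """
    return p_ab / (p_a * p_b)

def olsson(p_a, p_b, p_ab):
    """Olsson-Glass overlap measure: coherence as relative overlap,
    P(A & B) / P(A or B)."""
    p_a_or_b = p_a + p_b - p_ab
    return p_ab / p_a_or_b

# Positively dependent propositions: P(A) = P(B) = 0.5, P(A & B) = 0.4
dependent = (shogenji(0.5, 0.5, 0.4), olsson(0.5, 0.5, 0.4))

# Independent propositions: P(A & B) = P(A) * P(B) = 0.25
independent = (shogenji(0.5, 0.5, 0.25), olsson(0.5, 0.5, 0.25))
```

Note the different neutral points: Shogenji's measure returns exactly 1 for independent propositions, while the overlap measure has no fixed neutral value, which is one reason the two measures can order sets of propositions differently.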
Measuring the effectiveness of medical interventions faces three epistemological challenges: the choice of good measuring instruments, the use of appropriate analytic measures, and the use of a reliable method of extrapolating measures from an experimental context to a more general context. In practice each of these challenges contributes to overestimating the effectiveness of medical interventions. These challenges suggest the need for corrective normative principles. The instruments employed in clinical research should measure patient-relevant and disease-specific parameters, and should not be sensitive to parameters that are only indirectly relevant. Effectiveness should always be measured and reported in absolute terms (using measures such as 'absolute risk reduction'), and only sometimes should effectiveness also be measured and reported in relative terms (using measures such as 'relative risk reduction'); employment of relative measures promotes an informal fallacy akin to the base-rate fallacy, which can be exploited to exaggerate claims of effectiveness. Finally, extrapolating from research settings to clinical settings should more rigorously take into account possible ways in which the intervention in question can fail to be effective in a target population.
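The contrast between absolute and relative effect measures is easy to make concrete. A minimal sketch (the event rates are invented for illustration, not drawn from any study in the paper):

```python
def absolute_risk_reduction(control_rate, treatment_rate):
    """ARR: the plain difference in event rates between arms."""
    return control_rate - treatment_rate

def relative_risk_reduction(control_rate, treatment_rate):
    """RRR: the same difference expressed relative to the control rate."""
    return (control_rate - treatment_rate) / control_rate

# Low base rate: 2% of controls vs 1% of treated experience the outcome.
arr = absolute_risk_reduction(0.02, 0.01)  # one percentage point
rrr = relative_risk_reduction(0.02, 0.01)  # advertised as "50% reduction"
```

With a 2% baseline risk, a one-percentage-point improvement can be advertised as a "50% relative risk reduction"; suppressing the base rate in this way is exactly the exaggeration the abstract warns about.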
Corporate social responsibility (CSR) is one of the most prominent concepts in the literature and, in short, indicates the positive impacts of businesses on their stakeholders. Despite the growing body of literature on this concept, the measurement of CSR is still problematic. Although the literature provides several methods for measuring corporate social activities, almost all of them have some limitations. The purpose of this study is to provide an original, valid, and reliable measure of CSR reflecting the responsibilities of a business to various stakeholders. Based on a proposed conceptual framework of CSR, a scale was developed through a systematic scale development process. In the study, exploratory factor analysis was conducted to determine the underlying factorial structure of the scale. Data was collected from 269 business professionals working in Turkey. The results of the analysis provided a four-dimensional structure of CSR, including CSR to social and nonsocial stakeholders, employees, customers, and government.
in a 2nd task (e.g., pleasant vs. unpleasant words for an evaluation attribute). When instructions oblige highly associated categories (e.g., flower + pleasant) to share a response key, performance is faster than when less associated categories (e.g., insect + pleasant) share a key. This performance difference implicitly measures differential association of the 2 concepts with the attribute. In 3..
When making decisions about action to improve animal lives, it is important that we have accurate estimates of how much animals are suffering under different conditions. The current frameworks for making comparative estimates of suffering all fall along the lines of multiplying numbers of animals used by length of life and amount of suffering experienced. However, the numbers used to quantify suffering are usually generated through unreliable and subjective processes which make them unlikely to be correct. In this paper, I look at how we might apply principled methods from animal welfare science to arrive at more accurate scores, which will then help us in making the best decisions for animals. I argue that a combined use of both a whole-animal measure and a combination measurement framework for assessing welfare will give us the most accurate answers to guide our action.
In Inventing Temperature, Chang takes a historical and philosophical approach to examine how scientists were able to use scientific method to test the reliability of thermometers; how they measured temperature beyond the reach of thermometers; and how they came to measure the reliability and accuracy of these instruments without a circular reliance on the instruments themselves. Chang discusses simple epistemic and technical questions about these instruments, which in turn lead to more complex issues about the solutions that were developed.
This long-awaited two-volume set constitutes the definitive presentation of the system of classifying moral judgment built up by Lawrence Kohlberg and his associates over a period of twenty years. Researchers in child development and education around the world, many of whom have worked with interim versions of the system, indeed, all those seriously interested in understanding the problem of moral judgment, will find it an indispensable resource. Volume 1 reviews Kohlberg's stage theory, and the by-now large body of research on the significance and utility of his moral stages. Issues of reliability and validity are addressed. The volume ends with detailed instructions for using the forms in Volume 2. Volume 2, in a specially-designed, user-friendly format, includes three alternative functionally-equivalent forms of the scoring system.
This paper aims to contribute to our understanding of the notion of coherence by explicating in probabilistic terms, step by step, what seem to be our most basic intuitions about that notion, to wit, that coherence is a matter of hanging or fitting together, and that coherence is a matter of degree. A qualitative theory of coherence will serve as a stepping stone to formulate a set of quantitative measures of coherence, each of which seems to capture well the aforementioned intuitions. Subsequently it will be argued that one of those measures does better than the others in light of some more specific intuitions about coherence. This measure will be defended against two seemingly obvious objections.
Social enterprises in the microfinance industry need to adhere to both financial and social demands. Critics argue that there is a mission drift away from the social mission, and this has motivated the introduction of social rating agencies to strengthen the business ethics of microfinance institutions. Using a global dataset of 204 socially rated MFIs from 58 countries, we assess the factors that drive the social performance ratings of MFIs. Overall our results show that social ratings of MFIs are significantly related to financial performance, greater outreach especially in rural areas, well-defined social objectives, staff commitment, service quality and enhanced customer service. We observe that various rating agencies attach different importance to each of the social indicators. The public policy implication is that social rating agencies need to become more transparent, to reduce the information asymmetries between heterogeneous socially motivated investors and the focal MFI.
How do we know when one person or society is 'freer' than another? Can freedom be measured? Is more freedom better than less? This book provides the first full-length treatment of these fundamental yet neglected issues, throwing new light both on the notion of freedom and on contemporary liberalism.
The aim of this essay is to distinguish and analyze several difficulties confronting attempts to reconcile the fundamental quantum mechanical dynamics with Born's rule. It is shown that many of the proposed accounts of measurement fail on at least one of these problems. In particular, only collapse theories and hidden variables theories have a chance of succeeding, and, of the latter, the modal interpretations fail. Any real solution demands new physics.
How can we tell whether an incident counts as a microaggression? How do we draw the boundary between microaggressions and weightier forms of oppression, such as hate crimes? I address these questions by exploring the ontology and epistemology of microaggression, in particular the constitutive relationship between microaggression and systemic social oppression. I argue that we ought to define microaggression in terms of the ambiguous experience that its victims undergo, focusing attention on their perspectives while providing criteria for distinguishing microaggression.
What is the best way of assessing the extent to which people are aware of a stimulus? Here, using a masked visual identification task, we compared three measures of subjective awareness: the Perceptual Awareness Scale (PAS), through which participants are asked to rate the clarity of their visual experience; confidence ratings (CR), through which participants express their confidence in their identification decisions; and post-decision wagering (PDW), in which participants place a monetary wager on their decisions. We conducted detailed explorations of the relationships between awareness and identification performance, looking to determine which scale best correlates with performance, whether we can detect performance in the absence of awareness, and how the scales differ from each other in terms of revealing such unconscious processing. Based on these findings we discuss whether perceptual awareness should be considered graded or dichotomous. Results showed that PAS showed a much stronger performance-awareness correlation than either CR or PDW, particularly for low stimulus intensities. In general, all scales indicated above-chance performance when participants claimed not to have seen anything. However, such above-chance performance only showed when we also observed a correlation between awareness and performance. Thus PAS seems to be the most exhaustive measure of awareness, and we find support for above-chance performance in the absence of subjective awareness, but such unconscious knowledge only contributes to performance when we observe conscious knowledge as well. Similarities and differences between the scales are discussed in the light of consciousness theories and response strategies.
The paper presents an argument for treating certain types of computer simulation as having the same epistemic status as experimental measurement. While this may seem a rather counterintuitive view it becomes less so when one looks carefully at the role that models play in experimental activity, particularly measurement. I begin by discussing how models function as “measuring instruments” and go on to examine the ways in which simulation can be said to constitute an experimental activity. By focussing on the connections between models and their various functions, simulation and experiment one can begin to see similarities in the practices associated with each type of activity. Establishing the connections between simulation and particular types of modelling strategies and highlighting the ways in which those strategies are essential features of experimentation allows us to clarify the contexts in which we can legitimately call computer simulation a form of experimental measurement.
In this contribution, we look – both historically and in the present – at how children are objectified in data and how it is assumed that this objectivation is a way to dismiss ideology, or at least to separate the ideological from the scientific. We argue, however, that the separation of data from ideology is itself a highly ideological choice. As Freire points out: education never was and never can be objective. The objectivation of the child and, more generally, of pedagogy means that the agenda that this serves remains veiled. A closer look at what data are put forward as the means to objectify education reveals that this agenda is deeply individualistic and fit to serve a competitive capitalist society. We argue that if this is the case, it ought to be the result of democratic debate and, therefore, we need more, not fewer, ideologies in pedagogy, because facts and data are always embedded in ideologies. And so they should be.
Do sensory measurements deserve the label of “measurement”? We argue that they do. They fit with an epistemological view of measurement held in current philosophy of science, and they face the same kinds of epistemological challenges as physical measurements do: the problem of coordination and the problem of standardization. These problems are addressed through the process of “epistemic iteration,” for all measurements. We also argue for distinguishing the problem of standardization from the problem of coordination. To exemplify our claims, we draw on olfactory performance tests, especially studies linking olfactory decline to neurodegenerative disorders.
Individual differences in ethical ideology are believed to play a key role in ethical decision making. Forsyth's (1980) Ethics Position Questionnaire (EPQ) is designed to measure ethical ideology along two dimensions, relativism and idealism. This study extends the work of Forsyth by examining the construct validity of the EPQ. Confirmatory factor analyses conducted with independent samples indicated that three factors – idealism, relativism, and veracity – account for the relationships among EPQ items. In order to provide further evidence of the instrument's nomological and convergent validity, correlations among the EPQ subscales, dogmatism, empathy, and individual differences in the use of moral rationales were examined. The relationship between EPQ measures of idealism and moral judgments demonstrated modest predictive validity, but the appreciably weaker influence of relativism and the emergence of a veracity factor raise questions about the utility of the EPQ typology.
What is it to know more? By what metric should the quantity of one's knowledge be measured? I start by examining and arguing against a very natural approach to the measure of knowledge, one on which how much is a matter of how many. I then turn to the quasi-spatial notion of counterfactual distance and show how a model that appeals to distance avoids the problems that plague appeals to cardinality. But such a model faces fatal problems of its own. Reflection on what the distance model gets right and where it goes wrong motivates a third approach, which appeals not to cardinality, nor to counterfactual distance, but to similarity. I close the paper by advocating this model and briefly discussing some of its significance for epistemic normativity. In particular, I argue that the 'trivial truths' objection to the view that truth is the goal of inquiry rests on an unstated, but false, assumption about the measure of knowledge, and suggest that a similarity model preserves truth as the aim of belief in an intuitively satisfying way.
Several authors have argued that causes differ in the degree to which they are ‘specific’ to their effects. Woodward has used this idea to enrich his influential interventionist theory of causal explanation. Here we propose a way to measure causal specificity using tools from information theory. We show that the specificity of a causal variable is not well-defined without a probability distribution over the states of that variable. We demonstrate the tractability and interest of our proposed measure by measuring the specificity of coding DNA and other factors in a simple model of the production of mRNA.
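A standard information-theoretic candidate for such a specificity measure is the mutual information between a cause variable and its effect, which is only defined once a probability distribution over the cause's states is fixed, just as the abstract stresses. The toy distributions below are assumptions for illustration, not the paper's mRNA model.

```python
import math

def mutual_information(joint):
    """I(C;E) in bits, computed from a joint distribution {(c, e): p}."""
    p_c, p_e = {}, {}
    for (c, e), p in joint.items():
        p_c[c] = p_c.get(c, 0.0) + p  # marginal over cause states
        p_e[e] = p_e.get(e, 0.0) + p  # marginal over effect states
    return sum(p * math.log2(p / (p_c[c] * p_e[e]))
               for (c, e), p in joint.items() if p > 0)

# Maximally specific cause: each of two equiprobable cause states
# fixes a distinct effect state (a one-to-one mapping).
specific = {("c0", "e0"): 0.5, ("c1", "e1"): 0.5}

# Non-specific cause: the effect is independent of the cause state.
unspecific = {("c0", "e0"): 0.25, ("c0", "e1"): 0.25,
              ("c1", "e0"): 0.25, ("c1", "e1"): 0.25}
```

The one-to-one mapping yields 1 bit of specificity, the independent case 0 bits; changing the distribution over cause states changes the score, which is why the measure is undefined without one.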
This paper takes a critical look at the empirical studies assessing the effectiveness of teaching courses in business and society and business ethics. It is generally found that students' ethical awareness or reasoning skills improve after taking the courses, yet this improvement appears to be short-lived. The generalizability of these findings is limited due to the lack of extensive empirical research and the inconsistencies in research design, empirical measures, and statistical analysis across studies. Thus, recommendations are presented and discussed for improving the generalizability and sophistication of future research efforts in this area.
We characterize access to empirical objects in biology from a theoretical perspective. Unlike objects in current physical theories, biological objects are the result of a history and their variations continue to generate a history. This property is the starting point of our concept of measurement. We argue that biological measurement is relative to a natural history which is shared by the different objects subjected to the measurement and is more or less constrained by biologists. We call symmetrization the theoretical and often concrete operation which leads to considering biological objects as equivalent in a measurement. Last, we use our notion of measurement to analyze research strategies. Some strategies aim to bring biology closer to the epistemology of physical theories, by studying objects as similar as possible, while others build on biological diversity.
This paper aims to assess knowledge management (KM) maturity at a higher education institution (HEI) in order to determine the variables that most affect knowledge management and enhance the overall performance of the organization. The study was applied to Al-Azhar University in the Gaza Strip, Palestine. The paper draws on the Asian Productivity Organization model to assess KM maturity; a second dimension, assessing high performance, was developed by the authors. The controlled sample was 364. Several statistical tools were used for data analysis and hypothesis testing, including reliability testing using Cronbach's alpha, ANOVA, simple linear regression, and stepwise regression. The overall findings of the current study suggest that the KM maturity (KMM) model is suitable for measurement and can help enhance high performance. The KMM assessment shows that the university's maturity level is at level three. Findings also support the main hypothesis and its sub-hypotheses. The most important factors affecting high performance are: processes, KM leadership, people, KM outcomes, and knowledge processes. Furthermore, the current study is unique by virtue of its nature, scope, and method of investigation, as it is the first study at an HEI in Palestine to explore the status of KMM using the Asian Productivity Organization model.
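Cronbach's alpha, the reliability statistic used in the study above, can be computed directly from item-level scores with the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of totals). A minimal sketch (the data are invented for illustration, not taken from the study):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.

    items: list of per-item score lists, one list per questionnaire item,
    with scores aligned by respondent. Uses population variance throughout.
    """
    k = len(items)            # number of items
    n = len(items[0])         # number of respondents

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    sum_item_var = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - sum_item_var / variance(totals))
```

Two perfectly correlated items give alpha = 1; less consistent items drive alpha down toward (and below) zero.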
In the last few decades the role played by models and modeling activities has become a central topic in the scientific enterprise. In particular, it has been highlighted both that the development of models constitutes a crucial step for understanding the world and that the developed models operate as mediators between theories and the world. This perspective is exploited here to address the issue of whether error-based and uncertainty-based modeling of measurement are incompatible, and thus alternatives to one another, as is sometimes claimed nowadays. The crucial problem is whether adopting this standpoint implies definitively renouncing any role for truth and the related concepts, particularly accuracy, in measurement. It is argued here that the well-known objections against true values in measurement, which would lead one to reject the concept of accuracy as non-operational, or to retain it as only qualitative, derive from an unclear distinction between three distinct processes: the metrological characterization of measuring systems, their calibration, and finally measurement. Under the hypotheses that (1) the concept of true value is related to the model of a measurement process, (2) the concept of uncertainty is related to the connection between such a model and the world, and (3) accuracy is a property of measuring systems (and not of measurement results) and uncertainty is a property of measurement results (and not of measuring systems), not only the compatibility but indeed the joint need of error-based and uncertainty-based modeling emerges.
This paper examines some aspects of the grammar of measurement based on data from non-split and split measure phrase (MP) constructions in Japanese. I claim that the non-split MP construction involves measurement of individuals, while the split MP construction involves measurement of events as well as of individuals. This claim is based on the observation that, while both constructions are subject to some semantic restrictions in the nominal domain, only the split MP construction is sensitive to restrictions in the verbal domain (namely, incompatibility with single-occurrence events and with individual-level predicates, and (un)availability of collective readings). It is shown that these semantic restrictions can be explained by a uniform semantic constraint on the measure function, namely, Schwarzschild's [(2002). The grammar of measurement. Proceedings of Semantics and Linguistic Theory, 24, 241–306] monotonicity constraint. In particular, I argue that, in the two constructions at issue, the measure function is subject to the monotonicity constraint, and that we observe different semantic restrictions depending on whether the measure function applies to a nominal or a verbal domain.
Many philosophers hold that the probability axioms constitute norms of rationality governing degrees of belief. This view, known as subjective Bayesianism, has been widely criticized for being too idealized. It is claimed that the norms on degrees of belief postulated by subjective Bayesianism cannot be followed by human agents, and hence have no normative force for beings like us. This problem is especially pressing since the standard framework of subjective Bayesianism only allows us to distinguish between two kinds of credence functions—coherent ones that obey the probability axioms perfectly, and incoherent ones that don’t. An attractive response to this problem is to extend the framework of subjective Bayesianism in such a way that we can measure differences between incoherent credence functions. This lets us explain how the Bayesian ideals can be approximated by humans. I argue that we should look for a measure that captures what I call the ‘overall degree of incoherence’ of a credence function. I then examine various incoherence measures that have been proposed in the literature, and evaluate whether they are suitable for measuring overall incoherence. The competitors are a qualitative measure that relies on finding coherent subsets of incoherent credence functions, a class of quantitative measures that measure incoherence in terms of normalized Dutch book loss, and a class of distance measures that determine the distance to the closest coherent credence function. I argue that one particular Dutch book measure and a corresponding distance measure are particularly well suited for capturing the overall degree of incoherence of a credence function.
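For the simplest possible case, credences in a proposition and its negation, both a Dutch-book-style measure and a distance measure are easy to write down. The sketch below is a toy illustration under that two-proposition assumption, not the paper's general measures: coherence requires c1 + c2 = 1, the guaranteed loss from selling unit bets at the agent's prices is |c1 + c2 - 1|, and the distance measure is the Euclidean distance to the closest coherent pair (p, 1 - p).

```python
def closest_coherent(c1, c2):
    """Closest coherent pair (p, 1-p) to credences (c1, c2) in A and not-A,
    minimizing squared Euclidean distance."""
    p = (c1 + 1 - c2) / 2
    return (p, 1 - p)

def distance_incoherence(c1, c2):
    """Distance-based measure: how far (c1, c2) sits from the coherent line."""
    p, q = closest_coherent(c1, c2)
    return ((c1 - p) ** 2 + (c2 - q) ** 2) ** 0.5

def dutch_book_incoherence(c1, c2):
    """Sure-loss measure: guaranteed loss per unit stake from bets on A
    and not-A priced at c1 and c2."""
    return abs(c1 + c2 - 1)

# An incoherent agent: credence 0.7 in A and 0.5 in not-A.
db = dutch_book_incoherence(0.7, 0.5)   # sure loss of 0.2 per unit stake
dist = distance_incoherence(0.7, 0.5)   # distance to (0.6, 0.4)
```

Both measures vanish exactly on coherent pairs and grow as the credences drift further from the probability axioms, which is what lets them grade incoherent credence functions rather than merely flag them.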
There are different Bayesian measures to calculate the degree of confirmation of a hypothesis H in respect of a particular piece of evidence E. Zalabardo (Analysis 69:630–635, 2009) is a recent attempt to defend the likelihood-ratio measure (LR) against the probability-ratio measure (PR). The main disagreement between LR and PR concerns their sensitivity to prior probabilities. Zalabardo invokes intuitive plausibility as the appropriate criterion for choosing between them. Furthermore, he claims that it favours the ordering of evidence/hypothesis pairs generated by LR. We will argue, however, that the intuitive non-numerical example provided by Zalabardo does not show that prior probabilities do not affect the degree of confirmation. On account of this, we conclude that there is no compelling reason to endorse LR qua measure of degree of confirmation. On the other hand, we should not forget some technicalities which still benefit PR.
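The disagreement over sensitivity to priors can be shown numerically. A minimal sketch (the likelihoods are chosen for illustration): with the likelihoods held fixed, PR = P(H|E)/P(H) changes as the prior of H changes, while LR = P(E|H)/P(E|not-H) does not.

```python
def likelihood_ratio(prior_h, p_e_given_h, p_e_given_not_h):
    """LR measure: P(E|H) / P(E|not-H). The prior argument is unused,
    which is precisely the point: LR is insensitive to it."""
    return p_e_given_h / p_e_given_not_h

def probability_ratio(prior_h, p_e_given_h, p_e_given_not_h):
    """PR measure: P(H|E) / P(H), computed via Bayes' theorem."""
    p_e = prior_h * p_e_given_h + (1 - prior_h) * p_e_given_not_h
    posterior = prior_h * p_e_given_h / p_e
    return posterior / prior_h  # equivalently P(E|H) / P(E)

# Same likelihoods (0.9 under H, 0.1 under not-H), two different priors:
lr_a = likelihood_ratio(0.5, 0.9, 0.1)   # prior 0.5
lr_b = likelihood_ratio(0.1, 0.9, 0.1)   # prior 0.1 -> same LR
pr_a = probability_ratio(0.5, 0.9, 0.1)  # prior 0.5
pr_b = probability_ratio(0.1, 0.9, 0.1)  # prior 0.1 -> different PR
```

LR stays at 9 under both priors, whereas PR moves from 1.8 to 5, which is the prior-sensitivity the two camps disagree about.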
A prospective introduction -- The received view -- Troubles with the received view -- Are propositional attitudes relations? -- Foundations of a measurement-theoretic account of the attitudes -- The basic measurement-theoretic account -- Elaboration and explication of the proposed measurement-theoretic account.
Many advocates of the Everettian interpretation consider that theirs is the only approach to take quantum mechanics really seriously, and that this approach allows one to deduce a fantastic scenario for our reality, one that consists of an infinite number of parallel worlds that branch out continuously. In this article, written in dialogue form, we suggest that quantum mechanics can be taken even more seriously, if the many-worlds view is replaced by a many-measurements view. This not only allows one to derive the Born rule, thus solving the measurement problem, but also to deduce a one-world non-spatial reality, providing an even more fantastic scenario than that of the multiverse.
Previous studies have found Forsyth's Ethical Position Questionnaire (EPQ) to vary between countries, but none has made a systematic evaluation of its psychometric properties across consumers from many countries. Using confirmatory factor analysis and multi-group LISREL analysis, this paper explores the factor structure of the EPQ and the measurement equivalence in five societies: Austria, Britain, Brunei, Hong Kong and USA. The results suggest that the modified scale, measuring idealism and relativism, was applicable in all five societies. Equivalence was found across Britain, Brunei and USA, but the original scale cannot be used validly.
One of the major roadblocks in conducting Environmental Corporate Social Responsibility (ECSR) research is operationalization of the construct. Existing ECSR measurement tools either require primary data gathering or special subscriptions to proprietary databases that have limited replicability. We address this deficiency by developing a transparent ECSR measure, with an explicit coding scheme, that strictly relies on publicly available data. Our ECSR measure tests favorably for internal consistency and inter-rater reliability, as well as convergent and discriminant validity.
This paper has two main goals: first, it reconstructs Aristotle's account of measurement in his Metaphysics and shows how it connects to modern notions of measurement. Second, it demonstrates that Aristotle's notion of measurement only works for simple measures, but leads him into a dilemma once it comes to measuring complex phenomena, like motion, where two or more different aspects, such as time and space, have to be taken into account. This is shown with the help of Aristotle's reaction to one of the problems Zeno's dichotomy paradox raises: Aristotle implicitly employs a complex measure of motion when solving this problem, while he explicitly characterizes the measure of motion as a simple measure in his Physics.
David Cooper explores and defends the view that a reality independent of human perspectives is necessarily indescribable, a "mystery." Other views are shown to be hubristic. Humanists, for whom "man is the measure" of reality, exaggerate our capacity to live without the sense of an independent measure. Absolutists, who proclaim our capacity to know an independent reality, exaggerate our cognitive powers. In this highly original book Cooper restores to philosophy a proper appreciation of mystery: that is what provides a measure of our beliefs and conduct.
An attempt to resolve the controversy regarding the solution of the Sleeping Beauty Problem in the framework of the Many-Worlds Interpretation led to a new controversy regarding the Quantum Sleeping Beauty Problem. We apply the concept of a measure of existence of a world and reach the solution known as the 'thirder' solution, which differs from Peter Lewis's 'halfer' assertion. We argue that this method provides a simple and powerful tool for analysing rational decision theory problems.
Background: Screen time among adults represents a continuing and growing problem in relation to health behaviors and health outcomes. However, no instrument currently exists in the literature that quantifies the use of modern screen-based devices. The primary purpose of this study was to develop and assess the reliability of a new screen time questionnaire, an instrument designed to quantify use of multiple popular screen-based devices among the US population.

Methods: An 18-item screen-time questionnaire was created to quantify use of commonly used screen devices (e.g. television, smartphone, tablet) across different time points during the week (e.g. weekday, weeknight, weekend). Test-retest reliability was assessed through intra-class correlation coefficients (ICCs) and standard error of measurement (SEM). The questionnaire was delivered online using Qualtrics and administered through Amazon Mechanical Turk (MTurk).

Results: Eighty MTurk workers completed full study participation and were included in the final analyses. All items in the screen time questionnaire showed fair to excellent relative reliability (ICCs = 0.50–0.90; all p < 0.001), except for the item inquiring about the use of smartphone during an average weekend day (ICC = 0.16, p = 0.069). The SEM values were large for all screen types across the different periods under study.

Conclusions: Results from this study suggest this self-administered questionnaire may be used to successfully classify individuals into different categories of screen time use (e.g. high vs. low); however, it is likely that objective measures are needed to increase precision of screen time assessment.
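The two reliability statistics the study reports can be made concrete. The SEM formula below (SEM = SD × √(1 − ICC)) is the standard test-retest one; the scores and the ICC value are hypothetical, since the study's raw data are not reproduced here, and a full ICC(2,1) estimate would additionally require a two-way ANOVA decomposition.

```python
import math
import statistics

def standard_error_of_measurement(scores, icc):
    """SEM = SD * sqrt(1 - ICC): the typical measurement error expressed
    in the same units as the score (here, hours of screen time).
    A perfectly reliable item (ICC = 1) has SEM = 0."""
    sd = statistics.stdev(scores)  # sample standard deviation
    return sd * math.sqrt(1.0 - icc)

# Hypothetical weekday TV-hours responses and a mid-range ICC of 0.70:
scores = [2.0, 4.5, 1.0, 3.0, 6.0, 2.5, 5.0, 0.5]
print(round(standard_error_of_measurement(scores, 0.70), 2))  # ~1.07 hours
```

This illustrates the paper's caveat: even with a "fair" ICC, an SEM on the order of an hour makes fine-grained estimates unreliable, though coarse high/low classification can still work.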
We define a notion of the intelligence level of an idealized mechanical knowing agent. This is motivated by efforts within artificial intelligence research to define real-number intelligence levels of complicated intelligent systems. Our agents are more idealized, which allows us to define a much simpler measure of intelligence level for them. In short, we define the intelligence level of a mechanical knowing agent to be the supremum of the computable ordinals that have codes the agent knows to be codes of computable ordinals. We prove that if one agent knows certain things about another agent, then the former necessarily has a higher intelligence level than the latter. This allows our intelligence notion to serve as a stepping stone to obtain results which, by themselves, are not stated in terms of our intelligence notion (results of potential interest even to readers totally skeptical that our notion correctly captures intelligence). As an application, we argue that these results comprise evidence against the possibility of intelligence explosion (that is, the notion that sufficiently intelligent machines will eventually be capable of designing even more intelligent machines, which can then design even more intelligent machines, and so on).
Measurement is fundamental to all the sciences, the behavioural and social as well as the physical, and in the latter its results provide our paradigms of 'objective fact'. But the basis and justification of measurement is not well understood and is often simply taken for granted. Henry Kyburg Jr proposes here an original, carefully worked out theory of the foundations of measurement, to show how quantities can be defined, why certain mathematical structures are appropriate to them and what meaning attaches to the results generated. Crucial to his approach is the notion of error: it cannot be eliminated entirely, and from its introduction and control, he argues, arises the very possibility of measurement. Professor Kyburg's approach emphasises the empirical process of making measurements. In developing it he discusses vital questions concerning the general connection between a scientific theory and the results which support it (or fail to).
Based on an extensive review of the literature and field surveys, the paper proposes a conceptualization and operationalization of corporate citizenship meaningful in two countries: the United States and France. A survey of 210 American and 120 French managers provides support for the proposed definition of corporate citizenship as a construct including the four correlated factors of economic, legal, ethical, and discretionary citizenship. The managerial implications of the research and directions for future research are discussed.
Measurement instruments assessing multiple emotions during epistemic activities are largely lacking. We describe the construction and validation of the Epistemically-Related Emotion Scales, which measure surprise, curiosity, enjoyment, confusion, anxiety, frustration, and boredom occurring during epistemic cognitive activities. The instrument was tested in a multinational study of emotions during learning from conflicting texts. The findings document the reliability, internal validity, and external validity of the instrument. A seven-factor model best fit the data, suggesting that epistemically-related emotions should be conceptualised in terms of discrete emotion categories, and the scales showed metric invariance across the North American and German samples. Furthermore, emotion scores changed over time as a function of conflicting task information and related significantly to perceived task value and use of cognitive and metacognitive learning strategies.
This book traces how such a seemingly immutable idea as measurement proved so malleable when it collided with the subject matter of psychology. It locates philosophical and social influences reshaping the concept and, at the core of this reshaping, identifies a fundamental problem: the issue of whether psychological attributes really are quantitative. It argues that the idea of measurement now endorsed within psychology actually subverts attempts to establish a genuinely quantitative science and it urges a new direction. It relates views on measurement by thinkers such as Hölder, Russell, Campbell and Nagel to earlier views, like those of Euclid and Oresme. Within the history of psychology, it considers contributions by Fechner, Cattell, Thorndike, Stevens and Suppes, among others. It also contains a non-technical exposition of conjoint measurement theory and recent foundational work by leading measurement theorist R. Duncan Luce.