There has been a recent surge of research interest in morally engaging videogames for entertainment, advocacy and education. We have seen a wealth of analysis and several theoretical models proposed, but experimental evaluation has been scarce. One of the difficulties lies in the measurement of moral engagement. How do we meaningfully measure whether players are engaging with and affected by the moral choices in the games they play? In this paper, we survey the various standard psychometric instruments from the moral psychology literature and discuss how they might be applied in the evaluation of games.
Entanglement measures quantify the amount of quantum entanglement contained in quantum states. In general, different entanglement measures need not be partially ordered. The presence of a definite partial order between two entanglement measures for all quantum states, however, allows for a meaningful conceptualization of sensitivity to entanglement, which will be greater for the entanglement measure that produces the larger numerical values. Here, we have investigated the partial order between the normalized versions of four entanglement measures based on the Schmidt decomposition of bipartite pure quantum states, namely, concurrence, tangle, entanglement robustness and Schmidt number. We have shown that among these four measures, the concurrence and the Schmidt number have the highest and the lowest sensitivity to quantum entanglement, respectively. Further, we have demonstrated how these measures can be used to track the dynamics of quantum entanglement in a simple quantum toy model composed of two qutrits. Lastly, we have employed state-dependent entanglement statistics to compute measurable correlations between the outcomes of quantum observables in agreement with the uncertainty principle. The presented results could be helpful in quantum applications that require monitoring of the available quantum resources for sharp identification of temporal points of maximal entanglement or system separability.
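For bipartite pure states, all four Schmidt-based measures can be computed directly from the Schmidt coefficients (the singular values of the state's coefficient matrix). The sketch below is a hedged illustration: the function names and the particular normalization conventions (generalized concurrence, effective Schmidt number rescaled to [0, 1], and the Vidal-Tarrach pure-state robustness) are our choices and may differ in detail from those used in the paper.

```python
import numpy as np

def schmidt_coefficients(psi, d):
    """Schmidt coefficients of a bipartite pure state |psi> in C^d (x) C^d."""
    psi = np.asarray(psi, dtype=complex).reshape(d, d)
    # Singular values are non-negative; their squares sum to 1 for a normalized state.
    return np.linalg.svd(psi, compute_uv=False)

def normalized_measures(psi, d):
    """Four normalized entanglement measures, each mapped to the range [0, 1]."""
    lam = schmidt_coefficients(psi, d)
    purity = np.sum(lam**4)                      # Tr(rho_A^2) of the reduced state
    conc = np.sqrt(d / (d - 1) * (1 - purity))   # generalized concurrence
    tangle = conc**2                             # tangle = concurrence squared
    schmidt_num = ((1 / purity) - 1) / (d - 1)   # effective Schmidt number, rescaled
    robust = (np.sum(lam)**2 - 1) / (d - 1)      # Vidal-Tarrach robustness, rescaled
    return dict(concurrence=conc, tangle=tangle,
                schmidt_number=schmidt_num, robustness=robust)

# Maximally entangled two-qutrit state: all four measures should equal 1.
d = 3
bell3 = np.eye(d).reshape(-1) / np.sqrt(d)
print(normalized_measures(bell3, d))
```

On the maximally entangled two-qutrit state all four normalized measures evaluate to 1, and on a product state they evaluate to 0, which is the behavior any normalized entanglement measure should exhibit.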
Corporate social responsibility (CSR) is one of the most prominent concepts in the literature and, in short, indicates the positive impacts of businesses on their stakeholders. Despite the growing body of literature on this concept, the measurement of CSR is still problematic. Although the literature provides several methods for measuring corporate social activities, almost all of them have some limitations. The purpose of this study is to provide an original, valid, and reliable measure of CSR reflecting the responsibilities of a business to various stakeholders. Based on a proposed conceptual framework of CSR, a scale was developed through a systematic scale development process. In the study, exploratory factor analysis was conducted to determine the underlying factorial structure of the scale. Data were collected from 269 business professionals working in Turkey. The results of the analysis provided a four-dimensional structure of CSR, including CSR to social and nonsocial stakeholders, employees, customers, and government.
Measuring the effectiveness of medical interventions faces three epistemological challenges: the choice of good measuring instruments, the use of appropriate analytic measures, and the use of a reliable method of extrapolating measures from an experimental context to a more general context. In practice each of these challenges contributes to overestimating the effectiveness of medical interventions. These challenges suggest the need for corrective normative principles. The instruments employed in clinical research should measure patient-relevant and disease-specific parameters, and should not be sensitive to parameters that are only indirectly relevant. Effectiveness always should be measured and reported in absolute terms (using measures such as 'absolute risk reduction'), and only sometimes should effectiveness also be measured and reported in relative terms (using measures such as 'relative risk reduction'). Employment of relative measures promotes an informal fallacy akin to the base-rate fallacy, which can be exploited to exaggerate claims of effectiveness. Finally, extrapolating from research settings to clinical settings should more rigorously take into account possible ways in which the intervention in question can fail to be effective in a target population.
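The contrast between absolute and relative measures is easy to see with hypothetical numbers (ours, not the paper's): an intervention that lowers an event rate from 2% to 1% yields an unimpressive-sounding absolute risk reduction of one percentage point, yet a "50% relative risk reduction".

```python
# Illustrative numbers for a hypothetical trial, not taken from the paper.
control_risk = 0.02   # event rate in the control group
treated_risk = 0.01   # event rate in the treated group

arr = control_risk - treated_risk   # absolute risk reduction
rrr = arr / control_risk            # relative risk reduction
nnt = 1 / arr                       # number needed to treat to avoid one event

print(f"ARR = {arr:.3f} (one percentage point)")
print(f"RRR = {rrr:.0%}  (sounds far more impressive than the ARR)")
print(f"NNT = {nnt:.0f} patients treated per event avoided")
```

The same trial result can thus be reported as "halves the risk" or as "helps 1 patient in 100"; the paper's point is that only the absolute framing resists the base-rate-style exaggeration.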
The need for quantitative measurement represents a unifying bond that links all the physical, biological, and social sciences. Measurements of such disparate phenomena as subatomic masses, uncertainty, information, and human values share common features whose explication is central to the achievement of foundational work in any particular mathematical science as well as to the development of a coherent philosophy of science. This book presents a theory of measurement, one that is "abstract" in that it is concerned with highly general axiomatizations of empirical and qualitative settings and how these can be represented quantitatively. It was inspired by, and represents a generalization and extension of, the last major research work in this field, Foundations of Measurement Vol. I, by Krantz, Luce, Suppes, and Tversky, published in 1971.
in a 2nd task (e.g., pleasant vs. unpleasant words for an evaluation attribute). When instructions oblige highly associated categories (e.g., flower + pleasant) to share a response key, performance is faster than when less associated categories (e.g., insect + pleasant) share a key. This performance difference implicitly measures differential association of the 2 concepts with the attribute. In 3..
This book provides an introduction to measurement theory for non-specialists and puts measurement in the social and behavioural sciences on a firm mathematical foundation. Results are applied to such topics as measurement of utility, psychophysical scaling and decision-making about pollution, energy, transportation and health. The results and questions presented should be of interest to both students and practising mathematicians since the author sets forth an area of mathematics unfamiliar to most mathematicians, but which has many potentially significant applications.
According to the realist interpretation, measurement commits us not just to the logically independent existence of things in space and time, but also to the existence of quantitatively structured properties and relations, and to the existence of real numbers, understood as relations of ratio between specific levels of such attributes. Measurement is defined as the estimation of numerical relations (or ratios) between magnitudes of a quantitative attribute and a unit. The history of scientific measurement, from antiquity to the present, may be interpreted as revealing a progressive deepening in the understanding of this position. First, the concept of ratio was broadened to include ratios between incommensurable magnitudes; second, the concept of a quantitative attribute was broadened to include non-extensive quantities; third, quantitative structure and its relations to ratios and real numbers were elaborated; and finally, the issue of empirically distinguishing between quantitative and non-quantitative structures was addressed. This interpretation of measurement understands it in a way that is continuous with scientific investigation in general, i.e., as an attempt to discover independently existing facts.
The debate on probabilistic measures of coherence has been flourishing for about 15 years now. Initiated by papers published around the turn of the millennium, many different proposals have since been put forward. This contribution is partly devoted to a reassessment of extant coherence measures. Focusing on a small number of reasonable adequacy constraints, I show that (i) there can be no coherence measure that satisfies all constraints, and that (ii) subsets of these adequacy constraints motivate two different classes of coherence measures. These classes do not coincide with the common distinction between coherence as mutual support and coherence as relative set-theoretic overlap. Finally, I put forward arguments to the effect that for each such class of coherence measures there is an outstanding measure that outperforms all other extant proposals. One of these measures has recently been put forward in the literature, while the other is based on a novel probabilistic measure of confirmation.
Derived measurements involve problems of coordination. Conducting them often requires detailed theoretical assumptions about their target, while such assumptions can lack sources of evidence that are independent from these very measurements. In this paper, I defend two claims about problems of coordination. I motivate both by a novel case study on a central measurement problem in the history of physical geodesy: the determination of the earth's ellipticity. First, I argue that the severity of problems of coordination varies according to scientists' predictive and experimental control over perturbations of the measurement process. Second, I identify a methodology by which scientists can solve hard problems of coordination and gradually increase their predictive control over perturbations. I dub this methodology ‘operational pluralism’ since it is driven by the introduction of alternative measurement operations that involve different physical indicators.
When making decisions about action to improve animal lives, it is important that we have accurate estimates of how much animals are suffering under different conditions. The current frameworks for making comparative estimates of suffering all fall along the lines of multiplying numbers of animals used by length of life and amount of suffering experienced. However, the numbers used to quantify suffering are usually generated through unreliable and subjective processes which make them unlikely to be correct. In this paper, I look at how we might apply principled methods from animal welfare science to arrive at more accurate scores, which will then help us in making the best decisions for animals. I argue that a combined use of both a whole-animal measure and a combination measurement framework for assessing welfare will give us the most accurate answers to guide our action.
Several authors have argued that causes differ in the degree to which they are ‘specific’ to their effects. Woodward has used this idea to enrich his influential interventionist theory of causal explanation. Here we propose a way to measure causal specificity using tools from information theory. We show that the specificity of a causal variable is not well-defined without a probability distribution over the states of that variable. We demonstrate the tractability and interest of our proposed measure by measuring the specificity of coding DNA and other factors in a simple model of the production of mRNA.
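One natural information-theoretic reading of specificity, sketched here as an assumption rather than as the paper's exact proposal, is the mutual information between a cause variable and its effect: a maximally specific cause has a one-to-one mapping from its states to effect states, while a fully non-specific cause carries no information about the effect. Note that the measure requires a probability distribution over the cause's states, which is exactly the point the paper presses.

```python
import numpy as np

def mutual_information(joint):
    """I(C;E) in bits from a joint distribution P(c, e) over cause x effect states."""
    joint = np.asarray(joint, dtype=float)
    pc = joint.sum(axis=1, keepdims=True)   # marginal over cause states
    pe = joint.sum(axis=0, keepdims=True)   # marginal over effect states
    mask = joint > 0                        # skip zero cells (0 * log 0 := 0)
    return float(np.sum(joint[mask] * np.log2(joint[mask] / (pc @ pe)[mask])))

# A maximally specific cause: each state of C maps to a unique state of E.
specific = np.eye(3) / 3
# A non-specific cause: E is statistically independent of C.
nonspecific = np.full((3, 3), 1 / 9)

print(mutual_information(specific))     # log2(3), about 1.585 bits
print(mutual_information(nonspecific))  # 0 bits
```

Changing the distribution over the cause's states changes the score, which illustrates why specificity is not well-defined without that distribution.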
What is the best way of assessing the extent to which people are aware of a stimulus? Here, using a masked visual identification task, we compared three measures of subjective awareness: the Perceptual Awareness Scale (PAS), through which participants are asked to rate the clarity of their visual experience; confidence ratings (CR), through which participants express their confidence in their identification decisions; and post-decision wagering (PDW), in which participants place a monetary wager on their decisions. We conducted detailed explorations of the relationships between awareness and identification performance, looking to determine which scale best correlates with performance, whether we can detect performance in the absence of awareness, and how the scales differ from each other in terms of revealing such unconscious processing. Based on these findings we discuss whether perceptual awareness should be considered graded or dichotomous. Results showed that PAS exhibited a much stronger performance-awareness correlation than either CR or PDW, particularly for low stimulus intensities. In general, all scales indicated above-chance performance when participants claimed not to have seen anything. However, such above-chance performance only showed when we also observed a correlation between awareness and performance. Thus PAS seems to be the most exhaustive measure of awareness, and we find support for above-chance performance in the absence of subjective awareness, but such unconscious knowledge only contributes to performance when we observe conscious knowledge as well. Similarities and differences between the scales are discussed in the light of consciousness theories and response strategies.
This investigation sought to find the relationships among multiple dimensions of personality and multiple features of language style. Unlike previous investigations, the current investigation explored, after controlling for other moderators such as culture and socio-demographics, those dimensions of naturalistic spoken language that most closely align with communication. In groups of five to eight players, participants from eight international locales completed hour-long competitive games consisting of a series of ostensible missions. Composite measures of quantity, lexical diversity, sentiment, immediacy and negations were computed with an automated tool called SPLICE and with Linguistic Inquiry and Word Count. We also investigated style dynamics over the course of an interaction. We found predictors of extraversion, agreeableness, and neuroticism, but overall fewer significant associations than prior studies, suggesting greater heterogeneity in language style in contexts entailing interactivity, conversation rather than solitary message production, oral rather than written discourse, and groups rather than dyads. Extraverts were found to maintain greater linguistic style consistency over the course of an interaction. The discussion addresses the potential for Type I error when studying the relationship between language and personality.
This paper addresses what we consider to be the most pressing challenge for the emerging science of consciousness: the measurement problem of consciousness. That is, by what methods can we determine the presence and properties of consciousness? Most methods are currently developed through evaluation of the presence of consciousness in humans, and here we argue that there are particular problems in the application of these methods to nonhuman cases—what we call the indicator validity problem and the extrapolation problem. The first is a problem with applying indicators, developed using the differences between conscious and unconscious processing in humans, to the identification of other conscious vs. nonconscious organisms or systems. The second is a problem in extrapolating any indicators developed in humans or other organisms to artificial systems. However, while pressing ethical concerns add urgency to the attribution of consciousness and its attendant moral status to nonhuman animals and intelligent machines, we cannot wait for certainty, and we advocate the use of a precautionary principle to avoid causing unintentional harm. We also intend that the considerations and limitations discussed in this paper can be used to further analyze and refine the methods of consciousness science, with the hope that one day we may be able to solve the measurement problem of consciousness.
The paper presents an argument for treating certain types of computer simulation as having the same epistemic status as experimental measurement. While this may seem a rather counterintuitive view, it becomes less so when one looks carefully at the role that models play in experimental activity, particularly measurement. I begin by discussing how models function as “measuring instruments” and go on to examine the ways in which simulation can be said to constitute an experimental activity. By focussing on the connections between models and their various functions, simulation, and experiment, one can begin to see similarities in the practices associated with each type of activity. Establishing the connections between simulation and particular types of modelling strategies, and highlighting the ways in which those strategies are essential features of experimentation, allows us to clarify the contexts in which we can legitimately call computer simulation a form of experimental measurement.
How do we know when one person or society is 'freer' than another? Can freedom be measured? Is more freedom better than less? This book provides the first full-length treatment of these fundamental yet neglected issues, throwing new light both on the notion of freedom and on contemporary liberalism.
This paper aims to contribute to our understanding of the notion of coherence by explicating in probabilistic terms, step by step, what seem to be our most basic intuitions about that notion, to wit, that coherence is a matter of hanging or fitting together, and that coherence is a matter of degree. A qualitative theory of coherence will serve as a stepping stone to formulate a set of quantitative measures of coherence, each of which seems to capture well the aforementioned intuitions. Subsequently it will be argued that one of those measures does better than the others in light of some more specific intuitions about coherence. This measure will be defended against two seemingly obvious objections.
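Two well-known quantitative proposals from this literature illustrate how "hanging together" and "matter of degree" can be cashed out probabilistically. These are standard textbook examples (Shogenji's deviation measure and the Olsson/Glass relative overlap), offered as illustration rather than as the measures this paper ultimately defends.

```python
def shogenji(p_a, p_b, p_ab):
    """Shogenji's deviation measure: > 1 coherent, = 1 independent, < 1 incoherent."""
    return p_ab / (p_a * p_b)

def overlap(p_a, p_b, p_ab):
    """Olsson/Glass relative overlap: P(A and B) / P(A or B), ranging over [0, 1]."""
    return p_ab / (p_a + p_b - p_ab)

# Two propositions that strongly "hang together": each has probability 0.3,
# and they are almost always true together (joint probability 0.25).
print(shogenji(0.3, 0.3, 0.25))  # about 2.78, well above the independence value 1
print(overlap(0.3, 0.3, 0.25))   # about 0.71, close to the maximal overlap of 1
```

Both measures yield graded scores, capturing the "matter of degree" intuition, while their disagreement on some probability assignments is exactly the kind of divergence that motivates adequacy constraints on coherence measures.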
In the last few decades the role played by models and modeling activities has become a central topic in the scientific enterprise. In particular, it has been highlighted both that the development of models constitutes a crucial step in understanding the world and that the developed models operate as mediators between theories and the world. This perspective is exploited here to address the issue of whether error-based and uncertainty-based modeling of measurement are incompatible, and thus alternatives to one another, as is sometimes claimed nowadays. The crucial problem is whether assuming this standpoint implies definitively renouncing any role for truth and the related concepts, particularly accuracy, in measurement. It is argued here that the well-known objections against true values in measurement, which would lead to rejecting the concept of accuracy as non-operational, or to maintaining it as only qualitative, derive from an unclear distinction between three distinct processes: the metrological characterization of measuring systems, their calibration, and finally measurement. Under the hypotheses that (1) the concept of true value is related to the model of a measurement process, (2) the concept of uncertainty is related to the connection between such a model and the world, and (3) accuracy is a property of measuring systems (and not of measurement results) and uncertainty is a property of measurement results (and not of measuring systems), not only the compatibility but actually the conjoint need of error-based and uncertainty-based modeling emerges.
Many philosophers hold that the probability axioms constitute norms of rationality governing degrees of belief. This view, known as subjective Bayesianism, has been widely criticized for being too idealized. It is claimed that the norms on degrees of belief postulated by subjective Bayesianism cannot be followed by human agents, and hence have no normative force for beings like us. This problem is especially pressing since the standard framework of subjective Bayesianism only allows us to distinguish between two kinds of credence functions—coherent ones that obey the probability axioms perfectly, and incoherent ones that don’t. An attractive response to this problem is to extend the framework of subjective Bayesianism in such a way that we can measure differences between incoherent credence functions. This lets us explain how the Bayesian ideals can be approximated by humans. I argue that we should look for a measure that captures what I call the ‘overall degree of incoherence’ of a credence function. I then examine various incoherence measures that have been proposed in the literature, and evaluate whether they are suitable for measuring overall incoherence. The competitors are a qualitative measure that relies on finding coherent subsets of incoherent credence functions, a class of quantitative measures that measure incoherence in terms of normalized Dutch book loss, and a class of distance measures that determine the distance to the closest coherent credence function. I argue that one particular Dutch book measure and a corresponding distance measure are particularly well suited for capturing the overall degree of incoherence of a credence function.
The aim of this essay is to distinguish and analyze several difficulties confronting attempts to reconcile the fundamental quantum mechanical dynamics with Born's rule. It is shown that many of the proposed accounts of measurement fail to solve at least one of these problems. In particular, only collapse theories and hidden variables theories have a chance of succeeding, and, of the latter, the modal interpretations fail. Any real solution demands new physics.
The old evidence problem affects any probabilistic confirmation measure based on comparing pr(H|E) and pr(H). The article argues for the following points: (1) measures based on likelihood ratios also suffer old evidence difficulties; (2) the less-discussed synchronic old evidence problem is, in an important sense, the most acute; (3) prominent attempts to solve or dissolve the synchronic problem fail; (4) a little-discussed variant of the standard measure avoids the problem in an appealing way; and (5) this measure nevertheless reveals a different problem for probabilistic confirmation measures, a problem that is unlikely to lend itself to formal solution.
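For concreteness, here are two standard confirmation measures of the kind at issue: the difference measure built on comparing pr(H|E) with pr(H), and a log-likelihood-ratio measure. These are textbook examples; the "little-discussed variant" the article defends is not reconstructed here.

```python
import math

def difference(p_h, p_h_given_e):
    """Standard difference measure: d(H,E) = P(H|E) - P(H)."""
    return p_h_given_e - p_h

def log_likelihood_ratio(p_e_given_h, p_e_given_not_h):
    """Likelihood-ratio measure: l(H,E) = log[ P(E|H) / P(E|not-H) ]."""
    return math.log(p_e_given_h / p_e_given_not_h)

# E raises H's probability from 0.2 to 0.6:
print(difference(0.2, 0.6))            # 0.4 on the difference measure
# E is nine times as likely under H as under not-H:
print(log_likelihood_ratio(0.9, 0.1))  # log(9), about 2.2
```

The old evidence problem bites both: if E was already known, pr(E) = 1 and pr(H|E) = pr(H), so the difference measure scores E as confirmationally inert, and (as the article's point (1) notes) likelihood-ratio measures face analogous difficulties.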
In Measuring the Immeasurable Mind: Where Contemporary Neuroscience Meets the Aristotelian Tradition, Matthew Owen argues that despite its nonphysical character, it is possible to empirically detect and measure consciousness. Toward the end of the previous century, the neuroscience of consciousness set its roots and sprouted within a materialist milieu that reduced the mind to matter. Several decades later, dualism is being dusted off and reconsidered. Although some may see this revival as a threat to a consciousness science aimed at measuring the conscious mind, Owen argues that measuring consciousness, along with the medical benefits of such measurements, is not ruled out by consciousness being nonphysical. Owen proposes the Mind-Body Powers model of neural correlates of consciousness, which is informed by Aristotelian causation and a substance dualist view of human nature inspired by Thomas Aquinas, who often followed Aristotle. In addition to explaining why there are neural correlates of consciousness, the model provides a philosophical foundation for empirically discerning and quantifying consciousness. En route to presenting and applying the Mind-Body Powers model to neurobiology, Owen rebuts longstanding objections to dualism related to the mind-body problem. With scholarly precision and readable clarity, Owen applies an oft-forgotten yet richly developed historical vantage point to contemporary cognitive neuroscience.
It is often said that one person or society is `freer' than another, or that people have a right to equal freedom, or that freedom should be increased or even maximized. Such quantitative claims about freedom are of great importance to us, forming an essential part of our political discourse and theorizing. Yet their meaning has been surprisingly neglected by political philosophers until now. Ian Carter provides the first systematic account of the nature and importance of our judgements about degrees of freedom. He begins with an analysis of the normative assumptions behind the claim that individuals are entitled to a measure of freedom, and then goes on to ask whether it is indeed conceptually possible to measure freedom. Adopting a coherentist approach, the author argues for a conception of freedom that not only reflects commonly held intuitions about who is freer than whom but is also compatible with a liberal or freedom-based theory of justice.
We characterize access to empirical objects in biology from a theoretical perspective. Unlike objects in current physical theories, biological objects are the result of a history and their variations continue to generate a history. This property is the starting point of our concept of measurement. We argue that biological measurement is relative to a natural history which is shared by the different objects subjected to the measurement and is more or less constrained by biologists. We call symmetrization the theoretical and often concrete operation which leads to considering biological objects as equivalent in a measurement. Last, we use our notion of measurement to analyze research strategies. Some strategies aim to bring biology closer to the epistemology of physical theories, by studying objects as similar as possible, while others build on biological diversity.
Thick concepts, namely those concepts that describe and evaluate simultaneously, present a challenge to science. Since science does not have a monopoly on value judgments, what is responsible research involving such concepts? Using measurement of wellbeing as an example, we first present the options open to researchers wishing to study phenomena denoted by such concepts. We argue that while it is possible to treat these concepts as technical terms, or to make the relevant value judgment in-house, the responsible thing to do, especially in the context of public policy, is to make this value judgment through a legitimate political process that includes all the stakeholders of this research. We then develop a participatory model of measurement based on the ideal of co-production. To show that this model is feasible and realistic, we illustrate it with a case study of co-production of a concept of thriving conducted by the authors in collaboration with the UK anti-poverty charity Turn2us.
This article analyses the relationship between the concept of single aspect similarity and proposed measures of similarity. More precisely, it compares eleven measures of similarity in terms of how well they satisfy a list of desiderata, chosen to capture common intuitions concerning the properties of similarity and the relations between similarity and dissimilarity. Three types of measures are discussed: similarity as commonality, similarity as a function of dissimilarity, and similarity as a joint function of commonality and difference. Relative to the desiderata, it is found that a measure of the second type fares the best. However, rather than recommend this measure alone as a measure of similarity, it is suggested that there are at least three separate concepts of single aspect similarity, corresponding to the three types of measures. In light of this proposal, three of the eleven measures (and variants of these) are deemed acceptable.
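The three types of measure can be illustrated with simple feature-set examples. These are our illustrative instances of each type, not reconstructions of the paper's eleven measures: similarity as commonality alone, similarity as a decreasing function of dissimilarity, and similarity as a joint function of commonality and difference (here the familiar Jaccard index).

```python
def sim_commonality(a, b, universe):
    """Type 1: similarity as (normalized) commonality of features."""
    return len(a & b) / len(universe)

def sim_from_dissimilarity(a, b):
    """Type 2: similarity as a decreasing function of dissimilarity
    (symmetric-difference distance mapped through 1 / (1 + d))."""
    return 1 / (1 + len(a ^ b))

def sim_joint(a, b):
    """Type 3: similarity as a joint function of commonality and difference
    (Jaccard index: common features over total features)."""
    return len(a & b) / len(a | b) if a | b else 1.0

x = {"red", "round", "sweet"}
y = {"red", "round", "sour"}
print(sim_joint(x, y))  # 2 common / 4 total = 0.5
```

Running all three measures on the same feature sets makes it easy to see how they can rank pairs of objects differently, which is the kind of divergence the paper's desiderata are designed to adjudicate.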
The problem of measurement is often considered an inconsistency inside the quantum formalism. Many attempts to solve it have been made since the inception of quantum mechanics. The form of these attempts depends on the philosophical position that their authors endorse. I will review some of them and analyze their relevance. The phenomenon of decoherence is often presented as a solution lying inside the pure quantum formalism and not demanding any particular philosophical assumption. Nevertheless, a widely debated question is how to decide between two different interpretations. The first is to consider that the decoherence process has the effect of actually projecting a superposed state onto one of its classically interpretable components, hence doing the same job as the reduction postulate. For the second, decoherence is only a way to show why no macroscopic superposed state can be observed, thus explaining the classical appearance of the macroscopic world, while the quantum entanglement between the system, the apparatus and the environment never disappears. In this case, it remains to explain why only one single definite outcome is observed. In this paper, I examine the arguments that have been given for and against both interpretations and defend a new position, the “Convivial Solipsism”, according to which the outcome that is observed is relative to the observer, different from but in close parallel to Everett’s interpretation and sharing also some similarities with Rovelli’s relational interpretation and Quantum Bayesianism. I also show how “Convivial Solipsism” can help in getting a new standpoint on the EPR paradox, providing a way out of the seemingly unavoidable non-locality of quantum mechanics.
There remains no consensus among social scientists as to how to measure and understand forms of information deprivation such as misinformation. Machine learning and statistical analyses of information deprivation typically contain problematic operationalizations which are too often biased towards epistemic elites' conceptions, which can undermine their empirical adequacy. A mature science of information deprivation should include considerable citizen involvement that is sensitive to the value-ladenness of information quality; such involvement may improve the predictive and explanatory power of extant models.
Do sensory measurements deserve the label of “measurement”? We argue that they do. They fit with an epistemological view of measurement held in current philosophy of science, and they face the same kinds of epistemological challenges as physical measurements do: the problem of coordination and the problem of standardization. These problems are addressed through the process of “epistemic iteration,” for all measurements. We also argue for distinguishing the problem of standardization from the problem of coordination. To exemplify our claims, we draw on olfactory performance tests, especially studies linking olfactory decline to neurodegenerative disorders.
This book traces how such a seemingly immutable idea as measurement proved so malleable when it collided with the subject matter of psychology. It locates philosophical and social influences reshaping the concept and, at the core of this reshaping, identifies a fundamental problem: the issue of whether psychological attributes really are quantitative. It argues that the idea of measurement now endorsed within psychology actually subverts attempts to establish a genuinely quantitative science and it urges a new direction. It relates views on measurement by thinkers such as Hölder, Russell, Campbell and Nagel to earlier views, like those of Euclid and Oresme. Within the history of psychology, it considers contributions by Fechner, Cattell, Thorndike, Stevens and Suppes, among others. It also contains a non-technical exposition of conjoint measurement theory and recent foundational work by leading measurement theorist R. Duncan Luce.
Based on an extensive review of the literature and field surveys, the paper proposes a conceptualization and operationalization of corporate citizenship meaningful in two countries: the United States and France. A survey of 210 American and 120 French managers provides support for the proposed definition of corporate citizenship as a construct including the four correlated factors of economic, legal, ethical, and discretionary citizenship. The managerial implications of the research and directions for future research are discussed.
This paper takes a critical look at the empirical studies assessing the effectiveness of teaching courses in business and society and business ethics. It is generally found that students' ethical awareness or reasoning skills improve after taking the courses, yet this improvement appears to be short-lived. The generalizability of these findings is limited due to the lack of extensive empirical research and the inconsistencies in research design, empirical measures, and statistical analysis across studies. Thus, recommendations are presented and discussed for improving the generalizability and sophistication of future research efforts in this area.
The decision-theoretic account of probability in the Everett or many-worlds interpretation, advanced by David Deutsch and David Wallace, is shown to be circular. Talk of probability in Everett presumes the existence of a preferred basis to identify measurement outcomes for the probabilities to range over. But the existence of a preferred basis can only be established by the process of decoherence, which is itself probabilistic.
In ‘Measuring the Consequences of Rules’, Holly Smith presents two problems involving the indeterminacy of compliance, which she takes to be fatal for all forms of rule-utilitarianism. In this reply, I attempt to dispel both problems.
This book brings together a team of leading theorists to address the question 'What is the right measure of justice?' Some contributors, following Amartya Sen and Martha Nussbaum, argue that we should focus on capabilities, or what people are able to do and to be. Others, following John Rawls, argue for focussing on social primary goods, the goods which society produces and which people can use. Still others see both views as incomplete and complementary to one another. Their essays evaluate the two approaches in the light of particular issues of social justice - education, health policy, disability, children, gender justice - and the volume concludes with an essay by Amartya Sen, who originated the capabilities approach.
A prospective introduction -- The received view -- Troubles with the received view -- Are propositional attitudes relations? -- Foundations of a measurement-theoretic account of the attitudes -- The basic measurement-theoretic account -- Elaboration and explication of the proposed measurement-theoretic account.
This paper challenges “traditional measurement-accuracy realism”, according to which there are in nature quantities of which concrete systems have definite values. An accurate measurement outcome is one that is close to the value for the quantity measured. For a measurement of the temperature of some water to be accurate in this sense requires that there be this temperature. But there isn’t. Not because there are no quantities “out there in nature” but because the term ‘the temperature of this water’ fails to refer owing to idealization and failure of specificity in picking out concrete cases. The problems can be seen as an artifact of vagueness, and so doing facilitates applying Eran Tal’s robustness account of measurement accuracy to suggest an attractive way of understanding vagueness in terms of the function of idealization, a way that sidesteps the problems of higher order vagueness and that shows how idealization provides a natural generalization of what it is to be vague.
What is it to know more? By what metric should the quantity of one's knowledge be measured? I start by examining and arguing against a very natural approach to the measure of knowledge, one on which how much is a matter of how many. I then turn to the quasi-spatial notion of counterfactual distance and show how a model that appeals to distance avoids the problems that plague appeals to cardinality. But such a model faces fatal problems of its own. Reflection on what the distance model gets right and where it goes wrong motivates a third approach, which appeals not to cardinality, nor to counterfactual distance, but to similarity. I close the paper by advocating this model and briefly discussing some of its significance for epistemic normativity. In particular, I argue that the 'trivial truths' objection to the view that truth is the goal of inquiry rests on an unstated, but false, assumption about the measure of knowledge, and suggest that a similarity model preserves truth as the aim of belief in an intuitively satisfying way.
We develop measure theory in the context of subsystems of second order arithmetic with restricted induction. We introduce a combinatorial principle WWKL (weak-weak König's lemma) and prove that it is strictly weaker than WKL (weak König's lemma). We show that WWKL is equivalent to a formal version of the statement that Lebesgue measure is countably additive on open sets. We also show that WWKL is equivalent to a formal version of the statement that any Borel measure on a compact metric space is countably additive on open sets.
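For orientation, WWKL is commonly stated as follows (a standard formulation from the reverse-mathematics literature, not quoted from the paper; the notation is ours):

```latex
% WWKL: every binary tree of positive measure has an infinite path.
% (For a tree T, the density |T \cap 2^n| / 2^n is nonincreasing in n,
% so the limit below exists.)
\[
\textsf{WWKL}:\quad
\text{if } T \subseteq 2^{<\mathbb{N}} \text{ is a tree with }
\lim_{n \to \infty}
\frac{\lvert \{ \sigma \in T : \lvert\sigma\rvert = n \} \rvert}{2^{n}} > 0,
\text{ then } T \text{ has an infinite path.}
\]
```

WKL drops the positive-measure hypothesis and asserts a path through every infinite binary tree, which is why WWKL is the weaker principle.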
This paper assesses knowledge management (KM) maturity at a higher education institution (HEI) in order to determine which variables most strongly affect KM and enhance the overall performance of the organization. The study was conducted at Al-Azhar University in the Gaza Strip, Palestine. KM maturity was assessed using the Asian Productivity Organization (APO) model; a second dimension, assessing high performance, was developed by the authors. The sample comprised 364 respondents. Several statistical tools were used for data analysis and hypothesis testing, including reliability analysis with Cronbach's alpha, ANOVA, simple linear regression, and stepwise regression. The overall findings suggest that the KM maturity model is suitable for measurement and contributes to enhanced performance; the assessment places the university at maturity level three. The findings also support the main hypothesis and its sub-hypotheses. The factors most strongly affecting high performance are processes, KM leadership, people, KM outcomes, and knowledge processes. Furthermore, the study is unique in its nature, scope, and method of investigation, as it is the first study at an HEI in Palestine to explore KM maturity using the APO model.
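As an illustration of one of the reliability tools mentioned above, Cronbach's alpha can be computed from item-level scores as follows. This is a minimal sketch; the Likert-scale item data below are invented for illustration and are not from the study.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.

    items: list of equal-length lists, one list of scores per
    questionnaire item (same respondent order in each list).
    """
    k = len(items)           # number of items
    n = len(items[0])        # number of respondents

    def var(xs):             # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_vars = sum(var(item) for item in items)
    # total score per respondent across all items
    totals = [sum(item[j] for item in items) for j in range(n)]
    return (k / (k - 1)) * (1 - sum_item_vars / var(totals))

# Hypothetical 5-point Likert responses: three items, five respondents
items = [
    [4, 5, 3, 4, 5],
    [4, 4, 3, 5, 5],
    [3, 5, 2, 4, 4],
]
print(round(cronbach_alpha(items), 3))  # → 0.877
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency for a scale.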
This work has been selected by scholars as being culturally important and is part of the knowledge base of civilization as we know it. This work is in the public domain in the United States of America, and possibly other nations. Within the United States, you may freely copy and distribute this work, as no entity has a copyright on the body of the work. Scholars believe, and we concur, that this work is important enough to be preserved, reproduced, and made generally available to the public. To ensure a quality reading experience, this work has been proofread and republished using a format that seamlessly blends the original graphical elements with text in an easy-to-read typeface. We appreciate your support of the preservation process, and thank you for being an important part of keeping this knowledge alive and relevant.
An attempt to resolve the controversy regarding the solution of the Sleeping Beauty Problem in the framework of the Many-Worlds Interpretation led to a new controversy regarding the Quantum Sleeping Beauty Problem. We apply the concept of a measure of existence of a world and reach the solution known as ‘thirder’ solution which differs from Peter Lewis’s ‘halfer’ assertion. We argue that this method provides a simple and powerful tool for analysing rational decision theory problems.
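For context, the 'thirder' answer can be reached by weighting each awakening by the measure of the world it occurs in, written here as μ (our notation; this is a sketch of the standard calculation, not the paper's derivation). On Heads there is one awakening, on Tails two, with μ(H) = μ(T) = 1/2:

```latex
\[
P(\mathrm{Heads} \mid \text{awake})
= \frac{\mu(H)}{\mu(H) + 2\,\mu(T)}
= \frac{1/2}{1/2 + 2 \cdot 1/2}
= \frac{1}{3}.
\]
```

The 'halfer' position instead assigns each world its bare measure 1/2, regardless of how many awakenings it contains.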
Measurement instruments assessing multiple emotions during epistemic activities are largely lacking. We describe the construction and validation of the Epistemically-Related Emotion Scales, which measure surprise, curiosity, enjoyment, confusion, anxiety, frustration, and boredom occurring during epistemic cognitive activities. The instrument was tested in a multinational study of emotions during learning from conflicting texts. The findings document the reliability, internal validity, and external validity of the instrument. A seven-factor model best fit the data, suggesting that epistemically-related emotions should be conceptualised in terms of discrete emotion categories, and the scales showed metric invariance across the North American and German samples. Furthermore, emotion scores changed over time as a function of conflicting task information and related significantly to perceived task value and use of cognitive and metacognitive learning strategies.
One of the major roadblocks in conducting Environmental Corporate Social Responsibility (ECSR) research is operationalization of the construct. Existing ECSR measurement tools either require primary data gathering or special subscriptions to proprietary databases that have limited replicability. We address this deficiency by developing a transparent ECSR measure, with an explicit coding scheme, that strictly relies on publicly available data. Our ECSR measure tests favorably for internal consistency and inter-rater reliability, as well as convergent and discriminant validity.
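Inter-rater reliability for a two-rater coding scheme like the one described is often quantified with Cohen's kappa. The sketch below is illustrative only; the rater codes are hypothetical, and the paper's actual statistic may differ.

```python
def cohen_kappa(r1, r2):
    """Cohen's kappa: agreement between two raters, corrected for chance.

    r1, r2: equal-length lists of categorical codes from two raters.
    """
    n = len(r1)
    labels = set(r1) | set(r2)
    # observed proportion of exact agreement
    p_observed = sum(a == b for a, b in zip(r1, r2)) / n
    # agreement expected by chance from each rater's marginal frequencies
    p_expected = sum(
        (r1.count(label) / n) * (r2.count(label) / n) for label in labels
    )
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical binary codes (e.g. "does this disclosure count as ECSR?")
rater1 = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
rater2 = ["yes", "no", "no", "yes", "no", "yes", "yes", "no"]
print(cohen_kappa(rater1, rater2))  # → 0.5
```

Kappa ranges from below 0 (worse than chance) to 1 (perfect agreement); a published coding scheme would typically report it alongside percent agreement.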