Comparative overall similarity underlies much recent metaphysics and epistemology. It is a poor foundation. Overall similarity is supposed to be an aggregate of similarities and differences in various respects. But there is no good way of combining them all.
In a recent article, Okasha challenges Kuhn’s claim that there is no ‘neutral’ algorithm for theory choice. He argues using Arrow’s ‘impossibility’ theorem that — except under certain favourable conditions concerning the measurability and comparability of theoretical values — there are no theory choice algorithms at all, neutral or otherwise. But Okasha’s argument does not apply to important theory choice problems, among them the case of Copernican and Ptolemaic astronomy that much occupied Kuhn. The reason is that Kuhn’s choice criteria can rank rival theories in only a few ways, which makes the analogue of Arrow’s domain assumption inappropriate. It is hard to see any consequences for Kuhn’s claim, or threat to the rationality of science.
An analogue of Arrow’s theorem has been thought to limit the possibilities for multi-criterial theory choice. Here, an example drawn from Toy Science, a model of theories and choice criteria, suggests that it does not. Arrow’s assumption that domains are unrestricted is inappropriate in connection with theory choice in Toy Science. There are, however, variants of Arrow’s theorem that do not require an unrestricted domain. They require instead that domains are, in a technical sense, ‘rich’. Since there are rich domains in Toy Science, such theorems do constrain theory choice to some extent—certainly in the model and perhaps also in real science.
The social welfare functional approach to social choice theory fails to distinguish a genuine change in individual well-beings from a merely representational change due to the use of different measurement scales. A generalization of the concept of a social welfare functional is introduced that explicitly takes account of the scales that are used to measure well-beings so as to distinguish between these two kinds of changes. This generalization of the standard theoretical framework results in a more satisfactory formulation of welfarism, the doctrine that social alternatives are evaluated and socially ranked solely in terms of the well-beings of the relevant individuals. This scale-dependent form of welfarism is axiomatized using this framework. The implications of this approach for characterizing classes of social welfare orderings are also considered.
Juries, committees and expert panels commonly appraise things of one kind or another on the basis of grades awarded by several people. When everybody's grading thresholds are known to be the same, the results can sometimes be counted on to reflect the graders’ opinion. Otherwise, they often cannot. Under certain conditions, Arrow's ‘impossibility’ theorem entails that judgements reached by aggregating grades do not reliably track any collective sense of better and worse at all. These claims are made by adapting the Arrow–Sen framework for social choice to study grading in groups.
Kenneth Arrow’s “impossibility” theorem—or “general possibility” theorem, as he called it—answers a very basic question in the theory of collective decision-making. Say there are some alternatives to choose among. They could be policies, public projects, candidates in an election, distributions of income and labour requirements among the members of a society, or just about anything else. There are some people whose preferences will inform this choice, and the question is: which procedures are there for deriving, from what is known or can be found out about their preferences, a collective or “social” ordering of the alternatives from better to worse? The answer is startling. Arrow’s theorem says there are no such procedures whatsoever—none, anyway, that satisfy certain apparently quite reasonable assumptions concerning the autonomy of the people and the rationality of their preferences. The technical framework in which Arrow gave the question of social orderings a precise sense and its rigorous answer is now widely used for studying problems in welfare economics. The impossibility theorem itself set much of the agenda for contemporary social choice theory. Arrow accomplished this while still a graduate student. In 1972, he received the Nobel Prize in economics for his contributions.
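The question and answer the abstract describes can be stated compactly. In standard notation (ours, not the abstract's), with X the set of alternatives, n the number of individuals, and L(X) the set of orderings of X:

```latex
% A social welfare function maps each profile of n individual
% orderings of the alternatives to a single social ordering:
f \colon L(X)^n \to L(X)
% Arrow's theorem: if |X| \geq 3, no such f jointly satisfies
% unrestricted domain, weak Pareto, independence of irrelevant
% alternatives, and non-dictatorship.
```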
The hypothetical syllogism is invalid in standard interpretations of conditional sentences. Many arguments of this sort are quite compelling, though, and you can wonder what makes them so. I shall argue that it is our parsimony in regard to connections among events and states of affairs. All manner of things just might, for all we know, be bound up with one another in all sorts of ways. But ordinarily it is better, being simpler, to assume they are unconnected. In so doing, we jump to the conclusions of some compelling but invalid arguments.
Panels, boards, and committees throughout society evaluate all manner of things by grading them, first individually and then collectively. Thus risks are prioritized, research proposals are funded, and candidates are shortlisted for jobs. It is not usual to pick winners in political elections by grading the candidates, but there are examples in history. This article takes up a question about the quality of judgments and decisions made by grading: under which conditions are they likely to be right? An answer comes in the form of a jury theorem for median grading. Here, the collective grade for a thing is the median of its individually assigned grades—the one in the middle, when all of them are listed from "top" to "bottom." A second objective of this article is to suggest a solution to problems of voter ignorance in democracies. The idea is for democratic assemblies to use voting methods that make more of people's limited knowledge than do commonly used methods, such as majority voting. It turns out that in theory anyway, and perhaps also in practice, median grading can enable unenlightened assemblies to “track the truth”—even as majority voting would run them off the rails.
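The median rule the abstract describes can be sketched in a few lines. This is an illustrative sketch, not the article's own code: the function name, the encoding of the grade language as a top-to-bottom list, and the tie convention for even-sized panels are all our assumptions.

```python
def median_grade(grades, scale):
    """Return the median of the individually assigned grades.

    `scale` lists the shared language of grades from "top" to "bottom",
    e.g. ["A", "B", "C", "D", "F"]. The collective grade is the one in
    the middle when all assigned grades are listed from top to bottom.
    With an even number of graders we take the upper of the two middle
    grades (an assumption; other tie conventions exist).
    """
    ranked = sorted(grades, key=scale.index)  # list grades from top to bottom
    return ranked[(len(ranked) - 1) // 2]     # pick the middle one
```

For five graders awarding B, A, C, C, D on the scale above, the listed order is A, B, C, C, D and the median grade is C.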
Among other good things, supervaluation is supposed to allow vague sentences to go without truth values. But Jerry Fodor and Ernest Lepore have recently argued that it cannot allow this, at least not if it also respects certain conceptual truths. The main point I wish to make here is that they are mistaken. Supervaluation can leave truth-value gaps while respecting the conceptual truths they have in mind.
Syntactical treatments of propositional attitudes are attractive to artificial intelligence researchers. But results of Montague (1974) and Thomason (1980) seem to show that syntactical treatments are not viable. They show that if representation languages are sufficiently expressive, then axiom schemes characterizing knowledge and belief give rise to paradox. Des Rivières and Levesque (1988) characterize a class of sentences within which these schemes can safely be instantiated. These sentences do not quantify over the propositional objects of knowledge and belief. We argue that their solution is incomplete, and extend it by characterizing a more inclusive class of sentences over which the axiom schemes can safely range. Our sentences do quantify over propositional objects.
Making good decisions depends on having accurate information – quickly, and in a form in which it can be readily communicated and acted upon. Two features of medical practice can help: deliberation in groups and the use of scores and grades in evaluation. We study the contributions of these features using a multi-agent computer simulation of groups of physicians. One might expect individual differences in members’ grading standards to reduce the capacity of the group to discover the facts on which well-informed decisions depend. Observations of the simulated groups suggest on the contrary that this kind of diversity can in fact be conducive to epistemic performance. Sometimes, it is adopting common standards that may be expected to result in poor decisions.
Sir David Ross introduced prima facie duties, or acts with a tendency to be duties proper. He also spoke of general prima facie principles, which attribute to acts having some feature the tendency to be a duty proper. Like Utilitarians from Mill to Hare, he saw a role for such principles in the epistemology of duty: in the process by means of which, in any given situation, a moral code can help us to find out what we ought to do. After formalizing general prima facie principles as universally quantified conditionals I will show how seeming duties can be detached from them. There will be examples involving lies, burnt offerings and the question of whether to have a napkin on your lap while eating asparagus. They will illustrate the defeasibility of this detachment, how it can lead into dilemmas, and how general prima facie principles are overridden by more specific ones.
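On one possible reading of the formalization the abstract announces (the predicate letters and the operator are our own notation, not the paper's), a general prima facie principle and its detachment might look like:

```latex
% General prima facie principle: acts with feature F have a
% tendency to be duties proper.
\forall x\,\bigl(F(x) \rightarrow O_{\mathit{pf}}(x)\bigr)
% Defeasible detachment: from F(a) and the principle, infer the
% seeming duty O_{\mathit{pf}}(a) -- subject to being overridden
% by a more specific principle or a conflicting one.
```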
Avalanche studies have undergone a transition in recent years. Early research focused mainly on environmental factors. More recently, attention has turned to human factors in decision making, such as behavioural and cognitive biases. This article adds a social component to this human turn in avalanche studies. It identifies lessons for decision making by groups of skiers from the perspective of social choice theory, a sub-field of economics, decision theory, philosophy and political science that investigates voting methods and other forms of collective decision making. In the first part, we outline the phenomenon of wisdom of crowds, where groups make better decisions than their individual members. Drawing on the conceptual apparatus of social choice theory and using idealised scenarios, we identify conditions under which wisdom of crowds arises and also explain how and when deciding together can instead result in worse decisions than may be expected from individual group members. In the second part, we use this theoretical understanding to offer practical suggestions for decision making in avalanche terrain. Finally, we make several suggestions for risk management in other outdoor and adventure sports and for outdoor sports education.
A computer simulation is used to study collective judgements that an expert panel reaches on the basis of qualitative probability judgements contributed by individual members. The simulated panel displays a strong and robust crowd wisdom effect. The panel's performance is better when members contribute precise probability estimates instead of qualitative judgements, but not by much. Surprisingly, it doesn't always hurt for panel members to interpret the probability expressions differently. Indeed, coordinating their understandings can be much worse.
Objectives: To explore how factors relating to grades and grading affect the correctness of choices that grant-review panels make among submitted proposals, and to identify interventions in panel design that may be expected to increase the correctness of choices. Method: Experimentation with an empirically calibrated computer simulation model of panel review. Model parameters are set in accordance with procedures at a national science funding agency. Correctness of choices among research proposals is operationalized as agreement with the choices of an elite panel. Conclusions: The simulation model generates several hypotheses to guide further research. Increasing the number of grades used by panel members increases the correctness of simulated choices among submitted proposals. Collective decision procedures giving panels a greater capacity for discriminating among proposals also increase correctness. Surprisingly, differences in grading standards among panel members do not appreciably decrease correctness.
The method of supergrading is introduced for deriving a ranking of items from scores or grades awarded by several people. Individual inputs may come in different languages of grades. Diversity in grading standards is an advantage, enabling rankings derived by this method to separate more items from one another. A framework is introduced for studying grading on the basis of observations. Measures of accuracy, reliability and discrimination are developed within this framework. Ability in grading is characterized for individuals and groups as the capacity to grade reliably, accurately and at a high level of discrimination. It is shown that the collective ability of a supergrading group with diverse standards can be greater than that of a less diverse group whose members have greater ability.
The so-called Ramsey test is a semantic recipe for determining whether a conditional proposition is acceptable in a given state of belief. Informally, it can be formulated as follows: (RT) Accept a proposition of the form "if A, then C" in a state of belief K, if and only if the minimal change of K needed to accept A also requires accepting C. In Gärdenfors (1986) it was shown that the Ramsey test is, in the context of some other weak conditions, on pain of triviality incompatible with the following principle, which was there called the preservation criterion: (P) If a proposition B is accepted in a given state of belief K and the proposition A is consistent with the beliefs in K, then B is still accepted in the minimal change of K needed to accept A. (RT) provides a necessary and sufficient criterion for when a 'positive' conditional should be included in a belief state, but it does not say anything about when the negation of a conditional sentence should be accepted. A very natural candidate for this purpose is the following negative Ramsey test: (NRT) Accept the negation of a proposition of the form "if A, then C" in a consistent state of belief K, if and only if the minimal change of K needed to accept A does not require accepting C. This note shows that (NRT) leads to triviality results even in the absence of additional conditions like (P).
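In the usual belief-revision notation, writing K * A for the minimal change of K needed to accept A and A > C for the conditional "if A, then C", the two tests stated informally above read:

```latex
\begin{align*}
\textbf{(RT)}\quad  & (A > C) \in K \;\Longleftrightarrow\; C \in K \ast A\\
\textbf{(NRT)}\quad & \neg(A > C) \in K \;\Longleftrightarrow\; C \notin K \ast A
  \qquad (K \text{ consistent})
\end{align*}
```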