What should we expect and demand of experts, given the authority vested in them and the gravity of decisions in areas such as science and technology policy? Risk management is not a value-free science, and experts are not immune from damaging bias, from vested interests, or from error in their projections of benefits and risks in the application of science and technology to the real world. The concept of inductive risk concerns the seriousness of potential errors and accidents, or more generally of ‘getting it wrong’ in any inductive context of inquiry. The study of inductive risk involves the interplay between applied science/technology and different kinds of values (epistemic, but also moral, social, economic, etc.). It concerns how risks are calculated and assigned, but also more generally how decisions are, and ought to be, made. It can involve the relationships among scientific experts, business interests, policy-makers, and citizen stakeholders in decisions and policy-making that carry risk for those affected. The study of inductive risk, as defined by leading authors in this area, shares with science studies, broadly construed, the conviction that science is a legitimate and pressing site of debate insofar as its implications are relevant to the public, and to public policy-making.
The role of values in expertise has been a question of great concern at least since Dewey’s The Public and its Problems (1927). Expertise may be vitally important for effective public decision-making, yet experts need to be held responsible for the epistemic authority they wield, and for the specific risks of the policies and decisions they advise. Inductive risk is a concept first applied in Hempel’s article “Science and Human Values” (1965), and it helps encapsulate a great deal of debate about the relationship between epistemic and non-epistemic values in science. Hempel allowed that because no evidence can establish a hypothesis with certainty, the acceptance of a hypothesis “carries with it the ‘inductive risk’” (92) that it may turn out to be incorrect. While articulating inductive risk as the risk of error in accepting or rejecting a hypothesis, Hempel’s view largely aligned with others including Levi (1962) and McMullin (1983): value judgments attached to various outcomes or “utilities” in the application of science may be of practical and moral concern, but are not part of science proper: “the scientist is not called upon to make value judgments in their regard as part of his scientific work” (8). This view contrasts with that of Richard Rudner (1953), who argued that scientists qua scientists make value judgments. The insulation of scientific research from social values, as the “value-free” conception of science and scientific objectivity, came under criticism from many in the latter half of the twentieth century. Numerous post-positivist and feminist thinkers urged a re-thinking of that conception and of the cognitive/social value distinction (Rooney 1992; Longino 1990; Machamer and Douglas 1999; Intemann 2005; Mayo and Spanos (eds.) 2009; Elliott and Richards (eds.) 2017).
In an early paper in what would become voluminous work on inductive risk, “Inductive Risk and Values in Science”, Douglas (2000) argues that non-epistemic consequences of error “can and should be considered in the internal stages of science: choice of methodology, characterization of data, and interpretation of results” (559). Since inductive risk arises whenever knowledge is inductively based, and there are often clear or potentially important consequences of getting it wrong, discussions of inductive risk have become a hub for concerns with technology policy. In the article “Science, Values, and Citizens”, Douglas (2017) maintains that in societally relevant areas of science, a focus on inductive risk “opens the door to social and ethical values in the assessment of what counts as sufficient evidence for a claim” (93). Stephen John (2015) defends disentangling and excluding non-epistemic values from science, arguing that the argument from inductive risk does not undercut a rightly conceived value-free conception of science. While the entanglements and attempted disentanglements of science and social values remain a matter of vigorous debate, numerous authors presently utilize the concept of inductive risk not only in regard to policy procedures, but also in articulating the different avenues for legitimate contestation of scientific claims, and in stimulating timely and more robust, multi-directional conversations over science and values.
Biddle, J.B. and Kukla, R. (2017), The Geography of Epistemic Risk. In K.C. Elliott and T. Richards (eds.), Exploring Inductive Risk: Case Studies of Values in Science. Oxford: Oxford University Press.
Douglas, H. (2009), Science, Policy, and the Value-Free Ideal. Pittsburgh, PA: University of Pittsburgh Press.
Elliott, K.C. (2013), Douglas on Values: From Indirect Roles to Multiple Goals, Studies in History and Philosophy of Science Part A 44 (3), 375-383.
Intemann, K. (2005), Feminism, Underdetermination, and Values in Science, Philosophy of Science, 72, 1001-1012.
Machamer, P. and Osbeck, L. (2004), The Social in the Epistemic. In P. Machamer and G. Wolters (eds.), Values, Science and Objectivity. Pittsburgh, PA: University of Pittsburgh Press.