Over the last decade, philosophers of science have extensively criticized the epistemic superiority of randomized controlled trials for testing the safety and effectiveness of new drugs, defending instead various forms of evidential pluralism. We argue that scientific methods in regulatory decision-making cannot be assessed in epistemic terms only: there are costs involved. Drawing on the legal distinction between rules and standards, we show that drug regulation based on evidential pluralism has much higher costs than our current RCT-based system. We analyze these costs and advocate for evaluating any scheme for drug regulatory tests in terms of concrete empirical benchmarks, like the error rates of regulatory decisions.
Why do scientists reach an agreement on new experimental methods when there are conflicts of interest about the evidence they yield? I argue that debiasing methods play a crucial role in this consensus, providing a warrant of the impartiality of the outcome with respect to the preferences of the different parties involved in the experiment. From a contractarian perspective, I contend that an epistemic prerequisite for scientists to agree on an experimental method is that the latter is neutral regarding their competing interests. I present two medical experiments (on smallpox inoculation and Mesmerism) in which debiasing procedures such as blinding and data tabulation provided warrants of impartiality that made people agree on the experimental design even if they disagreed on the outcome.
Pharmaceutical paternalism is the normative stance upheld by pharmaceutical regulatory agencies like the US Food and Drug Administration. These agencies prevent patients from accessing treatments not declared safe and effective, for the patient's own good and without their consent. Libertarian critics of the FDA have pointed out a number of significant flaws in regulatory paternalism. Against these objections, I will argue that, in order to make an informed decision about treatments, a libertarian patient should request full disclosure of the uncertainty about an experimental treatment. But pharmaceutical markets, on their own, are not a reliable source of information about such uncertainty. And companies have the power to capture any independent expert who may assess it. Therefore, the libertarian is better off deferring the assessment of pharmaceutical risks to an independent regulatory body, constraining access to treatments until they are tested.
Debiasing procedures are experimental methods aimed at correcting errors arising from the cognitive biases of the experimenter. We discuss two of these methods, the predesignation rule and randomization, showing to what extent they are open to the experimenter's regress: there is no metarule to prove that, after implementing the procedure, the experimental data are actually free from biases. We claim that, from a contractarian perspective, these procedures are nonetheless defensible since they provide a warrant of the impartiality of the experiment: for prima facie acceptance, we only need proof that the result has not been intentionally manipulated.
This paper discusses the so-called non-interference assumption (NIA) grounding causal inference in trials in both medicine and the social sciences. It states that for each participant in the experiment, the value of the potential outcome depends only upon whether she or he gets the treatment. Drawing on methodological discussion in clinical trials and laboratory experiments in economics, I defend the necessity of partial forms of blinding as a warrant of the NIA, to control the participants' expectations and their strategic interactions with the experimenter.
In this paper, I suggest that placebo effects, as we know them today, should be understood as experimental phenomena, low-level regularities whose causal structure is grasped through particular experimental designs with little theoretical guidance. Focusing on placebo interventions with needles for pain reduction (one of the few placebo regularities that seems to arise in meta-analytical studies), I discuss the extent to which it is possible to decompose the different factors at play through more fine-grained randomized clinical trials. My sceptical argument is twofold. On the one hand, I argue that experiments alone are not enough to standardize interventions, and that it is necessary to include theories. On the other hand, I argue that the social interactions that seem to be part of placebo effects are difficult, if not impossible, to blind. Therefore, the measurement biases arising from the participants' reactivity to the experimental setup cannot be controlled for. Further decomposition of placebo effects requires a theoretical account of the existing experimental regularities that may guide further tests.
In this article we explore an argumentative pattern that provides a normative justification for expected utility functions grounded in empirical evidence, showing how it worked in three different episodes of their development. The argument claims that we should prudentially maximize our expected utility since this is the criterion effectively applied by those who are considered wisest in making risky choices (be they gamblers or businessmen). Yet, to justify the adoption of this rule, it should be proven that it is empirically true: i.e. that a given function allows us to predict the choices of that particular class of agents. We show how expected utility functions were introduced and contested in accordance with this pattern in the 18th century and how it recurred in the 1950s when Allais made his case against the neo-Bernoullians.
I will open the first part of this paper by trying to elucidate the frequentist foundations of RCTs. I will then present a number of methodological objections against the viability of these inferential principles in the conduct of actual clinical trials. In the following section, I will explore the main ethical issues in frequentist trials, namely those related to randomisation and the use of stopping rules. In the final section of the first part, I will analyse why RCTs were accepted for regulatory purposes. I contend that their main virtue, from a regulatory viewpoint, is their impartiality, which is grounded in randomisation and fixed rules for the interpretation of the experiment. Thus the question will be whether Bayesian trials can match or exceed the achievements of frequentist RCTs in all these respects. In the second part of the paper, I will first present a quick glimpse of the introduction of Bayesianism in the field of medical experiments, followed by a summary presentation of the basic tenets of a Bayesian trial. The point here is to show that there is no such thing as “a” Bayesian trial. Bayesianism can ground many different approaches to medical experiments and we should assess their respective virtues separately. Thus I present two actual trials, planned with different goals in mind, and assess their respective epistemic, ethical and regulatory merits. In a tentative conclusion, I contend that, given the constraints imposed by our current regulatory framework, impartiality should preside over the design of clinical trials, even at the expense of many of their inferential and ethical virtues.
Did the impartiality of clinical trials play any role in their acceptance as regulatory standards for the safety and efficacy of drugs? According to the standard account of early British trials in the 1930s and 1940s, their impartiality was just rhetorical: the public demanded fair tests and statistical devices such as randomization created an appearance of neutrality. In fact, the design of the experiment was difficult to understand and the British authorities took advantage of it to promote their own particular interests. I claim that this account is based on a poorly defined concept of experimental fairness. I present an alternative approach in which a test would be impartial if it incorporates warrants of non-manipulability. With this concept, I reconstruct the history of British trials, showing that they were indeed fair and that this fairness played a role in their acceptance as regulatory yardsticks.
In this paper I offer an account of the normative dimension implicit in D. Bernoulli’s expected utility functions by means of an analysis of the juridical metaphors upon which the concept of mathematical expectation was moulded. Following a suggestion by the late E. Coumet, I show how this concept incorporated a certain standard of justice which was put in question by the St. Petersburg paradox. I contend that Bernoulli would have solved it by introducing an alternative normative criterion rather than a positive model of decision making processes.
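As a standard textbook illustration of the paradox mentioned above (my own formulation, not drawn from the paper): a fair coin is tossed until heads first appears on toss n, paying 2^n ducats; the mathematical expectation then diverges, whereas a logarithmic utility of the kind Bernoulli proposed assigns the gamble a finite value:

\[
E[X] = \sum_{n=1}^{\infty} \frac{1}{2^{n}}\, 2^{n} = \infty,
\qquad
E[\log X] = \sum_{n=1}^{\infty} \frac{1}{2^{n}} \log 2^{n} = 2 \log 2 .
\]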
I analyze cultural relativism as a methodological strategy to correct for ethnocentric biases in anthropological fieldwork. I discuss the format that debiasing norms may adopt depending on whether a discipline has a causal or interpretative outlook. Franz Boas and his school advocated for an interpretative approach to ethnographic fieldwork, in which cultural relativism was implemented as a standard to be interpreted by expert third parties. Legitimate as it may be as a debiasing method, it does not allow anthropologists to adjudicate their debates on biases in their ethnographic record.
In actuarial parlance, the price of an insurance policy is considered fair if customers bearing the same risk are charged the same price. The estimate of this fair amount hinges on the expected value obtained by weighting the different claims by their probability. We argue that, historically, this concept of actuarial fairness originates in an Aristotelian principle of justice in exchange. We will examine how this principle was formalized in the 16th century and how it shaped life insurance over the following two hundred years in two different interpretations. The Domatian account of actuarial fairness relied on subjective uncertainty: an agreement on risk was fair if both parties were equally ignorant about the chances of an uncertain event. The objectivist version grounded any agreement on an objective risk estimate drawn from a mortality table. We will show how the objectivist approach collapsed in the market for life annuities during the 18th century, leaving open the question of why we still speak of actuarial fairness as if it were an objective expected value.
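As a minimal sketch of the expected-value standard mentioned above (the notation is mine, not the paper's): a premium P is actuarially fair when it equals the possible claims x_i weighted by their probabilities p_i, so that customers bearing the same risk distribution pay the same price:

\[
P = \sum_{i} p_{i}\, x_{i}
\]

For instance, a one-year policy paying 1000 with a claim probability of 0.01 would carry a fair premium of 0.01 × 1000 = 10.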
One of the defining features of the classical gene was its position. In molecular genetics, positions are defined instead as nucleotide numbers and there is no clear correspondence with their classical counterpart. However, the classical gene position did not simply disappear with the development of the molecular approach, but survived in the lab, associated with different genetic practices. The survival of classical gene position would illustrate Waters’ view about the practical persistence of the genetic approach beyond reductionist and anti-reductionist claims. We show instead that at the level of laboratory practices there are also reductive processes, operating through the rise and fall of different techniques. Molecular markers made the concept of classical gene position practically dispensable, leading us to rethink whether it had any causal role or was a mere heuristic.
In this paper I study how the theoretical categories of consumption theory were used by Milton Friedman in order to classify empirical data and obtain predictions. Friedman advocated a case-by-case definition of these categories that traded theoretical coherence for empirical content. I contend that this methodological strategy creates a clear incentive to contest any prediction contrary to our interest: it can always be argued that such predictions rest on a wrong classification of data. My conjecture is that this methodological strategy can help to explain why Friedman’s predictions never generated the consensus he expected among his peers.
Here we explore the connection between the concepts of risk and equality in the impartial spectator argument. The conception of justice that an impartial spectator would choose is justified by the purity of the choice procedure. However, if we model this decision using risk measures common in financial mathematics, we will see how the spectator's choice criterion under the veil of ignorance contains an implicit preference for the resulting degree of inequality. This forces us to reconsider the procedural purity of the choice.
In this paper I study Milton Friedman’s statistical education, paying special attention to the different methodological approaches (Fisher, Neyman and Savage) to which he was exposed. I contend that these statistical procedures involved different views as to the evaluation of statistical predictions. In this light, the thesis defended in Friedman’s 1953 methodological essay appears substantially ungrounded.
Jesús Zamora Bonilla, a professor at Universidad Carlos III, is an author well known to readers of Theoria for his publications in the general philosophy of science and the philosophy of economics. Only his distributor is to blame for the fact that many readers are still unaware of his first book, Mentiras a medias, an extensive study of the idea of verisimilitude that includes an original proposal, already discussed in international forums. The work in the general epistemology of science presented here is articulated, moreover, with his contributions to the economics of science and to the rational reconstruction of economic methodology itself. It is therefore worthwhile to return to the foundations of this project since, as is well known, the concept of verisimilitude raises no few difficulties.
My aim here is to establish a dialogue with the pragmatist conception of probability defended by Roberto Torretti, starting from the propensity approach. In the first part of the paper, I want to show in what sense mathematical expectation formalized an Aristotelian principle of justice. In the second part, drawing on the work of G. Shafer and V. Vovk, I will illustrate how that normativity can be systematically illuminated from a conception of probability articulated on game theory. We will thus see that there is a pragmatist dimension in probability that meets Torretti's desiderata, even though it does not rest on physical symmetries.
We discuss the role of practical costs in the epistemic justification of a novice choosing expert advice, taking as a case study the choice of an expert statistician by a lay politician. First, we refine Goldman’s criteria for the assessment of this choice, showing how the costs of not being impartial impinge on the epistemic justification of the different actors involved in the choice. Then, drawing on two case studies, we discuss in which institutional setting the costs of partiality can play an epistemic role. In this way, we intend to show how the sociological explanation of the choice of experts can incorporate its epistemic justification.
Looking at the catalogue of our Spanish Instituto Nacional de Estadística, who would not be surprised to discover among the collections sponsored by the French INED one devoted to the Classics of Economics and Population, with meticulously prepared editions of Condorcet, Süssmilch, Quesnay, Graunt... This collection is now joined, under the direction of Eric Brian, by another series of historical studies and research, whose first volume we review here. Matemáticas y acción política [Mathematics and Political Action], compiled by Thierry Martin, is, moreover, an excellent representation of the work on social mathematics being carried out in France from multiple perspectives.
In this paper I explore a positivist methodological tradition in early demand theory, as exemplified by several common traits that I draw from the works of V. Pareto, H. L. Moore and H. Schultz. Assuming a current approach to explanation in the social sciences, I will discuss the building of their various explanans, showing that the three authors agreed on two distinctive methodological features: the exclusion of any causal commitment to psychology when explaining individual choice and the mandate to test the truth of demand theory on aggregate data by statistical means. However, I also contend, from an epistemological point of view, that the truth of demand theory was conceived of in three different ways by our authors. Inspired by Poincaré, Pareto assumed that many different theories could account for the same data on individual choice, coming close to a kind of conventionalism, though I prefer to refer to this position as theoreticism. Moore, for his part, was close to Pearson's approach, which could be named descriptivist insofar as it resolved scientific laws into statistical descriptions of the data. Finally, Schultz tried to reconcile both approaches in an adequationist stance with no success, as we shall see.
Randomized controlled trials test new drugs using various debiasing devices to prevent participants from manipulating the trials. But participants often dislike controls, arguing that they impose a paternalist constraint on their legitimate preferences. The 21st Century Cures Act, passed by the US Congress in 2016, encourages the Food and Drug Administration to use alternative testing methods, incorporating participants’ preferences, for regulatory purposes. We discuss, from a historical perspective, the trade-off between trial impartiality and participants’ freedom. We argue that the only way out is to consider which methods improve upon the performance of conventional trials in keeping dangerous or inefficacious compounds out of pharmaceutical markets.