This paper considers ways that experimental design can affect judgments about informally presented context shifting experiments. Reasons are given to think that judgments about informal context shifting experiments are affected by an exclusive reliance on binary truth value judgments and by experimenter bias. Exclusive reliance on binary truth value judgments may produce experimental artifacts by obscuring important differences of degree between the phenomena being investigated. Experimenter bias is an effect generated when, for example, experimenters disclose (even unconsciously) their own beliefs about the outcome of an experiment. Eliminating experimenter bias from context shifting experiments makes it far less obvious what the “intuitive” responses to those experiments are. After it is shown how those different kinds of bias can affect judgments about informal context shifting experiments, those experiments are revised to control for those forms of bias. The upshot of these investigations is that participants in the contextualist debate who employ informal experiments should pay just as much attention to the design of their experiments as those who employ more formal experimental techniques if they want to avoid obscuring the phenomena they aim to uncover.
To what extent does the design of statistical experiments, in particular sequential trials, affect their interpretation? Should postexperimental decisions depend on the observed data alone, or should they account for the stopping rule used? Bayesians and frequentists are apparently deadlocked in their controversy over these questions. To resolve the deadlock, I suggest a three-part strategy that combines conceptual, methodological, and decision-theoretic arguments. This approach maintains the pre-experimental relevance of experimental design and stopping rules but vindicates their evidential, postexperimental irrelevance.
We applaud the authors' basic message. We note that the negative research emphasis is not unique to social psychology and judgment and decision-making. We argue that the proposed integration of null hypothesis significance testing (NHST) and Bayesian analysis is promising but will ultimately succeed only if more attention is paid to proper experimental design and implementation.
Methodological practices differ between economics and psychology because economists use game theory as the basis for the design and interpretation of experiments, while psychologists do not. This methodological choice explains the “four key variables” stressed by Hertwig and Ortmann. Game theory is currently the most rigorous basis for modeling strategic choice.
This chapter is organised around two topics: the first is the methodology of experimental economics, a research programme that is becoming increasingly influential in contemporary economic science; the second is normative methodology, an issue that has been widely debated by philosophers of economics over the last two decades.
Much of the early history of developmental and physiological genetics in Germany remains to be written. Together with Carl Correns and Richard Goldschmidt, Alfred Kühn occupies a special place in this history. Trained as a zoologist in Freiburg im Breisgau, he set out to integrate physiology, development and genetics in a particular experimental system based on the flour moth Ephestia kühniella Zeller. This paper is meant to reconstruct the crucial steps in the experimental pathway that led Kühn and his collaborators at the University of Göttingen, and later at the Kaiser Wilhelm Institutes of Biology and Biochemistry in Berlin, to formulate, in their specific way, what later became known as the "one gene-one enzyme hypothesis." Special attention will be given to the interaction of the different parts of Kühn's Ephestia-based project, which were rooted in different research traditions. The paper retraces how, roughly between 1925 and 1945, these elements came to form a mixed experimental setup composed of genetic, embryological, physiological and, finally, biochemical constituents. Accordingly, emphasis is laid on the development of the terminology in which the results were cast, and how it reflected the hybrid state of an experimental system successively acquiring new epistemic layers.
Since modern medicine is based substantially in clinical medical research, the flaws and ethical problems that arise in this research as it is conceived and practiced in the United States are likely to be reflected to some extent in current medicine and its practice. This paper explores some of the ways in which clinical research has suffered from an androcentric focus in its choice and definition of problems studied, approaches and methods used in design and interpretation of experiments, and theories and conclusions drawn from the research. Some examples of re-visioned research hint at solutions to the ethical dilemmas created by this biased focus; an increased number of feminists involved in clinical research may provide avenues for additional changes that would lead to improved health care for all.
Standard practices in experimental economics arise for different reasons. The “no deception” rule comes from a cost-benefit tradeoff; other practices have to do with the uses to which economists put experiments. Because experiments are part of scientific conversations that mostly go on within disciplines, differences in standard practices between disciplines are likely to persist.
Almost all admit that there is beauty in the natural world. Many suspect that such beauty is more than an adornment of nature. Few in our contemporary world suggest that this beauty is an empirical principle of the natural world itself and instead relegate beauty to the eye and mind of the beholder. Guided by theological and scientific insight, the authors propose that such exclusion is no longer tenable, at least in the data of modern biology and in our view of the natural world in general. More important, we believe an empirical aesthetics exists that can help guide experimental design and development of computational models in biology. Moreover, because theology and science can both contribute toward and equally profit from such an aesthetics, we propose that this empirical aesthetics provides the foundation for a living synergy between theology and science.
We review the use of introspective and phenomenological methods in experimental settings. We distinguish different senses of introspection, and further distinguish phenomenological method from introspectionist approaches. Two ways of using phenomenology in experimental procedures are identified: first, the neurophenomenological method, proposed by Varela, involves the training of experimental subjects. This approach has been directly and productively incorporated into the protocol of experiments on perception. A second approach may have wider application and does not involve training experimental subjects in phenomenological method. It requires front-loading phenomenological insights into experimental design. A number of experiments employing this approach are reviewed. We conclude with a discussion of the implications for both the cognitive sciences and phenomenology.
This paper concerns the philosophical significance of a choice about how to design the context shifting experiments used by contextualists and anti-intellectualists: Should contexts be judged jointly, with contrast, or separately, without contrast? Findings in experimental psychology suggest (1) that certain contextual features are more difficult to evaluate when considered separately, and there are reasons to think that one feature--stakes or importance--that interests contextualists and anti-intellectualists is such a difficult-to-evaluate attribute, and (2) that joint evaluation of contexts can yield judgments that are more reflective and rational in certain respects. With those two points in mind, a question is raised about what source of evidence provides better support for philosophical theories of how contextual features affect knowledge ascriptions and evidence: Should we prefer evidence consisting of "ordinary" judgments, or more reflective, perhaps more rational judgments? That question is answered in relation to different accounts of what such theories aim to explain, and it is concluded that evidence from contexts evaluated jointly should be an important source of evidence for contextualist and anti-intellectualist theories, a conclusion that is at odds with the methodology of some recent studies in experimental epistemology.
The requirement of randomization in experimental design was first stated by R. A. Fisher, statistician and geneticist, in 1925 in his book Statistical Methods for Research Workers. Earlier designs were systematic and involved the judgment of the experimenter; this led to possible bias and inaccurate interpretation of the data. Fisher's dictum was that randomization eliminates bias and permits a valid test of significance. Randomization in experimenting had been used by Charles Sanders Peirce in 1885 but the practice was not continued. Fisher developed his concepts of randomizing as he considered the mathematics of small samples, in discussions with "Student," William Sealy Gosset. Fisher published extensively. His principles of experimental design were spread worldwide by the many "voluntary workers" who came from other institutions to Rothamsted Agricultural Station in England to learn Fisher's methods.
Keith DeRose has argued that context shifting experiments should be designed in a specific way in order to accommodate what he calls a ‘truth/falsity asymmetry’. I explain and critique DeRose's reasons for proposing this modification to contextualist methodology, drawing on recent experimental studies of DeRose's bank cases as well as experimental findings about the verification of affirmative and negative statements. While DeRose's arguments for his particular modification to contextualist methodology fail, the lesson of his proposal is that there is good reason to pay close attention to several subtle aspects of the design of context shifting experiments.
Could a person ever transcend what it is like to be in the world as a human being? Could we ever know what it is like to be other creatures? Questions about the overcoming of a human perspective are not uncommon in the history of philosophy. In the last century, those very interrogatives were notably raised by American philosopher Thomas Nagel in the context of philosophy of mind. In his 1974 essay What is it Like to Be a Bat?, Nagel offered reflections on human subjectivity and its constraints. Nagel’s insights were elaborated before the social diffusion of computers and could not anticipate the cultural impact of technological artefacts capable of materializing interactive simulated worlds as well as disclosing virtual alternatives to the “self.” In this sense, this article proposes an understanding of computers as epistemological and ontological instruments. The embracing of a phenomenological standpoint entails that philosophical issues are engaged and understood from a fundamentally practical perspective. In terms of philosophical praxis, or “applied philosophy,” I explored the relationship between human phenomenologies and digital mediation through the design and the development of experimental video games. For instance, I have conceptualized the first-person action-adventure video game Haerfest (Technically Finished 2009) as a digital re-formulation of the questions posed in Nagel’s famous essay. Experiencing a bat’s perceptual equipment in Haerfest practically corroborates Nagel’s conclusions: there is no way for humans to map, reproduce, or even experience the consciousness of an actual bat. Although unverifiable in its correspondence to that of bats, Haerfest still grants access to experiences and perceptions that, albeit still inescapably within the boundaries of human kinds of phenomenologies, were inaccessible to humans prior to the advent of computers.
Phenomenological alterations and virtual experiences disclosed by interactive digital media cannot take place without a shift in human kinds of ontologies, a shift which this study recognizes as the fundamental ground for the development of a new humanism (I deem it necessary to specify that I am not utilizing the term “humanism” in its common connotation, that is to say the one that emerged from the encounter between the Roman civilization and the late Hellenistic culture. According to this conventional acceptation, humanism indicates the realization of the human essence through “scholarship and training in good conduct” (Heidegger 1998, p. 244). However, Heidegger observed that this understanding of humanism does not truly cater to the original essence of human beings, but rather “is determined with regard to an already established interpretation of nature, history, world, and […] beings as a whole.” (Heidegger 1998, p. 245) The German thinker found this way of embracing humanism reductive: a by-product of Western metaphysics. As Heidegger himself specified in his 1949 essay Letter on Humanism, his opposition to the traditional acceptation of the term humanism does not advocate for the “inhuman” or a return to the “barbaric” but stems instead from the belief that humanism can only be properly understood and restored in culture as a more original way of meditating on and caring for humanity and understanding its relationship with Being.). Additionally, this study explicitly proposes and exemplifies the use of interactive digital technology as a medium for testing, developing and disseminating philosophical notions, problems and hypotheses in ways which are alternative to the traditional textual one. Presented as virtual experiences, philosophical concepts can be accessed without the filter of subjective imagination.
In a persistent, interactive, simulated environment, I claim that the crafting and the mediation of thought take on a novel, projective (In Martin Heidegger’s 1927 Being and Time, the term “projectivity” indicates the way a Being opens to the world in terms of its possibilities of being (Heidegger 1962, pp. 184–185, BT 145). Inspired by Heidegger’s and Vilem Flusser’s work in the field of philosophy of technology as well as Helmuth Plessner’s anthropological position presented in his 1928 book Die Stufen des Organischen und der Mensch. Einleitung in die philosophische Anthropologie, this study understands the concept of projectivity as the innate openness of human beings to construct themselves and their world by means of technical artefacts. In this sense, this study proposes a fundamental understanding of technology as the materialization of mankind’s tendency to overcome its physical, perceptual and communicative limitations.) dimension which I propose to call “augmented ontology.”
A commentary on a current paper by Aaron Sloman (“An alternative to working on machine consciousness”). Sloman argues that in order to make progress in AI, consciousness (and related unclear folk mental concepts) “should be replaced by more precise and varied architecture-based concepts better suited to specify what needs to be explained by scientific theories”. This original vision of philosophical inquiry as mapping out ‘design-spaces’ for a contested concept seeks to achieve a holistic, synthetic understanding of what possibilities such spaces embody. It therefore does not reduce to either “relations of ideas” or “matters of fact” in Hume’s famous dichotomy. It is also interestingly opposite to a current vogue for ‘experimental philosophy’.
Do participants bring their own priors to an experiment? If so, do they share the same priors as the researchers who design the experiment? In this article, we examine the extent to which self-generated priors conform to experimenters’ expectations by explicitly asking participants to indicate their own priors in estimating the probability of a variety of events. We find in Study 1 that despite being instructed to follow a uniform distribution, participants appear to have used their own priors, which deviated from the given instructions. Using subjects’ own priors allows us to account better for their responses rather than merely to test the accuracy of their estimates. Implications for the study of judgment and decision making are discussed.
Humans have a remarkable capacity for tuning their communicative behaviors to different addressees, a phenomenon also known as recipient design. It remains unclear how this tuning of communicative behavior is implemented during live human interactions. Classical theories of communication postulate that recipient design involves perspective taking, i.e., the communicator selects her behavior based on her hypotheses about beliefs and knowledge of the recipient. More recently, researchers have argued that perspective taking is computationally too costly to be a plausible mechanism in everyday human communication. These researchers propose that computationally simple mechanisms, or heuristics, are exploited to perform recipient design. Such heuristics may be able to adapt communicative behavior to an addressee with no consideration for the addressee's beliefs and knowledge. To test whether the simpler of the two mechanisms is sufficient for explaining the ‘how’ of recipient design we studied communicators' behaviors in the context of a non-verbal communicative task (the Tacit Communication Game, TCG). We found that the specificity of the observed trial-by-trial adjustments made by communicators is parsimoniously explained by perspective taking, but not by simple heuristics. This finding is important as it suggests that humans do have a computationally efficient way of taking beliefs and knowledge of a recipient into account.
The work of Alan Cowey and Petra Stoerig is often taken to have shown that, following lesions analogous to those that cause blindsight in humans, there is blindsight in monkeys. The present paper reveals a problem in Cowey and Stoerig's case for blindsight in monkeys. The problem is that Cowey and Stoerig's results would only provide good evidence for blindsight if there is no difference between their two experimental paradigms with regard to the sorts of stimuli that are likely to come to consciousness. We show that the paradigms could differ in this respect, given the connections that have been shown to exist between working memory, perceptual load, attention, and consciousness.
Previous experimental and observational work suggests that people act more generously when they are observed and observe others in social settings. However, the explanation for this is unclear. An individual may want to send a signal of her generosity to improve her own reputation. Alternately (or additionally) she may value the public good or charity itself and, believing that contribution levels are strategic complements, give more to influence others to give more. We perform the first series of laboratory experiments that can separately estimate the impact of these two social effects, and test whether realized influence is consistent with the desire to influence, and whether either of these is consistent with anticipated influence. Our experimental subjects were given the opportunity to contribute from their endowment to Bread for the World, a development NGO. Depending on treatment, “leader” subjects’ donations were reported to other subjects either anonymously or with their identities, before these “follower” subjects made their donation decisions. We find that “leaders” are influential only when their identities are revealed along with their donations, and female leaders are more influential than males. Identified leaders’ predictions suggest that they are aware of their influence. They respond to this by giving more than either the control group or the unidentified leaders. We find mixed evidence for “reputation-seeking.”
This paper argues for more randomised controlled trials in educational research. Educational researchers have largely abandoned the methodology they helped to pioneer. This gold-standard methodology should be more widely used as it is an appropriate and robust research technique. Unless curriculum innovations are subjected to randomised controlled trials, potentially harmful educational initiatives could be visited upon the nation's children.
This research aims to determine whether perception of a superior's leadership style differs according to organizational commitment. Leadership style is the set of characteristics a leader uses to influence a group's activity toward a specific goal. Organizational commitment reflects how strongly members identify with their organization and how strongly they desire to remain in it. The subjects of this research were permanent employees of PT. X. Data were collected using a Likert-scale questionnaire from 181 respondents across the company's work units. An ANOVA with a significance test at the 0.01 level indicated that the mean perceptions of the four leadership style types differ significantly, F = 19.48, p < 0.01. H0 is therefore rejected: perception of a superior's leadership style differs according to organizational commitment.
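The abstract above reports a one-way ANOVA comparing perception scores across four leadership styles. As a minimal sketch of how such an F statistic is computed (the group data and the helper name `one_way_anova_f` below are invented for illustration; they are not the study's data or code), using only the Python standard library:

```python
# Minimal one-way ANOVA F statistic in pure Python (stdlib only).
# The four groups stand in for ratings under four leadership styles;
# the numbers are made up for demonstration.

def one_way_anova_f(groups):
    """Return (F, df_between, df_within) for a list of samples."""
    k = len(groups)                          # number of groups
    n = sum(len(g) for g in groups)          # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: weighted squared deviations of group means
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-group sum of squares: squared deviations inside each group
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    df_between, df_within = k - 1, n - k
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

# Hypothetical Likert-style ratings under four leadership styles
groups = [[4, 5, 4, 5], [2, 3, 2, 3], [3, 3, 4, 4], [1, 2, 1, 2]]
f, dfb, dfw = one_way_anova_f(groups)
print(f"F({dfb}, {dfw}) = {f:.2f}")  # → F(3, 12) = 20.00
```

In practice `scipy.stats.f_oneway` returns the same F together with a p-value; the pure-Python version is only meant to make the arithmetic behind a reported F statistic explicit.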
The advance of science and human knowledge is impeded by misunderstandings of various statistics, insufficient reporting of findings, and the use of numerous standardized and non-standardized presentations of essentially identical information. Communication with journalists and the public is hindered by the failure to present statistics that are easy for non-scientists to interpret as well as by use of the word significant, which in scientific English does not carry the meaning of "important" or "large." This article promotes a new standard method for reporting two-group and two-variable statistics that can enhance the presentation of relevant information, increase understanding of findings, and replace the current presentations of two-group ANOVA, t-tests, correlations, chi-squares, and z-tests of proportions. A brief call to highly restrict the publication of risk ratios, odds ratios, and relative increase in risk percentages is also made, since these statistics appear to provide no useful scientific information regarding the magnitude of findings.
This paper reports on the experiences of international MBA students following a hybrid design for a business ethics course, which combined class-based lectures with "out-of-class" discussion supported by asynchronous communication tools. The e-learning component of the course was intended to generate discussion on the ethical assumptions of course participants, with each individual required to post a mini case study reflecting an ethical dilemma which s/he had faced at work. Using questionnaire and interview data, we report on the learning experiences of participants following this experimental course. The results reveal a high level of intercultural dialogue between participants, with adopters showing greater awareness of their individual cultural biases in their case writing, a direct consequence of the on-line feedback and case discussion. These findings indicate that asynchronous tools have much to offer business ethics students, supporting ideas sharing and the exchange of cultural perspectives outside the physical boundaries of the classroom.
Transcranial current stimulation (TCS) is a promising method of non-invasive brain stimulation to modulate cortical network dynamics. Preliminary studies have demonstrated the ability of TCS to enhance cognition and reduce symptoms in both neurological and psychiatric illnesses. Despite the encouraging results of these studies, the mechanisms by which TCS and endogenous network dynamics interact remain poorly understood. Here, we propose that the development of the next generation of TCS paradigms with increased efficacy requires such mechanistic understanding of how weak electric fields imposed by TCS interact with the nonlinear dynamics of large-scale cortical networks. We highlight key recent advances in the study of the interaction dynamics between TCS and cortical network activity. In particular, we demonstrate the opportunities provided by an interdisciplinary approach that bridges neurobiology and electrical engineering. We discuss the use of (1) hybrid biological-electronic experimental approaches to disentangle feedback interactions, (2) large-scale computer simulations for the study of weak global perturbations imposed by TCS, and (3) optogenetic manipulations informed by dynamical systems theory to probe network dynamics. Together, we here provide the foundation for the use of rational design for the development of the next generation of TCS neurotherapeutics.
This target article is concerned with the implications of the surprisingly different experimental practices in economics and in areas of psychology relevant to both economists and psychologists, such as behavioral decision making. We consider four features of experimentation in economics, namely, script enactment, repeated trials, performance-based monetary payments, and the proscription against deception, and compare them to experimental practices in psychology, primarily in the area of behavioral decision making. Whereas economists bring a precisely defined “script” to experiments for participants to enact, psychologists often do not provide such a script, leaving participants to infer what choices the situation affords. By often using repeated experimental trials, economists allow participants to learn about the task and the environment; psychologists typically do not. Economists generally pay participants on the basis of clearly defined performance criteria; psychologists usually pay a flat fee or grant a fixed amount of course credit. Economists virtually never deceive participants; psychologists, especially in some areas of inquiry, often do. We argue that experimental standards in economics are regulatory in that they allow for little variation between the experimental practices of individual researchers. The experimental standards in psychology, by contrast, are comparatively laissez-faire. We believe that the wider range of experimental practices in psychology reflects a lack of procedural regularity that may contribute to the variability of empirical findings in the research fields under consideration. We conclude with a call for more research on the consequences of methodological preferences, such as the use of monetary payments, and propose a “do-it-both-ways” rule regarding the enactment of scripts, repetition of trials, and performance-based monetary payments.
We also argue, on pragmatic grounds, that the default practice should be not to deceive participants. Key Words: behavioral decision making; cognitive illusions; deception; experimental design; experimental economics; experimental practices; financial incentives; learning; role playing.
In clinical and agricultural trials, there is the danger that an experimental outcome appears to arise from the causal process or treatment one is interested in when, in reality, it was produced by some extraneous variation in the experimental conditions. The remedy prescribed by classical statisticians involves the procedure of randomization, whose effectiveness and appropriateness are criticized. A Bayesian analysis of experimental design, on the other hand, is shown to provide a coherent and intuitively satisfactory solution to the problem.