Ethics Behav. Author manuscript; available in PMC 2009 Jul 2.
Published in final edited form as:
Ethics Behav. 2008 Oct 1; 18(4): 315–339.
https://doi.org/10.1080/10508420802487815
PMCID: PMC2705124
NIHMSID: NIHMS112997
PMID: 19578559

A Sensemaking Approach to Ethics Training for Scientists: Preliminary Evidence of Training Effectiveness

Abstract

In recent years, we have seen a new concern with ethics training for research and development professionals. Although ethics training has become more common, the effectiveness of the training being provided is open to question. In the present effort, a new ethics training course was developed that stresses the importance of the strategies people apply to make sense of ethical problems. The effectiveness of this training was assessed in a sample of 59 doctoral students working in the biological and social sciences using a pre-post design with follow-up, and a series of ethical decision-making measures serving as the outcome variable. Results showed that this training not only led to sizable gains in ethical decision-making, but that these gains were maintained over time. The implications of these findings for ethics training in the sciences are discussed.

Keywords: integrity, ethics, training, evaluation, sensemaking

Notorious events, ranging from the death of study participants to the falsification of data (Kimmelman, 2004; Nature, 2006; Marshall, 1996), have served to remind the scientific community of the importance of ethics. As dramatic as these cases may be, the best available evidence indicates that less noteworthy, but still significant, ethical breaches, such as conflicts of interest and data trimming, may be more pervasive in the sciences than is commonly assumed (Martinson, Anderson, & de Vries, 2005; Steneck, 2004). Recognition of the problems posed by these ethical breaches has led to the proposal of a number of remedies - ranging from the establishment of professional codes of conduct to more effective student mentoring (National Institute of Medicine, 2002).

Prominent among these suggested remedies has been training in the responsible conduct of research (e.g., Chen, 2003; Coughlin, Katz, & Mattison, 1999; De Las Fuentes, Willmuth, & Yarrow, 2005). In fact, the National Institutes of Health now mandates such training for all investigators it supports (Dalton, 2000). Although training in research ethics has come to be treated as a panacea for problems of scientific integrity, one must ask a basic question: How well does training in research integrity work?

Even bearing in mind the many issues that impinge on effective program evaluation (Kraiger, Ford, & Salas, 1993; Sims, 1993), it seems reasonable to question the efficacy of ethics training. An illustration of this point may be found by considering the many studies examining the effects of training on one key criterion - ethical decision-making (Loe, Ferrell, & Mansfield, 2000; O'Fallon & Butterfield, 2005). Some studies have provided evidence indicating that training may lead to improvements in ethical decision-making among scientists (e.g., Al-Jalahma & Fakhroo, 2004; Bebeau & Thoma, 1994; Clarkeburn, Downie, & Matthew, 2002). Other studies, however, suggest that ethics training does not have much effect on the ethical decision-making of scientists (e.g., Kalichman & Friedman, 1992; Macrina, Funk, & Barrett, 2004). With these conflicting findings in mind, our intent in the present study was to develop a new model curriculum for training in research integrity and provide preliminary evidence for the effectiveness of the training in enhancing the ethical decision-making of young scientists - specifically, doctoral students in the health, biological, and social sciences.

Training Content

A variety of approaches have been applied in attempts to improve ethical decision-making skills in the sciences. Perhaps the most widely applied approach is based on Kohlberg's (1984) and Rest's (1986) theories of moral reasoning. In fact, studies conducted by Bebeau and Thoma (1994) and Clarkeburn, Downie, and Matthew (2002) have provided some evidence indicating that training programs developed on the basis of these frameworks can lead to improvements in the ethical decision-making of pharmacology and life sciences scholars. Although moral reasoning models have evidenced some value as a basis for developing curricula to enhance ethical decision-making, they are not the only approach that might be applied. For example, Gawthrop and Uhlemann (1992) used a field practices approach - instruction revolving around field specific codes of conduct, guidelines, or decision-making tools - to develop an ethics training program. They found that this approach led to some improvements in the quality of the ethical decisions made on a set of vignette exercises. Alternatively, Deutch (1996) applied a case analysis approach in an attempt to improve ethical decision-making in the biological sciences, but this approach has not been empirically tested.

The wide variety of approaches that have been applied in attempts to develop ethical decision-making in the sciences, however useful they may be in providing curriculum models, broach a broader question. What is the best, or most viable, approach for the development of ethics training? Of course, any attempt to answer this question depends on the assumptions one is willing to make about ethical decision-making on the part of scientists. We would argue that ethical decision-making by scientists can be viewed as a form of sensemaking, and this assumption has implications for how effective ethics training might be designed.

Sensemaking is a form of complex cognition that occurs when people are presented with ambiguous, high-stakes events (Drazin, Glynn, & Kazanjian, 1999; Walsh, 1989; Weick, 1995). These ambiguous, or ill-defined (Mumford & Gustafson, 1988), high-stakes events allow a variety of mental models to be applied in understanding the situation at hand (Hmelo-Silver & Pfeffer, 2004; Johnson-Laird, 1983). The selection or construction of a mental model for understanding a situation, in turn, provides a framework for information gathering, the standards applied in evaluation of this information, and the construction and appraisal of alternative courses of action - in short, the foundation for decision-making (Hogarth & Makridakis, 1981).

In fact, at least three considerations would lead one to expect that sensemaking would prove critical to understanding ethical decision-making in the sciences. First, if the decision situation is not recognized as having ethical implications, ethical standards will not be evoked. In keeping with this observation, Roberts, Warner, Hammond, Brody, Kaminsky, and Roberts (2005) have provided evidence indicating that basic appraisal activities are related to ethical decision-making among scientists. Second, the decisions presented to scientists, including ethical decisions, are seldom of a simple, yes/no variety (Steneck, 2004). Instead, most ethics-relevant situations allow for a variety of alternative actions. Under such conditions the frames, or mental models, used to formulate decision alternatives can be expected to have a substantial impact on subsequent ethical decision-making. Third, selecting an action, or making a decision, will depend on the forecasts, or predictions, people make with regard to likely outcomes for themselves, others, and their work. These forecasts, however, will ultimately depend on sensemaking, and the mental model applied as a basis for these predictions (Mumford, 2006).

Figure 1 provides an overview of how sensemaking might apply to the ethical decision-making of scientists. Within this model, it is assumed that a variety of situational considerations will influence a scientist's initial appraisal of the problem situation. Such considerations include professional codes of conduct, perceived causes of the situation, personal and professional goals, and perceived requirements for attaining these goals. Following an initial appraisal of the situation, however, scientists must frame or define the exact nature of the problem at hand (Tversky & Kahneman, 1974; Mumford, Reiter-Palmon, & Redmond, 1994). When the problem at hand is defined as having ethical implications, one would expect that ethical decision-making will be more likely. What should be recognized, however, is that when a situation is seen as having ethical implications and implications for the attainment of personal and professional goals, affect will be invoked. The emotions produced by ethical dilemmas are likely to influence ethical decision-making (Haidt, 2001, 2003) through both the direct effects of affect on decision-making and the indirect effects of affect on cognitive processing.

Figure 1. Sensemaking model of ethical decision-making.

After framing the problem and experiencing a range of emotions, people will begin a search for prior experiences or cases that might provide a framework for navigating the situation at hand (Chen, 2003; Key, 1999). Cases represent “real-world” knowledge abstracted from prior experience, and contain information about causes, outcomes, actions, constraints, and contingencies (Hammond, 1991; Kolodner, 1997; Patalano & Seifert, 1997). These cases are often used as a basis for constructing, or alternatively selecting, a mental model to be applied in aiding decision-making. The mental model formulated on the basis of past experiences is then used to forecast the likely outcomes of various actions (Dörner & Schaub, 1994; Önkal, Yates, Sigma-Mugan, & Öztin, 2003).

Forecasted outcomes of actions in high-stakes events, however, are self-relevant (Oyserman & Markus, 1990). Accordingly, people will appraise these predicted outcomes in terms of their view of themselves and others. This appraisal process implies that self-reflection concerning predicted outcomes and the implications of actions will influence both the assessment of outcomes and the viability of the mental models held to give rise to these outcomes (Strange & Mumford, 2005). Thus, self-reflection can give rise to the construction or selection of the actual mental model applied to understand the ethical issue at hand (Walsh, 1989). This mental model will provide a basis for the sensemaking activities that will guide the decisions made with regard to the ethical event.

These observations about some of the key processes underlying ethical decision-making are noteworthy for two reasons. First, they imply that ethical decision-making will be based on available case-based models. Thus, the effectiveness of training may depend, in part, on whether it provides a viable set of relevant cases. These cases might, at times, be provided through direct instruction. However, because case-based models are often acquired through social experiences, one might argue that presentation and adoption of case-based models to support ethical decision-making will be enhanced through the application of cooperative learning techniques (Aronson & Patnoe, 1997; Slavin, 1991). In cooperative learning, shared experiences in working on exercises are used to help people acquire and apply case-based models while providing social reinforcement for the application of these models in addressing a certain class of problems.

In addition to providing case-based models, people must also be provided with strategies for working with these cases (Scott, Lonergan, & Mumford, 2005). In fact, the evidence compiled by Scott, Leritz, and Mumford (2004a, 2004b) indicates that strategy-based training interventions appear highly effective in enhancing people's performance in solving complex, ambiguous problems, such as those that call for creative thought. Moreover, evidence provided by Clapham (1997) suggests that strategy-based training is especially likely to prove effective when basic expertise is available and training time is short. The nature of the decision-making processes that we have described points to a number of strategies that might be used as a basis for the development of instruction intended to enhance ethical decision-making. For example, with regard to forecasting, one might expect that anticipating others' reactions to one's actions would improve decision-making. Likewise, with regard to emotions, analysis of the reasons one finds an ethical dilemma uncomfortable might also be of value. Consistent with these assertions about the value of strategy-based training, our primary goal in the present effort was to develop a cooperative learning, case-based approach to ethics training that emphasizes the use of effective strategies for working through ethical problems.

Training Evaluation

If this reasoning about the processes underlying ethical decision-making were applied to the development of an instructional curriculum for scientists, how might the effectiveness of such a program be evaluated (Goldstein & Ford, 2002; Kraiger, Ford, & Salas, 1993; Sims, 1993)? A common strategy used to appraise the effectiveness of ethics training is to examine student reactions to the training. Although such reaction measures have long been employed in the evaluation of training programs (Kirkpatrick, 1959), their relevance for the appraisal of training intended to enhance ethical decision-making is open to question because the key performance of concern, ethical decision-making, is not being examined.

In contrast, change in students' ethical decision-making has commonly been used to appraise the effectiveness of instruction (e.g., Al-Jalahma & Fakhroo, 2004; Clarkeburn, Downie, & Matthew, 2002; Macrina, Funk, & Barrett, 2004). Although the use of actual ethical decisions as a basis for program evaluation is desirable, three issues arise in this regard. First, many programs base evaluations on a very limited set of ethical vignettes (e.g., Ryden & Duckett, 1991). As a result, evidence is not available concerning the extent and stability of the effects of training. Second, although these vignettes often call for realistic decisions, they typically are not structured in terms of a set of dimensional constructs (e.g., Wright & Carrese, 2001). This lack of dimensional structure to the ethical decisions being examined makes it difficult to set bounds on the generality of conclusions drawn about the effectiveness of training. Third, evidence bearing on the reliability and validity of the vignettes used to assess ethical decisions is often not available, and the lack of this evidence makes it difficult to draw firm conclusions about the effectiveness of training interventions (Messick, 1989).

Recently, Mumford and his colleagues developed a set of ethical decision-making measures explicitly intended to examine key aspects of ethical decision-making applicable to the health, biological, and social sciences (Helton-Fauth, Gaddis, Scott, Mumford, Devenport, Connelly, & Brown, 2003; Mumford, Devenport, Brown, Connelly, Murphy, Hill, & Antes, 2006). Development of these ethical decision-making measures began with a review of professional codes of conduct across some 60 disciplines within the three fields under consideration. Based on this review, 17 lower order dimensions were identified that were subsumed under four general dimensions of ethical behavior in the sciences: 1) data management (including data massaging and publications practices); 2) study conduct (including institutional review board practices, informed consent, confidentiality protection, protection of human subjects, and protection of animal subjects); 3) professional practices (including objectivity in evaluating work, recognition of expertise, and adherence to professional commitments); and 4) business practices (including protection of intellectual property, protection of public welfare and the environment, conflicts of interest, deceptive bid and contract practices, inappropriate use of physical resources, and inappropriate management practices).

A series of ethical decision-making tasks were developed to measure these dimensions using a low fidelity simulation approach (Motowidlo, Dunnette, & Carter, 1990) in which people were presented with a scenario and asked to assume the role of the principal actor addressing the ethical issues arising in the course of research. In a sample of 102 doctoral students working in the health, biological, and social sciences, these ethical decision-making measures provided a split-half reliability of .76. More importantly, validation evidence for these scales was obtained by correlating scores on ethical decision-making dimensions with: 1) causes of ethical decision-making - as measured through exposure to unethical practices in the students' day-to-day research work, 2) outcomes of ethical decision-making - as reflected in the severity of punishments awarded for ethical violations, and 3) individual difference measures - intelligence, cynicism, and social desirability. The resulting correlational data pointed to the construct validity of these decision-making scales in that scores on these scales were not related to social desirability (r = -.01) but were negatively related to cynicism (r = -.25). In addition, scores on these scales were negatively related to exposure to unethical practices in the students' day-to-day work (r = -.45), and were positively related to the severity of punishments awarded for ethical violations (r = .55).
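Although the original scoring and validation code is not reported, the following sketch illustrates how a split-half reliability with the Spearman-Brown correction and simple validity correlations of this kind might be computed. The data, the odd/even item split, and the criterion variable below are hypothetical placeholders, not the study's data, so the printed values are not meaningful in themselves.

```python
# Illustrative sketch (hypothetical data): split-half reliability with the
# Spearman-Brown correction, plus a simple validity correlation, for an
# item-by-respondent score matrix like the one described in the text.
import numpy as np

rng = np.random.default_rng(0)
n_respondents, n_items = 102, 18
# Placeholder item scores on the 1-3 ethical-weight scale.
scores = rng.uniform(1, 3, size=(n_respondents, n_items))

# Correlate odd-item and even-item half scores.
odd_half = scores[:, 0::2].mean(axis=1)
even_half = scores[:, 1::2].mean(axis=1)
r_half = np.corrcoef(odd_half, even_half)[0, 1]

# Spearman-Brown correction estimates full-test reliability from the half-test correlation.
split_half_reliability = 2 * r_half / (1 + r_half)

# Validity evidence: correlate total scores with a hypothetical criterion
# (e.g., exposure to unethical practices in day-to-day work).
total = scores.mean(axis=1)
exposure = rng.normal(size=n_respondents)  # placeholder criterion measure
r_exposure = np.corrcoef(total, exposure)[0, 1]

print(round(split_half_reliability, 2), round(r_exposure, 2))
```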

Taken as a whole, the findings obtained by Mumford et al. (2006) provide initial evidence for the construct validity of this measure of ethical decision-making. Accordingly, these ethical decision-making scales appear to provide a reasonable basis for evaluating the effectiveness of training programs focused on improving ethical decision-making in the sciences. The present study was designed to investigate the effectiveness of a sensemaking approach to ethics training in the sciences. More specifically, we examined (1) whether higher scores across multiple dimensions of ethical decision-making would result from a sensemaking approach to ethics training, (2) whether such improvements could be tied to enhanced use of the decision-making strategies included in the training, and (3) whether these hypothesized changes in ethical decision-making and use of strategies would be maintained over time.

Method

Sample

The sample used to test these hypotheses consisted of 59 participants. The 19 men and 24 women (16 unreported) who agreed to participate in this study were recruited at a large southwestern university. The participants were drawn from the social (56%) and biological sciences (44%). Of the 40 participants who reported, 75% were Caucasian and the remaining 25% identified themselves as African-American, Asian, or Native American. Participants were also in different phases of their careers in relevant doctoral programs, with 39% being in their first or second years and 37% being in the third year or beyond (24% unreported). Their scores on the Graduate Record Examination were typically a quarter of a standard deviation above national norms. Scores on the Graduate Record Examination and other measures (e.g., cynicism, divergent thinking, and laboratory practices) were compared to those observed in a separate sample of 245 doctoral students not participating in training. No evidence of selective sampling in the training group was revealed by this analysis.

The 245 doctoral students included in the comparison group were recruited from the biological (42%), health (26%), and social (32%) sciences. The 95 men and 144 women (6 unreported), of whom 60% were majority and 36% minority group members (4% unreported), were either first-year (64%) or third- and fourth-year (36%) students. Sample members were recruited via email solicitations in which they were offered $100.00 to participate in a study of research integrity. Participants in this broader study were asked to complete a battery of individual differences measures (e.g., intelligence, cynicism, narcissism, social desirability) along with measures examining both influences on research integrity (e.g., work events, climate perceptions) and outcomes of research integrity (e.g., punishments awarded for ethical violations), as well as the measure of ethical decision-making used as the dependent variable in the present study.

General Procedures

The ethics instruction program was offered as an intersession course three times a year over a two year period. This course was offered prior to the beginning of the fall semester, between the fall and spring semesters, and between the spring and summer semesters. All doctoral students at the university were notified through email announcements that the course was being offered. An initial announcement describing the course was sent one month prior to the course offering. Two follow-up emails were sent over the next two weeks reminding the doctoral students that the course was being offered and encouraging participation. To provide an incentive for participation, students were informed that six $100.00 awards would be distributed at the end of each training session to randomly selected participants.

Doctoral students who agreed to take the course were asked to register with an administrative assistant at least three days prior to the course. At the time they registered for the course, they were asked to complete the pre-measure packet and return this packet to the instructor at the start of the course. Additionally, at this time, participants were asked to complete a background inventory and an ethical judgment task in which participants selected punishments to be awarded for hypothetical violations of ethical rules. At the end of training, participants completed the post-study measures of ethical decision-making along with a battery of personality and cognitive ability measures, as well as an inventory describing climate and day-to-day practices in their work environments. These supplemental measures were collected to provide support for a larger study on ethical decision-making. At the end of training, participants were also asked to complete an inventory describing their reactions to the course.

Training Course Content

Training was conducted over a two-day period with 6 hours devoted to instruction on each day. The training course was taught by a team of four senior professionals working in the social sciences, who were considered experts in the area of scientific ethics. All instructors provided the same content and worked through a set of PowerPoint slides tied to each module of instruction. It is of note that comparison of instructors with regard to post-training measures of ethical decision-making indicated that they all provided comparable learning experiences. Further, observations of the course indicated comparable processes of instruction across the four trainers.

The course delivered by the instructors consisted of ten blocks of instruction, with each block lasting one to two hours. Each block opened with a lecture segment articulating key principles, followed by discussion of a series of relevant case studies and a sequence of interactive exercises. Table 1 summarizes the key material covered in each module of instruction, the rationale for this instruction, and the key learning activities occurring in each module.

Table 1

Summary of Training

Each module is summarized below in terms of its objectives, content, and cases and exercises.

Module 1: Learning Ethical Research Guidelines
Objectives: Understand and apply fundamental research guidelines in ethical decision-making; understand limitations of a rule-based approach
Content: Guidelines Packet; Cases and Questions Packet
Cases and Exercises: 4 case studies with questions; Pre-training Review Panel Task; Pre-training Events Measure

Module 2: Complexity in Ethical Decision-Making
Objectives: Review research guidelines; be aware of the complexity of ethical dilemmas
Content: Training introduction; discussion of the question “What is an ethical dilemma?”; discussion of solutions to Module 1 cases
Cases and Exercises: Self-Reflection Activity; Pre-training EDM Measure; Module 1 case discussion

Module 3: Personal Biases Influencing Ethical Decision-Making
Objectives: Understand the existence of cognitive biases; trainees forecast how they might act in an ethical dilemma
Content: Research on ethical decisions; projected self-model; decision-making errors and personal biases
Cases and Exercises: Self-Enhancement Demonstration; Milgram study video; Behavior Predictions Activity

Module 4: Problems Encountered in Ethical Decision-Making
Objectives: Understand EDM problems; identify and generate problems interfering with decision-making
Content: Myths of ethical decision-making; problems that inhibit ethical decision-making
Cases and Exercises: Hit or Myth Activity; problem identification; problem generation

Module 5: Ethical Decision-Making Model and Decision-Making Strategies
Objectives: Understand the sensemaking model that assists ethical decision-making; understand and apply strategies for use in ethical situations
Content: EDM model; introduction to and use of strategies that enhance ethical decision-making ability
Cases and Exercises: Strategy generation using the “Baltimore Affair” case; Day 1 training evaluation

Module 6: Field Specific Differences in Applying the EDM Model
Objectives: Locate field specific guidelines; apply the EDM model; utilize strategies
Content: Self-directed module: finding guidelines and applying knowledge gained through case studies
Cases and Exercises: Locate field specific guidelines on the web; two cases with questions; Background Data Survey

Module 7: Sensemaking in Ethical Decision-Making
Objectives: Conceptualize the complexity of EDM through the sensemaking model; understand sensemaking and its relationship with EDM
Content: Module 6 homework review; sensemaking introduction
Cases and Exercises: Role play activity: “A Clash to Remember”

Module 8: Complex Field Differences
Objectives: Understand disparities in ethical decision-making across disciplines; group application
Content: Introduction to research across disciplines
Cases and Exercises: Field specific guidelines review; “Big Pharma” group case and associated case questions

Module 9: Understanding Different Perspectives
Objectives: Evaluate decisions while considering multiple perspectives; reflect on elements of ethical decision-making
Content: Consider viewpoints of people in the research process; training overview
Cases and Exercises: “Wunderkind” case study and role play activity; Self-Reflection Activity; training summary; training evaluation

Module 10: Ethical Decision-Making Post-Training Assessment
Objectives: Apply knowledge in testing situations
Cases and Exercises: Post-training Review Panel Task; Post-training Events Measure; Post-training EDM Measure

Note. Module 1 is self-directed pre-training; Modules 2 through 5 are presented in Day One Classroom Training.

Note. Module 6 is self-directed between training days; Modules 7 through 10 are presented in Day Two Classroom Training.

On the first day of instruction, six modules were covered in the instructional program. The first module was intended to provide background information concerning government regulations, professional codes of conduct, and institutional policies. The students then applied this material by working through a series of complex case studies prior to the beginning of the instructional sequence. In the second module, the instructor examined optimal answers and the reasons underlying the answers to the cases presented in module one. Following this review, students took a pre-test examining ethical decision-making. The third module of instruction was intended to make ethical conduct personally relevant to students by demonstrating the tendency of people to discount or ignore their personal biases. In this module, a video of the Milgram obedience study (Milgram, 1965) was presented, and participants were asked to predict their own and others' responses to being in the Milgram study. Participant predictions were then discussed in the context of self-other attributional biases and the pervasive tendency for people to over-rely on their personal values when predicting their behavior. The fourth module of instruction focused on the real-world complexities raised by ethical problems and some of the common errors people make in handling these problems, such as acting too quickly or applying only a single principle. Common errors were described, and then students were asked to analyze these errors in a case illustration. Next, students generated potential personal biases and errors that might thwart their ability to make ethical decisions.

In the fifth module of instruction, practical strategies for dealing with personal biases and errors in an ethical situation were discussed. Students were asked to work through a case in a group discussion format and generate potential strategies that could be used in navigating ethical situations more broadly. Subsequently, seven particular metacognitive reasoning strategies (some of which had already been derived by participants themselves) were described: 1) recognizing the complexities of your circumstances, 2) seeking outside help, 3) questioning your own and others' judgment, 4) dealing with emotions, 5) anticipating the consequences of actions, 6) assessing personal motivations, and 7) considering the effects of actions on others. Table 2 describes the nature of these strategies in more detail. Students were then asked to apply these strategies to two specific cases that illustrated their relevance to ethical problems arising in different fields. These case study activities were administered as homework to be completed during the day between training sessions. The homework activities comprised the sixth module.

Table 2

Metacognitive Reasoning Strategies Training

Strategy: Operational Definition

1. Recognizing your circumstances: Thinking about the origins of the problem, the individuals involved, and relevant principles, goals, and values
2. Seeking outside help: Talking with a supervisor, peer, or institutional resource, or learning from others' behaviors in similar situations
3. Questioning your own and others' judgment: Considering problems that people often have with making ethical decisions; remembering that decisions are seldom perfect
4. Dealing with emotions: Assessing and regulating emotional reactions to the situation
5. Anticipating consequences of actions: Thinking about many possible outcomes, such as consequences for others and short- and long-term outcomes, based upon possible decision alternatives
6. Analyzing personal motivations: Considering one's own biases, the effects of one's values and goals, how to explain/justify one's actions to others, and questioning one's ability to make ethical decisions
7. Considering the effects of actions on others: Being mindful of others' perceptions, concerns, and the impact of your actions on others, socially and professionally

In the seventh module of instruction, occurring on day two of training, the instructor presented a sensemaking model. Students applied the sensemaking model and associated strategies in a role playing exercise in which participants were asked to take on “a point of view” consistent with one role commonly observed in a research context (e.g., IRB member, professor, graduate student). Participants who did not occupy one of the primary roles were asked to assume the role of an event investigator. All participants worked through an ethical problem in which actions taken were discussed. The eighth module of instruction examined the nature, origins, and implications of field specific differences in ethical guidelines. After investigating and discussing these differences, field specific groups were asked to apply these guidelines, along with the sensemaking model and relevant strategies, in working through a case. They then discussed the similarities and differences in their approaches to the case. In the ninth training module, participants engaged in a small group exercise. In this exercise, participants were asked to assume the role of one of five key actors involved in an ethical problem related to doctoral students. Group members then answered a series of questions pertinent to their roles and discussed the reasons for their answers vis-à-vis the ethical sensemaking model and relevant strategies. In the tenth and final module, the instructor reviewed the main points covered over the two-day period. Then participants rated the effectiveness of training exercises and completed a post-training measure of ethical decision-making.

Evaluation Measures

Decision-Making

The first, and primary, measure used to evaluate the effectiveness of training was based on the pre-post administration of the measure of ethical decision-making developed by Mumford et al. (2006). The measure was selected for use in evaluating training effectiveness based on the average split-half reliability of .76 obtained for the four major dimensions of ethical behavior under consideration (i.e., data management, study conduct, professional practices, and business practices) as well as the available data pointing to the construct validity of this instrument as a measure of ethical decision-making. This construct validation evidence indicated that scores on these measures were 1) related to ethical climate and exposure to unethical events, 2) related to the severity of punishment awarded for unethical conduct, and 3) positively related to intelligence (r = .19), negatively related to cynicism and narcissism (r = -.18), and unrelated to social desirability (r = -.01).

Development of this measure began with a review of professional codes of conduct in the health, biological, and social sciences (Helton-Fauth et al., 2003). Once the dimensions of ethical conduct subsumed under each of these areas had been identified, a review of the ethics literature was conducted to identify ethics cases reflecting one or more of the 17 lower order dimensions identified in the four areas of data management, study conduct, professional practices, and business practices in each of the fields under consideration. These cases were then rewritten to describe the general problem at hand. Within this general problem, three to four events were generated that would likely involve one of the dimensions of ethical conduct. Participants were then presented with a list of 8 to 12 potential actions that might be taken in response to this event, and responses were structured to reflect high (3), moderate (2), and low (1) levels of integrity as defined by current ethical guidelines. To avoid forced-choice responding, participants selected the best two options from among those presented. Each item response was scored by taking the average ethical weight of the responses selected. See the appendix for an illustration of the nature of two questions developed using this framework for the social sciences.
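The item-level scoring rule described above can be illustrated with a brief sketch. The option labels and ethical weights below are hypothetical; only the select-two-options, average-the-weights logic is taken from the text.

```python
# Illustrative sketch (hypothetical data and names): scoring one ethical
# decision-making item. Each response option carries an ethical weight of
# 1 (low), 2 (moderate), or 3 (high); the respondent selects the best two
# options, and the item score is the average weight of the selected options.
from statistics import mean

def score_item(option_weights: dict, selected: list) -> float:
    """Average ethical weight of the two options a respondent selected."""
    if len(selected) != 2:
        raise ValueError("Respondents select exactly two options per item.")
    return mean(option_weights[option] for option in selected)

# Hypothetical item with eight options labeled a-h.
weights = {"a": 3, "b": 1, "c": 2, "d": 2, "e": 1, "f": 3, "g": 2, "h": 1}
print(score_item(weights, ["a", "c"]))  # (3 + 2) / 2 = 2.5
```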

To develop the pre-post version of this measure, three psychologists were asked to review all scenarios and questions developed to tap each dimension. They allocated the questions developed for each field to either a pre or post measure based on the following criteria: 1) the questions appearing in the pre and post measures should be of comparable difficulty, 2) multiple questions intended to tap a given area (e.g., data management, study conduct, professional practices, and business practices) must appear in each instrument, pre and post, and 3) at least one question examining each relevant dimension of ethical behavior had to appear in both the pre and post measures. Application of these decision rules led to the presentation of 18 questions in the pre measure and 18 questions in the post measure. Questions were tailored to situations common in either the biological or social sciences. Across these two fields, the average number of questions presented examining data management was 2 and 3 (pre/post), study conduct was 5 and 4 (pre/post), professional practices was 8 and 8 (pre/post), and business practices was 3 and 3 (pre/post). Pre and post scores were obtained for the four general dimensions (data management, study conduct, professional practices, and business practices) by taking the average of the responses selected for the questions subsumed under a given dimension. Thus, changes in ethical decision-making were to be assessed with respect to these four broad areas. It is of note that these ethical decisions were not directly addressed in training to ensure independence of the dependent variable vis-à-vis the training manipulation.
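The aggregation of item scores into the four dimension scores might look as follows. The item-to-dimension mapping mirrors the pre-test counts reported above, but the individual item scores are hypothetical placeholders.

```python
# Illustrative sketch (hypothetical data): averaging item scores into the
# four dimension scores (data management, study conduct, professional
# practices, business practices), as described above.
import numpy as np

# Hypothetical mapping of the 18 pre-test items to dimensions, matching the
# reported pre-test counts (2, 5, 8, and 3 items respectively), and one
# respondent's item scores on the 1-3 scale.
item_dimension = (["data_management"] * 2 + ["study_conduct"] * 5 +
                  ["professional_practices"] * 8 + ["business_practices"] * 3)
item_scores = np.array([2.5, 2.0, 3.0, 2.5, 2.0, 1.5, 2.5,
                        2.0, 2.5, 2.0, 3.0, 2.5, 2.0, 2.5, 2.0,
                        2.5, 2.0, 1.5])

dimension_scores = {}
for dim in set(item_dimension):
    mask = np.array([d == dim for d in item_dimension])
    dimension_scores[dim] = item_scores[mask].mean()

print(dimension_scores)
```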

To assess the compatibility of the pre and post tests, scores on the two forms were contrasted in a separate sample of doctoral students at the same university pursuing degrees in the health, biological, and social sciences. To ensure relative equivalence with the training sample, health science doctoral students were not included in this comparison, yielding a final comparison sample of 180 biological and social science doctoral students. For data management, the mean score on the pretest was 2.17 (SD = .33) and the mean score on the post test was 2.27 (SD = .30). On the measure of study conduct, the mean score on the pretest was 2.22 (SD = .34) and the post test mean score was 2.24 (SD = .32). On the professional practices measure, the mean pretest score was 2.15 (SD = .20) and the mean post test score was 2.24 (SD = .24). Finally, for the business practices measure, the mean pretest score was 2.20 (SD = .36) and the mean post test score was 2.19 (SD = .47). Because the observed mean differences were relatively small, it seemed reasonable to conclude that these pre and post measures could be considered equivalent.

Strategies

The measure of strategies applied in evaluating learning as a result of ethics training was based on earlier work conducted by Mumford et al. (2006). In this study, four judges, all doctoral students in psychology familiar with the literature on ethics, were presented with operational definitions of the seven metacognitive reasoning strategies being considered in training: 1) recognizing the complexities of your circumstances, 2) seeking outside help, 3) questioning your own and others' judgment, 4) dealing with emotions, 5) anticipating the consequences of actions, 6) assessing personal motivations, and 7) considering the effects of actions on others (see Table 2 for more details). These judges were asked to rate, on a 7-point scale (1 = Low, 7 = High), the extent to which each potential response occurring in an event within a scenario reflected application of each strategy.

Prior to making these ratings, judges were given 15 hours of training concerning the nature of the strategies under consideration and how these strategies were manifested in people's behavior. The resulting inter-rater agreement coefficient for the ratings of these 7 metacognitive reasoning strategies was .91. Scores on these strategy dimensions were obtained by taking the weighted average of the responses selected by participants as they worked through the ethical decision-making problems.
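A sketch of how strategy scores of this kind might be derived is given below. The judge ratings and selected options are hypothetical, and the simple average pairwise correlation shown is only one of several agreement indices; the method behind the reported .91 coefficient is not specified in the text.

```python
# Illustrative sketch (hypothetical data): deriving a strategy score as the
# average of judges' strategy ratings attached to the response options a
# participant selected, plus a rough agreement check among judges.
import numpy as np

# Hypothetical ratings (1-7) by 4 judges of how strongly each of six response
# options reflects the "anticipating consequences" strategy.
judge_ratings = np.array([
    [6, 2, 5, 1, 4, 3],
    [5, 2, 6, 1, 4, 3],
    [6, 3, 5, 2, 4, 2],
    [6, 2, 5, 1, 5, 3],
])

# Option-level strategy weights are the judges' mean ratings.
option_weights = judge_ratings.mean(axis=0)

# A participant who selected options 0 and 2 receives the mean weight of those options.
selected = [0, 2]
strategy_score = option_weights[selected].mean()

# Rough agreement index: average pairwise correlation between judges.
pairs = [(i, j) for i in range(4) for j in range(i + 1, 4)]
mean_r = np.mean([np.corrcoef(judge_ratings[i], judge_ratings[j])[0, 1]
                  for i, j in pairs])

print(round(strategy_score, 2), round(mean_r, 2))
```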

As expected, scores on the metacognitive reasoning strategy dimensions displayed positive correlations. The average correlation observed among these scales was .47. However, examination of the relationships among scores on these scales provided evidence for their construct validity. For example, a strong positive correlation was obtained between dealing with emotions and recognizing circumstances (r = .68), but a weak relationship was obtained between seeking help and anticipating consequences (r = .03). As a result, these scales appeared to provide a plausible basis for assessing the effects of training on strategy application in ethical decision-making.

Follow-up

A critical question that arises in studies of ethics training is whether training is maintained over time. Accordingly, training participants were contacted six months later. They were asked if they would be willing to retake the post-test. To provide an incentive for retesting, participants were each offered $50.00 for their time. Out of the original 59 participants, 18 agreed to take the post-training test a second time. These participants evidenced substantial similarity to those participating in training with respect to their scores on relevant measures of individual differences, work climate, and demographic background.

Subjective Reactions

Although we were primarily interested in the effects of the training on overall ethical decision-making and metacognitive reasoning strategies employed in decision-making, it also seemed appropriate to examine participants' reactions to the training. These subjective reaction measures were based on a process analysis framework (Goldstein & Ford, 2002). Accordingly, at the end of each day of training participants were presented with ten questions asking about their impressions of three critical components of training: 1) cases presented, 2) exercises presented, and 3) discussion of topics covered. They were asked to rate on a 7-point scale their impression of the effectiveness of these instructional components. Across two days of instruction these ratings provided internal consistency coefficients of .73 for cases, .87 for exercises and .89 for discussion. Additionally, at the end of training participants were asked to rate the overall effectiveness of the training program.
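Internal consistency coefficients of this kind are commonly computed as Cronbach's alpha, although the specific coefficient used is not stated in the text. A minimal sketch of such a computation, assuming hypothetical ratings, is given below; with placeholder data the printed value is not meaningful.

```python
# Illustrative sketch (hypothetical data): a Cronbach's alpha calculation of
# the kind that could underlie the reported internal consistency coefficients
# for the case, exercise, and discussion ratings.
import numpy as np

def cronbach_alpha(item_matrix) -> float:
    """item_matrix: respondents x items array of ratings."""
    item_matrix = np.asarray(item_matrix, dtype=float)
    k = item_matrix.shape[1]
    item_variances = item_matrix.var(axis=0, ddof=1)   # per-item sample variances
    total_variance = item_matrix.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(1)
# Placeholder 7-point effectiveness ratings: 59 participants x 4 exercise items.
ratings = rng.integers(3, 8, size=(59, 4))
print(round(cronbach_alpha(ratings), 2))
```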

Analyses

Analyses of the data gathered in the course of this study were conducted using a standard pre-post study design. In addition to the subjective reaction measures of course content and delivery, the primary analyses examined changes in ethical decision-making on the data management, study conduct, professional practices, and business practices scales. Additionally, a second comparison contrasted pre-post performance on the 7 metacognitive reasoning strategies (e.g., recognition of circumstances). These analyses examined pre-post differences both immediately after training and 6 months later. All analyses were conducted using a one-tailed test under the assumption that training in ethical decision-making would result in significant performance gains.
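A minimal sketch of this pre-post comparison, assuming hypothetical scores, is given below. The one-tailed paired t-test follows the stated analysis plan, whereas the particular Cohen's d formula shown is one common choice for pre-post designs and is not specified in the text; the alternative argument requires SciPy 1.6 or later.

```python
# Illustrative sketch (hypothetical data): a one-tailed paired t-test on
# pre-post dimension scores, with a standardized effect size.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
pre = rng.normal(2.1, 0.3, size=59)           # placeholder pre-test dimension scores
post = pre + rng.normal(0.2, 0.25, size=59)   # placeholder post-test scores

# One-tailed paired t-test (alternative hypothesis: post > pre).
t_stat, p_value = stats.ttest_rel(post, pre, alternative="greater")

# One common standardized effect size: mean gain divided by the pre-test SD.
cohens_d = (post.mean() - pre.mean()) / pre.std(ddof=1)

print(f"t({len(pre) - 1}) = {t_stat:.2f}, p = {p_value:.3f}, d = {cohens_d:.2f}")
```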

Results

Reactions

Participants generally viewed the cases, exercises, and discussions provided in training as effective. On a 7-point scale, the mean rating of the effectiveness of the cases presented was M = 5.13 (SD = 1.35). The mean rating for the effectiveness of the exercises was M = 5.19 (SD = 1.20), and the mean rating for the effectiveness of discussion was M = 5.53 (SD = 1.26). Thus, it appears that participants viewed the key elements of instruction as effective. Accordingly, the overall effectiveness rating of the course was also high, M = 5.38 (SD = .94).

Decision-Making and Strategies

Table 3 presents the pre-post differences observed on the ethical decision-making measures as a result of training. On data management, a significant gain (t (57) = 3.56, p < .001) was observed in contrasting pre and post scores on the ethical decision-making measure. Similarly, a significant gain (t (58) = 7.38, p < .001) was observed in contrasting pre-test scores with post-test scores on decisions involving study conduct. Professional practices (t (58) = 3.48, p < .001) and business practices (t (58) = 2.51, p < .02) also produced significant gains. Thus, this sensemaking training resulted in significant gains in ethical decision-making by young scientists.

Table 3

Changes in decision-making as a result of Training

Pre-test and post-test scores, with paired t-tests and Cohen's d effect sizes:

1) Data Management: Pre-Test M = 2.10, SD = .36; Post-Test M = 2.32, SD = .29; t(57) = 3.56, p = .001; d = 0.66
2) Study Conduct: Pre-Test M = 1.94, SD = .28; Post-Test M = 2.37, SD = .30; t(58) = 7.38, p = .000; d = 1.46
3) Professional Practices: Pre-Test M = 2.16, SD = .24; Post-Test M = 2.29, SD = .18; t(58) = 3.48, p = .001; d = 0.61
4) Business Practices: Pre-Test M = 2.09, SD = .33; Post-Test M = 2.24, SD = .28; t(58) = 2.51, p = .015; d = 0.49

Note. N = 58 for data management; N = 59 for all other dimensions.

One reason this sensemaking training resulted in gains in ethical decision-making is that it appears to have led to the application of more effective strategies in working through ethical problems. Table 4 presents the pre-post differences observed with respect to the seven metacognitive reasoning strategies. Recognition of circumstances showed significant gains (t (58) = 7.98, p < .001) across all four types of ethical decision-making problems. Significant gains were also obtained in seeking help (t (58) = 4.19, p < .001), questioning judgment (t (58) = 8.41, p < .001), and dealing with emotions (t (58) = 6.96, p < .001). This pattern of effects was maintained in examining the three remaining strategy dimensions. Thus, significant effects were obtained for anticipating consequences (t (58) = 5.04, p < .001), analysis of personal motivations (t (58) = 8.76, p < .001), and considering the effects of actions on others (t (58) = 2.91, p < .01).

Table 4

Changes in Metacognitive Reasoning Strategy scores as a result of Training

Pre-test and post-test scores, with paired t-tests and Cohen's d effect sizes:

1) Recognizing your circumstances: Pre-Test M = 3.38, SD = .52; Post-Test M = 3.97, SD = .41; t(58) = 7.98, p = .000; d = 1.24
2) Seeking help: Pre-Test M = .94, SD = .44; Post-Test M = 1.28, SD = .37; t(58) = 4.19, p = .000; d = 0.84
3) Questioning judgment: Pre-Test M = 2.84, SD = .47; Post-Test M = 3.42, SD = .45; t(58) = 8.41, p = .000; d = 1.27
4) Dealing with emotions: Pre-Test M = 2.93, SD = .57; Post-Test M = 3.40, SD = .46; t(58) = 6.96, p = .000; d = 0.90
5) Anticipating consequences: Pre-Test M = 3.30, SD = .56; Post-Test M = 3.70, SD = .46; t(58) = 5.04, p = .000; d = 0.77
6) Analyzing personal motivations: Pre-Test M = 2.63, SD = .49; Post-Test M = 3.24, SD = .40; t(58) = 8.76, p = .000; d = 1.36
7) Considering the effects of actions on others: Pre-Test M = 3.18, SD = .55; Post-Test M = 3.40, SD = .40; t(58) = 2.91, p = .005; d = 0.45

Note. N = 59.

Follow-up

Taken as a whole, the findings described above indicate that the sensemaking training was effective in enhancing both ethical decision-making and the application of metacognitive reasoning strategies highlighted in training. The question that arises at this juncture is whether the effects of this training were maintained over time. Table 5 presents the results obtained in the comparison of pre test scores with follow-up test scores obtained six months later.

Table 5

Changes in Ethical Decision-Making with six-month follow-up

Pre-test, post-test, and follow-up scores, with pre-to-follow-up paired t-tests and Cohen's d effect sizes:

1) Data Management: Pre-Test M = 2.01, SD = .35; Post-Test M = 2.33, SD = .34; Follow-up M = 2.09, SD = .27; t(17) = .59, p = .561; d = 0.25
2) Study Conduct: Pre-Test M = 1.89, SD = .27; Post-Test M = 2.33, SD = .32; Follow-up M = 2.24, SD = .29; t(17) = 3.49, p = .003; d = 1.36
3) Professional Practices: Pre-Test M = 2.16, SD = .26; Post-Test M = 2.29, SD = .15; Follow-up M = 2.19, SD = .15; t(17) = .57, p = .579; d = 0.16
4) Business Practices: Pre-Test M = 2.05, SD = .23; Post-Test M = 2.24, SD = .24; Follow-up M = 2.28, SD = .20; t(17) = 3.09, p = .007; d = 1.14

Note. Follow-up data collected six months after training; N = 18.

As may be seen, significant effects were obtained for study conduct (t (17) = 3.49, p < .01) and business practices (t (17) = 3.09, p < .01) on this long-term follow-up. However, data management and professional practices did not show significant effects from the pre-test to the follow-up. One possibility is that study conduct procedures and business practices tend to be less familiar to junior scientists such as those in this sample, and this unfamiliarity may have facilitated long-term retention, whereas gains in data management and professional practices may have been taken for granted over the longer term.

Consistent with these findings bearing on the long-term maintenance of gains in ethical decision-making, significant pre-to-follow-up differences were observed on the relevant metacognitive reasoning strategy dimensions. The results obtained in this analysis are summarized in Table 6. As may be seen, significant effects were obtained for recognition of circumstances (t (17) = 3.23, p < .01), questioning judgment (t (17) = 3.15, p < .01), dealing with emotions (t (17) = 2.71, p < .02), anticipating consequences (t (17) = 3.63, p < .01), analysis of personal motivations (t (17) = 4.28, p < .001), and consideration of the effects of actions on others (t (17) = 2.88, p < .01). Only seeking help did not maintain its effects from pre-test to follow-up. Thus, with this one exception, it appears that gains in metacognitive reasoning strategies were maintained over time.

Table 6

Changes in Metacognitive Reasoning Strategies with six month follow-up

Pre-test, post-test, and follow-up scores, with pre-to-follow-up paired t-tests and Cohen's d effect sizes:

1) Recognizing your circumstances: Pre-Test M = 3.29, SD = .57; Post-Test M = 3.90, SD = .43; Follow-up M = 3.80, SD = .42; t(17) = 3.23, p = .005; d = 1.02
2) Seeking help: Pre-Test M = .89, SD = .37; Post-Test M = 1.27, SD = .28; Follow-up M = .94, SD = .20; t(17) = .46, p = .653; d = 0.16
3) Questioning judgment: Pre-Test M = 2.71, SD = .45; Post-Test M = 3.33, SD = .51; Follow-up M = 3.15, SD = .38; t(17) = 3.15, p = .006; d = 1.04
4) Dealing with emotions: Pre-Test M = 2.80, SD = .48; Post-Test M = 3.37, SD = .50; Follow-up M = 3.16, SD = .37; t(17) = 2.71, p = .015; d = 0.83
5) Anticipating consequences: Pre-Test M = 3.19, SD = .60; Post-Test M = 3.66, SD = .45; Follow-up M = 3.75, SD = .41; t(17) = 3.63, p = .002; d = 1.08
6) Analyzing personal motivations: Pre-Test M = 2.50, SD = .48; Post-Test M = 3.16, SD = .43; Follow-up M = 3.09, SD = .35; t(17) = 4.28, p = .001; d = 1.41
7) Considering the effects of actions on others: Pre-Test M = 3.10, SD = .60; Post-Test M = 3.33, SD = .39; Follow-up M = 3.52, SD = .47; t(17) = 2.88, p = .010; d = 0.77

Note. Follow-up data collected six months after training; N = 18.

Discussion

Before turning to the broader implications of these findings, certain limitations of the present study should be noted. To begin, the effects of sensemaking training on ethical decision-making were observed only in two fields among doctoral students working at a single university. As a result, the question arises as to whether similar findings would necessarily be obtained at other universities, and whether similar findings would be obtained among more senior professionals. Moreover, although the present study has provided evidence indicating that sensemaking training may prove effective in enhancing ethical decision-making in the biological and social sciences, the question remains as to whether similar effects would be obtained in other fields. Similarly, it should be noted that the obtained effects pertained to a predominantly non-minority population and to a setting in which instructors had been adequately prepared to deliver the course material.

Along related lines, participants in this training were all volunteers. Thus, the effects of this training might be contingent on the intrinsic motivation of participants. At one level, the similarities of this sample to the broader, normative sample that did not receive the training argue against this proposition. Nonetheless, it is possible that other unmeasured characteristics of those who volunteered to participate led them to be especially receptive to training. In fact, a similar concern applies to the follow-up study, as only a sub-sample of those who agreed to participate in training agreed to take part in the follow-up effort. Clearly, it is possible that those who agreed to participate in the follow-up were more inherently invested in ethics and ethics training. Although the limited differences observed between members of the follow-up sample and the initial training sample tend to discount this argument, the possibility remains that self-selection into the follow-up sample might, in part, account for the results obtained in the present study.

In addition, the effectiveness of training might be influenced by a number of individual differences variables - variables not examined in the present study. For example, it is possible that this strategy-based training may, or may not, prove effective for people who have greater professional expertise (Ericsson & Charness, 1994). Along similar lines, it is open to question how effective this training would be for individuals who evidence higher levels of cynicism and narcissism. It should be noted, however, that pilot analyses contrasting the effects of training in more extreme populations, highly ethical versus less ethical, did not reveal noteworthy differences.

Still another limitation arises from the application of a pre-post design in evaluation of this training. It is of course possible, although unlikely given the low correlation observed between the ethical decision-making measure and social desirability, that social pressure induced by training influenced post-test gains. More centrally, it is possible that training per se may have “primed” certain responses on the ethical decision-making measure. Although these priming effects might arise from any training program, it should be recognized that the training content was not focused on decisions but rather on the strategies applied in working through problems to arrive at ethical decisions. This procedure, of course, to some extent minimizes such priming effects. Moreover, the maintenance of training effects over a six-month interval also tends to argue against a simple priming explanation for the effects obtained in the present study.

Finally, it should be recognized that the results obtained in the present study speak most directly to changes in ethical decision-making on one measure - specifically the measure of ethical decision-making developed by Mumford et al. (2006). Thus, whether similar effects of sensemaking training would be observed on other measures is not known. More generally, in the present study, we focused on ethical decision-making as the key marker of integrity (O'Fallon & Butterfield, 2005). Although use of ethical decision-making provides an appropriate measure for appraising the effects of training (e.g., Bebeau & Thoma, 1994), the question remains as to how effective sensemaking training is in enhancing other variables that might influence people's ethical behavior, including knowledge of ethical guidelines (National Institute of Medicine, 2002) or sensitivity to ethical issues (Roberts, Warner, Hammond, Brody, Kaminsky, & Roberts, 2005).

Even bearing these limitations in mind, we believe that the results obtained in the present study have some noteworthy implications. To begin, in the present study two critical conclusions emerged. First, sensemaking training led to sizable gains in ethical decision-making. Second, the effects of this training largely held over time. These findings, of course, point to the value of sensemaking training in enhancing ethical decision-making.

In addition, these findings raise a broader question. Exactly why does sensemaking training work in enhancing ethical decision-making? In fact, this question becomes even more important when one recognizes that the effects of this training were so large and that they appeared across decisions involving diverse dimensions of data management, study conduct, professional practices, and business practices.

Sensemaking training might be beneficial in part because it focuses on developing three critical attributes that scientists must possess to address ethical issues. First, rules and guidelines are provided, but their application is contextualized to research issues. This contextualization may promote application of this knowledge by allowing people to apply rules and guidelines in a more flexible fashion, consistent with the complex demands of a particular problem (Baer, 2003).

Second, integral to the sensemaking model presented here is the assumption that ethical issues reflect problems in which people construct solutions “online.” The construction of online solutions, however, is often rooted in experience with or knowledge of relevant past cases (Hammond, 1990; Kolodner, 1997). In other words, available cases provide models that people might use to frame the problem and construct a viable mental model for understanding the problem situation at hand. The use of viable cases in formulating mental models of the ethical problem should, in turn, lead to better decisions, especially when application of these cases has been socially reinforced through cooperative learning techniques (Slavin, 1991). This training approach not only provides individuals with relevant case knowledge, but also the opportunity to work through case examples and decision scenarios cooperatively.

Third, the training approach underlying this sensemaking training was based on an assumption about the nature of real ethical problems confronting scientists. More specifically, it was assumed that these problems should not be viewed in simplistic, black and white terms - although black and white ethics do exist. The sensemaking model underlying the present training approach was devised to sensitize scientists to the ambiguities associated with common ethical problems and provide them with strategies (e.g., questioning one's own and others' judgment, dealing with emotions) that should help them to work through these problems more effectively.

In fact, the results obtained in the analyses of strategy application indicated that sensemaking training led to a preference for decision options involving the application of these strategies. Moreover, the follow-up study indicated that preference for options associated with application of these strategies was maintained over time. This finding is not unique to studies examining ethical problems. In fact, application of effective strategies has been found to contribute to performance on many complex problems (Scott, Leritz, & Mumford, 2004a, 2004b; Scott, Lonergan, & Mumford, 2005). However, the findings obtained in the present study indicate that training in strategy application may represent a critical component of both sensemaking training and subsequent ethical decision-making.

These observations about the importance of providing people with strategies for working through ethical problems are noteworthy for both substantive and practical reasons. Practically, many ethics training courses used in the sciences seek simply to provide knowledge of basic guidelines and ethics rules. However valuable and necessary such basic knowledge may be, it may prove of limited value when people are presented with complex, ambiguous, real-world ethical problems. General principles do not always provide effective guidance for working through the complexities of concrete ethical dilemmas, and even knowing what to do does not always translate into actually taking the right course of action. When people are also given practical strategies for working through ethical problems, however, sound ethical decision-making becomes more likely. As a result, it seems reasonable to conclude that ethics training might benefit, and benefit substantially, from providing students with strategies for working through ethical problems.

We would not argue, of course, that the seven metacognitive reasoning strategies trained in the present effort represent the only strategies that might be trained or prove of value in ethical decision-making. For example, it might also be useful for people to contrast typical and atypical case models, or alternatively, to analyze assumptions being made by key actors. However, it is clear that ethics training in the sciences might benefit from a systematic analysis of strategies contributing to ethical decision-making and attempts to encourage application of these strategies through sensemaking-based training interventions.

Substantively, our findings point to an important, potentially critical, new direction for research in ethical decision-making in the sciences and for ethics training more broadly. The present results suggest that mental models, cases, and metacognitive reasoning strategies may represent critical underlying mechanisms shaping ethical decision-making. Future research should attempt to delineate more explicitly how each of these elements shape ethical decision-making. By demonstrating the impact of sensemaking training on ethical decision-making, we hope that the present research will provide an impetus for future research along these lines.

Acknowledgements

We would like to thank Amelia Adams, Jane Bowerman, Jennifer Carmichael, Gina Scott Ligon, Blaine Gaddis, and Whitney Helton-Fauth for their contributions to the present effort. This research was sponsored by a grant from the Office of Research Integrity and the National Institutes of Health (5R01-NS049535-02), Michael D. Mumford, Principal Investigator.

Appendix

Example Ethical Decision-Making Question from the Social Sciences

Moss is a researcher in the laboratory of Dr. Abrams, a well-known researcher in the field of economics. Moss is trying to develop a model to predict the performance of stocks in the technology sector, but she is having difficulty analyzing and selecting trends to include in the model. She enlists the help of Reynolds, another experienced researcher working on a similar topic. With Reynolds's help, Moss eventually analyzes and identifies some key trends, working them into a testable model. She also discusses some of her other research ideas with Reynolds. Two weeks later, Moss comes across a grant proposal developed by Reynolds and Abrams. She sees that it includes ideas very similar to those she discussed with Reynolds. She takes the matter to Abrams, who declines to get involved, saying that the two researchers should work it out on their own.

  1. Reynolds admits to Abrams that he used slightly modified versions of Moss's ideas. Abrams is upset with this, but Reynolds is a key person on the proposal team and the grant application deadline is soon. What should Abrams do? Choose two of the following:

    1. Fire Reynolds from the lab on the grounds of academic misconduct

    2. Leave Reynolds as first author on the proposal since he wrote up the ideas

    3. Remove Reynolds from the proposal team, and offer Moss the position if she allows her ideas to be used

    4. Ask Moss to join the grant team, placing her as third author on the proposal if she allows her ideas to be used

    5. Acknowledge Moss in the grant proposal because the ideas were hers originally

    6. Apologize to Moss and indicate that the proposal must go out as is to meet the deadline

    7. Remove Moss's ideas from the proposal and try to rework it before the deadline

  2. Moss is upset about Reynolds using her ideas and decides to do something about it. Given that Moss works very closely with Reynolds and their boss Abrams, evaluate the likely success of the following plans of action Moss can take. Choose two of the following:

    1. Moss asks Reynolds to give her credit by putting her name on the grant proposal as well

    2. Moss asks Reynolds about the incident and tape records his reaction to later show Abrams

    3. Moss searches for annotated notes about her ideas that are dated prior to her conversation with Reynolds

    4. Moss appeals for a “mock trial” in which Reynolds would testify under oath to his superiors that the information was his

    5. Moss tries to expose Reynolds's lack of understanding of the concepts he claims were his own by questioning him in front of other students

    6. Moss attempts to sway other researchers to support her case with Abrams

    7. Moss visits Reynolds's office in hopes of finding evidence that she contributed to the proposal

    8. Moss asks Reynolds to write an account of their conversation on the day in question, then shows him her own account for comparison as evidence that he is using her ideas

References

  • Al-Jalahma M, Fakhroo E. Teaching medical ethics: Implementation and evaluation of a new course during residency training in Bahrain. Education for Health: Change in Learning & Practice. 2004;17:62–72.
  • Aronson E, Patnoe S. The jigsaw classroom: Building cooperation in the classroom. 2nd ed. Longman; New York: 1997.
  • Baer J. Evaluative thinking, creativity, and task specificity: Separating wheat from chaff is not the same as finding needles in haystacks. In: Runco MA, editor. Critical creative processes. Hampton; Cresskill, NJ: 2003. pp. 129–152.
  • Bebeau MJ, Thoma SJ. The impact of a dental ethics curriculum on moral reasoning. Journal of Dental Education. 1994;58:684–692.
  • Bechtel HK, Pearson W. Deviant scientists and scientific deviance. Deviant Behavior. 1985;6:237–252.
  • Chen FW. A study of the adjustment of ethical recognition and ethical decision-making of managers-to-be across the Taiwan Strait before and after receiving a business ethics education. Journal of Business Ethics. 2003;45:291–307.
  • Clarkeburn H, Downie JR, Matthew B. Impact of an ethics programme in a life sciences curriculum. Teaching in Higher Education. 2002;7:65–79.
  • Clapham MM. Ideational skills training: A key element in creativity training programs. Creativity Research Journal. 1997;10:33–44.
  • Coughlin SS, Katz WH, Mattison DR. Ethics instruction at schools of public health in the United States. American Journal of Public Health. 1999;89:768–770.
  • Dalton R. NIH cash tied to compulsory training in good behaviour. Nature. 2000;408:629.
  • Deutch CE. A course in research ethics for graduate students. College Teaching. 1996;44:56–60.
  • Dörner D, Schaub H. Errors in planning and decision-making and the nature of human information processing. Applied Psychology: An International Review. 1994;43:433–453.
  • Drazin R, Glynn MA, Kazanjian RK. Multilevel theorizing about creativity in organizations: A sensemaking perspective. Academy of Management Review. 1999;24:286–329.
  • De Las Fuentes C, Willmuth ME, Yarrow C. Competency training in ethics education and practice. Professional Psychology: Research & Practice. 2005;36:362–366.
  • Ericsson KA, Charness N. Expert performance: Its structure and acquisition. American Psychologist. 1994;49:725–747.
  • Gawthrop JC, Uhlemann MR. Effects of the problem-solving approach in ethics training. Professional Psychology: Research & Practice. 1992;23:38–42.
  • Goldstein IL, Ford JK. Training in organizations. Wadsworth; Belmont, CA: 2002.
  • Haidt J. The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review. 2001;108:814–834.
  • Haidt J. The emotional dog does learn new tricks: A reply to Pizarro and Bloom (2003). Psychological Review. 2003;110:197–198.
  • Hammond KJ. Case-based planning: A framework for planning from experience. Cognitive Science. 1990;14:385–443.
  • Helton-Fauth W, Gaddis B, Scott G, Mumford M, Devenport L, Connelly S, Brown R. A new approach to assessing ethical conduct in scientific work. Accountability in Research: Policies and Quality Assurance. 2003;10:205–228.
  • Hmelo-Silver CE, Pfeffer MG. Comparing expert and novice understanding of a complex system from the perspective of structures, behaviors and functions. Cognitive Science. 2004;28:127–138.
  • Hogarth RM, Makridakis S. Forecasting and planning: An evaluation. Management Science. 1981;27:115–138.
  • Johnson-Laird PN. Mental models: Toward a cognitive science of language, inference, and consciousness. Harvard University Press; Cambridge, MA: 1983.
  • Kalichman MW, Friedman PJ. A pilot study of biomedical trainees' perceptions concerning research ethics. Academic Medicine. 1992;67:769–775.
  • Kirkpatrick DL. Techniques for evaluating training programs. Journal of the American Society of Training Directors. 1959;13:3–26.
  • Key S. Organizational ethical culture: Real or imagined? Journal of Business Ethics. 1999;20:217–225.
  • Kochan CA, Budd JM. The persistence of fraud in the literature. Journal of the American Society for Information Science. 1992;43:488–493.
  • Kohlberg L. Essays on moral development: Vol. 2. The psychology of moral development: The nature and validity of moral stages. Harper & Row; San Francisco: 1984.
  • Kolodner JL. Educational implications of analogy. American Psychologist. 1997;52:57–67.
  • Kraiger K, Ford JK, Salas E. Application of cognitive, skill-based, and affective theories of learning outcomes to new methods of training evaluation. Journal of Applied Psychology. 1993;78:311–328.
  • Loe TW, Ferrell L, Mansfield P. A review of empirical studies assessing ethical decision making in business. Journal of Business Ethics. 2000;25:185–204.
  • Kimmelman J. Valuing risk: The ethical review of clinical trial safety. Kennedy Institute of Ethics Journal. 2004;14:369–393.
  • Marshall E. Fraud strikes top genome lab. Science. 1996;274:908–910.
  • Martinson BC, Anderson MS, de Vries R. Scientists behaving badly. Nature. 2005;435:737–738.
  • Macrina FL, Funk CL, Barrett K. Effectiveness of responsible conduct of research instruction: Initial findings. Journal of Research Administration. 2004;35:6–13.
  • Messick S. Validity. In: Linn RL, editor. Educational measurement. 3rd ed. Macmillan Publishing; New York, NY: 1989.
  • Milgram S. Obedience [Film]. New York University Film Library; New York: 1965.
  • Motowidlo SJ, Dunnette MD, Carter GW. An alternative selection measure: The low-fidelity simulation. Journal of Applied Psychology. 1990;75:640–647.
  • Mumford MD. Pathways to outstanding leadership: A comparative analysis of charismatic, ideological, and pragmatic leaders. Erlbaum; Mahwah, NJ: 2006.
  • Mumford MD, Devenport LD, Brown RP, Connelly MS, Murphy ST, Hill JH, Antes AL. Validation of ethical decision-making measures: Internal and external validity. Ethics & Behavior. In press.
  • Mumford MD, Reiter-Palmon R, Redmond MR. Problem construction and cognition: Applying problem representations in ill-defined domains. In: Runco MA, editor. Problem finding, problem solving, and creativity. Ablex; Westport, CT: 1994.
  • National Institute of Medicine. Integrity in scientific research: Creating an environment that promotes responsible conduct. National Research Council; Washington, DC: 2002.
  • Nature. Editorial: Ethics and fraud. Nature. 2006;439:117–118.
  • Mumford MD, Gustafson SB. Creativity syndrome: Integration, application, and innovation. Psychological Bulletin. 1988;103:27–43.
  • O'Fallon MJ, Butterfield KD. A review of the empirical ethical decision-making literature: 1996–2003. Journal of Business Ethics. 2005;59:375–413.
  • Önkal D, Yates JF, Simga-Mugan C, Öztin Ş. Professional vs. amateur judgment accuracy: The case of foreign exchange rates. Organizational Behavior & Human Decision Processes. 2003;91:169–186.
  • Oyserman D, Markus HR. Possible selves and delinquency. Journal of Personality and Social Psychology. 1990;59:112–125.
  • Patalano AL, Seifert CM. Opportunistic planning: Being reminded of pending goals. Cognitive Psychology. 1997;34:1–36.
  • Rest JR. An overview of the psychology of morality. In: Rest JR, editor. Moral development and behavior: Theory, research, and social issues. Praeger; New York: 1986. pp. 133–175.
  • Roberts LW, Warner TD, Hammond KAG, Brody JL, Kaminsky A, Roberts BB. Teaching medical students to discern ethical problems in human clinical research studies. Academic Medicine. 2005;80:925–930.
  • Ryden MB, Duckett L. Ethics education for baccalaureate nursing: Technical report for the Improvement of Postsecondary Education grant. U.S. Department of Education; Washington, DC: 1991.
  • Scott GM, Leritz LE, Mumford MD. The effectiveness of creativity training: A quantitative review. Creativity Research Journal. 2004a;16:361–388.
  • Scott GM, Leritz LE, Mumford MD. Types of creativity training: Approaches and their effectiveness. The Journal of Creative Behavior. 2004b;38:149–179.
  • Scott GM, Lonergan DC, Mumford MD. Conceptual combination: Alternative knowledge structures, alternative heuristics. Creativity Research Journal. 2005;17:21–36.
  • Sims RR. Evaluating public sector training programs. Public Personnel Management. 1993;22:591–616.
  • Slavin RE. Synthesis of research on cooperative learning. Educational Leadership. 1991;48:71–81.
  • Steneck N. ORI introduction to the responsible conduct of research. Department of Health and Human Services, Office of Research Integrity; Washington, DC: 2004.
  • Strange JM, Mumford MD. The origins of vision: Effects of reflection, models, and analysis. Leadership Quarterly. 2005;16:121–148.
  • Tversky A, Kahneman D. Judgment under uncertainty: Heuristics and biases. Science. 1974;185:1124–1131.
  • Walsh JP. Doing a deal: Merger and acquisition negotiations and their impact upon target company top management turnover. Strategic Management Journal. 1989;10:307–322.
  • Weick KE. Sensemaking in organizations. Sage; Thousand Oaks, CA: 1995.
  • Wright SM, Carrese JA. Ethical issues in the managed care setting: A new curriculum for primary care physicians. Medical Teacher. 2001;23:71–75.
