Assessing Graduate Student Progress in Engineering Ethics

Science and Engineering Ethics

Abstract

Under a grant from the National Science Foundation, the authors (and others) undertook to integrate ethics into graduate engineering classes at three universities, and to assess success in a way that allows comparison across classes (and institutions). This paper describes the attempt to carry out that assessment. Standard methods of assessment turned out to demand too much class time. Under pressure from instructors, the authors developed an alternative method that is specific in content to individual classes yet allows comparison across classes. Results for ethical sensitivity and knowledge are statistically significant, showing measurable improvement in a single semester.


Notes

  1. The philosopher-author of this article wrote this sentence without the “in principle”; the engineer-author added it to warn other engineers that, in practice, it can be hard—at least for engineers without experience of this sort of grading. As with any new form of grading, there is a “learning curve”.

  2. The literature on in-course assessment of ethics learning is not large but is growing quickly. Besides other work cited here, see, for example: Bebeau (2002a, b, 2005), Mumford et al. (2006), and Kligyte et al. (2008).

  3. Outside the formal curriculum, there are several other options available, such as handing out the NSPE Code of Ethics as part of orientation materials, integrating ethics discussions into orientation sessions, offering special departmental colloquia or seminars on issues in engineering ethics, and hosting voluntary events, such as talks by outside speakers on ethics.

  4. A test of ethical development in engineering has recently become available (Borenstein et al. 2010). If, as we believe, ethical judgment improves with ethical development, the title of their article is not misleading.

  5. Right now, this is our complete list:

    • Accessibility (designing with disabilities in mind)

    • Animal subjects research

    • Authorship and credit (co-authorship, faculty and students)

    • Publication (presentation: when, what, and how?)

    • National security, engineering research, and secrecy

    • Collaborative research

    • Computational research (problems specific to use of computers)

    • Conflicts of interest

    • Cultural differences (between disciplines as well as between countries)

    • Data management (access to data, data storage, and security)

    • Confidentiality (personal information and technical data)

    • Human subjects research in engineering fields

    • Peer review

    • Research misconduct (fabrication, falsification, and incomplete disclosure of data)

    • Obtaining research, employment, or contracts (credentials, promises, state of work, etc.)

    • Responsibilities of mentors and trainees

    • Treating colleagues fairly (responding to discrimination)

    • Responsibility for products (testing, field data, etc.)

    • Whistle blowing (and less drastic responses to wrongdoing).

  6. The self-assessment would have taken only 15 min, administered once at the end of the semester, but was dropped while we were trying to find room for the new tests.

  7. The Danes seem to have an especially nice expression to make this point: “Which is higher, the Round Tower or the volume of a thunderclap?”.

  8. There is also an ethical issue here, though probably one that is merely academic. Both methods of displaying the ratio as a single number are, in principle, misleading. We would make large improvements seem small if we put the pre-test score on top, but make a large improvement seem more dramatic than it is in fact if we put the pre-test score on the bottom. The issue is merely academic insofar as the observed change is probably never going to be large enough to give a false impression, however we choose to define the ratio. But note: a reviewer for this journal referred us to Hake (1998), which uses a more complicated method to get a number, one that seemingly escapes both the risk of a zero denominator and our ethical dilemma: (Posttest Score − Pretest Score)/(Maximum Possible Score − Pretest Score).
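
For readers who want the reviewer-suggested number in executable form, here is a minimal sketch in Python. The function name, the guard for a perfect pre-test, and the example scores are our own illustrations, not anything from the study itself.

```python
def normalized_gain(pre, post, max_score):
    """Hake's (1998) normalized gain: the fraction of the possible
    improvement actually achieved between pre-test and post-test."""
    room_to_improve = max_score - pre
    if room_to_improve == 0:
        # A perfect pre-test leaves no room for measurable gain;
        # the ratio is undefined in that (unlikely) case.
        return None
    return (post - pre) / room_to_improve

# Example: a student scoring 6/10 before and 8/10 after has achieved
# half of the improvement that was available.
print(normalized_gain(pre=6, post=8, max_score=10))  # 0.5
```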

  9. This observation is, of course, possible because the class is small (and a lab rather than a lecture): Feinerman was able to learn a good deal about each student in the course of the semester. A more formal study would gather this sort of information in advance. This observation is not meant to prove anything, merely to suggest an explanation worth further investigation.

  10. We did not use the paired t test to check for statistical significance at the time of manuscript submission, owing to the minimal statistics training of one of the co-authors (Feinerman) and the virtual absence of statistical training of the other (Davis). In response to one reviewer’s suggestion, we did. Feinerman used PASW 18 to run the paired t test. The resulting difference in means for the undergraduates and graduates was statistically significant in 2009 (with p = .004 and .053 respectively). Looking at the difference in means in 2010, the control year, the difference was not statistically significant, with p = .722 and .710 for undergraduates and graduates respectively. If the sum of the initial (T1) and final (T4) test responses is analyzed in 2008, the results are not statistically significant, but the difference is greater than in the control year, with p = .162 and .296 for undergraduates and graduates respectively. The paired t test confirms that micro-insertion does teach the students ethics.
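
The analysis described in note 10 can be reproduced in outline with any modern statistics package. The sketch below uses SciPy’s paired t test in place of PASW 18; the score arrays are invented placeholders, not the study’s data.

```python
# A minimal sketch of the paired t test described in note 10,
# using SciPy rather than PASW 18.
from scipy.stats import ttest_rel

pre  = [12, 15, 9, 14, 11, 13, 10, 16]   # hypothetical pre-test scores
post = [14, 18, 11, 15, 14, 15, 12, 17]  # same students, post-test

# ttest_rel pairs each student's two scores and tests whether the
# mean of the paired differences is zero.
t_stat, p_value = ttest_rel(post, pre)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```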

References

  • ABET, Inc. (2009). Criteria for accrediting engineering programs, 2010–2011. http://www.abet.org/Linked%20Documents-UPDATE/Criteria%20and%20PP/E001%2010-11%20EAC%20Criteria%201-27-10.pdf.

  • Bebeau, M. J. (2002a). The defining issues test and the four component model: Contributions to professional education. Journal of Moral Education, 31(3), 271–295.

  • Bebeau, M. J. (2002b). Outcome measures for assessing integrity in the research environment (Appendix B). In Integrity in scientific research: Creating an environment that promotes responsible conduct. Washington, DC: National Academy Press. (Available on the NAP website: http://www.nap.edu/books/0309084792/html).

  • Bebeau, M. J. (2005). Evidence-based ethics education. Summons, The Journal for Medical and Dental Defence Union of Scotland (Summer), 13–15.

  • Bebeau, M. J., & Thoma, S. J. (1999). Intermediate concepts and the connection to moral education. Educational Psychology Review, 11, 343–360.

  • Borenstein, J., Drake, M. J., Kirkman, R., & Swann, J. (2010). The engineering and science issues test (ESIT): A discipline-specific approach to assessing moral judgment. Science and Engineering Ethics, 16, 387–407.

  • Davis, M. (2006). Integrating ethics into technical courses: Micro-insertion. Science and Engineering Ethics, 12, 717–730, esp. 726–727.

  • Davis, M., & Riley, K. (2008). Ethics across the graduate engineering curriculum. Teaching Ethics, 8(Fall), 25–42.

  • Hake, R. R. (1998). Interactive-engagement versus traditional methods: A six-thousand-student survey of mechanics test data for introductory physics courses. American Journal of Physics, 66(1), 64–74.

  • Institute of Medicine (IOM). (2002). Integrity in scientific research. Washington, DC: National Academies Press.

  • Kligyte, V., Marcy, R. T., Sevier, S. T., Godfrey, E. S., & Mumford, M. D. (2008). A qualitative approach to responsible conduct of research (RCR) training development: Identification of metacognitive strategies. Science and Engineering Ethics, 14, 3–31.

  • Loui, M. C. (2006). Assessment of an engineering ethics video: Incident at Morales. Journal of Engineering Education, 95, 85–91.

  • Mumford, M. D., Devenport, L. D., Brown, R. P., Connelly, S., Murphy, S. T., et al. (2006). Validation of ethical decision making measures: Evidence for a new set of measures. Ethics and Behavior, 16, 319–345.

  • Riley, K., Davis, M., Jackson, A. C., & Maciukenas, J. (2009). ‘Ethics in the details’: Communicating engineering ethics via micro-insertion. IEEE Transactions on Professional Communication, 52, 95–108.

  • Sindelar, M., Shuman, L., Besterfield-Sacre, M., Miller, R., Mitcham, C., et al. (2003). Assessing engineering students’ abilities to resolve ethical dilemmas. In Proceedings of the 33rd annual Frontiers in Education conference, vol. 3 (November 5–8), S2A-25–31.

  • Suskie, L. (2004). Assessing student learning: A common sense guide. San Francisco: Jossey-Bass.


Acknowledgments

Work on this paper was funded in part by a grant from the National Science Foundation (EEC-0629416). We should like to thank S&EE’s editor and five reviewers for their extensive comments on earlier versions of this article.

Author information

Correspondence to Michael Davis.

Cite this article

Davis, M., Feinerman, A. Assessing Graduate Student Progress in Engineering Ethics. Sci Eng Ethics 18, 351–367 (2012). https://doi.org/10.1007/s11948-010-9250-2

