Introduction

I would like to start this article by admitting that my formal education is neither in philosophy and ethics nor in engineering and science; I specialized in Operations Research (OR) after a general education in business and economics (which are two closely related disciplines in continental Europe, but not in the Anglo-Saxon educational system). However, I have worked with quite a few engineers on a variety of practical problems that—in hindsight—did imply serious ethical questions. Actually, OR is a discipline that is related to engineering, insofar as both disciplines use mathematical models to solve practical problems (e.g., industrial engineering is very close to OR). In fact, OR is a rather fuzzy discipline (see the website of the “Institute for Operations Research and the Management Sciences”: http://www.informs.org/); OR is also called a “bag of tools”, including Linear Programming (LP), Markov models, and simulation. These tools are applied to a large variety of practical problems, starting in World War II with military problems such as defending a convoy of ships against submarine attacks; more examples will follow. OR has struggled with ethical issues, but has not reached a firm code of conduct (as we shall see). Philosophers specializing in ethics seem best qualified to solve ethical issues in engineering and science; unfortunately, most philosophers have no experience with practical engineering problems.

Some Operations Research examples

In order to become more specific, let’s consider an example of an engineering model in which I was involved personally as an OR consultant. This model represents the so-called Waste Isolation Pilot Plant (WIPP), built near the town of Carlsbad in New Mexico (NM), USA; this WIPP stores nuclear waste. The client (patron) of this modeling effort is the Department of Energy (DOE); the Environmental Protection Agency (EPA) sets the regulatory standards. The consultant (modeler) is Sandia National Laboratories in Albuquerque (NM). The other stakeholders become clear when I point out that this nuclear waste might leak away from the WIPP to the surface (this “plant” resembles an underground coal or salt mine, as it includes a “waste shaft” that is dug into the earth). Such leakage may endanger the health of human beings, so ethical issues certainly play a role. More precisely, not only the people now living near Carlsbad are at risk: future generations are at risk too. Therefore the model’s output (performance measure, criterion) is the chance of leakage during the next 10,000 years; this time horizon is stipulated by the EPA, so it seems not open to scientific debate. Note that the local population also enjoys the benefits of new employment and business opportunities!

More specifically, the waste stored in this WIPP includes garments worn by medical personnel while treating cancer patients. So besides the risks and benefits for the people living in Carlsbad and other places now and in the future, there are benefits for these patients. Actually, the simulation model quantifies the chance of nuclear leakage; it does not quantify—let alone balance—the costs and benefits of all the different stakeholders (I shall return to the role of stakeholders). So—unlike some cost/benefit analyses—this study does not try to quantify the value of a human life (which is related to the question about “the worth of a songbird”; Funtowicz and Ravetz 1994).

Mathematically, the WIPP model includes many deterministic nonlinear differential equations plus some stochastic (random or chance) processes. These differential equations simulate the physical and chemical processes, determined by the laws of nature, that govern the possible dispersion of the nuclear waste underground. The stochastic submodel (a so-called Poisson model) simulates human processes; e.g., after (say) 1,000 years, people may have forgotten about the WIPP and may start digging for precious metals in that area.
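To make the stochastic submodel concrete, here is a minimal sketch (in Python) that estimates the probability of at least one human intrusion within the 10,000-year horizon, assuming intrusions follow a homogeneous Poisson process; the intrusion rate and the sample size are hypothetical placeholders, not values taken from the actual WIPP analyses.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical placeholders -- not the values used in the actual WIPP studies.
intrusion_rate = 1.0e-4    # expected number of human intrusions per year
horizon = 10_000           # regulatory time horizon in years (stipulated by the EPA)
n_replications = 100_000   # Monte Carlo sample size

# Under a homogeneous Poisson process, the number of intrusions over the horizon
# is Poisson distributed with mean intrusion_rate * horizon.
intrusions = rng.poisson(lam=intrusion_rate * horizon, size=n_replications)
prob_at_least_one = np.mean(intrusions >= 1)

# Analytical check: P(N >= 1) = 1 - exp(-intrusion_rate * horizon).
print(prob_at_least_one, 1 - np.exp(-intrusion_rate * horizon))
```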

Based on the outcomes of this model, permission was granted to build the WIPP. Many more details can be found in the vast literature on this project (Helton 2009).

Later on, a similar repository was modeled; namely, one for the waste created by the production of atomic bombs, to be built at Yucca Mountain in Nevada. Obviously, the design and production of such bombs raise different ethical questions! Military applications of engineering and OR models will be discussed in a separate section below.

Besides this WIPP model, there are more engineering models in which I have been involved as an OR specialist—and that may raise ethical questions. One case involved the scientific evaluation of a simulation model for the project planning of the storm surge barrier across the Eastern Scheldt estuary in the Dutch province called Zeeland (see http://www.neeltjejans.nl/index.php/en/home). The construction of this novel type of dike involved many challenging engineering questions; e.g., as construction of the barrier progressed, the strength of the water flow through the remaining gap increased, so the question arose: should this barrier be built starting from the South, starting from the North, or starting from both sides simultaneously? Whether any ethical questions were raised at that time, I do not remember; maybe the memory of the nearly 2,000 people who drowned in the flooding of 1953 was too vivid and overwhelming. Nowadays, however, a central question in this water management might be how to balance the values of stakeholders such as fishermen, local inhabitants, tourists, and the ecology in general.

Another “personal” case study with engineering and OR aspects was the search for explosives (mines) on the bottom of the sea, deploying sonar. The goal of this study was the quantitative evaluation of various tactical and operational strategies of the Dutch navy (the client), such as the tilt angle of the sonar and the ship’s course—given the environment (the mine field, the water temperature and salinity, the type of sea bottom, the operator’s behavior). To answer these questions, a simulation model was developed by TNO/FEL, the largest Dutch research organization (see http://www.tno.nl/index.cfm). The primary scientific and practical problem was how to determine the validity of the model (Kleijnen 1995a). Besides this problem, the model—like any military model—raises ethical questions, because the goal of the military is to eliminate the “bad guys”, “terrorists”, or “aggressors” (but remember the definition of “aggressive weapons”: weapons in the hands of the opponent).

A recent urgent worldwide problem—that also involves engineers and has ethical implications—is global warming (the 2009 Copenhagen conference did not solve this problem). The Dutch “National Institute for Public Health and the Environment” (in Dutch: RIVM) developed a simulation model for this problem. As in the WIPP example, the issue is the survival of future generations; that survival requires a sustainable world. RIVM was confronted with the issue of validating its model, and selecting the really important factors among the many potential factors. It turned out that the model had some computer bugs (computer modules were called in the wrong order); furthermore, among the 281 potentially important factors, only 15 were found to be really important, so only they needed monitoring (Bettonvil and Kleijnen 1996; Kleijnen et al. 1992).
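To illustrate the kind of factor screening referred to above, here is a simplified, non-sequential sketch in the spirit of sequential bifurcation (Bettonvil and Kleijnen 1996). It assumes an additive first-order metamodel and factor effects with known (non-negative) signs, so group effects cannot cancel; the simulator `model`, the factor labels, and the importance threshold `delta` are hypothetical placeholders.

```python
def screen(model, factors, delta):
    """Recursively bisect factor groups; return factors whose effect exceeds delta."""
    base = {f: 0 for f in factors}                   # all factors at their low level

    def group_effect(group):
        high = dict(base, **{f: 1 for f in group})   # switch only this group to high
        return model(high) - model(base)

    def bifurcate(group):
        if group_effect(group) <= delta:
            return []                                # whole group declared unimportant
        if len(group) == 1:
            return list(group)                       # an individual important factor
        mid = len(group) // 2
        return bifurcate(group[:mid]) + bifurcate(group[mid:])

    return bifurcate(list(factors))

# Toy usage: 8 factors, of which only x2 and x5 have sizeable effects.
toy_model = lambda x: 10 * x["x2"] + 7 * x["x5"] + 0.01 * sum(x.values())
print(screen(toy_model, [f"x{i}" for i in range(8)], delta=1.0))   # ['x2', 'x5']
```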

A final example involving engineering and OR involves milk robots; i.e., cows are milked by robots instead of farmers (Halachmi et al. 2001). The practical problem was to determine the optimal number of (expensive) milk robots for a given farm. An ethical question may be the effects on animal welfare—which was not considered in the model. Note that the Dutch parliament is the only parliament with a Party for the Animals (PvdD; see www.pvdd.nl/). Furthermore, animal welfare may include the survival of specific species (Funtowicz and Ravetz 1994). The ethical issues surrounding research that uses animals as guinea pigs (e.g., research on drugs) are discussed in the literature (Jones 2007).

Ethical Codes in Various Scientific Disciplines

Engineering ethics are discussed in detail by Wikipedia (http://en.wikipedia.org/wiki/Engineering_ethics). This discussion includes codes of ethics of the American Society of Civil Engineers (ASCE), the National Society of Professional Engineers (NSPE), the American Society of Mechanical Engineers (ASME), the Institute of Electrical and Electronics Engineers (IEEE), and the American Institute of Chemical Engineers (AIChE). Codes of ethics for engineers are further discussed in the literature (Van de Poel et al. 2007).

More information on codes of ethics is provided by the “Online Ethics Center for Engineering and Research” of the National Academy of Engineering (http://www.onlineethics.org/) and the “Center for the Study of Ethics in the Professions” of the Illinois Institute of Technology (http://ethics.iit.edu/). There are also classic textbooks on ethics in science and engineering (Macrina 2005; Trevino and Nelson 2010; Whitbeck 1998), but new textbooks keep appearing (Bazerman and Tenbrunsel 2010; Rivera 2010; Tavani 2010).

In other countries, engineers also have their ethical codes. For example, in The Netherlands, the Royal Institute of Engineers (KIVI NIRIA) has such a code (http://www.kiviniria.net/CM/PAG000002106/Gedragscode.html).

Note that instruction in engineering ethics may indeed change students’ feelings about professional responsibility; e.g., these students may consider more options before making decisions (Hashemian and Loui 2010).

Because I am not an engineer, I will not further discuss the ethical guidelines published by various engineering organizations. Instead, I will reflect on ethical issues in engineering models, from my personal perspective—as a human being and a scientist active in OR.

Besides engineering models, there are other types of models. For example, as an OR specialist I have been involved in a simulation model of the logistics of modern education at a specific level of secondary schooling. Another project was the quantification of the costs and benefits of changing specific social security laws (further discussed below).

OR does not have its own code of ethics, but some OR researchers and practitioners do pay attention to ethical issues (Le Menestrel and Van Wassenhove 2009). An example is provided by the privacy problems in the modern “surveillance society”, which includes the tracking of cellular phones (Cooper et al. 2009); such “tracking and tracing” is a hot topic in logistics research and practice.

Though OR does not have its own code of ethics, closely related disciplines do (Gass 2009); namely, the American Statistical Association or ASA (http://www.amstat.org/index.cfm), the Association for Computing Machinery or ACM (http://www.acm.org/about/code-of-ethics), and the Society for Computer Simulation or SCS (http://www.scs.org/ethics/); the SCS website lists various organizations that have accepted the SCS code (e.g., NATO). The ASA code emphasizes model validation. This emphasis is also found in several technical OR publications (Halachmi et al. 2001; Kleijnen 1995a, b, 1999, 2000; Kleijnen et al. 2001).

Codes of conduct are also found outside engineering, OR, and their related disciplines; e.g., in law, medicine, psychology, the life sciences, and the social sciences (Gustafsson et al. 2005; Jones 2007; Kleijnen 2001; Newberry et al. 2009; White 2009). Current Dutch politics shows much debate about the electronic patient file and its threat to privacy. Since the worldwide economic and financial crisis, there has been much debate on new codes of ethics for financial analysts; also see the MBA Ethics Oath (for which Google gave circa 56,500 hits on February 5, 2010). Codes of ethics in many disciplines are also surveyed on the Internet (http://www.scs.org/ethics/addlInfo.html#Codes).

Models: Assumptions and their Documentation

Obviously, a mathematical model itself has no morals; it is an abstract, mathematical entity that belongs to the immaterial world. However, such a model reflects an existing or planned system in the real world; the goal of this modeling is to solve a problem in that world (in order to improve society as a whole or one of its groups such as a particular company; Jones 2007). That goal may have ethical implications; e.g., a model meant to increase the profits of a heroin dealer has moral aspects. More examples were presented in the second section.

Any model—be it mathematical (computerized) or mental (conceptual)—is based on particular simplifying assumptions; e.g., the mathematical model may assume linear equations with specific parameter values. Consequently, the model’s results (output) are valid if those assumptions hold. This consequence leads to the crucial question: What happens when these assumptions do not hold? Often the answer remains unknown, because the modelers do not investigate this question thoroughly; maybe their clients like the answers that the model gives. Yet, an old saying in computer science is “Garbage In, Garbage Out (GIGO)”! A recent example of the role of assumptions is the public debate on the problems of global warming: are the assumptions of the various climate models realistic? (Actually, this debate has been going on for quite a while: Funtowicz and Ravetz 1994; Van der Sluijs et al. 2005). I claim that the interest in the validation of model assumptions is more articulated in the public domain than in private business with its confidentiality issues; also see the report on a panel discussion on these issues (Banks 2001). The validity of models in any science is also emphasized in a recent report by the Swedish Research Council (Gustafsson et al. 2005, pp. 36–37).

Note that the preceding examples illustrate that simulation models are applied in many scientific disciplines that study dynamic systems, ranging from sociology to astronomy—certainly encompassing the various engineering (sub)disciplines (Karplus 1983). OR is also applied in many of these areas; e.g., in inventory management and queuing systems (telecommunications, traffic).

Models may be used in good or in bad ways, by modelers or clients—and the public may get hurt. These clients and the public may not understand the reasoning that modelers have built into their computer program, because they have not read the instruction manual. This model documentation should explain the model’s underlying reasoning, especially its performance measures (also called criteria, responses, outputs) and its assumptions (concerning specific mathematical functions, their parameter values, initial values of dynamic variables, etc.), together with their validation. An OR example is the explicit list of assumptions in the critical analysis of IBM’s inventory-management package called “IMPACT” (Kleijnen and Rens 1978).

Model documentation is also necessary to enable other researchers (modelers) to reproduce the outcomes of the model. Indeed, reproduction—or its antithesis, falsification—is a basic principle of science (Jones 2007; Walker 2009).

Note that when the validity of a model is tested, auxiliary assumptions are often introduced; e.g., the responses are usually assumed to have Normal (Gaussian) distributions. Actually, most modelers are brainwashed into assuming Normal distributions, so they often forget distribution-free statistical tests and computer-intensive statistical techniques such as bootstrapping. Another problem is that multiple tests increase the probability of falsely rejecting a valid model: the so-called “type I error probability” or modeler’s risk. So the documentation should also cover the assumptions of the statistical techniques used for testing the validity of the model. These statistical issues are further discussed in many publications (Kleijnen 2008).
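As an illustration of a distribution-free alternative, the following sketch applies a simple bootstrap test to compare the mean of a simulated output with the mean of real-world observations, and uses a Bonferroni correction to control the overall type I error (the modeler’s risk) when several outputs are tested jointly; the data and the significance level are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def bootstrap_pvalue(real, simulated, n_boot=10_000):
    """Two-sided bootstrap p-value for the difference between the two means."""
    observed = simulated.mean() - real.mean()
    pooled = np.concatenate([real, simulated])     # resample under H0: equal means
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        r = rng.choice(pooled, size=real.size, replace=True)
        s = rng.choice(pooled, size=simulated.size, replace=True)
        diffs[b] = s.mean() - r.mean()
    return np.mean(np.abs(diffs) >= abs(observed))

# Hypothetical data: two performance measures, real system versus model output.
real_outputs = [rng.normal(10.0, 2.0, 50), rng.normal(3.0, 1.0, 50)]
sim_outputs  = [rng.normal(10.5, 2.0, 50), rng.normal(3.1, 1.0, 50)]

alpha, k = 0.05, len(real_outputs)
for i, (r, s) in enumerate(zip(real_outputs, sim_outputs)):
    p = bootstrap_pvalue(r, s)
    # Bonferroni: compare each p-value against alpha / k to control the overall risk.
    decision = "reject validity" if p < alpha / k else "no evidence against validity"
    print(f"output {i}: p = {p:.3f} -> {decision}")
```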

It is a challenge to develop on-line computerized documentation about the model’s goals, assumptions, and validation. That documentation should be accessible through a “help button”. Many simulation models do provide part of their documentation through animation, which explains—in user terms—the simulated system through a kind of cartoon movie. Animation, however, can be a misleading validation technique, because it uses very short simulated time-periods (Law 2007).

These issues become even more important when the modelers do not know who the users will be, as is the case if there are many stakeholders. A model without documentation is like a (rental) car without an instruction booklet. If the model is used while respecting its documentation, then the users are entitled to a “warranty”; i.e., the modelers should then pay for wrong model conclusions. If, however, the clients use the model outside its validity range, then these clients are to be blamed. While “driving” the model, “red warning lights” should switch on when the users enter inputs into the model that violate its validity range; this validity range is also known as the “experimental frame” (Zeigler et al. 2000). Like a car that is periodically returned to the garage for maintenance, a model may be returned to its builders for updating. For other software it is well-known that maintenance is a crucial and expensive part of the life cycle. Updating is standard in software: new versions keep appearing, repairing “bugs” discovered during usage.

Another analogy is provided by the patient information leaflet that comes with most medicines: these instructions warn against all kinds of undesirable side-effects. Likewise, the documentation of a model should warn against improper usage (Van der Sluijs et al. 2005). And likewise, this documentation should be updated continually.

Norms, Values, and the Model’s Performance Measures

Norms and values are an important political and ethical issue nowadays (Jones 2007). In the context of models, these values concern clients, modelers, and other stakeholders. An example (mentioned above) is the simulation model that computes the financial consequences of changes in certain Dutch social security laws—for both the national government (macro-economic view), which wishes to save money, and the individual employees (micro-economic view), who will not all suffer the same financial consequences (Bosch et al. 1994). A related recent example is the discussion on increasing the age at which employees may retire under the retirement laws of various countries. Possibly conflicting values of one or more stakeholders may be discussed within an OR framework (Wenstøp and Koppang 2009).

Simulation typically gives multiple performance measures, which should quantify the values of all the stakeholders. So simulation modelers assume that these values are indeed quantifiable (a quantitative output may be converted into a qualitative one; e.g., if the quantitative output exceeds a threshold value, then the qualitative output is scored as “unacceptable”, which may correspond to a binary variable with value 0 or 1). Note that there is also qualitative simulation, but I do not know of any practical applications—though “Ongoing research topics include … modeling methods suited for particular application domains” (Kuipers 2001). Note further that simulation models do not optimize, whereas mathematical programming (e.g., LP) models do—if the latter models’ assumptions hold!

Moreover, a simulation model should give these multiple performance criteria for various scenarios, which represent different assumptions about the future environment and different decisions. The analysts should consider a population of scenarios, which includes a most likely scenario and a reasonable worst-case scenario. Such scenario analysis is also called What If analysis. The simulation analysts may present the users with a set of Pareto optimal solutions, which exclude “inferior” solutions; i.e., solutions that score worse on one or more criteria, while they do not score better on the remaining criteria. Next, these users may decide on their preferred solution, depending on their values. In the private domain (e.g., the banking sector) managers are paid so well because they must make such decisions (e.g., concerning their portfolio of securities)—and live with the consequences! In the public domain, politicians make the final decision (e.g., concerning the infrastructure of the country). In the medical domain, the doctor—not the patient—often decides (about the treatment). There are many publications on measuring various criteria (often called “multiple objectives”) and quantifying their value tradeoffs (Keeney and Raiffa 1976; Rosen et al. 2008; Wallenius et al. 2008).
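A minimal sketch of how such “inferior” (dominated) solutions can be filtered out of a set of scenario results follows, assuming all criteria are to be minimized; the scenario names and criterion values are made up for illustration.

```python
def pareto_optimal(solutions):
    """Return the non-dominated solutions; each value is a tuple of criteria to minimize."""
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return {name: crit for name, crit in solutions.items()
            if not any(dominates(other, crit)
                       for other_name, other in solutions.items() if other_name != name)}

# Hypothetical scenarios scored on (cost, risk) -- both to be minimized.
scenarios = {"A": (100, 0.20), "B": (120, 0.10), "C": (130, 0.25), "D": (110, 0.15)}
print(pareto_optimal(scenarios))   # C is dominated (worse cost and risk than A and D)
```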

Dynamic Programming (another OR tool) teaches us that we should try to avoid irreversible decisions. An example in the context of this article concerns nuclear energy: we should not make decisions on nuclear energy that burden many future generations with the consequences, including contamination by nuclear waste.

Note that spreadsheets (based on popular software like Excel) can be a type of simulation. Unfortunately, most spreadsheet software complicates the validation of the underlying model, because that model is not explicitly formulated in terms of equations and inequalities (Whittaker 1999).

Risk or Uncertainty Analysis and Robust Models

Simulation models are often used in uncertainty analysis or risk analysis, which quantifies the probability of a “disaster”, such as a terrorist attack, a nuclear accident, an ecological breakdown, or a financial collapse—now or in the (distant) future. These disasters are unique events, whereas (say) a model for airplanes’ fuel efficiency concerns repetitive events: the airplanes make many flights. Consequently, validation in risk analysis is very difficult; a better term may then be credibility (Helton 2009).

Actually, there are two types of uncertainty; namely, epistemic uncertainty and aleatory uncertainty. Epistemic (subjective) uncertainty implies that the analysts lack knowledge about

(a) the functional form of the mathematical equations describing the physical law that governs the phenomenon under study: so-called model uncertainty;

(b) the values of the parameters in these mathematical equations: parameter uncertainty.

Epistemic uncertainty might be reduced through data collection and further analysis. However, data collection is impossible if the phenomenon occurs in the (distant) future; e.g., what will be the temperature at the North Pole on January 1, 2025? Analysis of data concerning the past may suggest a particular functional form; e.g., the average temperature per year over the last 50 years may follow an exponential curve. Unfortunately, several models can fit the same data; e.g., the same data may pass goodness-of-fit tests for several statistical distribution functions (Law 2007). Consequently, different models result, so different futures are predicted. The current discussions of the IPCC (Intergovernmental Panel on Climate Change) illustrate these issues (also see the “Energy Modeling Forum” at http://emf.stanford.edu/). Further discussions on model versus parameter uncertainty can be found in the OR literature (for references see Kleijnen 2008, p. 126).
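The point that several distributions may pass a goodness-of-fit test on the same data can be illustrated with a small sketch that fits both a lognormal and a gamma distribution to the same synthetic sample and applies a Kolmogorov–Smirnov test to each; the sample is made up, and estimating the parameters from the same data makes the reported p-values only indicative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=7)

# Synthetic "historical" data; in practice these would be real observations.
data = rng.lognormal(mean=1.0, sigma=0.4, size=200)

for name, dist in [("lognormal", stats.lognorm), ("gamma", stats.gamma)]:
    params = dist.fit(data)                            # maximum-likelihood fit
    ks_stat, p_value = stats.kstest(data, dist.cdf, args=params)
    print(f"{name:9s}: KS statistic = {ks_stat:.3f}, p-value = {p_value:.3f}")

# With moderate sample sizes, both candidate distributions often "pass" the test,
# yet they can predict quite different futures (e.g., different tail probabilities).
```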

Aleatory (objective) uncertainty implies that the system has inherent uncertainty; systems in which humans play an active role exhibit such inherent noise. An example is a supermarket with its customers. Theoretically, the analysts could ask all the supermarket’s customers when exactly they will arrive at the store, but in practice these customers will change their plans, so the analysts might assume a Poisson process to model the individual customer arrivals (also see the fundamental discussion by Zeigler et al. 2000). Even physical systems exhibit inherent noise in the models of quantum physics (though not in those of Newtonian physics).

Many engineering models without human components do not account for aleatory uncertainty, because in these models the dynamic behavior is determined by the laws of nature (physics and chemistry). An example is the behavior of a robot in a Flexible Manufacturing System or FMS (no aleatory uncertainty) versus a human operator. In practice, many Computer Aided Design/Computer Aided Manufacturing (CAD/CAM) models have epistemic uncertainty only. Some engineering models, however, do account for aleatory uncertainty; e.g., models of semiconductor manufacturing account for scrap and down times (Veeger 2010).

Epistemic and aleatory uncertainties are further discussed in the literature (Helton 2009; Kleijnen 2008; Walker 2009).

If the exact values of the model’s parameters and inputs do not have much effect on the model’s output, then the chance of using the model wrongly becomes much smaller; such a model is called robust (Van der Sluijs et al. 2005). The Japanese engineer Taguchi emphasized the importance of robustness, but he limited himself to the design of physical products such as Toyota cars (Taguchi 1987). Not only products but also processes (systems) may be designed to be robust. An example of such a system is inventory management based on the Economic Order Quantity or EOQ (Dellino et al. 2010). The goal of the classic EOQ model is to minimize the total inventory costs, assuming that the demand rate and the cost parameters are known constants: no epistemic uncertainty. In practice, however, that rate and those parameters always differ from their assumed values, so the analysts may minimize the expected cost while guaranteeing that the variability of the cost does not exceed a threshold provided by the users. Changing that threshold gives a set of Pareto-optimal solutions.
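The robust approach sketched above can be illustrated as follows: the demand rate is treated as uncertain, a grid of candidate order quantities is evaluated on expected cost and cost variability, and the expected cost is minimized only over those candidates whose cost standard deviation stays below a user-given threshold. All numbers (demand distribution, cost parameters, threshold) are hypothetical and are not taken from Dellino et al. (2010).

```python
import numpy as np

rng = np.random.default_rng(seed=3)

# Classic EOQ cost per year: C(Q) = D*K/Q + h*Q/2, with demand rate D,
# ordering cost K, and holding cost h per unit per year.
K, h = 50.0, 2.0                                  # hypothetical cost parameters
demand = rng.normal(1000.0, 150.0, size=5_000)    # uncertain demand rate D
demand = np.clip(demand, 1.0, None)               # keep sampled demand positive

def cost(Q, D):
    return D * K / Q + h * Q / 2

candidates = np.linspace(100, 500, 81)            # grid of candidate order quantities Q
threshold = 32.0                                  # user-given limit on the cost's std dev

results = [(Q, cost(Q, demand).mean(), cost(Q, demand).std()) for Q in candidates]
feasible = [r for r in results if r[2] <= threshold]
Q_robust, exp_cost, std_cost = min(feasible, key=lambda r: r[1])

# The classic EOQ, sqrt(2*D*K/h) with D fixed at its mean (about 224 here),
# violates the variability threshold; the robust choice orders somewhat more.
print(f"robust Q = {Q_robust:.0f}, expected cost = {exp_cost:.1f}, cost std = {std_cost:.1f}")
```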

Note that we may try to spread various related risks (i.e., we should not put all our eggs in one basket); e.g., we may select a portfolio of energy resources (coal, nuclear, wind, biomass, etc.).

Military Models, Computer Games, and Experimental Economics

Currently, there is a surge of OR models that aim at fighting terrorism; e.g., homeland security is a hot topic. How many models have been developed at the request of terrorist organizations themselves, I have no idea! Neither do I know of models developed by criminal organizations like the mafia. Note that the RAND Corporation developed a model (namely, a gaming model, further discussed below) to study the USA’s drug problem (Caulkins 1995). In practice, it is not always clear what counts as terrorism or crime: is a suicide bombing a heroic act or is it terrorism; is abortion a crime, even in the case of rape? It all depends on one’s norms and values (Wenstøp and Koppang 2009).

Not all scientists are prepared to work for the military establishment (the origin of OR is the development of military models during World War II). Modeling for military defense may be considered morally acceptable, in general; however, exceptions may be the development of some types of weaponry. An example is the controversy around the Anti-Ballistic Missile (ABM) system in 1971, which led to a first set of guidelines for OR (Gass 2009). Other examples of unethical weaponry may be cluster bombs and land mines. Since the 1940s, weaponry has included nuclear weapons—for many scientists a moral dilemma. Modern weaponry includes “unmanned aerial vehicles” or drones, flying over (say) Afghanistan while being operated from the USA; this may make war look like a video game played with a joystick (Sparrow 2009). There are both engineering models to design these drones and OR models to decide on their tactical deployment.

So drones and their joysticks may be associated with a special type of simulation model, namely computer games; i.e., humans make decisions that are input to the simulated world, whereupon the computer calculates the consequences of these decisions. Besides computer games for entertainment, there are so-called serious games. Serious games are a good tool for studying human behavior, including ethical aspects; e.g., do the players go for “cut-throat” competition or do they collude against the public? The rise of PCs has stimulated the interest in computer games, including war games (Samuelson 2009). So research and applications in this field are growing; e.g., recently, 48 French projects received twenty million euro for serious gaming.

A more recent type of game is provided by experimental economics. The latter games are computationally much simpler, but the players receive real money (albeit small amounts)—depending on the decisions of all the players in the current round. These games have already been used to study altruistic versus egoistic behavior, rational versus emotional decision-making, etc. (Smith 2010).

Games may be played not only by humans, but also by computerized robots called agents (Le Menestrel and Van Wassenhove 2009; Robbins and Wallace 2003; Sparrow 2009). These agents, however, add another level of abstraction so the validity of the model becomes more questionable—which has ethical consequences.

In summary, computer games (including war games) are good tools for studying human behavior including ethical principles; these games have not yet become popular for the study of ethics in modeling.

Whistleblowers

In practice, there is a problem that has both ethical and theoretical implications: “Don’t bite the hand that feeds you” or (translated from Dutch) “Whose bread one eats, whose word one speaks”. Nevertheless, whistleblowers do speak out, and accept the possible consequences. A famous case in which the whistle was not blown is the DC-10 crash (Van de Poel et al. 2007). In The Netherlands, a few whistleblowers lost their jobs at the KEMA and RIVO organizations. Another case dates back to 1999, when some Dutch parliament members raised questions about the permission to expand the Amsterdam airport, because an RIVM employee claimed that this permission was based on a wrong model instead of real-world measurements of the airplanes’ noise and pollution.

Note that the disadvantages of such real-world measurements are that they are expensive in time and money; moreover, such measurements enable the testing of only a few scenarios, and they may be dangerous because they may lead to accidents, etc. Again, there is a great need for the validation of models, and the related issues of sensitivity, uncertainty, and robustness analyses.

Note further that in the case of multiple stakeholders, the financial costs and benefits may be allocated by applying game theory, which analyzes situations in which a “player” anticipates the counter-moves of the opponent before deciding on his or her next move. There are many types of games: games against nature (e.g., the player searches for oil on the sea bottom), zero-sum games (the gain of one player equals the loss of the other player), etc. The corresponding analyses may provide “optimal” solutions; e.g., a solution may result in a “Nash equilibrium”, in which no player can benefit by changing his or her strategy while the other players keep their strategies unchanged. There is much literature on game theory in OR and economics (Shoham and Leyton-Brown 2009). Game theory should not be confused with gaming.
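As a small illustration of the Nash-equilibrium concept, the following sketch checks every pure-strategy profile of a two-player bimatrix game for mutual best responses; the payoff matrices form a made-up, prisoner’s-dilemma-like example, not one taken from the cited literature.

```python
import numpy as np

# Payoff matrices for a hypothetical two-player game (rows: player 1, columns: player 2).
# Strategy index 0 is "cooperate", index 1 is "defect".
payoff_1 = np.array([[3, 0],
                     [5, 1]])
payoff_2 = np.array([[3, 5],
                     [0, 1]])

def pure_nash_equilibria(p1, p2):
    """Return all pure-strategy profiles (i, j) where neither player can gain by deviating."""
    equilibria = []
    for i in range(p1.shape[0]):
        for j in range(p1.shape[1]):
            best_for_1 = p1[i, j] >= p1[:, j].max()   # player 1 cannot improve given j
            best_for_2 = p2[i, j] >= p2[i, :].max()   # player 2 cannot improve given i
            if best_for_1 and best_for_2:
                equilibria.append((i, j))
    return equilibria

print(pure_nash_equilibria(payoff_1, payoff_2))   # [(1, 1)]: mutual defection
```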

Some Conclusions

Ethical issues in modeling are essential issues for all modelers, because all modelers are humans and all humans must face moral problems! Nevertheless, ethical issues are not part of the standard academic OR curriculum; one of the rare exceptions is the course by Howard at Stanford University (http://event.stanford.edu/events/212/21283/). In engineering, however, ethics has become a required part of the curriculum in many countries, including Spain, The Netherlands, and the USA (Harris et al. 2008; Gill-Martin et al. 2010; Newberry et al. 2009; Brumsen and Van de Poel 2001; Zandvoort et al. 2008).

Occasionally these issues arise in the popular media (e.g., in discussions of whistleblowing), but these issues are then not discussed in a scientific manner (Jones 2007). There are too few specialists in the interdisciplinary area of ethics and (engineering) models, which is a subarea of ethics and science in the widest sense (Brumsen and Van de Poel 2001). In this article I presented selected personal reflections, based on my experience as an OR consultant. I hope that my exposé is a worthwhile contribution to this area, and that it will stimulate further discussion on the issues of ethics in modeling.