Introduction

For a long time, researchers believed that moral judgments arose from a conscious, intentional, and deliberate process (Kohlberg, 1984; Rest, 1986; for a review see Brown & Treviño, 2006). These ‘rationalist approaches’ assume that individuals analyze a moral problem consciously and comprehensively before arriving at a moral judgment, for example, by weighing evidence and applying abstract moral laws (Sonenshein, 2007; Zollo et al., 2017). Since the turn of the millennium, however, this assumption has been subject to increasing criticism from so-called ‘intuitionist approaches’, the most prominent of which is Haidt’s social intuitionist model (SIM) (Haidt, 2001, 2003). The SIM states that moral judgments emerge from unconscious and automatic cognitive processes called moral intuition, whereas subsequent moral reasoning serves mainly the purposes of ex-post rationalization and moral justification. The SIM further assumes that moral intuitions are the expression of social and cultural expectations rather than the result of private reasoning carried out by individuals (which is why the model is called a social intuitionist model). Similarly, Sonenshein’s (2007) three-stage sensemaking-intuition model claims that individuals first construct moral issues from social stimuli in uncertain and equivocal environments, then instantaneously make an intuitive judgment and finally engage in post hoc reasoning to explain and justify their judgment.

Taken together, the opposing assumptions of the rationalist and intuitionist approaches constitute the controversy of “whether moral judgment is a controlled [analytic] or an automatic [intuitive] process” (Cushman et al., 2006, p. 1082). Today, researchers often take some kind of middle position, according to which moral judgments can be both immediately triggered by intuition without deliberate reflection, and also be the result of more or less careful deliberations (Cushman et al., 2010; Waldmann et al., 2012; Weaver et al., 2014). However, the controversy over whether moral judgments primarily arise from analytical or intuitive processes is concerned with descriptive claims about how moral judgments are actually made, and not with prescriptive claims about how they should be made. For example, Haidt explicitly states that his SIM makes “a descriptive claim, about how moral judgments are actually made”, and not “a normative or prescriptive claim, about how moral judgments ought to be made” (2001, p. 815).

In contrast, since Dane and Pratt’s (2007) seminal article on intuition effectiveness, a key question in the study of intuition in management in general has been under what circumstances an intuitive approach can be effective and/or superior to analysis. In this context, researchers often assume that individuals possess metacognitive control over whether to make a judgment analytically or intuitively, and that research on intuition and analysis effectiveness may guide individuals in this respect (Betsch & Glöckner, 2010; Luoma & Martela, 2021; Salas et al., 2010; Shapiro & Spence, 1997; Thompson et al., 2011). Indeed, the question of whether to trust one’s gut feeling or to follow contradictory rational arguments represents a prototypical situation inside and outside organizations (Salas et al., 2010), and is also of great relevance for moral problems (Craigie, 2011; Provis, 2017). Although individuals often possess a particular cognitive style and tend to take either an intuitive or analytical approach (Bullini Orlandi & Pierce, 2020), experiments show that cognitive styles are flexible and can be modified by situational factors (Ayal et al., 2015; Rusou et al., 2013). Even in situations where individuals automatically and unconsciously use heuristics to make judgments, reflecting on potential problem-related biases can help them adapt their approach to the demands of the situation. For example, the heuristics and biases program of Kahneman and Tversky has shown extensively that intuition is often biased in well-structured problems with a pre-defined and unequivocal correct solution (Kahneman & Tversky, 1972, 1973; Tversky & Kahneman, 1973, 1974). Accordingly, for such problems, it is more effective to arrive at a judgment analytically. Moreover, even if one does not follow the assumption that people have a choice to make either an analytic or intuitive judgment, findings about their relative effectiveness would still be important, because they allow conclusions about whether someone is (or was) able to cope with a moral problem.

In their article, Dane and Pratt not only presume “that intuition may be most effective for moral judgments” (2007, p. 41), but also that “conditions under which individuals disregard their intuitions” may lead them to “engage in actions that conflict with principles of ethics in organizations” (p. 49). Despite these strong claims in favor of moral intuition, however, scholars have only recently begun to question the still widely held assumption that moral problems should be judged and coped with analytically. For example, Zhong (2011) argues that approaching ethical dilemmas analytically may narrow one’s focus by ignoring aspects that are normatively important yet largely unanalyzable. Railton (2014) regards intuitive judgments as effective due to their capacity to process implicit information. In a similar vein, Ferrin (2017) stresses that intuitive judgments may be correct and robust because they are grounded in implicitly held moral values. Nevertheless, there is still a lack of systematic approaches that allow predictions about the conditions under which a moral problem should be judged analytically and when intuition is the more promising approach.

Consistent with the finding that task characteristics are one of the most important factors influencing intuition effectiveness (Dane & Pratt, 2007; Shapiro & Spence, 1997), the aim of this article is to develop a typology of moral problems in order to make claims about the relative effectiveness of intuition and analysis by looking at the underlying characteristics of the problem types. More specifically, we derive four types of moral problems based on the dimensions of moral uncertainty (i.e. whether the moral problem is familiar or unfamiliar) and moral equivocality (i.e. whether the problem involves a moral conflict or not), and consider moral equivocality as the main criterion for predicting intuition effectiveness.

The article is structured as follows. In the first part, we discuss moral uncertainty and moral equivocality as distinct features of moral problems. In the second part, we develop a typology of moral problems based on their degrees of moral uncertainty and moral equivocality. In the third part, we derive a dual-process theory of moral judgment from the literature, which we relate to our typology in the fourth part. In this way, we derive propositions about the effectiveness of analytic and intuitive approaches depending on the moral problem type.

Moral Uncertainty and Moral Equivocality

A moral problem refers to the question of whether something is right or wrong (good or bad) and what ought or ought not to be done in the face of that evaluation. The literature on business ethics describes a number of different moral problem types, including compliance problems (Paine, 1994; Weaver & Treviño, 1999) and ethical dilemmas (Garsten & Hernes, 2009; Lurie & Albin, 2007). In order to make general statements about the extent to which such problems should be judged analytically or intuitively, they must be classified in terms of their characteristics.

One of the few attempts in this regard is Geva’s (2006) typology of moral problems, which is the starting point of our own approach. In particular, Geva developed a two-dimensional typology that refers “to conceptually derived interrelated sets of ideal types of moral problems” (p. 134). Geva differentiated moral problems on the basis of the two dimensions of (1) moral judgment and (2) moral motivation, with the former being of particular interest here. In principle, moral (or ethical) judgments reflect “an individual’s personal evaluation of the degree to which some behavior or course of action is ethical or unethical” (Sparks & Pan, 2010, p. 409). Geva differentiated between determinate and indeterminate moral judgments, and classified moral judgments “as determinate when generating a clear recommendation”, and “as indeterminate when culminating in unsettled prescriptions” (p. 135). According to Geva, there are two cases where situations of indeterminate judgment arise. On the one hand, moral judgments are indeterminate when the situation fails “to provide clear guidance for behavior and practice” (p. 135). On the other hand, moral judgments are indeterminate in situations “in which, all things considered, different moral precepts demand conflicting actions” (p. 135).

While we follow this line of reasoning, we nevertheless believe that the two cases involve different forms of moral indeterminacy. Although the two cases often co-occur—i.e. conflicting interpretations invalidate prescriptions and/or norms, and a lack of prescriptions and/or norms gives rise to conflicting interpretations—they do not necessarily do so. First, there are situations for which, although there are no moral conflicts, no clear guidance for behavior and practice exists. Such a situation arises, for example, when an employee is suspected of having stolen something, but there is no evidence of the crime. Morally, the situation is unequivocal: whoever steals must be sanctioned. However, due to the incomplete information, there are no definite prescriptions for the solution of the moral problem. Second, there are situations in which moral conflicts arise, but for which there is clear guidance for behavior and practice. For example, someone may be expected to hire a woman for a job in order to increase gender diversity but has a more highly qualified male candidate. There is a moral conflict here: either one hires a person with lower qualifications, or one hires a person who decreases diversity. Nevertheless, the implicit expectations of the community may provide an undeniable demand for what needs to be done in such a constellation. Despite the moral conflict, there would be settled norms for its judgment.

In an earlier work, Geva (2000) referred to the dimension of moral indeterminacy as moral uncertainty, which is present when one is “doubting what one ought to do” (p. 781). However, since doubts about what one ought to do neither sufficiently nor necessarily presuppose moral conflict, moral uncertainty refers only to the first of the two cases of moral indeterminacy described above. Moral uncertainty generally refers to unfamiliar moral problems where there is a lack of prescriptions and/or norms about how to morally evaluate and solve a given problem (MacAskill et al., 2020). This is the case when established prescriptions and/or norms cannot be applied in a straightforward manner, when the moral problem cannot be fully captured due to a lack of information, or when the moral consequences of actions cannot be determined (Welch, 2017). In addition to incomplete information, moral uncertainty may also arise with asymmetric information, i.e. when an actor who is part of the moral problem possesses information that cannot be observed or obtained by the person making the judgment (e.g., whether someone lied or not) (Husted, 2007), or with information overload, i.e. when the amount of information becomes excessive and overwhelming due to limited cognitive capacities, potentially leading to “paralysis by analysis” and “extinction by instinct” (Hodgkinson & Sadler-Smith, 2018, pp. 474–475). Moral uncertainty is thus concerned with the amount of problem-specific information and could, in principle, be reduced either by gathering and/or interpreting new information (where there is a lack of information) or by enhancing cognitive capacities (in the case of information overload). While moral uncertainty can also be understood as a psychological state—as the “individual’s uncertainty about his or her ability to fulfill relevant moral obligations” (Reynolds et al., 2012, p. 491)—we conceptualize moral uncertainty here as an inherent component of judgment in relation to the characteristics of the moral problem. That is, a moral judgment is made despite the absence of definite prescriptions and/or norms.

In contrast, the second case of moral indeterminacy—where there is moral conflict—refers not to moral uncertainty but to moral equivocality. In general, equivocality refers to the existence of multiple simultaneous interpretations of a given situation and/or existing information (Weick, 1995). Equivocality can thus be defined as “the existence of multiple and conflicting interpretations about an organizational situation” (Daft & Lengel, 1986, p. 556). Moral problems are associated with equivocality when they allow for conflicting moral views (e.g., deontological versus teleological evaluations), when they are associated with competing moral demands (e.g. social versus ecological demands and/or demands from different stakeholders), when legal and moral prescriptions are in conflict, or when general norms do not align with the norms of a local community. In terms of moral views, there may be a conflict between moral judgments based on the outcomes of a decision (teleological ethics) and moral judgments based on the motivations, principles, or ideals of a decision (deontological ethics) (Hunt & Vitell, 1986). For example, should a supervisor give honest feedback to a poorly performing employee even though this would lead to both lower self-confidence and further deterioration in performance (Erat & Gneezy, 2012)? Either the supervisor violates the deontological principle not to lie or the teleological principle of judging the moral problem by its consequences. While there may be established norms on how to cope with such tensions (i.e. moral uncertainty would be low), moral equivocality would still be high due to the problem-inherent multiplicity of interpretations. Knowing how to cope with moral equivocality does not take it out of the moral problem, just as doing business as usual does not remove moral uncertainty from an unprecedented moral problem. Moreover, even a single moral viewpoint can allow for contradictory implications and thus be associated with moral equivocality. For example, the deontological principle to preserve life can be interpreted to justify abortion to save the mother's life as well as to justify saving the baby at the sacrifice of the mother's life (Whetstone, 2001).

While scholarly work pointing to the fundamental difference between or independence of uncertainty and equivocality in organizations has been developed outside the ethical domain (Daft & Lengel, 1986; Daft & Macintosh, 1981; Weick, 1979, 1995), Sonenshein (2007) has stressed the relevance of both dimensions to moral problems as well. Moral problems, in his view, are associated with equivocality when there are different value orientations (moral pluralism) or when one has to decide between ‘right’ and ‘right’ in order to satisfy conflicting stakeholder needs. In contrast, individuals perceive uncertainty in moral problems when they do not have access to a plausible interpretation, and they therefore do not know how their actions will affect the future. This view contrasts with much work in the ethical domain that treats moral uncertainty and moral equivocality as synonyms. For example, Warren and Smith-Crowe (2008, p. 83) assign moral uncertainty to situations where individuals are “confronted with ambiguous rather than straightforward ethical situations”, implying that moral uncertainty is accompanied by moral equivocality. In contrast, we follow Sonenshein’s approach in distinguishing between moral uncertainty and moral equivocality, while at the same time departing from him in that we do not take a sensemaking perspective on the construction of moral uncertainty and equivocality, but instead look at moral problems that are already defined or constructed (as in Daft & Lengel, 1986, for example).

The combination of the two dimensions of moral uncertainty and moral equivocality leads to the definition of four types of moral problems, which we assume require different cognitive coping strategies: (1) compliance problems—there is a complete set of rules for coping with an unambiguous moral problem, (2) professional ethics problems—there are no fixed rules for evaluating the problem, although there is no moral conflict, (3) conformity problems—there is a moral conflict with settled norms on how to handle it, and (4) ethical dilemmas—there is a moral conflict but no settled norms on how to handle it. Table 1 provides an overview of these four types, which we will explain in more detail in the following section.

Table 1 Types of moral problems

                            Low moral equivocality          High moral equivocality
Low moral uncertainty       Compliance problems             Conformity problems
High moral uncertainty      Professional ethics problems    Ethical dilemmas

A Typology of Moral Problems

Compliance Problems

The most straightforward type of moral problem is associated with neither moral uncertainty nor moral equivocality. For such problem types, there are explicit sets of rules for moral judgment covering all potential cases. The course of action to be carried out is unequivocally pre-structured, follows procedures that mostly rely on salient knowledge and templates, is well documented and should be followed in a strict and transparent manner.

In accordance with Geva (2006), we call this type of moral problem a compliance problem. Generally, compliance problems are concerned with following rules, which is why the literature sometimes refers to them in terms of rule compliance. Katz and Kahn (1978, p. 406) define rule compliance as the “acceptance of role prescription and of organizational directive because of their legitimacy”. A judgment can be labeled compliant when someone responds to a purposeful request in a desired way (Cialdini & Goldstein, 2004). In order to ensure that employees judge and act on a situation as desired, organizations use codes of conduct, incentive systems, and explicitly expressed values (Pitesa & Thau, 2013). One of the main goals of legal compliance programs is “to bring some degree of order and predictability to employee behavior” (Weaver & Treviño, 1999, p. 317). A compliance problem presupposes that an obligation is clearly defined (Geva, 2000); there is neither room for interpretation nor a need for additional information. Moreover, in compliance problems, the terms ‘legal’ and ‘moral’ are often treated as synonymous, assuming that any legally acceptable course of action is also morally acceptable (Hopkins, 2011; Paine, 1994). Thus, compliance problems are associated with low moral uncertainty and equivocality.

Professional Ethics Problems

In moral problems that are uncertain but unequivocal, it is evident in principle how they are to be judged and what is to be done. However, due to insufficient information or the uniqueness of the moral problem, its judgment and necessary courses of action are not pre-structured in a straightforward way; no fixed templates are in place to guide courses of action in a step-by-step manner. In this type of moral problem, one is often faced with precedents. A precedent occurs when previous solutions are defined as criteria for new problems (Davies & Crane, 2003). Existing and/or new information is combined and connected in a productive way to come to a conclusion that conforms to what is already known but is unique in its recombination. As Miller and Ireland (2005, p. 27) argue for organizational problems in general, when coping with situations that are uncertain but unequivocal, individuals “know what the issues are, know what questions to ask, and know what data to collect and analyze”. Although the judgment and the resulting course of action are transparent and understandable for outsiders, the approach reflects a proficient utilization of an extensive and domain-specific body of explicit knowledge that is open to extensions. Solutions thus often require extensive mastery of large amounts of information.

Following Mintzberg, these types of problems are especially relevant in work that is “complex, involving difficult, yet specified skills and sophisticated recorded bodies of knowledge—jobs essentially professional in nature” (1979, p. 99). Such jobs include, for example, those of doctors, lawyers, or engineers. Accordingly, we associate uncertain and unequivocal moral problems with professional ethics, defined as “the ethics of the professionals who are members of a given profession, such as medical doctors, registered nurses, lawyers, teachers, and social workers” (Airaksinen, 2012, p. 616). Professional ethics embraces compliance with some kind of formal code (Abbott, 1983) and directs individuals to act in a way consistent with the principles they profess (Lewis, 1982). Professional ethics essentially deals with moral problems where a lot of information has to be taken into account. This information is either given explicitly or can be deduced logically, and there is little room for interpretation.

Conformity Problems

Another type of moral problem occurs when moral uncertainty is low, but moral equivocality is high. This is the case when one is faced with a moral conflict that allows for multiple interpretations, although, at the same time, it is a familiar moral conflict that the community has more or less settled, for example by unilaterally prioritizing one interpretation, finding a holistic way for the community to address both sides of the conflict, or denying the existence of the conflict. In any case, to meet expectations, someone can adequately judge and cope with the moral problem by relying on existing patterns of experience from earlier situations, while at the same time the judgment is bound to subjective experience and defies (complete) codification. Attempting to fully codify how to deal with the problem and reduce moral equivocality is not possible because of the moral conflict’s incommensurability. In addition, attempts to highlight the moral conflict too explicitly may be dismissed by the community. Take, for example, a company that refuses to do business with a particular group of countries ‘because they violate human rights’, while at the same time—taking a closer look—other countries with which the company does business also violate human rights. Even if judgments regarding which countries it is morally acceptable to do business with may be normatively settled in the company (or country), moral equivocality would still be high, because the underlying justification (‘violating human rights’) is open to conflicting interpretations. The actual judgment of whether it is permissible to do business with a particular country follows implicit moral norms that are laden with conflict. However, this by no means implies that effectively coping with problems of high moral equivocality and low moral uncertainty has to be morally inconsistent per se. The opposite may be the case, for example, if complex norms have developed in a company on how to cope with face-saving problems in interactions when both sides have conflicting moral standpoints. As Eden and Ackermann point out for negotiations in general:

By using a communication mode open to several interpretations, social order is not destroyed and substantive information is proffered. Equivocality serves to maintain the balance of order but also provides the fuzziness within which face saving can occur. An appropriate level of equivocality, balanced with transparency is aimed at ‘changing mind and emotions’; however, this might mean backing away from clarity as clarity begins to emerge. (2013, p. 67)

Dealing with moral equivocality, then, may consist in maintaining the equivocality of the situation in such a way that both moral standpoints remain consonant with it. In this way, organizations with high moral pluralism may still function effectively. Just because there is a moral conflict does not mean that there cannot be implicit norms for embracing the conflict and dealing with it constructively in a familiar way.

We call these kinds of moral problems conformity problems because individuals are expected to follow customary practices (see also Cialdini & Goldstein, 2004). Conformity problems refer to judgments and courses of action that are not codified into explicit rules but nevertheless are normatively expected. Such expectations can come from the members or stakeholders of an organization, but also from society in general (Carroll, 1979). To cope with conformity problems, one has to know what it is morally acceptable to do and what is considered unacceptable or inappropriate in the face of a moral conflict.

Ethical Dilemmas

When both moral uncertainty and moral equivocality are high, there is neither a proven recipe for success nor a single ‘correct’ interpretation of the moral problem. This is the case with ethical dilemmas (Sonenshein, 2007; Thiel et al., 2012), which refer to a category of moral problems in which “two or more valid ethical requirements or legitimate interests conflict and consensus do not exist as to how it should be resolved” (Geva, 2006, p. 134). Ethical dilemmas are thus characterized by a “clash of moral duties” where one has “to decide between two morally right but incompatible courses of action” (Monin et al., 2007, p. 102). Examples of ethical dilemmas are upholding the law versus saving a colleague (Monin et al., 2007), laying off employees versus threatening the company’s future (Zhong, 2011), minimizing environmental damage versus maximizing shareholder value (Dane & Sonenshein, 2015), or, in the case of external whistleblowing, loyalty to the company versus preventing harm to the public (Geva, 2006).

In ethical dilemmas, there is no unequivocal ‘right’ judgment and course of action. Asking a right-wrong, good-bad, either-or, or yes-no question is often not feasible. No matter what course of action is considered right, one will also do something wrong. While such paradoxical constellations cannot be resolved logically, they allow for different interpretations and meanings. To determine an appropriate course of action in an ethical dilemma, one must therefore consider that there may be one or several options that are not immediately apparent. To identify novel options, a re-interpretation of the ethical dilemma may be necessary (Dane & Sonenshein, 2015). Ethical dilemmas allow for several different interpretations, while, at the same time, additional information may help to approach the dilemma in a new way.

A Dual-Process View of Moral Judgment

The empirical observation that people can sometimes be more intuitive and affect-based and other times more rational and reflective when making moral judgments has led to the development of dual-process theories of information processing in moral psychology. Historically, these models can be traced back to general dual-process theories of information processing developed outside the moral domain in cognitive psychology (e.g., Chaiken, 1980; Schneider & Shiffrin, 1977). In recent decades, such theories have been very influential in the development of moral psychology. They help to explain the coexistence of two different pathways to moral judgment: one that primarily relies on the intuitive and one that primarily relies on the analytic (or rational, reflective) system (Cecchini, 2021; Cushman et al., 2010).

Most prominently, Greene and colleagues (Greene, 2015; Greene et al., 2001, 2004) have developed a dual-process theory of moral judgment, assuming that moral judgment is driven by two “mutually competing” (Greene et al., 2004, p. 389) subsystems in the brain: one triggering automatic social-emotional responses, and the other allowing for abstract thinking and high-level cognitive control. In their neuroimaging studies, they found that individuals utilize different subsystems when making deontologically motivated and utilitarian-motivated judgments. When deontological intuitions were prominent, individuals showed increased social-emotional processing, whereas, when making utilitarian (or, more broadly, consequentialist) judgments, brain regions associated with abstract thinking and high-level cognitive control showed greater activity. The authors concluded that individuals are intuitively aware that harming others is wrong (deontological System 1 processing), while they must reflect analytically to recognize that harming others may be acceptable if they consider the consequences (utilitarian System 2 processing).

Although Greene and colleagues’ dual-process theory of moral judgment has gained initial empirical support (Koenigs et al., 2007; Moore et al., 2008), it was subsequently also subject to general criticism. For example, Cushman et al. (2010) noted that the respective assignments of the normative positions to the two subsystems of the brain run counter to their philosophical origins, because, historically, consequentialism is rather linked to Hume’s moral sentimentalism, whereas deontology is rather associated with Kantian rationalism. The authors further stress that individuals may as well explicitly reason from deontological moral principles, and that intuitive-emotional processes play an important role in weighing consequences in utilitarian thinking. Similarly, Kahane (2012) noted that utilitarian judgments can be quite intuitive (e.g., that one is allowed to lie to prevent harm), while deontological judgments can be strongly counterintuitive (e.g., the Kantian claim that one must not lie, even to prevent harm). Moreover, there is also growing empirical evidence against a strict division of deontological and utilitarian thinking into an intuitive and an analytic system. For instance, Białek and De Neys (2016, 2017) have shown in experiments that System 1 processing involves both deontological and utilitarian intuition and that deontological judgments are slower than utilitarian judgments. Gürçay and Baron (2017) and Rosas and Aguilar-Pardo (2020) found that under conditions of time pressure, individuals are more likely to use utilitarian intuitions as compared to deontological intuitions.

As a result of these criticisms and limitations, researchers have proposed adopting broader dual-process theories from cognitive psychology to explain the information processing mechanisms behind moral judgments (Craigie, 2011; Mallon & Nichols, 2011; Sparks & Pan, 2010; Waldmann et al., 2012; Zhong, 2011). For example, Waldmann et al. (2012) have pointed out that more general dual-process theories—which do not associate moral philosophies with brain areas—could be equally suitable to model different behavior when making a moral judgment. In the course of this development, moral psychology converged with more general two-system theories of cognitive psychology. These assume that human information processing is accomplished by two substantially dissimilar, yet complementary systems: an intuitive and an analytic system (Sadler-Smith, 2016). In this context, intuitions are conceptualized as “affectively charged judgments that arise through rapid, nonconscious, and holistic associations” (Dane & Pratt, 2007, p. 33). Such intuitions operate at the nexus of mind and body and represent an embodied way of knowing (Meziani & Cabantous, 2020). Analysis, in contrast, refers to slow, conscious, and rule-based deliberations, and rather draws on disembodied forms of knowledge. This is consistent with Haidt’s definition of moral intuition as “the sudden appearance in consciousness of a moral judgment, including an affective valence (good-bad, like-dislike), without any conscious awareness of having gone through steps of searching, weighing evidence, or inferring a conclusion” (2001, p. 818) (with regard to the consistency of Haidt’s view with dual-process theories, see Zollo et al., 2017). Moral intuitions may involve processes of holistic pattern recognition by which features of a given moral problem are compared with prototypes of moral problems obtained from past experiences (Dane & Pratt, 2009). This idea of intuition as unconscious pattern recognition goes back to Herbert Simon, according to whom intuition reflects the ability to instantly recognize familiar patterns in present situations without deliberate reflection (Simon, 1983), as “analyses frozen into habit” (Simon, 1987, p. 63) (although it should not go unmentioned that Simon denied that analytic and intuitive thinking belong to independent systems of information processing, cf. Simon, 1993).

The field of psychology distinguishes between two kinds of two-system theories: a default-interventionist and a parallel-competitive view (Evans, 2007, 2008). The default-interventionist view assumes that the intuitive and the analytic system operate in sequence and are organized hierarchically. The lower intuitive system produces judgments by default, which the higher analytic system may then endorse, correct, or override in the case of faulty outcomes. This means that individuals would initially rely exclusively on intuitive thinking when making a moral judgment, while this ‘default’ processing can—but need not—be evaluated afterwards by reflective moral reasoning to ‘intervene’ if necessary (Białek & De Neys, 2017).

According to the parallel-competitive view, both systems operate independently and have access to distinct forms of knowledge. The intuitive system draws on (embodied) implicit knowledge. It can process a large amount of information simultaneously, although this kind of information processing is mostly beyond conscious control and often difficult to articulate. In contrast, the analytic system is characterized by (disembodied) explicit information processing that draws on explicit knowledge. It is rule-based and operates in a sequential step-by-step manner. Unlike the intuitive system, the analytic system allows for a conscious and deliberate control of the sequence and direction of information processing (Baldacchino et al., 2015; Betsch & Glöckner, 2010). An example of a well-established dual-systems theory following a parallel-competitive view is Epstein’s Cognitive-Experiential Self-Theory (CEST) (Epstein, 1994). CEST distinguishes between a verbal reasoning ‘rational’ system (i.e. the analytic system) and a tacit associative learning ‘experiential’ system (i.e. the intuitive system), while both systems “are assumed to operate in parallel and to be interactive” (Epstein, 2010, p. 299).

In our approach, we adopt a parallel-competitive view for at least three reasons. First, the default-interventionist view seems, due to its hierarchical structure, to overemphasize the deficiency of intuition while simultaneously overrating analysis: Because the intuitive system produces errors, the analytical system must intervene and correct them (Adinolfi, 2021; Julmi, 2019). However, we want to consider constellations where an analytical approach may be an inferior strategy for making moral judgments. Second, a parallel-competitive view is consistent with growing evidence from experimental and neurological studies (Alós-Ferrer & Strack, 2014; Healey et al., 2015; Howarth et al., 2019; Kuo et al., 2009; Lieberman, 2007). Third, the adoption of a parallel-competitive view corresponds with recent trends in management and organization studies, where the focus has begun to shift from a dominance of default-interventionist accounts towards a parallel-competitive view (Adinolfi, 2021; Adinolfi & Loia, 2022; Hodgkinson & Sadler-Smith, 2018; Keller & Sadler-Smith, 2019; Luoma & Martela, 2021; Zaitsava et al., 2022). This is also in line with Cecchini’s (2021) conclusion that recent empirical findings strongly favor an independent rather than a hierarchical dual-process theory in the moral domain.

Conditions Influencing Intuition and Analysis Effectiveness

Effectiveness and Moral Equivocality

Although Haidt’s SIM is concerned with descriptive claims about how individuals make moral judgments, he is nevertheless convinced that “[i]ntuitive and automatic processes are much smarter than many people think” (Haidt & Bjorklund, 2008, p. 216). As discussed in the introduction, research clearly indicates that whether an intuitive judgment is smart, good, or effective depends on the characteristics of the underlying problem. Thus, the question is: in which moral problems should one make an intuitive judgment, and in which moral problems should one judge analytically?

In general, moral judgments are the result of a process in which individuals use their moral knowledge base to evaluate whether a moral issue is right or wrong (good or bad) and what they ought or ought not to do in the face of that evaluation. When the moral knowledge base is implicit, individuals use their intuitive system to make a moral judgment; in contrast, when it is explicit, the moral judgment is based on the analytic system. In terms of effectiveness, moral judgments differ from other managerial judgments in that they are typically evaluated not in terms of success but rather as being right or wrong (good or bad) (Dane & Pratt, 2009). To make claims about the effectiveness of moral judgments, we tie effectiveness to the degree to which the judgment “is both legal and morally acceptable to the larger community” (Jones, 1991, p. 367). We thus assume that the ‘right’ judgment is the one that achieves the most social acceptance and, in this sense, equate morality with legitimacy. Schlipp already emphasized that moral problems can hardly be separated from their social implications and that the appropriateness of a judgment “must be not for the individual alone but must be communicable and, potentially at least, acceptable for all concerned” (Schlipp, 1936, pp. 62–63). This being said, we do not want to conceal the fact that the equation of morality and legitimacy also has limits and can be problematic, as is the case for distorted cultural traditions such as that of Nazi Germany (Reed, 1999), or for miasmatic organizations in “a state of moral and spiritual decay” (Gabriel, 2012, p. 1146).

Regarding the ‘fit’ between (1) analytical and intuitive judgments and (2) the characteristics of the problem, the literature strongly suggests that the former’s effectiveness is determined by the information structuredness of the problem (Adinolfi, 2021; Dane & Pratt, 2007; Shapiro & Spence, 1997). When problems can be decomposed and solved in a step-by-step manner, effective judgments can be reached by analytically executing a set of rules and sequences. In contrast, intuition is less likely to lead to an effective judgment and may cause cognitive biases. When problems are relatively non-decomposable, intuition may prove effective as it allows a problem to be approached holistically. In this case, analysis may cause cognitive bias when certain explicit aspects are emphasized while other aspects of equal relevance are ignored (Dane et al., 2012; Julmi, 2019). The structuredness of a problem is directly linked with equivocality (Julmi, 2019; Scherm et al., 2016). In problems of high equivocality, “the situation is ill-defined to the point where a clear answer will not be forthcoming” (Daft & Lengel, 1986, p. 557). When moral equivocality is high and there is a moral conflict, the conflict cannot be resolved analytically (otherwise there would be no logical incompatibility). Here, intuition seems advantageous. On the other hand, if a moral problem is unequivocal, it should be decomposable into single parts that can be sequentially combined and arranged. Here, an analytically derived moral judgment seems adequate. Intuition (analysis) effectiveness should therefore be positively (negatively) associated with moral equivocality.

Accordingly, we assume that analysis is preferable for compliance and professional ethics problems, whereas intuition fits conformity problems and ethical dilemmas. This does not mean, however, that moral uncertainty has no relevance for making claims about judgment effectiveness. Thus, starting from our general premise of tying intuition effectiveness to moral equivocality, we will discuss judgment effectiveness for the four types of moral problems in detail below. At the end of the section, we provide some additional reflections on the special cases of extremely low and high uncertainty, because in such cases the distinction between moral uncertainty and moral equivocality becomes blurred, which also has implications for judgment effectiveness. Figure 1 provides an overview of the propositions on judgment effectiveness that we are going to derive in the following.

Fig. 1 Propositions on moral judgment effectiveness
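For illustrative purposes only, the mapping summarized in Fig. 1 can also be expressed as a simple lookup from the two problem dimensions to the judgment mode we propose as most effective (Propositions 1–6 below). The following minimal sketch in Python is a stylized restatement of the typology rather than an operational measure: the ‘low’/‘high’ labels and the flags for the extreme cases of categorical and wicked problems are simplifying stand-ins for what are, in practice, matters of degree.

from typing import Optional

def proposed_judgment_mode(moral_uncertainty: str,
                           moral_equivocality: str,
                           extreme_case: Optional[str] = None) -> str:
    """Illustrative lookup from problem characteristics to the judgment
    mode proposed as most effective (cf. Propositions 1-6)."""
    if extreme_case == "categorical":   # extremely low uncertainty (Proposition 5)
        return "categorical problem: moral analysis or moral intuition"
    if extreme_case == "wicked":        # extremely high uncertainty (Proposition 6)
        return "wicked problem: joint use of moral comprehensiveness and moral imagination"
    problem_types = {
        ("low", "low"):   ("compliance problem", "moral analysis"),                    # Proposition 1
        ("high", "low"):  ("professional ethics problem", "moral comprehensiveness"),  # Proposition 2
        ("low", "high"):  ("conformity problem", "moral intuition"),                   # Proposition 3
        ("high", "high"): ("ethical dilemma", "moral imagination"),                    # Proposition 4
    }
    problem_type, mode = problem_types[(moral_uncertainty, moral_equivocality)]
    return f"{problem_type}: {mode}"

# Example: a familiar moral conflict with settled norms (low uncertainty, high equivocality)
print(proposed_judgment_mode("low", "high"))  # conformity problem: moral intuition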

Problems of Low Moral Equivocality

In both compliance and professional ethics problems, equivocality is low because the problems and their solutions mostly rely on clear definitions, formal codes, and applicable rules. The information to be processed is to a large extent of an explicit nature. Beyond that, however, it is obvious that compliance and professional ethics problems make quite different demands on how they are to be judged. Professional ethics problems are far more difficult to judge, because they lack prescriptions and place greater demands on the person making the judgment.

Compliance problems are based on explicit sets of rules that cover all potential cases of a moral problem, i.e. each step to a problem solution is unequivocally pre-structured. The individual relies on explicit information as templates for judging the moral problem, whereby the amount of processed information is relatively low. Anyone who possesses sufficient explicit information can easily solve a compliance problem. Explicit rules, procedures and standards provide a fixed and objective body of knowledge that individuals can easily learn and apply: “There is no doubt in this case [of compliance problems] as to the right thing to do” (Geva, 2000, p. 782). Effective judgment in compliance problems is thus reflected in moral analysis, defined as the “focused, step-by-step reasoning about a single possibility” in the moral domain (Provis, 2017, p. 11). In many cases, compliance problems can be solved by the deliberate application of simple and given rules. Looking at the literature, it seems undisputed that compliance problems can be solved effectively with a simple analytical approach in the sense of rule application. In this context, the literature is more concerned with motivational issues (Buchanan, 1996; Paine, 1994; Winter & May, 2001). In contrast, intuitive judgments are expected to be error prone and may cause cognitive bias. One could even argue, in fact, that compliance rules are primarily employed where the proper judgment is counterintuitive. With respect to moral problems of gender diversity and minority protection, for example, compliance rules in recruiting may help overcome the biases underlying intuition (e.g., the stereotypes that women are unpredictable and lack assertiveness, or that black men are aggressive and criminal). Social psychology has shown in numerous examples that unconscious stereotypes and prejudices bias our perceptions and can lead to misjudgments (Fiske, 1998), including in recruitment, selection, and promotion (Whysall, 2018). For compliance problems, the best solution is to follow moral analysis in the form of rule application, while an intuitive approach at best coincides with the predetermined solution and is less effective otherwise.

Proposition 1: In compliance problems, moral analysis is positively associated with judgment effectiveness.

Professional ethics problems are somewhat different, because the moral problem is unfamiliar and lacks prescriptions about how to judge it morally. Existing and/or new information needs to be combined and connected on a case-by-case basis to meet the requirements of the situation. Unlike in compliance problems, judging professional ethics problems draws on an extensive body of knowledge that includes, for example, domain-specific expertise, legal assessments, comparisons with previous problems or existing boundary conditions. While the processed information remains explicit, the amount of information is vast, and new information is frequently considered to accommodate exceptions.

In such cases, moral analysis alone is not sufficient and needs to be complemented with moral reflection. As Provis (2017) highlights, both analysis and reflection are systematic judgment processes that belong to the analytic (or rational, reflective) system, but must be distinguished from each other. He argues that, unlike analysis, reflection is not thinking through a single possibility, but the ability to devise different possibilities and think through different points of view. Such processes of hypothetical thinking allow distancing from a given situation, critically reviewing it, and considering alternatives. In professional ethics problems, moral reflection is necessary, since their judgment cannot be made on the basis of existing criteria or the application of predetermined rules alone. However, moral analysis cannot be dispensed with either, since an appropriate moral judgment also “needs to be based on systematic analysis, checking details, working carefully through all the known possibilities, considering whether a conclusion stands up to detailed scrutiny” (Provis, 2017, p. 8).

We label the type of judgment that fits professional ethics problems and considers both moral analysis and reflection as moral comprehensiveness. Moral comprehensiveness refers to the extent to which someone systematically considers existing, new, and hypothetical problem-relevant information in making moral judgments. Such an approach may include, for example, complex comparative judgment processes, in which several, partly hypothetical aspects are systematically weighed against one another through rankings or pairwise comparisons (Sparks & Pan, 2010). As Forbes (2007) points out in the context of strategic decision making, comprehensiveness represents a deliberate approach that deals with large quantities of explicit information to come to a solution, and it is only likely to have a positive effect on decision quality when uncertainty is high but ambiguity is low.

Consider again the example where an employee is suspected of having stolen something, but there is no evidence of the crime (i.e. moral uncertainty is high, but moral equivocality is low). Judging the problem morally requires, among other things, collecting all information and circumstantial evidence, evaluating the case legally, assessing the trustworthiness of the accused person, considering the possible consequences of a judgment (e.g., the possibility of a lawsuit, the loss of manpower, the reaction of the workforce), and taking into account the possibility of one’s own misjudgments. The judgment must be comprehensible in its individual steps and transparent to outsiders. Implicit information processing can be used to support the individual steps (e.g., in assessing trustworthiness or anticipating the reaction of the workforce), as long as the resulting intuitive judgment is integrated coherently into the analytical course of the argumentation, culminating in a comprehensive judgment including adequate justification to ensure its legitimation. We therefore assume that moral comprehensiveness is effective when judging professional ethics problems.

Proposition 2: In professional ethics problems, moral comprehensiveness is positively associated with judgment effectiveness.

This proposition even seems to be consistent with Haidt’s SIM, for which he acknowledged that the “reasoning process in moral judgment may be capable of working objectively under very limited circumstances: when the person has adequate time and processing capacity, a motivation to be accurate, no a priori judgment to defend or justify, and when no relatedness or coherence motivations are triggered” (2001, p. 822). In professional ethics problems, such ‘limited circumstances’ should be ensured.

Problems of High Moral Equivocality

We regard conformity problems and ethical dilemmas as moral problems of high equivocality because formal coding schemes are usually of little help in solving them. Therefore, we consider intuition to be favorable, although, as we will argue, the kind of intuitive approach needed differs between the two cases.

In conformity problems, individuals are expected to judge a moral conflict in a customary (i.e. predictable and familiar) way. Relevant information on how to handle the settled moral conflict and under which circumstances deviations from the norm are permitted or imperative is for the most part implicit, and may be difficult or even offensive to articulate. The fact that the handling of a moral conflict is settled does not imply that the moral conflict is resolved. As argued above, dealing with moral conflict can mean making judgments in a particular, face-saving way that sustains moral equivocality. Here, explicit justification could even jeopardize the acceptance of a judgment insofar as it prevents each side from interpreting the judgment as favorable to itself. But even when the conflict appears to be explicitly settled, there is a need for an implicit, and thus intuitive, understanding of its validity in the particular situation. Take, for example, the principle ‘No business can be right in a war that’s wrong’, often postulated in the context of the Ukraine war in Europe (e.g., Beschorner, 2022). Although this deontological principle may stand in conflict with morally undesired consequences—such as the layoff of workers or a potential escalation of the war due to the withdrawal of business from Russia—there seems to be a strong social expectation not to violate this principle. In order to make a judgment as to whether it is nevertheless acceptable in individual cases to do business with Russia in the Ukraine war, an intuitive sense of the implicit expectations and prevailing atmosphere among the relevant stakeholders is required. This may (but need not) be the case for energy supply or humanitarian purposes. Although there is an explicit rule, there is no explicit rule specifying under which conditions it applies and when it does not. As Geva points out for moral conflicts in general, there are “no well-defined rules for deciding which ethical principle should be given preference in a particular situation” (Geva, 2000, p. 773). Thus, moral judgment based primarily on explicit information processing does not appear to be adequate.

In contrast, implicit information processing seems to be suitable, since it can not only process rules holistically, but can also include, for example, “exemplars, prototypes, schemas, or analogies” (Waldmann et al., 2012, p. 385). Judging a problem through moral intuition allows for holistic pattern recognition, through which the features of the current situation are rapidly and automatically matched with experiences of moral problems in the past (Dane & Pratt, 2009). Although moral intuition is likely to develop very early in childhood and is internalized through the imitation of cultural practices (Haidt, 2001), individuals continue developing their moral intuition in adulthood. For example, active participation in organizational life may convey moral norms in organizations and socialize individuals in terms of expected moral values and norms (Sonenshein, 2007). Moral intuition, then, refers to relatively stable and consistent behavior in judging moral problems for the good of the community, guided by underlying implicit values. We therefore assume that moral intuition is best suited for judging conformity problems.

Proposition 3: In conformity problems, moral intuition is positively associated with judgment effectiveness.

In ethical dilemmas, on the other hand, there are no moral norms that can be applied one-to-one; otherwise, it would not be a true dilemma. Accordingly, relatively stable moral intuition does not seem to be a suitable foundation for judging ethical dilemmas in which moral norms fail to provide guidance. On the other hand, an approach that primarily relies on explicit information processing seems inadequate as well. In ethical dilemmas, irreconcilable moral requirements or legitimate interests oppose each other. The solution to the moral problem thus cannot be determined analytically and requires an implicit and nuanced understanding of the situation; considering the context may be as important as reading between the lines. Processing implicit information is therefore assumed to be mandatory.

Dealing with ethical dilemmas requires an intuition that displays a higher awareness of the uniqueness of the moral problem and is capable of distancing itself from the immediate moral implication of a situation (if there is one at all). Handling intuition in this way, in our view, is reflected in the concept of moral imagination. According to Werhane (1999, p. 93), moral imagination “is the ability in particular circumstances to discover and evaluate possibilities not merely determined by that circumstance, or limited by its operative mental models, or merely framed by a set of rules or rule-governed concerns”. Such an ability allows the generation of novel ideas about what is morally good and right, the reframing of the situation in a creative, yet adaptive way, and questioning established norms when they seem inadequate (Narvaez & Mrkva, 2014; Rozuel, 2016; Whitaker & Godwin, 2013). At the same time, it allows the discernment of aspects embedded within a situation, and the nuanced consideration of differing moral judgments of that situation to ensure accordance with prevailing realities (Godwin, 2015). Hence, moral imagination essentially relies on the processing of implicit information, but (at least partly) in a creative way. Although moral imagination is not bound to moral intuition, the former essentially draws on the latter: “Moral imagination initiates imaginative moral intuition that recognizes the moral content of a given situation, even if it is not easily evident, and creatively envisions its potential repercussions” (Roca, 2010, p. 137).

Unlike instantaneously available moral intuition, processing information through moral imagination can be a slow process and involve periods of preparation and incubation. During preparation, individuals gather, process, and consolidate as much information as possible to capture the problem in a comprehensive way. This can (and mostly should) also include dealing with explicit information. Consistent with this, Malle stated that if “we grant intuitions more information processing […] they become increasingly powerful but also figure to be less automatic and arguably to involve considerable reasoning” (2021, p. 308). The implicit and explicit information gained may then be processed holistically in the unconscious and relatively lengthy phase of incubation, eventually leading to an instant moral judgment. In contrast to moral comprehensiveness, where intuition serves analysis, for moral imagination the relationship would be inverted, i.e. analysis would be a ‘servant’ of intuition. Nevertheless, individuals should also be able to rationally justify their moral imagination post hoc, since they have dealt with the moral problem in depth during the stage of preparation. We thus assume that moral imagination is effective when only few precedents exist for guiding moral judgment, whereby we restrict its advantage over moral comprehensiveness to cases where there is not only moral uncertainty, but also moral equivocality.

Proposition 4: In ethical dilemmas, moral imagination is positively associated with judgment effectiveness.

Extreme Cases of High and Low Moral Uncertainty

In our approach, we have conceptualized moral uncertainty and moral equivocality as distinct features of moral problems. This is valid insofar as each can be present without the other. However, this does not necessarily imply that they are completely independent. We see two cases where moral uncertainty and moral equivocality merge and can no longer be distinguished from each other: when moral uncertainty is either extremely low or high. In this section, we will briefly discuss these two cases.

The first case occurs when there is technically a moral conflict, while at the same time its judgment is normatively determined in favor of one side of the conflict without any exception in practice. For example, such a constellation is present in Haidt’s well-known ‘Julie and Mark’ situation, where two college-age siblings opt to engage in a one-time, consensual sexual interaction without any risk of pregnancy (Haidt, 2001). Although technically the deontological prohibition of incest collides with a consequentialist view, the former retains its validity for the community without exception. We call such problems categorical problems. A categorical problem in the business context, for example, is whether illegal immigrants may be made to work in slave-like conditions under the threat of deportation if they do not work 16 h a day for little pay (Altman, 2007), regardless of what good can be done with the labor they perform. In categorical problems, it is irrelevant whether one applies the explicit deontological rule or follows one’s moral intuition. Both lead to a judgment that is accepted by the social community and is thus considered effective.

Proposition 5: In categorical problems, both moral analysis and moral intuition are positively associated with judgment effectiveness.

In the second case, moral uncertainty is so high that it merges with moral equivocality, blurring their distinction. Here, it is no longer possible to distinguish whether one has too much or too little information and which interpretation should be considered the correct one under which circumstances. In line with Hogarth (2001, 2010), we refer to problems of this case as wicked problems. In general, wicked problems “are unique, involve many different stakeholders, concern issues of which the causes are uncertain, and can only be resolved partially and temporarily since changing time and context will demand continuous adaptation of policy” (Raadschelders & Whetsell, 2018, p. 1132). Examples of morally laden wicked problems are challenges posed by climate change (Levin et al., 2012) and issues of sustainability (Blok et al., 2015).

In wicked problems, relying on intuition alone is not effective, because the problem-relevant environment does not have the validity necessary for the development of skilled intuition—that is, the environment lacks causal and statistical structure and thus does not provide representative sets of cases from which to learn (Greene, 2017; Kahneman & Klein, 2009). Even ethical dilemmas, “though presenting us with novel and strange aspects, must, of course, have enough features that are familiar and recognizable to assume some line of continuity with our own present and past experience” (Schlipp, 1936, p. 61). This is where ethical dilemmas differ from wicked problems, because in “wicked learning environments, samples of experience are not representative and feedback might be missing or distorted”, and “mistaken beliefs can lead to dysfunctional actions in the form of self-fulfilling prophecies” (Hogarth, 2010, p. 343). Slovic and Västfjäll (2010) argue that intuition may seduce us to quietly turn away from and suppress disasters such as poverty, disease, and violence. This could lead to intuitive biases when judging wicked problems. On the other hand, too much unstructured explicit information may overwhelm the information processing capacity of the analytical system and require holistic processing of information (Pretz, 2011). Moreover, wicked moral problems regularly require interpretive appraisals and the holistic weighing of moral positions, so moral imagination remains important when judging wicked problems. We conclude that in wicked problems neither moral comprehensiveness nor moral imagination alone is effective; both should be used jointly and as equals, even though wicked problems cannot ultimately be solved, so that any moral judgment is necessarily to be considered tentative.

Proposition 6: In wicked problems, the joint and balanced use of moral comprehensiveness and moral imagination is positively associated with judgment effectiveness.

Conclusion

In our article, we have argued that the question of whether moral judgments should be made analytically or intuitively depends on moral equivocality, whereas the extent of the interaction between the analytic and the intuitive system is related to moral uncertainty. When there is moral certainty, as in categorical problems, we assume that the analytic and intuitive systems independently lead to an appropriate judgment. When moral uncertainty is low, either the analytic system (when moral equivocality is low) or the intuitive system (when moral equivocality is high) should be more effective. The more moral uncertainty increases, the more important the interaction between the two systems becomes. With low moral equivocality and high moral uncertainty, we assume that the analytic system is more appropriate for judging a moral problem, but with the support of the intuitive system as its servant. Conversely, if moral equivocality and uncertainty are high, we assume that the intuitive system is more appropriate but requires the support of the analytic system as its servant. If moral uncertainty is so high that it blurs with moral equivocality, as in wicked problems, it is no longer possible to determine which system is superior; here we assume that both systems should work together on an equal footing. Assuming that moral judgments should be made effectively in the defined sense, the framework presented is normative in that it makes statements about how judgments ought to be made in a given moral problem. The framework can thus also be utilized in business practice, either to decide how a moral judgment should be made, or to reflect on whether a moral judgment made was appropriate with respect to a particular problem.

The propositions emerging from our framework are consistent with prior work on the benefits of intuition in moral judgments. In line with Zhong (2011), our framework implies that intuition is crucial in judging ethical dilemmas, because a primarily analytical approach may fail to do justice to the multifaceted nature of the moral problem. It is also consistent with the work of Railton (2014) and Ferrin (2017), who see the benefits of intuition particularly in its capacity to process implicit information and implicitly held moral values. At the same time, however, our framework goes beyond these studies, as it represents a systematic approach that highlights the benefits of both analytical and intuitive approaches, and identifies criteria for their respective problem-based effectiveness.

To conclude, the question of whether one should make intuitive or analytical judgments on moral problems cannot be answered across the board but must be considered in a differentiated way. Choosing between intuition and analysis does not imply that individuals should abandon deliberate thinking or ignore their gut feelings when they engage in deliberate thinking. In fact, pure intuition or analysis seems to make sense only in cases when it is more or less clear how to judge a problem and what ought to be done. For the rest, the major question is not whether one needs the intuitive or the analytic system to judge a moral problem, but why and how each system should be the ‘servant’ or the ‘master’.