1 Introduction

To date, philosophers have devoted their attention to providing conceptual analyses of trust. While it may no longer be fashionable to conceive of such analyses as a set of individually necessary and jointly sufficient conditions for trust to obtain, philosophical accounts are in the business of providing an explanatory definition of their object. The debate on the nature of trust centers on questions such as: “is trust the same as reliance?” (Baier, 1986; Goldberg, 2020), “is trust most fundamentally an attitude or a belief?” (Keren, 2014; Lahno, 2020), “does trusting ascribe specific motives to the trustee?” (Becker, 1996; Hardin, 1993; Jones, 1996), and “are moral obligations, motivations, or moralized reactive attitudes essential to trust?” (Cohen, 2020; Holton, 1994; Nickel, 2007). Those who are familiar with the literature on the nature of trust will recognize such questions as pivotal points in the current debate, at least in analytic philosophy.Footnote 1

By contrast, the literature is largely silent on what it means to trust someone to a greater or lesser degree, although some seminal works have appeared, especially formal accounts, which we review later. Still, the topic is not considered a key philosophical issue. The current Stanford Encyclopedia of Philosophy entry on trust (McLeod, 2020) mentions dozens of books, book chapters, and journal articles devoted to analyzing the concept of trust as a binary property of relationships: people either trust (or are trusted) or not, and the task of philosophy is to elucidate what this means. The entry does not include a single reference devoted to a philosophical analysis of what it means to trust someone to a certain degree. Similarly, none of the 16 chapters of the edited volume on this matter (Faulkner & Simpson, 2017) discusses the idea of trust as something that obtains as a matter of degree or how to conceptualize it as such.

We maintain that the very idea of trust as a scalar quantity—something that does not just obtain or not but rather obtains as a matter of degree—deserves a careful examination. People usually do not simply indicate whether they trust others; they also talk about how much they trust others, about the processes of losing or gaining trust, and about trusting one person more than another. The gradual build-up of trust is a very important phenomenon; yet making sense of a graded notion of trust has not occupied the minds of most philosophers as much as the other recognized dimensions of this phenomenon.

This paper is devoted to foregrounding the graded notion of trust. We propose a philosophical account of trust as anti-monitoring to form the conceptual basis for a graded account of trust relationships. Our account is based on two key intuitions. First, trust is a property of reliance relations in which X relies on Y with a positive expectation toward the achievement of a goal of interest. Second, there is an inverse relationship between the degree to which X trusts Y and the degree to which X is disposed to monitor Y. This account of trust as anti-monitoring is original, but it shares its central intuition with the related work of Baier (2013), Taddeo (2010), Keren (2014), and others, to which we refer later.

Notice that how much one trusts somebody is not the only possible dimension of trust amenable to a scalar analysis. For example, if the trust relationship is modeled as a relation between the trustor and many individuals, degrees of trust may represent the breadth of trust, that is, how many people an individual is willing to trust. If the trust relationship is modeled as a relation between the trustor and a single individual for a range of different goals, degrees of trust may represent the scope of the trust relationship, in other words, for how many goals one trusts another. Our goal here is to single out one of the clearest and most obvious scalar dimensions of trust, which we label intensity.

The structure of our paper is as follows. First, we analyze the idea of trust as anti-monitoring in the literature. Second, we define the building blocks of an analysis of trust as anti-monitoring, based on the concepts of reliance, monitoring, and trust, and establish their mutual relations. Third, we discuss an account of trust as antithetical to monitoring (i.e., “trust as anti-monitoring”) and then quantitatively formalize this idea, introducing mathematical models that use functions such as q-exponentials (Tsallis, 1988, 2009). Finally, we discuss how the formalized intuition of trust as anti-monitoring can be used to derive empirical measures of trust.

2 Quantities of trust and anti-monitoring

In this section, we present the main quantifications of trust and its conceptions as antithetical to monitoring in philosophy and the social sciences. This is a preliminary step to the philosophical analysis of the intensity of trust.

The idea that trust is antithetical to monitoring is not new in the philosophical literature: it has been endorsed, implicitly or explicitly, by a wide range of scholars from different disciplines, with disparate views about what trust is and how to measure it. The core idea of trust as anti-monitoring is that if I trust someone, I will not need to monitor that person. The action of monitoring implies the investment of resources (e.g., time and money) to ensure that the person relied upon will perform an action to achieve the goal of interest for the trustor. Therefore, we consider trust as the property of relationships where one relies on another person to achieve a given goal with little monitoring.

In the philosophical literature, monitoring is an activity that is often considered antithetical to trust. In fact, Baier (2013) writes: “As I understand trust, it itself involves economizing on monitoring, supervision, and audits, and leaving the trusted to get on with their work with minimal audits and minimal supervision. So increasing these is of course displaying decreasing trust–simply replacing it with audits, supervision, threats, sanctions and coercion.”Footnote 2

The term “economizing” does not exclude the action of monitoring in toto; instead, it reflects the idea that trust is antithetical to monitoring to some degree. Therefore, one is drawn to use vague quantitative terminology, such as “reliance with little monitoring” or “not too much monitoring,” to provide a high-level description of the relationship between trust and monitoring. Keren’s (2014) account goes further: it shows that reasons for trust, as opposed to reasons for mere reliance, are also essentially considerations that count against monitoring the trustee.Footnote 3 This is closely related to our proposal of measuring trust as a quantity that is inversely related to monitoring, which provides a basis for our graded account of trust.Footnote 4

The idea of an essential or constitutive relation between trust and monitoring attitudes is also indirectly supported by the claims of many scholars who have mentioned monitoring in an attempt to characterize trust, without placing it at the center of their account. One may find the claim that trusting occurs “before one can monitor the actions of … others” (Dasgupta, 1988, p. 51) or “when out of respect for others one refuses to monitor them” (McLeod, 2020). Mayer et al. (1995) define trust as “the willingness of a party to be vulnerable to the actions of another party based on the expectation that the other party will perform a particular action important to the trustor, irrespective of the ability to monitor or control that other party” (712). In the context of trusting decisions, monitoring is defined as the activity “aimed at ascertaining whether another action has been successfully executed” (Castelfranchi & Falcone, 2010, p. 193). According to Castelfranchi and Falcone’s definition, monitoring is subsequent to the decision to trust; it can anticipate an update of the trusting relationship, allowing for interventions or activities aimed at adjusting a process by “dealing with the possible deviations and unforeseen events” (Castelfranchi & Falcone, 2010, p. 193). In the context of trust in AI, trust has been defined as the absence of monitoring (Ferrario et al., 2020), and e-trust—namely trust between artificial agents (AA)—has been inversely related to monitoring in Taddeo’s (2010) influential model.

Some authors have proposed formalizations of trust. Among these, some include monitoring as a constitutive element of trust, some represent trust as a scalar quantity, and some do both. We summarize these contributions in Table 1 and briefly comment on the work produced by Taddeo (2010), which is the most similar approach to ours.

Table 1 Formalizations of trust from the literature

In Table 1, column “Risk-based?” specifies whether trustors are modeled as believing that the relation will be successful to a given degree (e.g., by considering the reputation, track records, etc. of the trustee). The column “Monitoring?” specifies whether, in the proposed formalization, trustors (plan to) control/evaluate the trustees’ actions throughout the reliance relation, that is, whether there is an investment of resources in controlling and supervising the trustee (as opposed to past evidence of success in similar endeavors). In “Graded?” we indicate whether trust or trustworthiness is a matter of degree in the proposed account, rather than a binary condition (yes/no). Finally, column “Object” specifies whether the formalization provides a measure of trust, trustworthiness, or both.

Taddeo’s (2010) model of e-trust is a forerunner of our approach, as it establishes an inverse relationship between e-trust and monitoring. The e-trust model developed by Taddeo descends from a general principle of the inverse relationship between e-trust and resources. Although Taddeo’s model is the closest to our approach, there are several relevant differences to highlight. First, Taddeo’s theory does not explain how the dimensionality of trust (being a pure number) comes about from such different measures as the measure of monitoring and the measure of belief. Second, Taddeo measures trust as the belief that a trustee will achieve a goal with a probability p that is deemed sufficient for trustworthiness. By contrast, in our account, we consider not only the probability of success but also the utility produced for the trustor without introducing any ad hoc threshold of p. Moreover, we formalize the relationship between the probability of success, the utility produced by the reliance relation, and the levels of monitoring.

Finally, we note that in economics, experimental game theory provides a quantitative account of trust that does not cohere with a measure of anti-monitoring. Experimental game theory postulates a context requiring trust for cooperation, such as the investment game (Berg et al., 1995) or the public goods game. The intensity of trust is identified with the amount of resources a trustor is willing to put at risk in her interaction with a counterpart (Berg et al., 1995), whose behavior cannot be controlled by the trustor. This is incompatible with treating monitoring as a variable whose quantity is relevant for the measure of trust, as proposed by our formalization.

3 Reliance, monitoring, and stakes

In a few words, we propose to consider any degree of trust as a degree of reliance with a positive expectation and a given degree of monitoring. Our account of (measurable) trust is doxastic if “doxastic” means involving a belief state: the measure of trust includes a graded belief in the probability p of the trustee’s success. Yet trust is not reducible to any specific belief state or set of belief states, because it can include a non-belief mental state, namely an intention, desire, or other pro-attitude (our account is neutral on this point) to avoid monitoring or supervision to some degree. However, our account is not doxastic on the stronger definition of “doxastic” as requiring, as a necessary condition, a belief in the trustworthiness of the trustee.Footnote 5

We therefore propose to measure the intensity of trust as a function of both the trustor’s expectation and the intended degree of monitoring.Footnote 6 Thus, our account is based on the following primary notions: (a) reliance, (b) monitoring, and (c) a staking expectation. For the sake of brevity, we will refer to the staking expectation as “stakes” hereafter. We provide a formalization of the building blocks of our account of trust in Appendix A1.

3.1 Reliance

In the philosophical literature, reliance is commonly seen as distinct from trust (Baier, 1986; Holton, 1994), yet trust involves some (special) kind of reliance. In this work, we propose the following definition:

Reliance: Let X and Y be two agents, and g be a goal of X. The relation R(X, Y, g) is called reliance if:

R1: the relation is motivated by a shared goal g for X and Y;

R2: X believes the probability p of achieving g to depend also on Y’s properties, and that p > 0;

R3: it is goal-relative and bounded in time.

In what follows, we will use the notation (X, Y, g) to represent any reliance relation wherein X relies on Y to achieve the goal g.

With R1, we assume that X relies on Y to achieve the goal g. The goal g is both X’s goal and Y’s goal. We ascribe a potential gain or loss (utility) to X while abstracting from any utility produced for Y. Thus, given an ordered pair of agents with a shared goal (X, Y, g), what characterizes the first agent (X) as the trustor and the second agent (Y) as the trustee is simply the fact that in describing X and Y’s pursuit of a shared goal, we focus on X’s utility and beliefs while ignoring Y’s.

With R2, we assume that some properties attributed to Y by X are believed by X to contribute to success in the shared endeavor. More precisely, these properties are believed by X to make it probable to a degree p that Y will achieve g. Examples of properties believed to influence p are competence, honesty, reliability, meticulousness, conscientiousness, and goodwill. These properties can be moral (Y’s commitment to moral norms), prudential (Y’s ability to correctly identify her own interest and value long-term reputational effects following from her performance in achieving g), epistemic (Y’s knowledge of the domain related to g), or technical (Y’s knowing how to achieve g). The parameter p expresses X’s best guess (informed or uninformed) about the probability that g will be achieved considering such factors. Reliance is only possible when an agent X believes that Y’s properties make g’s achievement at least possible (if not highly probable), that is, when p > 0. If X believes Y’s properties make g unachievable, X cannot rely on Y for g.

R3 defines reliance as a discrete interaction between X and Y for the specific goal g. Thus, in our parlance, reliance does not refer to a long-term relationship between two agents achieving different goals. Such complex relationships between two agents may consist of a chain of different reliance relations.Footnote 7 A discrete reliance interaction is, in our account, the entity to which a degree of trust can be attributed. This discrete interaction ends when either the goal g is accomplished or its delegation by X to Y is prematurely interrupted.

Some philosophers have discussed cases of what might be called divergent reliance, in which X achieves a goal f by means of Y, where f is not one of Y’s goals. A popular example in philosophical discussions is that of Kant’s neighbors, who relied on Kant’s regular habits to set their clocks (Baier, 1986). This is not an instance of reliance as we define it, as the goal is not convergent: Kant does not intend to contribute to clock-setting. Another interesting case is that of a sadistic manager assigning a task to an employee in full confidence that the employee cannot successfully perform the task. In this case, the goal f of X is Y’s humiliation. X’s assigning a task to Y, apparently to achieve g, is only superficially a case of reliance. In fact, if X is certain that Y is unable to achieve g, X only pretends to make herself reliant on Y to achieve g. In reality, X has already (but not openly) given up g for the sake of achieving f. The relation in such examples may be labeled reliance, but we believe that the term “reliance” in a trust context should only be used for the convergent variety. X’s so-called “divergent” reliance on Y is never what we mean by “reliance” in what follows, although we sometimes include the pleonastic adjective “convergent” to remind the reader of our conceptualization.

3.2 (Planned) monitoring

We introduce our definition of monitoring and supervision (hereafter, monitoring), inspired by the works of Castelfranchi and Falcone (2010) on the cognitive aspects of trust.


Monitoring: Let (X, Y, g) be a reliance relation. The monitoring exercised by X on Y is a set \(M_{(X, Y, g)}\) of behaviors of X:

aimed at ascertaining whether … [Y’s] action has been successfully executed or if a given state of the world has been realized or maintained (monitoring, feedback);

[and]

aimed at dealing with the possible deviations and unforeseen events in order to positively cope with them and adjusting the process (intervention). (Castelfranchi & Falcone, 2010, 193)

Monitoring includes actions that are widely believed to be antithetical to trust. It includes not only passive observation but also active interference in the form of control. To simplify its measurement, monitoring in our account includes only those behaviors that represent an investment, denoted by m, for X. For example, X may decide to spend three hours of her weekly time supervising a Ph.D. student. These hours have a monetary value in terms of X’s annual wage.

One interesting question is whether sanctions count as monitoring activities according to our definition. Given our definition, sanctions must be included, but only when (a) they are behaviors of X, (b) they follow from ascertaining that the preconditions of the sanctions are satisfied, and (c) they are costly for X. Sanctions of public opinion and other effects of reputational losses (Pettit, 1995) count as monitoring only if X actively contributes to them to control Y in a way that produces costs for X (e.g., if X produces public feedback about Y).Footnote 8

Monitoring can affect both Y’s disposition to achieve g (particularly when monitoring involves feedback and corrections) and X’s confidence  p that g will be realized. These factors can vary dynamically when Y acts to achieve g while being monitored by X. For the sake of simplicity, we abstract from updates of p resulting from X’s monitoring of Y during the reliance relation. Actual updates of p due to monitoring during the interaction are not considered in the model, but the beneficial effects of future monitoring on augmenting the value of p are considered ex ante and will be discussed in Sect. 3.4.

We consider only the investment m that is planned at the beginning of each reliance relation. Therefore, in our account, monitoring is an activity, the amount and cost of which is planned at the beginning of the reliance interaction (and not altered during it). The act of monitoring for g, as we define it, begins only after X has formed the intention to rely on a specific person. For example, merely scrutinizing the reputation of possible candidates before choosing one to rely on is not part of monitoring. Planned monitoring is a set of actions intended before the act of monitoring. This plan is associated with a cost that is estimated in a given (monetary) unit. While we speak about “monitoring” simpliciter, from now on we always mean planned monitoring investment, which is measured in a given (monetary) unit.

Y’s reputation acquired by virtue of accomplishing a past goal f can contribute to X’s estimate of p in relation to Y’s achievement of X and Y’s current shared goal g. Even if checking Y’s reputation has a cost for X, this cost is not included in X’s (planned) monitoring (costs) of Y relative to g. Similarly, when, during monitoring, X updates p and consequently takes further precautions to prevent future losses, this will not affect the variable m, which refers to X’s initially planned monitoring costs. When evaluating X’s degree of trust in Y for a given goal, only X’s (planned) monitoring (costs) for that goal are allowed to determine this quantity. In Fig. 1, we provide an example of a sequence of three reliance interactions between a trustor X and a trustee Y. Each reliance relation is characterized by a given goal, a time duration, a level of trust, and a degree of monitoring. The sequence in Fig. 1 shows a series of reliance relations resulting in an incremental gain of X’s trust in Y.

Fig. 1: A sequence of three reliance relations with goals f, g, and h over the course of a longer relationship between X and Y. Each relation is characterized by a level of trust that is constant throughout the relation and inversely proportional to the amount of planned monitoring (represented by the circles at the beginning of each relation). The amount of trust in a given relation may affect the amount of monitoring exercised in the next relation (as shown by the diagonal arrows). Long-term relationships between trustors and trustees must be broken down into a sequence of discrete steps with different local and short-term goals to be analyzed with our framework

3.3 Stakes

We define the stakes of a reliance interaction between X and Y as X’s expectation of the benefits and harms resulting from Y’s reliability with respect to g. Like monitoring costs, stakes are quantified at the beginning of the reliance relation. Formally:

Stakes: Let (X, Y, g) be a reliance relation. The stakes S of (X, Y, g) are

$$S = pG - (1 - p)L$$
(1)

where G is an estimate of the likely gain for X resulting from the successful pursuit of g by Y; L denotes the estimate of the likely loss deriving from Y’s failure; p is described as the chance of receiving gain by relying on Y (Coleman, 2000).Footnote 9

In the above definition, g’s (non-)achievement is the state of affairs upon which the gains G and losses L are contingent. Thus, the products \(pG\) and \((1 - p)L\) denote the expected gains and losses, respectively, from achieving or not achieving the goal g. The gains and losses due to the process of pursuing g, independent of its achievement, are excluded: G and L only refer to gains and losses caused by the change in one real-world variable, g’s accomplishment.Footnote 10

In reality, X’s decision to rely on Y for g may be affected by various advantages and disadvantages of the process of relying on Y for g, which are independent of g’s accomplishment. For example, if X were coerced into relying on Y for g, X would face a disutility of some type by choosing not to depend on Y. This punishment obtains irrespective of Y’s ability to achieve g. Hence, the positive result of relying on Y (i.e., avoiding punishment) is not included in G.

We emphasize that the quantities G, L, and p are all X’s subjective estimates. Therefore, the stakes are also entirely subjective.Footnote 11
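To make the arithmetic of Eq. (1) concrete, here is a minimal sketch in Python; the numbers are hypothetical and serve only to illustrate the definition.

```python
def stakes(p: float, G: float, L: float) -> float:
    """Stakes of a reliance relation, Eq. (1): S = p*G - (1 - p)*L."""
    return p * G - (1 - p) * L

# Hypothetical example: X believes Y will succeed with probability p = 0.7,
# expecting a gain of $10,000 on success and a loss of $4,000 on failure.
print(stakes(p=0.7, G=10_000, L=4_000))  # 5800.0, a positive expectation
```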

3.4 Monitoring, probability, and stakes

The definition of stakes in Sect. 3.3 does not include monitoring. It conveys the intuitive idea that the (subjectively) expected benefits and harms from trusting depend on both the magnitude of the possible gains and losses and the risk of not achieving the goal. Of these elements, the possible gains and losses are independent of monitoring, but their probability is not. Indeed, our model must incorporate the assumption that planned monitoring influences the trustor’s confidence in the possibility of achieving the goal.

In our model, the effects of planned monitoring on the initial value of p reflect the predicted effect of monitoring on the success of the interaction in the trustor’s mind before the actual interaction begins. In other words, an intention to monitor can typically affect the level of the trustor’s confidence (probability p) that the trustee will achieve the goal g. Thus, the expected impact of future acts of monitoring on p combines with their expected cost in determining the investment in monitoring resources. Let us clarify this with an example. Suppose that the trustor plans a weekly meeting with the trustee (planned monitoring) and starts to rely on that trustee to achieve the goal g. The value of p in the trustor’s mind at the beginning of the reliance relation takes into account the effects of the planned (weekly) meetings on the possibility of achieving the shared goal of the reliance interaction, g. Typically, a rational trustor who invests resources in monitoring expects p to be larger than it would have been in the absence of monitoring.Footnote 12

Since planned monitoring affects the trustor’s confidence p, it is possible to express the projected contribution of monitoring costs to the probability of success as a function of monitoring. Formally, we parametrize the subjective probability level p of a reliance relation (X, Y, g) by means of a function \(\varvec{p}\) (bold p; an italicized p stands for a numerical confidence level, i.e., a probability value). We call this the “perceived reliability function.” Therefore, given a reliance relation (X, Y, g) with the level of subjective probability p > 0 and monitoring level m, we write

$$p = \varvec{p}(m|c),$$

where \(\varvec{p}\) is a function that depends on the reliance relation under consideration. Thus, \(\varvec{p}\) is defined mathematically as a function of monitoring, given a parameter c that encodes the context in which the investment m in monitoring is performed. The gain G and the loss L are part of the context c in which the reliance relation takes place.

The function p intuitively expresses a trustor’s confidence in the possible performance of the trustee under different monitoring modalities. It expresses an important property of the trustee in relation to the trustor and the specific goal. In the remainder of the paper, we will write p(m) instead of p(m|c) to simplify the notation when c is clear from the context.

For clarity’s sake, in what follows, we will use “confidence” (and not “reliability”) to indicate the value of p at which we land after a choice of m through \(\varvec{p}\). So, if X chooses to monitor Y more intensely and obtains a higher value of p as a result, this counts as an increase in confidence, not in perceived reliability. We will use “(perceived) reliability” to indicate the function \(\varvec{p}\). For example, if X increases her confidence without changing her monitoring plan, this results from a change in perceived reliability (given c).

It may seem highly unusual to feature a function as one of the basic conceptual elements of trust. Therefore, it is important to provide a philosophical interpretation of this mathematical concept. The relevant contrast, here, is between p, understood as confidence by a trustor who has already adopted a concrete monitoring plan, and the function \(\varvec{p}\), which expresses the range of all possible confidence levels in the trustee as planned monitoring hypothetically changes. It seems intuitive to us that the range of all possible confidence levels considered as a whole, i.e., \(\varvec{p}(m|c)\) with m ranging over all possible monitoring levels, provides a comprehensive outlook on Y’s reliability from X’s point of view. We distinguish between reliability and trustworthiness in Sect. 4.

Let us now discuss the parameter c. Consider X, a professor, who invests 30 hours of her time in supervising her Ph.D. students, for an estimated monitoring investment of m = $30,000 for a single dissertation. However, X’s confidence of success p does not depend only on the nominal value of the investment in monitoring and on reliability. In fact, the same investment in monitoring may correspond to different confidence levels that derive not from reliability but from contextual factors: for example, the nature of the goal and the market cost of the socially necessary resources (work and equipment) for monitoring the execution of that goal, as well as cultural elements. All these elements—conceptually distinct from reliability—are grouped together as the context c.

As an example of contextual goal dependence, $30,000 worth of monitoring by a professor may correspond to a 60% chance of a successful dissertation, but only a 5% chance of the student publishing an article in a top journal before graduating. Notice that if the goal of a reliance relation changes, the possible gains G and losses L for the trustor X also normally change.

As an example of labor-cost contextual dependence, $30,000 worth of monitoring by a professor may purchase more hours of supervision in a country where professor salaries are lower, leading to a higher value of p for the same amount of investment. As an example of equipment-cost dependence, an investment of $1000 in digital surveillance technology (cameras, image recognition software, etc.) may purchase a higher degree of effective control and higher values of p after technological innovation lowers the cost of said technology. Finally, as an example of cultural dependence, an investment of $1000 in monitoring may have a higher return in a society in which education prepares workers to respond fruitfully to control and feedback and a lower return in a society in which workers resent control and supervision and try to undermine it.

We are not providing an empirical methodology for assessing c; rather, we recommend that empirical comparisons of the intensity of trust be treated as meaningful only when they apply to contexts characterized by the same c. Given these considerations, we arrive at the final definition of the stakes of a reliance relation in our anti-monitoring account of trust:

Stakes (final): Let (X, Y, g) be a reliance relation and \(m\ge 0\) denote a level of monitoring. The stakes \(S_{{\varvec{p}}} (m)\) of (X, Y, g) are

$$S_{\varvec{p}}(m) = \varvec{p}(m|c)\,G - \left(1 - \varvec{p}(m|c)\right)L$$
(2)

where G is an estimate of the likely gain for X resulting from the successful pursuit of a goal g by Y; L denotes the estimate of the likely loss deriving from Y’s failure; and the reliability function \(\varvec{p}\) describes the trustor’s confidence in receiving gains (and losses) for all levels of monitoring m. By definition, the stakes are a function of monitoring via the reliability function \(\varvec{p}\). However, as the monitoring level m of a reliance relation (X, Y, g) is estimated before the relation starts, the stakes are constant during the relation and equal to \(S_{\varvec{p}}(m)\), and the (fixed) subjective probability level satisfies p > 0 by property R2 of reliance relations (see Sect. 3.1). We assume that the estimated G and L are part of the context c in which the investment in monitoring m is decided.
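In code, the final definition reads as follows. This is a minimal sketch in which the perceived reliability function is an arbitrary callable chosen by the modeler; the concrete families used in the paper appear in Sect. 5.1.

```python
import math
from typing import Callable

def stakes_fn(p: Callable[[float], float], G: float, L: float) -> Callable[[float], float]:
    """Eq. (2): S_p(m) = p(m)*G - (1 - p(m))*L, with the context fixed by (G, L)."""
    return lambda m: p(m) * G - (1 - p(m)) * L

# Hypothetical reliability function: confidence rises with planned monitoring
# from 0.5 toward 1; any increasing p with p(0) > L / (G + L) keeps stakes positive.
p = lambda m: 1 - 0.5 * math.exp(-m / 1_000)
S = stakes_fn(p, G=10_000, L=4_000)
print(S(0), S(2_000))  # 3000.0 and about 9052.6: stakes grow as monitoring raises p(m)
```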

3.5 Definition of trust

Above, we have introduced the building blocks—reliance, monitoring, and stakes—of our theory. Here, we combine these elements into a definition of trust. Our account characterizes trust as a quantitative property of certain reliance relations. Thus, trust and reliance are realized by the same real-world relations.Footnote 13 As we shall argue, all reliance relations with positive stakes have a degree of trust, which can vary in its quantity depending on the level of monitoring relative to the stakes. More precisely:


Trust: Let (X, Y, g) be a reliance relation with positive stakes,Footnote 14 i.e., \(S_{\varvec{p}}(m) > 0\), for a given level of monitoring \(m \ge 0\), where \(S_{\varvec{p}}(m)\) is as in Eq. (2). Given a fixed context c, wherein G and L are kept fixed, trust is a property of (X, Y, g) that satisfies the following axioms:

M1: keeping \(\varvec{p}\) fixed, the more X monitors Y, the less X trusts Y;

M2: keeping m fixed, the greater the stakes when X relies on Y, the more X trusts Y.

The M1–2 axioms are intuitively plausible assertions about the characteristics of trust and how it varies as monitoring or stakes vary. Axiom M1 is the building block of our model of trust as anti-monitoring. It is equivalent to stating that had X planned a level of monitoring m′ for the same reliance relation (X, Y, g), such that \(m^{\prime} > m\) and \(S_{\varvec{p}}(m^{\prime}) > 0\), then the level of trust of the relation would have been lower, keeping the function \(\varvec{p}\) fixed.

Axiom M2 expresses the intuition that the higher the stakes \(S_{\varvec{p}}(m)\), the more one trusts, other things equal. The “other things equal” clause in axiom M2 is equivalent to keeping m fixed and letting \(\varvec{p}\) vary.

Let us clarify what it means for the reliability function \(\varvec{p}\) to vary. It is important to clearly distinguish this from the idea that one can keep p, the trustor’s confidence, fixed as m varies. We remind the reader that we are saying that \(\varvec{p}\) varies, not p. Remember that a change in confidence p that is due only to an increase in monitoring reduces trust.

By contrast, a change in confidence p without any change in m (context c fixed) can only be the result of a change in the trustor’s perceived reliability \(\varvec{p}\) of the trustee for that level of monitoring.Footnote 15 Thus, a change of confidence of this type should intuitively correspond to increased trust.Footnote 16 In line with this intuition, axiom M2 implies that had X estimated a level of subjective probability \(\varvec{p}^{\prime}(m)\) such that \(S_{\varvec{p}^{\prime}}(m) > S_{\varvec{p}}(m)\) for the actually planned monitoring level m, keeping G and L as elements of the context fixed, then the level of trust in Y would have been higher.

We assume that all reliance relations that exemplify trust have positive stakes. Therefore, trust always involves a (subjective) positive expectation, including when it is low. Clearly, a positive expectation is only an expected value, and it does not guarantee that the goal will be achieved.

4 The relations between trust, monitoring, and reliance

4.1 Main idea of the measurement framework

Let us explain the conceptual framework with an example. Claire, the CEO of a company, wants to know the minimum amount of monitoring she needs to trust her employees to carry out an innovation project that, if completed, would increase the profit of the company by 6% per year. The completion of the project, whose length is estimated to be a few months, is the goal g of this case. Following our model, Claire should identify the stakes of the relation. Let us say that the 6% increase is G = $150,000 and the potential monetary loss is L = $100,000. Claire must now assess her p, namely the subjective probability that her team is reliable and will achieve g at a given level of monitoring m (see Sect. 3.2). Claire thinks that by investing m = $65,000 into monitoring the employees (the equivalent of a few working days), she will reach g with a probability \(\varvec{p}\)(65,000) = 0.86. Therefore, using Eq. (2), the stakes are \(S_{\varvec{p}}(\text{65,000})\) = $115,000. We assume that Claire assesses the probability of success before she assigns tasks to her employees and becomes dependent on them for the project’s success, that is, before the innovation project starts, and we ignore any subsequent monitoring decision and its cost, as well as any subsequent update of \(\varvec{p}(m)\) (see Sect. 3.2).

Intuitively, we postulate that the relation between trust, monitoring, and stakes is as follows. Had the planned monitoring costs m been higher, all other things equal, Claire would have trusted the employees less than she in fact did. Had the stakes \(S_{\varvec{p}}(\text{65,000})\) been higher, all other things equal, she would have trusted the employees more than she in fact did.Footnote 17
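Claire’s numbers can be checked with a few lines of Python (a sketch reproducing the figures given in the text):

```python
G, L = 150_000, 100_000       # Claire's estimated gain and loss
p_m = 0.86                    # her confidence at the planned m = $65,000
S = p_m * G - (1 - p_m) * L   # Eq. (2) evaluated at the planned level
print(S)                      # 115000.0, the stakes reported above
```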

4.2 Trustworthiness in the account of trust as anti-monitoring

Trustworthiness, namely the quality of being able to be trusted, is a fundamental element of any account of trust. Accordingly, we will now present how trustworthiness is conceived in our account of trust as anti-monitoring. Roughly stated, trustworthiness is the trustee’s (higher-order) feature of having characteristics counting in favor of both relying on the trustee and economizing on monitoring the trustee. A characterization of trustworthiness in terms of reasons (not) to monitor the trustee is coherent with our approach, though it is still a simplification.

First, not all considerations that provide a trustor with a reason to reduce monitoring of the trustee contribute to trustworthiness. If X relies on Y, and X receives a threat that coerces X to avoid the monitoring of Y, this only counts as a reason for X by virtue of that further goal f (avoiding the threat), which is not the (shared) goal g of the reliance relation between X and Y. The threat provides X with a reason to avoid monitoring, but it does not follow that X believes Y to be more trustworthy as a result. More generally, when X’s monitoring choice is unrelated to Y’s properties that contribute to accomplishing g, and it is explained by X’s non-shared goal f (e.g., avoiding punishment), X’s monitoring choices and Y’s trustworthiness are unrelated.Footnote 18

Second, not all considerations that contribute to trustworthiness provide a reason to reduce monitoring. For example, Keren (2014) argues that some reasons not to monitor, including reasons provided by Y’s competence, creativity, intelligence, and conscientiousness, characterize relations of trust.Footnote 19 We point out, however, that trustworthiness is not always what provides a reason to lower monitoring. Consider a scenario in which Y is so untrustworthy that investing resources in controlling Y’s actions is extremely inefficient; for instance, Y’s understanding of feedback is not accurate enough, or Y is liable to forget or ignore corrections and advice. Supervision thus has almost no effect on Y. Still, X can rely on Y (with positive stakes) to accomplish simple tasks in the (non-optimal) way Y is used to accomplishing them. One can imagine this as a scenario in which X avoids monitoring Y not because Y is trustworthy but because Y is not. If Y were to become more trustworthy, for example, highly conscientious and highly accurate in her response to feedback, one would expect monitoring to be cost-efficient. Thus, if Y were more trustworthy, X would have a reason to monitor Y more closely, a reason that X lacks when Y is less trustworthy. This shows that the relation between trustworthiness properties and reasons for monitoring is not simple and linear. If one wants to define trustworthiness in terms of reasons for monitoring, one should provide a more fine-grained characterization of the type of reasons in question. These are not simply reasons that make it rational, in an economic sense, to avoid monitoring. Rather, they are reasons that support the avoidance of monitoring as a fitting attitude in relation to Y’s properties, such as conscientiousness, silencing rather than outweighing reasons for monitoring. Providing a reductionist explanation of trustworthiness in terms of reasons to avoid monitoring is, however, not a goal we pursue in this paper.Footnote 20

Moreover, it is possible and legitimate to provide an account of trust as a graded concept in terms of monitoring, even when monitoring decisions are partly causally independent from beliefs in Y’s trustworthiness understood as reason-giving terms. We want to allow for the conceptual possibility that relations of high trust may emerge even when the trustee is not believed to be highly trustworthy. Trust may emerge, at least sometimes, because the circumstances justify a low-monitoring interaction if the stakes are positive and higher than they would be with a high-monitoring interaction or if a high-monitoring interaction is not feasible or conceivable under the given circumstances.Footnote 21

It may be objected that, while we try to disengage from modeling trustworthiness in our framework, we contradict ourselves by implicitly providing an interpretation of trustworthiness through the function \(\varvec{p}\). In reply, \(\varvec{p}\) does not provide our interpretation of trustworthiness, nor does it provide our interpretation of trustworthiness-as-perceived-by-X. The function \(\varvec{p}\) includes all confidence levels, including those corresponding to very high levels of monitoring, which can never provide a reason to trust, only reasons for reliance.

In line with this distinction, notice that our measure of trust is not uniquely determined by the function expressing Y’s reliability as seen by X, but also by X’s actual choices to monitor Y in a given context.

5 The mathematical measure of trust

Having axiomatized reliance and described its relationship with trust, we now turn to the modeling of trust as anti-monitoring.

Consider axioms M1–2, which trust satisfies according to the definition in Sect. 3.5. In Appendix A2, we prove that clauses M1–2 are satisfied by a family of functions of the level of monitoring m. We call these “level of trust” or “intensity of trust” functions. They provide a mathematical formulation of our theory of “trust as anti-monitoring.” They result from a single mathematical Ansatz relating the decrease in the level of trust caused by an increase in monitoring to the building blocks of our theory of reliance: stakes, monitoring, and levels of trust. Mathematically, the trust functions encode the intuition that a decrease in the level of trust due to an increase in monitoring should be proportional to the level of trust before the increase and inversely proportional to the stakes before the increase. They are examples of q-exponentials (Tsallis, 1988, 2009), that is, deformations of the classical exponential functions.

The Ansatz makes use of a free parameter, denoted q. Its formalization is discussed in Appendix A1. The values of q parameterize the functional form of the Ansatz describing the reduction in (the levels of) trust due to the increase of monitoring (see Appendix A2). The parameter q controls how much of a given level of trust is lost due to an increment of monitoring m, other things equal. Here, we discuss the general formulation of the trust functions, and we refer the reader to Appendix A2 for a more detailed discussion of the Ansatz, how to derive the trust functions, and the q parameter. Our formulation makes the assumption that the trust functions of a reliance relation are defined for all levels of monitoring such that the stakes of the relation are positive.Footnote 22

The trust functions are “normalized,” that is, they are suitable for situations in which the measurement of trust levels must lead to a finite value, which we put in the interval (0,1]. They model the intuition that full trust is trust with no monitoring, and all other forms (i.e., involving monitoring to some degree) are partial forms of trust. This is expressed most naturally by using a level of trust equal to one for full trust, and fractions of that value for all other forms of partial trust. In other words, these functions model the intuition that full trust involves no monitoring, and increasing monitoring causes a decay from a totally accomplished condition, until nothing of the initial condition remains (as monitoring approaches an infinite quantity).

Let us consider any given reliance relation (X, Y, g) with stakes \(S_{{\varvec{p}}} (m)\), for a level of monitoring \(m\ge 0\), where \(S_{{\varvec{p}}} (m)\) is given in Eq. (2). The level of trust of the relation is

$$t_{q}(m|G,L,\varvec{p}) = e^{\frac{1}{1 - q}\ln\left[1 - (1 - q)I_{\varvec{p}}(m)\right]}, \quad q > 1$$
(3)

where

$$I_{\varvec{p}}(m) = \int_{0}^{m} \frac{d\hat{m}}{S_{\varvec{p}}(\hat{m})}.$$
(4)

Equation (3) states that the level of trust of the given reliance relation is modeled as a q-exponential function (Tsallis, 1988, 2009) of the integral \(I_{\varvec{p}}(m)\) given in Eq. (4).Footnote 23 Geometrically, the integral \(I_{\varvec{p}}(m)\) is the area under the function \(\frac{1}{S_{\varvec{p}}(\hat{m})}\), where \(\hat{m} \in [0,m]\), as shown in Fig. 2. An integral appears in Eq. (4) because the behavior of the stakes is described in full generality prior to choosing a reliability function \(\varvec{p}\); this is the result of capturing how trust changes when the reliability function \(\varvec{p}\) changes, as per axiom M2. The integral is defined in terms of the only reference points of our model: the minimum level of monitoring, m = 0, and the chosen level of monitoring, m (for which there is no maximum).

Fig. 2: The geometrical meaning of the integrals in Eq. (4). We chose the reliability functions \(\varvec{p}_{\varepsilon}\) given in Eq. (6). For any m, the integral \(I_{\varvec{p}_{\varepsilon}}(m)\) is the area under the function \(\frac{1}{S_{\varvec{p}_{\varepsilon}}(\hat{m})}\) (dashed line), considering the interval [0, m]

The intuitive meaning of the integral can be explained as follows: a person’s level of trust reflects not only the perceived reliability of Y for the planned level of monitoring m, but also X’s view about Y’s reliability in all scenarios wherein X could have invested lower levels of monitoring. This follows from the mathematical resolution of the Ansatz and the reference points of our model.

As a result, by definition, the level of trust \(t_{q}(m|G,L,\varvec{p})\) corresponding to the amount of monitoring m is computed by integrating all contributions \(\frac{1}{S_{\varvec{p}}(\hat{m})}\), where \(\hat{m} \in [0,m]\), via the integral \(I_{\varvec{p}}(m)\).
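As a computational companion to Eqs. (3) and (4), the sketch below evaluates the level of trust for an arbitrary reliability function by approximating the integral numerically. The reliability function used here is a hypothetical placeholder, not one of the families introduced in Sect. 5.1.

```python
import math

def trust_level(m, G, L, p, q=2.0, steps=10_000):
    """Eq. (3), written equivalently as t_q(m) = [1 + (q - 1) * I_p(m)]^(-1/(q-1))
    for q > 1, with the integral I_p(m) of Eq. (4) approximated by the
    midpoint rule over [0, m]."""
    dm = m / steps
    I = 0.0
    for k in range(steps):
        m_hat = (k + 0.5) * dm
        S = p(m_hat) * G - (1 - p(m_hat)) * L  # stakes S_p at m_hat, Eq. (2)
        I += dm / S
    return (1 + (q - 1) * I) ** (-1 / (q - 1))

# Hypothetical reliability function and context (arbitrary monetary units).
p = lambda m: 1 - 0.4 * math.exp(-m / 5.0)
print(trust_level(0.0, G=10, L=1, p=p))  # 1.0: full trust with no monitoring
print(trust_level(5.0, G=10, L=1, p=p))  # below 1: trust decays as m grows
```

For q = 2, the returned value reduces to \(1/(1 + I_{\varvec{p}}(m))\), which matches the simplified expression given below.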

Fig. 3: Two examples of reliability functions. \(\varvec{p}_{\varepsilon}\) depicts a piecewise linear function, while \(\varvec{p}_{r,s}\) is a logistic reliability function (see Appendix A3)

From the Ansatz (see Appendix A2), it follows that the parameter q controls the rate of the decrease in trust levels—as computed in Eq. (3)—resulting from the increase in monitoring, given that \(\varvec{p}\) and the context c are fixed. In fact, other things equal, the higher the value of q, the higher the level of trust, as shown in Figs. 4, 5, and 6. We note that, choosing q = 2, the trust functions take the simplified expression:

$$t_{2}(m|G,L,\varvec{p}) = \frac{1}{1 + I_{\varvec{p}}(m)}.$$
(5)
Fig. 4: \(t_{1}(m|G,L,\varvec{p}_{\varepsilon})\) and \(t_{2}(m|G,L,\varvec{p}_{\varepsilon})\), for G = 10, L = ε = 1 (in an arbitrary unit of measurement). The vertical line corresponds to the level of monitoring \(m = G - \varepsilon\). The level of trust increases when the value of q increases, other things equal

Fig. 5: \(t_{1}(m|G,L,\varvec{p}_{r,s})\) and \(t_{2}(m|G,L,\varvec{p}_{r,s})\) for G = 10, L = 1, and \(r = s = 0.1\) (in an arbitrary unit of measurement). The level of trust increases when the value of q increases, other things equal

Fig. 6: Trust functions using \(\varvec{p}_{\varepsilon}\). Left panel: increasing q increases the level of trust, other things equal (G = 10, ε = 1). Right panel: increasing \(\varepsilon\) increases the level of trust, other things equal, with G = 10, q = 2 (in an arbitrary unit of measurement). For \(\varepsilon = G = 10\), the trust function becomes \(t_{2}(m|G,L,\varvec{p}_{G}) = \frac{1}{1+\frac{m}{G}}\) (dotted line). In both panels, the vertical line corresponds to the level of monitoring \(m = G - \varepsilon = 9\)

Finally, the exponential functions are retrieved in the limit case \(q \to 1\): \(t_{1}(m|G,L,\varvec{p}) = e^{-I_{\varvec{p}}(m)}\). The fact that the function \(t_{q}(m|G,L,\varvec{p})\) satisfies axiom M1 results directly from the Ansatz. Using the formula in Eq. (3), it can be proven that it satisfies axiom M2 as well.

Fig. 7: Trust functions using \(\varvec{p}_{r,s}\). Left panel: increasing q increases the level of trust, other things equal, with G = 10, L = 1, \(r = 0.1\), \(s = 0.1\) (in arbitrary units of measurement). Right panel: increasing the product rs increases the level of trust, other things equal (G = 10, L = 1, q = 2, in an arbitrary unit of measurement)

5.1 Explicit formulae of the trust functions

As shown in Eq. (3), each level of trust \(t_{q}(m|G,L,\varvec{p})\) is defined in terms of an integral \(I_{\varvec{p}}(m)\), where \(m \ge 0\). This follows from solving the Ansatz in Appendix A2.

To derive explicit formulae for the trust functions, we need to (1) choose the reliability function \(\varvec{p}\), (2) specify the context c, (3) solve the corresponding integral \(I_{\varvec{p}}(m)\) analytically, and (4) insert the explicit formula of \(I_{\varvec{p}}(m)\) into Eq. (3). Steps 1 and 2 together identify the perceived reliability of Y from X’s point of view. Steps 3 and 4 deliver a measure of trust as a function of m, G, and L, which must be interpreted in line with the given assumptions on reliability from steps 1 and 2.

We begin by introducing the reliability functions in our account. For the sake of simplicity, we focus on two function types. Both encode the intuition that X’s subjective probability of achieving g should increase with the investment in monitoring (given a context c that we assume is specified by G and L, as we will see later in this section). The first type describes situations where X can obtain certainty of achieving g by exercising enough monitoring. The second type describes situations where an arbitrarily large increase in monitoring allows X to obtain certainty of achieving g only asymptotically (see Fig. 3 for some examples).

Formally, for any reliance relation (X, Y, g), the reliability functions are either of the form \(\varvec{p}(m|G,L) = \left\{\begin{array}{ll} f(m|G,L) & m < m^{\prime} \\ 1 & m \ge m^{\prime} \end{array}\right.\), where f is continuous and strictly monotonically increasing for all \(m < m^{\prime}\) with \(f(0|G,L) > \frac{L}{G + L}\) and \(f(m^{\prime}) = 1\), or continuous and strictly monotonically increasing for all \(m \ge 0\) with \(\varvec{p}(0|G,L) > \frac{L}{G + L}\). Doing so, we choose to specify the reliance relation’s context c in terms of the estimated G and L of the goal g, for these are the only aspects of the goal that we attempt to quantify in our framework. In other words, the model treats changes in G and L as a partial specification of the context c, which includes other features that we do not attempt to measure. This is a simplification, but it is required to arrive at a concrete mathematical measure of trust. We also note that, mathematically, c must be captured by parameters measured in terms of money, since p is a pure number and m is a quantity of money. G and L fill this requirement as well.

Before we can compute explicit formulae for the trust functions, we must choose functions p and encode perceived reliability. We do so by considering two families of reliability functions, which are examples of the two function types mentioned above. Here, we present the first family; the second is discussed in Appendix A3 for the sake of readability.Footnote 24

5.1.1 Piecewise linear reliability functions

Let us consider the reliability functions

$$\varvec{p}_{\varepsilon}(m|G,L) = \left\{\begin{array}{ll} \frac{1}{G + L}m + \frac{L + \varepsilon}{G + L} & m < G - \varepsilon \\ 1 & m \ge G - \varepsilon \end{array}\right., \quad 0 < \varepsilon \le G$$
(6)

These functions are examples of piecewise linear functions (see Fig. 3). The free parameter ε is a measure of the perceived reliability of the trustee: the higher the value of ε, the less monitoring is needed to achieve certainty that the trustee will achieve the goal g. In fact, this level of monitoring is equal to \(G - \varepsilon\). Each reliance relation (X, Y, g) is characterized by a value of ε, which can be inferred experimentally by assessing X’s confidence in the absence of monitoring, i.e., \(\varvec{p}_{\varepsilon}(0|G,L)\). In fact, using Eq. (6), the parameter \(\varepsilon\) satisfies the equality \(\varepsilon = (G+L)\,\varvec{p}_{\varepsilon}(0|G,L) - L\). Then, \(\varepsilon\) is uniquely determined by G, L, and \(\varvec{p}_{\varepsilon}(0|G,L)\).
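Equation (6) and the identity for ε translate directly into code. The following sketch anticipates the numbers of the worked example in Sect. 6 (G = $150,000, L = $100,000, baseline confidence 60%).

```python
def p_epsilon(m, G, L, eps):
    """Piecewise linear reliability function, Eq. (6); requires 0 < eps <= G."""
    if m >= G - eps:
        return 1.0
    return m / (G + L) + (L + eps) / (G + L)

def eps_from_baseline(p0, G, L):
    """Recover eps from the elicited confidence p0 = p_eps(0|G,L) in the
    absence of monitoring: eps = (G + L) * p0 - L."""
    return (G + L) * p0 - L

G, L = 150_000, 100_000
eps = eps_from_baseline(0.60, G, L)
print(eps)                           # 50000.0
print(p_epsilon(65_000, G, L, eps))  # 0.86: confidence at m = $65,000
```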

A quick check shows that \(S_{\varvec{p}_{\varepsilon}}(m) > 0\) for all \(m \ge 0\).Footnote 25 The functions \(\varvec{p}_{\varepsilon}\) admit an interesting interpretation. In fact, we note that, in general, \(S_{\varvec{p}}(m) > m\) if and only if \(\varvec{p}(m) > \varvec{p}_{0}(m)\), where \(\varvec{p}_{0} = \lim_{\varepsilon \to 0} \varvec{p}_{\varepsilon}\), using the definition of stakes in Eq. (2). Therefore, the functions \(\varvec{p}_{\varepsilon}\) represent a translation (controlled by ε) of the “limit” function \(\varvec{p}_{0}\). This limit function allows for ascertaining whether, after choosing a confidence function \(\varvec{p}\) and a level of monitoring m, the expected gains \(S_{\varvec{p}}(m)\) of the reliance relation exceed the planned cost of monitoring (or not).

Finally, using the definition of stakes in Eq. (2), it is possible to solve the integrals \(I_{\varvec{p}_{\varepsilon}}(m)\) analytically (see Appendix A2). For example, we have

$$\begin{aligned} t_{1}(m|G,L,\varvec{p}_{\varepsilon}) &= \left\{\begin{array}{ll} \frac{\varepsilon}{m + \varepsilon} & m < G - \varepsilon \\ \frac{\varepsilon}{G}\,e^{-\frac{1}{G}(m - G + \varepsilon)} & m \ge G - \varepsilon \end{array}\right., \\ t_{2}(m|G,L,\varvec{p}_{\varepsilon}) &= \left\{\begin{array}{ll} \frac{1}{1 + \ln\frac{m + \varepsilon}{\varepsilon}} & m < G - \varepsilon \\ \frac{1}{1 + \ln\frac{G}{\varepsilon} + \frac{1}{G}(m - G + \varepsilon)} & m \ge G - \varepsilon \end{array}\right.. \end{aligned}$$
(7)

We plot both functions in Fig. 4 and refer the reader to Appendix A2 for more details on the trust levels \(t_{q}(m|G,L,\varvec{p}_{\varepsilon})\). Lastly, we note that the higher the value of \(\varepsilon\), the higher the level of trust \(t_{q}(m|G,L,\varvec{p}_{\varepsilon})\), other things equal. The limit case \(\varepsilon = G\) is such that \(\varvec{p}_{G}(m) = 1\) and \(S_{\varvec{p}_{G}}(m) = G\) for all monitoring values m. Equation (7) then reads \(t_{2}(m|G,L,\varvec{p}_{G}) = \frac{1}{1+\frac{m}{G}}\). Note that trust can be less than 1 even if X is fully confident that the goal will be achieved with little supervision: this captures the case in which X (arguably irrationally) chooses to monitor Y.
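The closed forms in Eq. (7) are easy to transcribe and test. The sketch below uses the illustrative values of Fig. 4; note that L cancels out of Eq. (7) and therefore does not appear as a parameter. As a sanity check, both branches agree at \(m = G - \varepsilon\), and trust decreases monotonically in m, as axiom M1 requires.

```python
import math

def t1(m, G, eps):
    """t_1 from Eq. (7): the exponential (q -> 1) limit."""
    if m < G - eps:
        return eps / (m + eps)
    return (eps / G) * math.exp(-(m - G + eps) / G)

def t2(m, G, eps):
    """t_2 from Eq. (7): the q = 2 case, equal to 1 / (1 + I_p(m))."""
    if m < G - eps:
        return 1 / (1 + math.log((m + eps) / eps))
    return 1 / (1 + math.log(G / eps) + (m - G + eps) / G)

G, eps = 10, 1  # values of Fig. 4 (there L = eps = 1, but L drops out)
print(t1(0, G, eps), t2(0, G, eps))  # 1.0 1.0: full trust at m = 0
print(t2(G - eps, G, eps))           # 1 / (1 + ln 10), continuous across branches
assert all(t2(m, G, eps) > t2(m + 1, G, eps) for m in range(30))  # axiom M1
```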

In summary, the trust functions in Eq. (3) encode the intuition of trust as anti-monitoring via nonlinear transformations of the stakes. They satisfy axioms M1-2 and have values in (0,1], where 0 is a limit value corresponding to levels of monitoring approximating infinity, and the maximum at 1 is reached with no monitoring, i.e., m = 0.

Therefore, these trust functions are normalized, and their values can be interpreted as “percentages of trust” with respect to a maximum value, that is, that which is reached in absence of monitoring.

6 A methodology for measuring trust as anti-monitoring in empirical studies

The most common approach to the measurement of trust in the social sciences, psychology, and sociology is the psychometric technique in the form of surveys, which investigate the subject’s expectations and intentions. These surveys appeal to trust indicators, which are, in turn, built on the most widespread definition of trust within a given research field, a definition often not informed by the philosophical debate on trust.

The building blocks of our model of trust as anti-monitoring (Sect. 3) and trust relationships’ necessary property of low monitoring (Sect. 4) provide the theoretical toolbox that connects the definition of trust as anti-monitoring with the design of mathematical models of trust (Sect. 5). Our approach to trust provides the basis of a methodology for measuring the intensity of trust in interpersonal relationships through empirical techniques. The model of trust as anti-monitoring gives precise indications on how trust should be measured, based on measurable quantities of monitoring and the expected gains and losses deriving from achieving or not achieving the goal entrusted to a person who is relied upon.

Let us return to the example of Claire the CEO from Sect. 4. We seek to demonstrate how to empirically compute Claire’s level of trust in her team using the formalism introduced in Sect. 5. For the interested reader, Appendix A4 includes all details on the approach we follow in this example. As introduced in Sect. 4, Claire thinks that the project gain is G = $150,000 and the potential monetary loss is L = $100,000. To estimate the level of Claire’s trust in her team, we can proceed as follows. First, during a meeting, Claire may answer a question to ascertain whether she believes that the project goal could be achieved with a success rate comparable to certainty by investing enough resources in monitoring her team. This would be the case, for example, for a low-risk project similar to others that have been successfully finalized in the past and with a CEO like Claire who is very confident in her team and the monitoring structure in place. In the case of a positive answer, it seems appropriate to model her subjective reliability function using the family of piecewise linear functions \(\varvec{p}_{\varepsilon}\) (see Fig. 3). Then, to estimate the parameter ε,Footnote 26 Claire may be asked about her confidence in successfully achieving her goal in the absence of monitoring, as discussed in Sect. 5. Let us suppose that her confidence in the absence of monitoring is equal to 60%. Then, we get ε = $50,000. As a result, considering a planned level of monitoring equal to m = $65,000 and using Eq. (6), we can calculate that Claire’s confidence is equal to 86% (at m = $65,000).

Finally, we must estimate the parameter q of the trust function \(t_{q}(m|G,L,\varvec{p}_{\varepsilon})\). To do so, it is sufficient to ask Claire for her estimated level of trust in her team for levels of monitoring close to m = 0.Footnote 27 In summary, if Claire plans to invest $65,000 in monitoring her team, \(\varepsilon\) = $50,000, and q = 2, then measuring trust with the function \(t_{2}\) in Eq. (7) gives a trust level of \(t_{2}(\text{65,000}|G,L,\varvec{p}_{\varepsilon}) = 0.55\). Therefore, for a level of monitoring equal to $65,000, the level of Claire’s trust in her employees is 55% of full trust (i.e., trust with no monitoring). Hence, despite Claire’s high confidence in achieving the project goal, her level of trust is only slightly closer to full trust than to no trust, in virtue of the planned level of monitoring.
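The entire estimation pipeline of this section fits in a few lines of Python (a sketch reproducing the numbers above):

```python
import math

G, L = 150_000, 100_000   # Claire's estimated gain and loss
p0 = 0.60                 # elicited confidence with no monitoring
m = 65_000                # planned monitoring investment; q = 2 elicited near m = 0

eps = (G + L) * p0 - L                    # 50000.0
p_m = m / (G + L) + (L + eps) / (G + L)   # 0.86, confidence at m, Eq. (6)
I = math.log((m + eps) / eps)             # I_p(m) for m < G - eps
t = 1 / (1 + I)                           # t_2 from Eq. (7)
print(eps, p_m, round(t, 2))              # 50000.0 0.86 0.55
```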

7 Conclusions

We have introduced the idea that trust is a property of reliance relations and is antithetical to monitoring. We have proposed a set of conditions (or axioms, in the mathematical modeling) that a measure of trust as anti-monitoring must satisfy. We have described mathematical models wherein the intensity of trust is measured as a function of expected gains, losses, and monitoring. The models satisfy the axioms encoding our essential intuitions about trust as anti-monitoring. In the Appendix, we show that these models stem from a natural Ansatz aimed at modeling the decrease in the level of trust of a reliance relation as a function of the stakes, the levels of monitoring, and trust. In the final part of the paper, we have shown how our model can be applied in a practical context by explaining how a measure of trust could be produced for a hypothetical scenario.