1 Preliminaries

Among recent biological concepts, that of gene regulatory networks is particularly intriguing. Roughly speaking, it refers to a set of genes coding for proteins that can activate or inhibit the expression of other genes within the same set. Since such systems are likely to involve large numbers of genes, and since gene expression is moreover known to be a non-linear mechanism, mathematical models, though a necessary tool, face a major challenge. This fact has impelled a large amount of work, giving birth to numerous models; see de Jong (2002) for a review. Among the different types of models, a special class of piecewise linear differential equations was proposed by Glass in the early 1970s (Glass 1975). This class has the advantage of being mathematically much more tractable than the more classical, smooth non-linear equations, while nonetheless offering a comparable variety of behaviours. Following a common usage, we will often speak of ‘Glass systems’ in the following when referring to piecewise linear models.

Since its introduction, the class of piecewise-linear models has been studied in various ways. Periodic trajectories, or limit cycles, of these systems have been especially investigated. The first results concerned a particular case, characterized by the equality of all decay rates of the system. Under this assumption, trajectories are straight lines in the regions where the system is affine, and the whole analysis becomes much simpler. Notably, it is possible to derive an explicit Poincaré map, which is a fractional linear map. The problem of existence and stability of periodic orbits then reduces to an eigenanalysis. This method was introduced in (Glass and Pasternack 1978a, b), and subsequently improved by different authors (Edwards 2000; Farcot 2006; Mestl et al. 1995).

However, this technique cannot be extended to systems with non-homogeneous decay rates. Moreover, these rates define the time scales of the regulation processes at the different genes of a network, and as such are very unlikely to all take the same value. To our knowledge, only one study did not require this uniform decay rate assumption, but it concerned systems whose interaction structure consists of a single, negative loop (Snoussi 1989). Biological examples, such as the repressilator (Elowitz and Leibler 2000), make this type of interaction structure relevant. For such systems, the existence, but not uniqueness, of a stable limit cycle in dimension ≥3 and of a stable focus in dimension 2 is proven in (Snoussi 1989). This proof relies on Brouwer’s fixed point theorem, and is not entirely self-contained.

The goal of the present paper is to propose some generic techniques for the study of periodic orbits in piecewise affine systems with arbitrary decay rates. To this aim, we shall rely on properties of these systems which do not depend on these decay rates. One such property is the fact that the flow is monotone in each region where the dynamics is affine. As for periodic orbits, it is possible in general to define a Poincaré return map, but its associated Poincaré section will usually have to be partitioned into domains on which the return map is monotone with respect to different partial orders. Under an additional assumption, however, we will show that the long-term dynamics is entirely contained in a single such region. On this region, we then apply a fixed point theorem for monotone, concave operators. There actually exists a well-developed body of results about operators satisfying such properties, especially for systems with positive variables. These results are generic, and the hypotheses on which they rely are naturally satisfied in the context of piecewise affine models of gene networks.

Instead of giving a generic and abstract presentation of the method, we shall illustrate its potential on an example. Namely, we choose to consider the same systems as Snoussi (1989), since they naturally satisfy our additional assumption. For these negative feedback loops, we obtain a complete proof of the existence and uniqueness of the attractors (limit cycle and fixed point). Moreover, let us insist on the fact that the mathematical tools involved in proving this result seem to have been ignored in the context of piecewise linear gene network models, and offer a very promising framework.

In order to give a more precise foretaste of the rest of this paper, let us now state the main result in an informal way. The general methodology is as follows. We consider a sequence \({\mathcal C}\) of regions in state space, which is periodically crossed by trajectories of the system. Any two successive regions in \({\mathcal C}\) share a domain, included in both their boundaries, called a wall. One such wall (let us denote it W here) can be chosen arbitrarily as a Poincaré section, and the periodic orbits of the system in \({\mathcal C}\) are characterized by means of a first return map on W. Then, we apply a fixed point theorem (see Sect. 3.1) to this map. In the case of a negative feedback loop system, it can be shown that a region \({\mathcal C}\) of the form above exists for a large set of parameter values. Our main result then states that in \({\mathcal C}\) two asymptotic behaviours can occur, depending on the number, say n, of variables:

  • if n = 2, all trajectories in \({\mathcal C}\) converge toward the origin.

  • if n > 2, all trajectories in \({\mathcal C}\) converge toward a unique stable limit cycle.

The class of piecewise affine models we focus on is presented in Sect. 2. A rapid overview of the tools and references we rely on concerning monotone systems is given in Sect. 3.1, followed in Sect. 3.2 by general remarks on monotonicity in piecewise affine models. These remarks are then applied to negative feedback loop systems in Sect. 4, where the main results are stated and proven, except for the most technical aspects, which are postponed to the Appendix.

2 Piecewise Affine Models

2.1 Formulation

The general form of the piecewise affine models we consider here may be written as:

$$ \frac{dx} {dt} = \kappa(x) - \Upgamma x.$$
(1)

The variables \((x_1 \ldots x_n)\) represent concentrations of proteins or mRNA produced from n interacting genes. Since gene transcriptional regulation is widely supposed to follow a steep sigmoid law, it has been suggested that idealized, discontinuous switches may be used instead to model these complex systems (Glass 1975). Accordingly, \({\mathsf s}^+(\cdot ,\theta):{\bf R}\to \{0,1\}\) denotes the increasing step function, or Heaviside function:

$$ \left\{ \begin{array}{ll} {\mathsf s}^+(x,\theta) = 0 &\hbox {if } x< \theta,\\ {\mathsf s}^+(x,\theta) = 1 &\hbox {if } x\geq\theta, \end{array} \right. $$

which represents an effect of activation. Also, \({\mathsf s}^-(x,\theta)=1-{\mathsf s}^+(x,\theta)\) is its decreasing version, and represents inhibition.

Then, \(\kappa:{\bf R}_+^n\to{\bf R}^n_+\) is a piecewise constant production term that can be expressed in terms of the step functions \({\mathsf s}^\pm(x_i,\theta_i)\). \(\Upgamma \in {\bf R}_+^{n\times n}\) is a diagonal matrix whose diagonal entries \(\Upgamma_{ii}=\gamma_i\) are the degradation rates of the variables of the system.
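To make the formulation concrete, here is a minimal numerical sketch of the right-hand side of (1) for a hypothetical two-gene example; the interaction pattern, thresholds, production and degradation rates below are illustrative assumptions of ours, not values taken from the text.

```python
import numpy as np

def s_plus(x, theta):
    """Increasing step function s^+(x, theta): 1 if x >= theta, else 0."""
    return 1.0 if x >= theta else 0.0

def s_minus(x, theta):
    """Decreasing step function s^-(x, theta) = 1 - s^+(x, theta)."""
    return 1.0 - s_plus(x, theta)

# Hypothetical two-gene loop: gene 2 activates gene 1, gene 1 inhibits gene 2.
gamma = np.array([1.0, 2.0])   # decay rates (diagonal of Gamma)
theta = np.array([0.5, 0.5])   # one threshold per variable

def kappa(x):
    """Piecewise constant production term kappa(x)."""
    return np.array([0.1 + 1.0 * s_plus(x[1], theta[1]),
                     0.2 + 1.5 * s_minus(x[0], theta[0])])

def vector_field(x):
    """Right-hand side of Eq. (1): dx/dt = kappa(x) - Gamma x."""
    return kappa(x) - gamma * x
```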

Since each variable \(x_i\) is a concentration (of mRNA or protein), it ranges in some interval of nonnegative values denoted \([0, {\tt max}_i]\). When this concentration reaches a threshold value, some other gene in the network, say gene number j, is suddenly produced at a different rate: the value of \(\kappa_j\) changes. For each i ∈ {1…n}, there is thus a finite set of threshold values, which serve as parameters of the step functions:

$$ {\Uptheta}_i=\{\theta_{i}^1,\ldots,\theta_{i}^{q_i-1}\}, $$
(2)

where the thresholds are ordered: \(0 < \theta_{i}^1 <\,\cdots\,< \theta_{i}^{q_i-1} < {\tt max}_i.\) The extreme values 0 and \({\tt max}_i\) are not thresholds, since they bound the values of \(x_i\), and thus may not be crossed. However, a conventional notation will be: \(\theta_i^0 = 0\) and \(\theta_i^{q_i}={\tt max}_i.\)

Now, at a time t such that \(x_i(t) \in \Uptheta_i,\) there is some \(j \in \{1 \ldots n\}\) such that \(\kappa_j(x(t^+)) \neq \kappa_j(x(t^-))\). It follows that each axis of the state space is usefully partitioned into open segments between thresholds. Since the extreme values will not be crossed by the flow (see later), the first and last segments include one of their endpoints:

$$ {\mathcal D}_i \in \left\{\left[\theta_i^0, \theta_i^1\right),\;\left(\theta_i^{q_i-1}, \theta_i^{q_i}\right]\right\} \cup \left\{(\theta_i^j, \theta_i^{j+1}) | j \in \{1\ldots q_i-2\}\right\} \cup \Uptheta_i $$
(3)

Each product \({\mathcal D}=\prod_{i=1}^n{\mathcal D}_i\) defines a rectangular domain, whose dimension is the number of \({\mathcal D}_i\) that are not singletons. When \(\dim{\mathcal D}=n,\) one usually says that it is a regulatory domain, or regular domain, and those domains with lower dimension are called switching domains (de Jong et al. 2004). We use the notation \({\fancyscript{D}}\) to represent the set of all domains of the form above. Then, \({\fancyscript{D}}_r\) will denote the set of all regulatory domains, and \({\fancyscript{D}}_s\) the set of all switching domains.

A convenient way to represent regular domains makes use of a discrete map, whose range we write as:

$$ {\mathcal A} = \prod_{i=1}^n\{0 \ldots q_i-1\}. $$
(4)

The map in question is then defined as \({\mathsf d}:{\fancyscript{D}}_{r} \to {\mathcal A},\) and sends each regular domain to the superscript of its “lower-left” corner: \({\mathsf d}\left(\prod_i (\theta_i^{a_i-1}, \theta_i^{a_i})\right)=(a_1-1\ldots a_n-1).\) In the following, we will often identify regular domains and their image in \({\mathcal A},\) talking for example about some ‘domain a’.
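For illustration, the discrete map \({\mathsf d}\) simply counts, for each coordinate, how many thresholds lie below it. A small sketch (with hypothetical threshold values and 0-based coordinates) could read:

```python
import bisect

# Hypothetical threshold sets Theta_i (illustrative values); the bounds
# theta_i^0 = 0 and theta_i^{q_i} = max_i are not listed here.
thresholds = [[0.5, 1.2],   # Theta_1 = {theta_1^1, theta_1^2}
              [0.8]]        # Theta_2 = {theta_2^1}

def discrete_state(x):
    """Image under d of the regular domain containing x, i.e. the number of
    thresholds lying strictly below each coordinate of x."""
    return tuple(bisect.bisect_left(th, xi) for th, xi in zip(thresholds, x))

print(discrete_state([0.7, 0.3]))   # -> (1, 0): x_1 lies above one threshold, x_2 above none
```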

The dynamics on regular domains, called here regular dynamics, can be defined quite simply, due to the simple expression of the flow in each \({\mathcal D} \in {\fancyscript{D}}_r.\) On sets of \({\fancyscript{D}}_s,\) on the other hand, the flow is in general not uniquely defined. It is anyway possible to define solutions in a rigorous way, yielding what will be mentioned as the singular dynamics, at the price of considering set-valued solutions. The latter’s definition rests on Filippov’s theory of differential equations with a discontinuous right-hand side, and its formulation in the particular case of Eq. 1 can be found in (Casey et al. 2006; Gouzé and Sari 2003), among others. Since we will only encounter systems for which the Filippov solutions lead to single valued trajectories, we shall only describe the regular dynamics with some detail.

2.2 Regular Dynamics

Regulatory domains are of particular importance. They form the main part of state space, and the dynamics on them can be expressed quite simply. Actually, on such a domain \({\mathcal D},\) the production rate κ is constant, and thus Eq. 1 is affine. Its solution is explicitly known, for each coordinate i:

$$ \varphi_i(x,t)=x_i(t) = \frac{\kappa_i} {\gamma_i} + e^{-\gamma_i t}\left(x_i - \frac{\kappa_i} {\gamma_i}\right), $$
(5)

and is valid for all \(t \in {\bf R}_+\) such that \(x(t) \in {\mathcal D}.\) It follows immediately that

$$ \phi({\mathcal D}) = \left(\phi_1 \ldots \phi_n\right)=\left(\frac{\kappa_1} {\gamma_1} \ldots\frac{\kappa_n} {\gamma_n}\right) $$

is an attractive equilibrium point of the flow (5). Hence, if it lies inside \({\mathcal D},\) it is an actual equilibrium of system (1). Otherwise, the flow reaches the boundary \(\partial{\mathcal D}\) in finite time, unless \(\phi({\mathcal D})\) lies exactly on the boundary of \({\mathcal D}.\) The latter situation is clearly not generic, and we assume in the rest of this paper that it never occurs. At the time when the flow reaches \(\partial{\mathcal D},\) the value of κ changes, and that of ϕ accordingly. The point \(\phi({\mathcal D})\) is often called the focal point of the domain \({\mathcal D}.\) The continuous flow can then be reduced to a discrete-time dynamical system, whose state space is supported by the boundaries of the boxes in \({\fancyscript{D}}_r.\)
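To fix ideas with the illustrative two-gene parameters used in the sketch of Sect. 2.1 (our numbers, not taken from the text): in the regular domain where \(x_1 > \theta_1\) and \(x_2 < \theta_2\), the production term is κ = (0.1, 0.2), so that \(\phi({\mathcal D}) = (\kappa_1/\gamma_1, \kappa_2/\gamma_2) = (0.1, 0.1)\). Since \(\phi_1 = 0.1 < \theta_1 = 0.5\), this focal point lies outside \({\mathcal D}\), and the flow leaves \({\mathcal D}\) in finite time through the wall \(\{x_1 = \theta_1\}\).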

Hence, the state space of this discrete-time system is part of \({\fancyscript{D}}_s,\) which may seem problematic at first sight. In fact, it is not always possible to define discrete-time trajectories on domains of \({\fancyscript{D}}_s\) having dimension n − 2 or less. On any (n − 1)-dimensional domain \({\mathcal D},\) on the other hand, such a definition can always be properly provided, in terms of the flow lines in the two regular domains separated by \({\mathcal D}.\) If these flow lines both point towards, or both away from, \({\mathcal D}\) (which is expressed formally using the flow coordinate normal to \({\mathcal D}\)), the latter is called respectively a black wall or a white wall. Otherwise, i.e. when the flow lines cross \({\mathcal D}\) in the same direction, one usually refers to \({\mathcal D}\) as a transparent wall. In the first two cases, Filippov theory is required, providing set-valued trajectories on the walls, so-called sliding modes (Gouzé and Sari 2003). On transparent walls, trajectories are simply defined by continuity, from the flow lines on both sides. We shall encounter only this third kind of wall in the rest of this paper.

Note that if \({\mathcal D} \in {\fancyscript{D}}_r\) is represented by \(a \in {\mathcal A},\) i.e. \({\mathsf d}({\mathcal D})=a,\) we shall most often denote by \(\phi(a)\), or \(\phi^a\), the focal point associated to this domain.

Once the flow (5) is given in a box \({\mathcal D}_a,\) it is easy to compute the time and position at which it intersects the boundary of \({\mathcal D}_a,\) if ever. Whether each facet can be encountered by the flow depends only on the position of the focal point: \(\{x | x_i=\theta_{i}^{a_i-1}\}\) (resp. \(\{x | x_i=\theta_{i}^{a_i}\}\)) can be crossed if and only if \(\phi_i < \theta_{i}^{a_i-1}\) (resp. \(\phi_i > \theta_{i}^{a_i}\)). According to this observation, we denote \(I_{\rm out}^+(a) =\{i \in \{1\ldots n\} | {\phi_i} > \theta_{i}^{a_i}\},\) and \(I_{\rm out}^-(a) = \{i \in \{1\ldots n\} | {\phi_i} < \theta_{i}^{a_i-1}\}.\)

Then, \(I_{\rm out}(a)=I_{\rm out}^+(a) \cup I_{\rm out}^-(a)\) is the set of escaping directions of \(\mathcal D_a.\) Also, it will be practical to single out the lower and upper thresholds bounding a box, thanks to the pairs of functions \(\theta_i^\pm:{\mathcal A}\to\Uptheta_i,\) \(\theta_i^-(a)=\theta_{i}^{a_i-1}\) and \(\theta_i^+(a)=\theta_{i}^{a_i}.\) These functions are merely convenient notation, and bring no further information about the system.

When it is unambiguous, we will omit the dependence on a in the sequel.

Now, in each direction \(i \in I_{\rm out}(a)\), the time at which x(t) encounters the corresponding hyperplane, for \(x \in {\mathcal D}_a,\) can easily be shown to be:

$$ \tau_i(x)=\frac{-1} {\gamma_i}\ln\left(\min\left\{\frac{\phi_i -\theta_{i}^-(a)}{\phi_i -x_i}, \frac{\phi_i -\theta_{i}^+(a)} {\phi_i -x_i}\right\}\right). $$
(6)

Taking the minimum \(\tau(x)=\min_{i \in I_{\rm out}}\tau_i(x),\) and injecting it into Eq. 5, we get the exit point of \({\mathcal D}_a\) when starting at x. Since this process is intended to be repeated along trajectories, x will generally lie on the boundary of the current box, except for the initial condition, which may however be chosen on a facet without loss of generality. We then get a transition map \({\mathcal T}^a: \partial {\mathcal D}_a\rightarrow \partial {\mathcal D}_a:\)

$$ \begin{aligned} {\mathcal T}^ax & = \varphi\left(x,\tau(x)\right)\\ & = \phi + \alpha(x) (x-\phi)\\ \end{aligned} $$
(7)

where α(x) = exp(− τ(x)Γ). The latter depends on a, as seen from (6).
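The exit computation of Eqs. (6)–(7) can be sketched as follows (a minimal implementation under the assumption that the current box, its focal point and the decay rates are given as arrays; indices are 0-based):

```python
import numpy as np

def transition(x, phi, gamma, lower, upper):
    """Exit time tau(x) (Eq. 6) and exit point T^a x (Eq. 7) for a regular domain
    with bounds `lower`, `upper`, focal point `phi` and decay rates `gamma`."""
    x, phi, gamma = map(np.asarray, (x, phi, gamma))
    taus = []
    for i in range(len(x)):
        if phi[i] > upper[i]:        # escaping direction, i in I_out^+(a)
            ratio = (phi[i] - upper[i]) / (phi[i] - x[i])
        elif phi[i] < lower[i]:      # escaping direction, i in I_out^-(a)
            ratio = (phi[i] - lower[i]) / (phi[i] - x[i])
        else:
            continue                 # non-escaping direction: no crossing possible
        taus.append(-np.log(ratio) / gamma[i])
    tau = min(taus)                  # first facet reached
    alpha = np.exp(-tau * gamma)     # diagonal of alpha(x)
    return tau, phi + alpha * (x - phi)
```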

Now, the initial system (1) may be reduced to a discrete-time dynamical system, as mentioned earlier. It consists of iterates of a global map \({\mathcal T}\) on a domain \(\hbox{Dom}_{\mathcal T}.\) As already mentioned, \(\hbox{Dom}_{\mathcal T}\) must be contained in the set of (n − 1)-dimensional domains of \({\fancyscript{D}}_s.\) Since \({\mathcal T}\) has to be iterated on this domain, all points which reach some (n − 2)-dimensional (or lower) domain after applying \({\mathcal T}\) a finite number of times must be excluded from \(\hbox{Dom}_{\mathcal T}.\) Furthermore, whereas \({\mathcal T}\) is readily defined on transparent walls, black and white walls are much less obvious to deal with. We thus suppose from now on that \(\hbox{Dom}_{\mathcal T}\) entirely consists of transparent walls. A more detailed discussion of the topology of this domain can be found in (Farcot 2006). A simple criterion ensuring that all walls are transparent is the absence of auto-regulation, in the sense that no production term \(\kappa_i\) depends on \(x_i\).

At each point of such a domain, it is possible to consistently apply a unique local map \({\mathcal T}^a.\) Actually, each wall is either on the boundary of the whole domain \(\bigcup_a{\mathcal D}_a,\) and thus of a single regular domain \({\mathcal D}_a,\) or can be written as an intersection \(\partial{\mathcal D}_a\cap \partial{\mathcal D}_b\) of two regular domain boundaries. It is clear that on transparent walls there is always exactly one of the two domains, say \({\mathcal D}_a,\) such that \({\mathcal T}^a\) is not the identity. Applying \({\mathcal T}^a\) is then consistent with the orientation of the flow lines. Yet, here we only care about forward trajectories, since \({\mathcal T}^{-1}\) is not properly defined on the full domain \(\hbox{Dom}_{\mathcal T}.\)

Suppose now that there is a wall \(W\subset\hbox{Dom}_{\mathcal T},\) and a sequence \(a^1 \ldots a^\ell\) of regular domains such that \(({\mathcal T}^{a^{\ell}}{\circ}{\mathcal T}^{a^{\ell-1}}{\circ}\,\cdots\,{\circ} {\mathcal T}^{a^1}) (W)\,{\cap}\, W\,{\neq}\,\emptyset.\) It is then tempting to study the dynamics of this iterated map on W. In particular, any fixed point of this map determines a periodic trajectory of the continuous-time system (1).

3 Monotone, Concave Maps

3.1 General Results

The main result of this paper strongly relies on one theorem, which we recall here. It is a fixed point theorem for monotone, concave operators. Both monotonicity and concavity are to be understood with respect to some partial order; see below for definitions and notation. Operators with both these properties have led to various results on existence and uniqueness of fixed points. Among early studies, Krasnosel’skii provided important results (Krasnosel’skii 1964). Here, we use a more recent presentation of such results, due to Smith (1986), whose formulation is well adapted to our purpose and context. Namely, we use Theorem 2.2 of (Smith 1986).

Before stating the latter, let us introduce some notation (similar to that of (Smith 1986)). We write x < y and x ≤ y if these inequalities hold for each coordinate (resp. entry) of the vectors (resp. matrices) x and y. Then, we write \(x\,\lneq\,y\) if x ≤ y and x ≠ y. For x ≤ y, [x, y] = {z | x ≤ z ≤ y}, and (x, y) = {z | x < z < y}. For any set A, \({\mathring A}\) denotes the interior of A, and cℓ(A) its closure. We can now state the following:

Theorem 1

Let \(p \in {\mathring R}^n_+,\) and let T : [0, p] → [0, p] be continuous and \(C^1\) in (0, p). Suppose \(DT(0)=\lim\limits_{{x\to 0}\atop {x>0}} DT(x)\) exists. Assume

  • (M) DT(x) > 0 if x > 0, x < p.

  • (C) DT(y)\(\,\lneq\,\) DT(x) if 0 < x < y < p.

Assume also Tp < p.

Suppose that T0 = 0, and define λ = ρ(DT(0)), the spectral radius of DT(0). Then,

  • λ ≤ 1 ⇒ ∀x ∈ [0, p], \(T^n x \to 0\) when n → ∞.

  • λ > 1 ⇒ There exists a unique nonzero fixed point q = Tq. Moreover, q ∈ (0, p) and for every \(x \in [0,p] \setminus \{0\},\) \(T^n x \to q\) as n → ∞.

The case T0 ≠ 0 leads to either a unique fixed point, or to diverging orbits. However, this case will not appear in our context, and thus it is not detailed in the theorem above.
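As a one-dimensional illustration of this alternative (an example we add for concreteness; it is not part of Smith’s statement), consider on [0, p] the map

$$ T(x)=\frac{\lambda x}{1+x}, \qquad 0 < \lambda < 1+p. $$

Then \(DT(x)=\lambda/(1+x)^2 > 0\), so (M) holds; DT is strictly decreasing, so (C) holds; and Tp < p. Since DT(0) = λ, the alternative reads: if λ ≤ 1 all orbits converge to 0, while if λ > 1 they converge to the unique nonzero fixed point q = λ − 1 ∈ (0, p).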

Remark now that when T is twice differentiable, the concavity condition (C) admits a simple sufficient condition. Below, \(T_i:[0,p] \to {\bf R}_+\) denotes the ith coordinate function of T : [0, p] → [0, p].

Proposition 1

Suppose that for all i, j, k ∈ {1…n}, and for all 0 < x ≤ p,

$$ \frac{\partial^2 T_i}{\partial x_k\partial x_j}(x) \leq 0, $$

and for all i, j there exists a k such that the inequality is strict.

Then T satisfies condition (C) of Theorem 1.

Actually, it is clear that under this condition each term \(\frac{\partial T_i}{\partial x_j}\) of DT is a decreasing function of each coordinate \(x_k\). It is moreover strictly decreasing in at least one of these coordinates, and (C) thus follows. Observe, by the way, that the notion of concavity (w.r.t. a partial order) we deal with here is weakened by the fact that it concerns only ordered pairs (x, y) of variables.

Remarkably, Theorem 1 relies on the Perron–Frobenius theorem, which will also be used at some point in the proof of our main result. It states that any matrix with positive entries admits a positive, simple eigenvalue, associated with a positive eigenvector. Here, we will only need the following corollary: if A is a real n × n matrix such that A > 0, then A admits a positive real eigenvalue.

3.2 Regions of Fixed Monotonicity and Concavity

In order to apply Theorem 1 to an iterate of the transition map, it is necessary to check whether it is monotone and concave, in the sense of properties (M) and (C). In this section it is shown that these conditions cannot be satisfied by the transition map on a whole wall, and that some refinement is needed. To this aim, we have to compute the Jacobian of a composite map, of the form \(D({\mathcal T}^{a^{\ell}}{\circ}{\mathcal T}^{a^{\ell-1}}{\circ}\,\cdots\,{\circ}{\mathcal T}^{a^1}).\) This requires the computation of an arbitrary \(D{\mathcal T}^{a^i}.\)

Let W be an arbitrary wall. We have seen in the previous section that \({\mathcal T}={\mathcal T}^a\) on W, for some uniquely defined a. Let moreover x ∈ W be such that there exists a unique \(s \in I_{\rm out}(a)\) with \(\tau(x) = \tau_s(x)\), and denote by \(W'\subset\{x | x_s=\theta_s\}\) the wall such that \({\mathcal T} x \in W'.\) Then there exists an open neighbourhood of x in which the following holds:

$$ {\mathcal T}_ix = \phi_i^a + (x_i - \phi^a_i) \alpha_i(x), $$
(8)

where \({\mathcal T}_i\) is the ith coordinate function of \({\mathcal T}.\) To abbreviate computations, we have denoted by \(\alpha_i\) the ith diagonal entry of the diagonal matrix α(x) defined in the previous section:

$$ \alpha_i(x) = \left(\frac{\phi^a_s - \theta_s} {\phi^a_s - x_s}\right)^ \frac{\gamma_i} {\gamma_s}. $$
(9)

Observe in particular that \(\alpha_s(x)\) bears no exponent, or in other words that \(\alpha_i(x)=\left(\alpha_s(x)\right)^ \frac{\gamma_i}{\gamma_s}. \)

Then it is rather straightforward to compute the partial derivatives at a point x:

$$ \frac{\partial{\mathcal T}_i} {\partial x_j} =\left\{ \begin{array}{ll} \left(\alpha_s(x)\right)^\frac{\gamma_i} {\gamma_s} & \hbox {if }\; i=j\\ -\frac{\gamma_i} {\gamma_s}\frac{\phi_i^a - x_i} {\phi_s^a - x_s} \left(\alpha_s(x)\right)^\frac{\gamma_i} {\gamma_s} & \hbox {if }\; j=s\\ 0 & \hbox {otherwise}. \\ \end{array} \right. $$
(10)

It appears from (10) that the partial derivatives are of constant sign if, and only if, \(\hbox{sign}(\phi^a_i - x_i)\) is fixed. Actually, all degradation rates \(\gamma_i\) are positive, and \(\alpha_s(x)=\exp(-\gamma_s \tau(x)) \in (0,1)\) since τ(x) is nonnegative.
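As a quick cross-check of (10), one may compare it with a finite-difference approximation of \(D{\mathcal T}^a\) (the parameter values, exit direction s and crossed threshold below are illustrative assumptions of ours; indices are 0-based):

```python
import numpy as np

# Illustrative data (assumed values): a 3-variable box left through direction s,
# crossing the threshold theta_s; phi is the focal point, gamma the decay rates.
gamma = np.array([1.0, 2.0, 0.5])
phi   = np.array([1.5, -0.4, 0.8])
s, theta_s = 0, 1.0                  # exit through {x_s = theta_s}, with phi_s > theta_s

def T(x):
    """Local transition map (8)-(9) near a point exiting through direction s."""
    alpha = ((phi[s] - theta_s) / (phi[s] - x[s])) ** (gamma / gamma[s])
    return phi + (x - phi) * alpha

def jacobian_eq10(x):
    """Partial derivatives of T following Eq. (10)."""
    alpha = ((phi[s] - theta_s) / (phi[s] - x[s])) ** (gamma / gamma[s])
    J = np.diag(alpha)                                                    # case i = j
    J[:, s] = -(gamma / gamma[s]) * (phi - x) / (phi[s] - x[s]) * alpha   # case j = s
    return J

x = np.array([0.3, 0.2, 0.1])
h, J_num = 1e-6, np.empty((3, 3))
for j in range(3):                                        # centred finite differences
    e = np.zeros(3); e[j] = h
    J_num[:, j] = (T(x + e) - T(x - e)) / (2 * h)

rows = [i for i in range(3) if i != s]                    # coordinate s is sent to the
print(np.allclose(J_num[rows], jacobian_eq10(x)[rows], atol=1e-5))   # constant theta_s -> True
```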

Let us now make two observations on the sign pattern of \(\phi^a - x\) in general. The first one concerns its invariance with respect to time evolution:

Proposition 2

From the expression (5), observe that the quantities \(\hbox{sign}(\phi^a_i - x_i)\) are preserved by the flow, in the sense that \(\hbox{sign}(\phi^a_i - x_i) = \hbox{sign}(\phi^a_i - \varphi_i(x,t))\) for any \(t \geq 0.\)

In particular, since the wall W chosen at the beginning of this section is included in a hyperplane of the form \(\{x | x_j = \theta_j\}\), the above observation yields \(\hbox{sign}(\phi^a_j - {\mathcal T}_j x)=\hbox{sign}(\phi^a_j - \theta_j). \)

Similarly, for any x which escapes via the wall \(W'\subset\{x | x_s=\theta_s\},\) we have \(\hbox{sign}(\phi^a_s - x_s) = \hbox{sign}(\phi^a_s - \theta_s).\)

A second fact concerns some particularities of the different coordinates for \(\hbox{sign}(\phi^a - x)\).

Proposition 3

For any escaping direction \(i \in I_{\rm out}(a)\), either \(x_i < \theta_i^+(a) < \phi^a_i\) or \(\phi^a_i < \theta_i^-(a) < x_i\), and in both cases \(\hbox{sign}(\phi^a_i - x_i)\) is constant in the whole closure \(c\ell({\mathcal D}_a)\) of the domain. If \(\theta_i \in \{\theta_i^-(a), \theta_i^+(a)\}\) is the escaping threshold, this sign equals \(\hbox{sign}(\phi^a_i - \theta_i)\).

For the non-escaping directions, on the other hand, \(\hbox{sign}(\phi^a_i - x_i)\) is not constant on W, since \(\theta_i^-(a) < \phi^a_i < \theta_i^+(a)\).

Thus, condition (M) cannot be valid on the whole wall W, unless all directions are escaping.

Since a differential of the form \(D({\mathcal T}^{a^\ell}{\circ}{\mathcal T}^{a^{\ell-1}}{\circ}\,\cdots\,{\circ}{\mathcal T}^{a^1})\) is a product of differentials of local maps \(D({\mathcal T}^{a^i})\) and the latter may only be positive in a region where the vector \({\phi}^{a^i}-x\) has a fixed sign pattern, it seems relevant to restrict the study to such regions.

Now, condition (C) concerns the variations of the terms in (10). As shown by Proposition 1, the second-order derivatives are informative in this case. They can be expressed as:

$$ \frac{\partial^2{{\mathcal T}_i}}{\partial x_m\partial x_j} =\left\{ \begin{array}{ll} \frac{\gamma_i} {\gamma_s} \frac{\left(\alpha_s(x)\right)^\frac{\gamma_i} {\gamma_s}} {\phi_s^a-x_s} & \hbox {if }\; i=j, m=s\,\hbox {or } j=s,\, m=i\\ -\frac{\gamma_i} {\gamma_s} \left(1+\frac{\gamma_i} {\gamma_s}\right) \frac{\phi_i^a - x_i}{(\phi_s^a - x_s)^2} \left( \alpha_s(x) \right) ^\frac{\gamma_i} {\gamma_s} & \hbox {if }\; m=j=s\\ 0 & \hbox {otherwise} \end{array} \right. $$
(11)

Here again, these derivatives are of fixed sign only if \(\phi^a - x\) has a fixed sign pattern. Hence, both first and second order derivatives are of constant sign in the same regions. We are thus naturally led to consider rectangular regions partitioning the wall W, defined by the sign pattern of \(\phi^a - x\), which may take \(2^n\) different values, hence leading to at most that number of rectangular regions on each wall.

In general, considering a sequence \(\{a^1 \ldots a^{\ell}\}\) of domains periodically crossed by flow lines, we have moreover to deal with the intersections of these regions and the images, through the transition map, of regions on previous walls. This may lead to a high number of configurations, and requires some detailed combinatorial analysis. As this paper aims at providing a first attempt to use the monotonicity properties of Glass systems, we shall make a simplifying assumption. Geometrically, this assumption can be described as the fact that successive focal points are supposed to be aligned. This may be written as:

$$ \forall i \in \{1 \ldots \ell\}, \forall j \in \{1 \ldots n\} \setminus\{s_{i+1}\},\quad \phi^{a^i}_j = \phi^{a^{i+1}}_j, $$
(12)

where \(s_i\) is supposed to be the unique escaping direction from domain \(a^i\). This alignment assumption will allow us to use Theorem 1 in the region where the long-term dynamics takes place. Furthermore, it will be shown that the geometric property (12) can also be obtained as a consequence of further assumptions on the interaction structure of the system: if the latter consists of a single negative feedback loop, there exists a periodic sequence of boxes satisfying (12), as we will see in the next section.

4 Negative Feedback Loop Systems

4.1 Preliminary Definitions

In this section, we focus on systems of the form:

$$ \left\{ \begin{array}{ll} \frac{dx_1} {dt} = \kappa_1^0+\kappa_1^1 {\mathsf s}^-(x_n,\theta_{n}^1) - \gamma_1 x_1 &\\ \frac{dx_i} {dt} = \kappa_i^0+\kappa_i^1 {\mathsf s}^+(x_{i-1},\theta_{i-1}^1) - \gamma_i x_i, &i=2 \ldots n. \end{array} \right. $$
(13)

In such systems, each variable \(x_i\) activates the production of the next variable, \(x_{i+1}\), except \(x_n\), which inhibits \(x_1\). Hence, their interaction structure is that of a negative feedback loop. Up to a translation, they are of the form dealt with by Snoussi in (Snoussi 1989). In the case n = 3 we find the well-known repressilator (Elowitz and Leibler 2000). Remark now that any production term \(\kappa_i\) is independent of the variable \(x_i\), and thus all walls must be transparent.

Without introducing unnecessary formalism, let us only say that we call interaction graph, IG, an oriented, labeled graph whose vertices represent the variables of the system, and whose edges represent interactions between them. For example, \({ x_i\mathop {\longrightarrow}\limits^{{-}}x_j }\) if an increase of \(x_i\) may lead to a decrease of \(\kappa_j\), at least at some point x in state space. Then, a negative loop is a cycle in IG with an odd number of negative edges. It is easily shown that any system whose interaction graph consists of a single negative loop (involving all variables) is equivalent to (13), up to a symmetry in state space.

The particular form of (13) implies that there is only one threshold \(\theta_i^1\) in each direction i, and \(\theta_i^0, \theta_i^2\) represent the bounds of the range of \(x_i\), in accordance with (2). In the following we always assume that

$$ \forall i \in \{1\ldots n\} \quad \kappa_i^0+\kappa_i^1 > \gamma_i \theta_i^1 \quad \hbox {and} \quad \kappa_i^0 < \gamma_i \theta_i^1 $$
(14)

This implies that the higher values of the focal points are above the thresholds, and the lower values below them. Otherwise, all trajectories would converge to a unique fixed point, see (Snoussi 1989, Lemma 1).

Before giving more detail, let us carry out a few simplifying operations. First, we set \(w_j = x_j - \theta_j^1\). Then,

$$ \dot w_j = \kappa_j^0+\kappa_j^1{\mathsf s}^\pm(w_{j-1},0)-\gamma_j\theta_j^1 - \gamma_jw_j, $$

and thus all threshold values are zero. Accordingly, all regular domains are now identified with (bounded rectangular regions of) the \(2^n\) orthants of \({\bf R}^n.\) They are mapped as explained in Sect. 2.1 onto the discrete set \({\mathcal A}=\{0,1\}^n.\) The corresponding discrete mapping \({\mathsf d}:{\fancyscript{D}}_r\to{\mathcal A}\) sends negative coordinates to 0, and positive ones to 1.

The bounds of the whole domain are now denoted

$$ \theta_i^-=\theta_i^0-\theta_i^1=-\theta_i^1\quad\hbox {and} \quad \theta_i^+=\theta_i^2-\theta_i^1. $$

Observe that variables may now be negative, and do represent concentrations only up to a translation. The new dynamical system behaves identically to (13), once thresholds have been set to zero, and the following renamings have been performed:

$$ \kappa_i^0\leftarrow \kappa_i^0 - \gamma_i\theta_i^1, \quad \hbox {and} \quad \phi_i \leftarrow \phi_i-\theta_i^1. $$

Then, for each i ∈ {1…n}, the focal point coordinate \(\phi_i\) may take only two values. We abbreviate them as follows:

$$ \phi_i^- = \frac{\kappa_i^0}{\gamma_i} \quad\hbox {and} \quad \phi_i^+ = \frac{\kappa_i^0+\kappa_i^1} {\gamma_i}. $$
(15)

Let us also denote \(\kappa_i^\pm = \gamma_i \phi_i^\pm.\)

Remark that \(\phi_i^-\) is negative and \(\phi_i^+\) positive (and similarly for \(\kappa_i^\pm\)). From this observation, we deduce that the focal points are entirely determined by the signs of their coordinates. In other words, the coordinates of any focal point are known exactly from the regular domain it belongs to. More precisely, if a focal point belongs to a domain \(a \in {\mathcal A},\) the sign of \(\phi_i\) is \(2a_i - 1 \in \{\pm 1\}\), for each i ∈ {1…n}.

Now it is time to describe the dynamics of (13). In brief, we now focus on systems of the form below, into which any system (13) can be transformed:

$$ \left\{ \begin{array}{ll} \frac{dx_1} {dt} = \kappa_1^-+(\kappa_1^+-\kappa_1^-) {\mathsf s}^-(x_n,0) - \gamma_1 x_1 & \\ \frac{dx_i}{dt} = \kappa_i^-+(\kappa_i^+-\kappa_i^-) {\mathsf s}^+(x_{i-1},0) - \gamma_i x_i, & \quad i=2 \ldots n. \end{array} \right. $$
(16)

A first, coarse-grained description can be achieved in terms of discrete transitions between qualitative states in \({\mathcal A}.\) This can be conveniently explained with the aid of the state transition graph, or simply transition graph, whose vertices are the elements of \({\mathcal A},\) and where an edge (a, b) exists if and only if there exists some continuous trajectory crossing successively the corresponding regular domains \({\mathcal D}_a\) and \({\mathcal D}_b\) in state space. This graph will be denoted TG.

It is easily deduced from the properties given in Sect. 2.2 that TG is entirely determined by the position of the focal points. Namely, (a, b) is an edge in TG if and only if b = a and \(\phi(a) \in {\mathcal D}_a\), or

$$ b \in \{a+{\bf e}_i \,|\, i\hbox{\,such that\,} a_i=0\hbox { and }\phi_i(a) > 0\} \cup \{a- {\bf e}_i \,|\, i\hbox { such that } a_i=1\hbox {\,and\,} \phi_i(a) < 0\}, $$

where \( {\bf e}_i\) denotes the ith vector of the standard basis of \({\bf R}^n. \)

Observe that a and b may differ at one coordinate at most, since TG represents the regular dynamics only. From (16) and (14), we deduce the presence of the following cycle of length 2n in TG :

$$ {\mathcal C}: \quad 11\cdots1 \;\rightarrow\; 01\cdots1 \;\rightarrow\; 001\cdots1 \;\rightarrow\;\cdots\;\rightarrow\; 00\cdots0 \;\rightarrow\; 10\cdots0 \;\rightarrow\; 110\cdots0 \;\rightarrow\;\cdots\;\rightarrow\; 1\cdots10 \;\rightarrow\; 11\cdots1 $$

This cycle will always be denoted \({\mathcal C}\) hereafter.
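The cycle above can also be recovered computationally: the sketch below (for n = 3, assuming (14) so that \(\phi_i^+ > 0 > \phi_i^-\), with 0-based indices) enumerates the successors of each state of TG from the signs of the focal point coordinates and follows \({\mathcal C}\).

```python
n = 3   # e.g. a repressilator-like loop (13)/(16)

def focal_signs(a):
    """Signs (+1/-1) of the focal point coordinates phi_i(a) for system (16)."""
    signs = []
    for i in range(n):
        pred = a[i - 1] if i > 0 else a[n - 1]             # x_{i-1} acts on x_i, x_n on x_1
        positive = (pred == 1) if i > 0 else (pred == 0)   # activation, except x_n -| x_1
        signs.append(+1 if positive else -1)
    return signs

def successors(a):
    """States b such that (a, b) is an edge of TG (regular dynamics only)."""
    succ = []
    for i, sgn in enumerate(focal_signs(a)):
        if (a[i] == 0 and sgn > 0) or (a[i] == 1 and sgn < 0):   # escaping direction
            b = list(a); b[i] = 1 - a[i]
            succ.append(tuple(b))
    return succ

# Follow C starting from (1,...,1): each state on C has a single successor,
# and the walk returns to its starting point after 2n steps.
a, cycle = (1,) * n, []
for _ in range(2 * n):
    cycle.append(a)
    (a,) = successors(a)     # tuple unpacking asserts that the successor is unique
print(cycle + [a])           # ... -> (1,1,1) again: a cycle of length 2n = 6
```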

It can be shown that there is no fixed state in TG, and that \({\mathcal C}\) is the only cycle without escaping edges. Also, a Morse decomposition and a discrete Lyapunov function, very similar to those in (Gedeon and Mischaikow 1995; Mallet-Paret and Smith 1990), can be constructed, and \({\mathcal C}\) corresponds to the lowest level of this decomposition. For dimensions \(n \geq 4,\) there are cycles other than \({\mathcal C}\) in TG, but if they contain periodic orbits, the latter are very likely to be unstable. A more global study will be the topic of future work.

Hereafter the analysis will concern only the region of state space defined by \({\mathcal C},\) and we denote

$$ {\mathcal C} = \bigcup_{i=1}^{2n} c\ell\left({\mathcal D}_{a^i}\right). $$
(17)

Furthermore, let us write \(\phi^i = \phi(a^i)\) for the focal point of domain \(a^i\), and \(W^i\) for the wall between boxes \(a^i\) and \(a^{i+1}\), i.e. it verifies \(c\ell({W^i})= c\ell({\mathcal D}_{a^{i}})\cap c\ell({\mathcal D}_{a^{i+1}}).\)

In the following, the superscript in \(a^i\) must be understood periodically, so that for instance \(a^{2n+1}\) will mean \(a^1\). It appears that each pair of successive states \(a^i, a^{i+1}\) in this cycle differ at only one coordinate, which we denote \(s_i\). If, moreover, we write \(s_i^\pm\), where ± is the sign of \(a^{i+1}_{s_i}-a^i_{s_i}\), which is also the sign of \(\phi^i_{s_i}\), we obtain

$$ s_i = \left\{ \begin{array}{ll} i^- & \hbox{for } i \in \{1 \ldots n\},\\ (i-n)^+ & \hbox{for } i \in \{n+1 \ldots 2n\}. \end{array}\right. $$
(18)

Hence \(s_i\) is the only escaping direction of box \(a^i\), a fact that can also be written as \(I_{\rm out}(a^i) = \{s_i\}\). As seen in Sect. 2.2, this implies that all trajectories in a domain \(a^i\) leave it in finite time, via the wall \( W^{i}\subset\{x | x_{s_i}=0\},\) and enter domain \(a^{i+1}\). It follows that no trajectory can escape the cycle, i.e. \({\mathcal C}\) is an invariant region. This cycle is represented in Fig. 1 for n = 3.

Fig. 1 The transition graph of a feedback loop system of the form (16) with three variables. The cycle \({\mathcal C}\) is represented with plain lines. The walls \(W^i\) are schematically depicted as well

A rapid inspection shows that the walls can be described in a quite explicit manner:

$$ W^i= \left\{ \begin{array}{ll} \prod\limits_{j< s_i}[\theta_j^-, 0)\times\{0\}\times\prod\limits_{j> s_i}(0, \theta_j^+] & \hbox {for}\;i \in \{1 \ldots n\},\\ \prod\limits_{j < s_i}(0, \theta_j^+]\times\{0\}\times\prod\limits_{j > s_i}[\theta_j^-, 0) & \hbox {for}\;i \in \{n+1 \ldots 2n\}. \end{array} \right. $$
(19)

From the remark following Eq. 15, it is not difficult to deduce that each pair of focal points \(\phi^i, \phi^{i+1}\) differ only at coordinate \(s_{i+1}\). Actually, they verify \(\phi^i \in {\mathcal D}_{a^{i+1}},\) and we have seen that \(a^{i+1}\) and \(a^{i+2}\) only differ at coordinate \(s_{i+1}\). Hence, the cycle \({\mathcal C}\) satisfies the alignment condition (12) seen in the previous section. We can state it with our current notation:

$$ \forall i \in \{1 \ldots 2n\}, \forall j \in \{1\ldots n\} \setminus\{s_{i+1}\},\quad \phi^{i}_j = \phi^{i+1}_j. $$
(20)

Remark also that by construction, for the remaining coordinate:

$$ \forall i \in \{1\ldots 2n\},\quad \hbox{sign}(\phi^{i}_{s_{i+1}}) = -\hbox{sign}(\phi^{i+1}_{s_{i+1}}). $$
(21)

Since our aim is to study limit cycles arising from (13), with the help of Theorem 1, it seems natural to consider the following map:

$$ {\mathcal T}^{a^1}{\mathcal T}^{a^{2n}}{\mathcal{T}}^{a^{2n-1}} \ldots {\mathcal T}^{a^2}: W^1 \longrightarrow W^1 $$

Although each \({\mathcal T}^a\) is defined on the whole boundary of \({\mathcal D}_a,\) see (7), we consider here the restrictions to the walls \(W^{i-1}\) only, which we write explicitly:

$$ \begin{aligned}{\mathcal T}^{a^i}: W^{i-1} &\longrightarrow W^{i} \\ x &\longmapsto \left( \phi^i_j + (x_j - \phi^i_j)\alpha_j(x) \right)_{j=1 \ldots n}, \end{aligned} $$
(22)

where, as in (9),

$$ \alpha_j(x)=\alpha_j(x_{s_i})=\left(\frac {\phi^i_{s_i}} {\phi^i_{s_i} - x_{s_i}} \right)^\frac{\gamma_j} {\gamma_{s_i}}. $$

Remark here that the expression of \(\alpha_j\) depends on the number i of the region under consideration. However, this region will always be clear from the context, and we thus leave this dependence implicit, avoiding notation such as \(\alpha_j^i(x)\).

4.2 Restriction of Domains

4.2.1 Projection and Extension to the Closure of Walls

To satisfy the hypotheses of Theorem 1, we must consider a closed domain of the form \([0,\,p],\, p \in {\bf R}_+^n,\) with nonempty interior, and since the walls are (n − 1)-dimensional, their interior is empty. Therefore, we may first project \(W^i\) to \({\bf R}^{n-1},\) suppressing the \(s_i\)th coordinate of points in \(W^i\), since it equals zero by construction; see (19).

Then, since each wall is a product of the singleton {0} and segments of the form \((0, \theta_j^+]\) or \([\theta_j^-, 0)\), to satisfy the hypotheses of Theorem 1 it suffices to check that the map \({\mathcal T}^{a^i}\) can be properly extended to the points of \(c\ell({W^{i-1}})\) having at least one coordinate equal to 0. Moreover, we also need to verify that this extension maps \(c\ell({W^{i-1}})\) to \(c\ell({W^{i}}),\) in order to compose such local mappings.

The extension of each \({\mathcal T}^{a^i}\) to the closure \(c\ell({W^{i-1}})\) is simply defined by continuity of the expression (22). The fact that this extended map sends \(c\ell({W^{i-1}})\) to \(c\ell({W^{i}})\) can be seen in Fig. 2, and is proved in the Appendix. Hence, we consider from now on that each map \({\mathcal T}^{a^i}\) is defined on the closure of walls: \(c\ell({W^{i-1}})\rightarrow c\ell({W^{i}}). \)

Fig. 2 The same walls as in Fig. 1, this time in state space. Dotted lines represent thresholds. The dashed lines on a wall \(W^i\) represent the lines where \(x_j = \phi^i_j\). The alignment condition (20) is fulfilled on the left, and not on the right-hand side, shown as a counterexample to (20). The blue region on each such wall represents the image \({\mathcal T}^{a^i}\left(W^{i-1}\right).\) Observe that the closed blue regions actually lie in the closure of the walls, as claimed in Sect. 4.2.1. They are shown as polyhedra for ease of observation, but in general they have a piecewise smooth boundary, not a piecewise linear one (unless all decay rates are equal). The grey shaded regions are not reachable, as an illustration of Eq. 23

Finally, Proposition 2 implies that the image \({\mathcal T}^{a^i}\left(c\ell({W^{i-1}})\right)\) is always contained in the subset of \(c\ell({W^{i}})\) whose points satisfy

$$ \hbox{sign}(\phi_{s_{i-1}}^i-x_{s_{i-1}}) =\hbox{sign}(\phi_{s_{i-1}}^i). $$
(23)

This restriction is illustrated on Fig. 2. Since our interest is in the iterates of those maps, we should restrict them to such subsets.

In summary, three simplifications can be done:

  • The \(s_i\)th coordinate of \(x \in W^i\) can be suppressed without loss of information.

  • The transition maps can be defined on the closure of walls.

  • Only the points verifying (23) can be reached by \({\mathcal T}^{a^i}.\)

The notation \(\widetilde W^{i}\) will be used to denote the domains obtained after these three simplifications. More explicitly, \(\widetilde W^i\) is obtained from (19) by suppressing the {0} term, using closed intervals, and replacing \([\theta_{s_{i-1}}^{-},0]\) by \([\phi_{s_{i-1}}^-,0]\), or \([0,\theta_{s_{i-1}}^+]\;{\text{by}}\; [0,\phi_{s_{i-1}}^+].\) For example:

$$ W^1=\{0\}\times \prod_{i=2}^n(0, \theta_i^+],\quad\hbox {so that } \widetilde W^1 = \prod_{i=2}^{n-1}[0, \theta_i^+]\times[0,\phi_{n}^+]. $$

Each \(\widetilde W^i\) is a rectangular region in \({\bf R}^{n-1}\) with nonempty interior, still called wall in the sequel, and the preceding discussion shows that each map \({\mathcal T}^{a^i}\) induces a well defined map

$$ \widetilde {\mathcal T}^{a^{i}}:\widetilde W^{i-1} \rightarrow \widetilde W^{i}. $$

Remark that although points in \(\widetilde W^{i-1}\) do not have an \(s_{i-1}\)th coordinate, \(\widetilde{\mathcal T}^{a^i}_{s_{i-1}}\) is defined using expression (22), with \(x_{s_{i-1}}=0.\) Let us write it explicitly for later use:

$$ \widetilde{\mathcal T}^{a^i}_{s_{i-1}}x = \left(1- \alpha_{s_{i-1}}(x)\right)\phi^i_{s_{i-1}} $$
(24)

Now, we define the map

$$ {\bf T} = \widetilde{\mathcal T}^{a^1}\widetilde{\mathcal T}^{a^{2n}}\widetilde{\mathcal T}^{a^{2n-1}} \ldots \widetilde{\mathcal T}^{a^2}: \widetilde W^1 \longrightarrow \widetilde W^1 $$
(25)

Our aim will be to check the hypotheses of Theorem 1 for this map T.
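Before doing so, it may help intuition to iterate T numerically. The sketch below uses assumed parameter values for a three-variable loop with non-uniform decay rates (0-based indices); the iterates stabilize on a point of \(W^1\), in accordance with the result proved below (Theorem 2), although this computation is of course not part of the proof.

```python
import numpy as np

# Illustrative 3-gene loop (16) with non-uniform decay rates; the values are
# assumptions chosen so that (14) holds, i.e. phi_i^+ > 0 > phi_i^-.
gamma     = np.array([1.0, 2.0, 0.7])
phi_plus  = np.array([1.0, 0.8, 1.2])
phi_minus = np.array([-0.9, -0.6, -1.1])
n = 3

def focal(a):
    """Focal point of the regular domain a in {0,1}^n for system (16)."""
    pred = np.roll(a, 1)                                       # x_{i-1} acts on x_i, x_n on x_1
    positive = np.where(np.arange(n) == 0, pred == 0, pred == 1)
    return np.where(positive, phi_plus, phi_minus)

def step(a, x):
    """One wall-to-wall transition (Eq. 22): from the entering wall of box a
    to its exit wall, crossing the threshold 0 in the escaping direction."""
    phi = focal(a)
    s = next(i for i in range(n) if (a[i] == 0) == (phi[i] > 0))   # unique on C
    tau = -np.log(phi[s] / (phi[s] - x[s])) / gamma[s]
    x_new = phi + np.exp(-tau * gamma) * (x - phi)                 # x_new[s] = 0
    a_new = a.copy(); a_new[s] = 1 - a_new[s]
    return a_new, x_new

# Iterate the return map T on W^1 = {x_1 = 0, x_2, x_3 > 0}, entering box (0,1,1).
x = np.array([0.0, 0.3, 0.2])
for _ in range(60):
    a = np.array([0, 1, 1])
    for _ in range(2 * n):            # one full turn around the cycle C
        a, x = step(a, x)
print(x)                              # approaches the fixed point of T, i.e. the limit cycle
```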

4.2.2 Partition of Walls

Let us now consider how \(\widetilde W^{i}\) may be partitioned according to the sign pattern of \(\phi^i - x\). Since coordinate \(s_i\) has been suppressed from \(\widetilde W^{i}\) by construction, we restrict our attention to sign patterns in \(\Sigma_i=\{\pm 1\}^{\{1\ldots n\} \setminus\{s_i\}}.\) Vectors in \(\Sigma_i\) will then have their coordinates indexed by \(\{1 \ldots n\} \setminus \{s_i\},\) which will prove more convenient than indexing them by {1…n − 1}. For any sign pattern \(\sigma \in \Sigma_i\), let us define a corresponding ‘zone’

$$ {\mathcal Z}^i(\sigma)=\{x \in \widetilde W^i \,|\, \forall j\neq s_i ,\; \hbox{sign}(\phi^i_j-x_j) = \sigma_j\}. $$

By definition, \(\widetilde W^i\) is the half of \(W^i\) on which \(\hbox{sign}(\phi^i_{s_{i-1}}-x_{s_{i-1}})\) is fixed, see (23). Hence \(\widetilde W^i\) is the union of all the zones with \(\sigma_{s_{i-1}}\) equal to this fixed sign. On the other hand, \({\mathcal Z}^i(\sigma)\) is empty if \(\sigma_{s_{i-1}}\) is the opposite of this sign. Now, since all thresholds have been translated to zero, the origin of \({\bf R}^{n-1}\) belongs to every wall \(\widetilde W^i.\) Moreover, no focal point has a coordinate equal to zero. Hence, on each \(\widetilde W^i,\) there is a unique zone containing the origin on its boundary, and the corresponding sign vector in \(\Sigma_i\) is

$$ \sigma^i = \left(\sigma^i_j\right)_{j\neq s_i} = \left(\hbox{sign}(\phi^i_j)\right)_{j\neq s_i} $$
(26)

We will see now that these particular zones, having the origin on their boundary, in fact attract all trajectories of the system, and then remain invariant. As they will be the only zones we consider eventually, we simply denote:

$$ {\mathcal Z}^i={\mathcal Z}^i(\sigma^i). $$

Remark that the definition of these zones and Eq. 19 allow for a more explicit formulation:

$$ {\mathcal Z}^i= \left\{ \begin{array}{ll} \prod\limits_{j < s_i}[\phi_j^-, 0]\times\prod\limits_{j > s_i}[0, \phi_j^+] & \hbox {for}\;i \in \{1\ldots n\},\\ \prod\limits_{j < s_i}[0, \phi_j^+]\times\prod\limits_{j > s_i}[\phi_j^-, 0] & \hbox {for}\;i \in \{n+1\ldots 2n\}. \end{array} \right. $$
(27)

4.2.3 Attractive and Invariant Regions

Proposition 4

The zone \({\mathcal Z}^1\) is attractive for the dynamics induced by T:

$$ {\bf T} \left( \widetilde W^1 \right) \subset{\mathcal Z}^1. $$

The proof of this proposition is given in the Appendix.

Let us now consider the restriction of T to \({\mathcal Z}^1.\) Actually, the previous proposition implies that, for any initial condition \(x^1 \in \widetilde W^1,\) the trajectory \(\{{\bf T}^m x^1 | m \in {\bf N}\}\) is contained in \({\mathcal Z}^1,\) except maybe for \(x^1\) itself. So there is no loss of generality in choosing initial conditions only in this invariant subset.

We can also verify that, on each wall, all trajectories of system (16) are eventually contained in the region \({\mathcal Z}^i.\) This fact is a consequence of Proposition 4 and of the following proposition, which is also proved in the Appendix:

Proposition 5

For all \(i \in \{1 \ldots 2n\},\quad\widetilde{\mathcal T}^{a^{i}}\left({\mathcal Z}^{i-1} \right) \subset{\mathcal Z}^{i}. \)

This proposition allows us to consider only the restrictions:

$$ \widetilde{\mathcal T}^{a^i}:{\mathcal Z}^{i-1}\to {\mathcal Z}^{i} $$

in the following. Figure 3 shows the domains \({\mathcal Z}^i\) on a 3-dimensional example.

Fig. 3 a The same walls as in Fig. 2. The invariant region \(\bigcup_i{\mathcal Z}^i\) has now been shaded in dark green. The reader may check on the figure that trajectories originating on any wall eventually fall into this region. b A view of the wall \(W^5\) with the different zones made explicit, with the 1’s omitted in the sign vectors. Since \(s_5 = 2\), \(\phi^5_1 > 0\) and \(\phi^5_3 < 0\), it follows that \((\sigma^5_1, \sigma^5_3) = (+1, -1)\), i.e. \({\mathcal Z}^5={\mathcal Z}^5(+,-)\) with the abbreviation above

4.3 Main Result

The main result of this paper will essentially result from applying Theorem 1 to feedback loop systems of the form (13), or equivalently (16). Among the hypotheses to be checked, the concavity condition (C) will be verified with the aid of the following lemma.

Lemma 1

Let \(p,q,r \in {\mathring R}_+^n.\) Let also T : [0, p] → [0, q] and S : [0, q] → [0, r] be twice differentiable mappings satisfying condition (M) of Theorem 1. Suppose that all the second order derivatives of T and S are nonpositive, and that at least one of them is negative, for both mappings.

Then, their composite ST : [0, p] → [0, r] satisfies (M) and (C).

Proof

The result is obtained from the Chain Rule. Let x ∈ [0, p]. Then Tx ∈ [0, q] and

$$ D\left(ST\right)(x)= DS\left(Tx\right)DT(x). $$

From this expression it is clear that the composite ST satisfies (M) if both T and S do. Now, given any x < y in [0, p], condition (M) on T implies Tx < Ty, by monotonicity of each coordinate function \(T_i\).

Then the nonpositivity of the second order derivatives of S gives \(DS(Ty) \,{\lneq}\,DS(Tx).\) Then, using both conditions (M) and (C) on T, and condition (M) on S, the Chain Rule leads to

$$ D\left(ST\right)(y)= DS\left(Ty\right)DT(y)\,\lneq\, DS\left(Ty\right)DT(x)\,\lneq\, DS\left(Tx\right)DT(x) = D\left(ST\right)(x). $$

\(\square\)
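In the scalar case the lemma is easily illustrated (our example, continuing the one given after Theorem 1): for \(T(x)=\lambda x/(1+x)\) and \(S(y)=\mu y/(1+y)\), both with positive first and negative second derivatives, the composite is

$$ (ST)(x)=\frac{\mu\lambda x}{1+(1+\lambda)x}, \qquad D(ST)(x)=\frac{\mu\lambda}{\left(1+(1+\lambda)x\right)^2}>0, $$

which is again increasing with a strictly decreasing derivative, i.e. it satisfies (M) and (C).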

We are now in a position to state our main result.

Theorem 2

Consider the map \({\bf T} : {\mathcal Z}^1 \to {\mathcal Z}^1\) defined in the previous section.

  • If n = 2, then \( \forall x \in {\mathcal Z}^1,\; {\bf T}^m x\to 0\) when m → ∞.

  • If n > 2, then there exists a unique nonzero fixed point \(q={\bf T} q.\) Moreover, \(q\in {\mathring {\mathcal Z}}^1\) and for every \(x \in {\mathcal Z}^1 \setminus\{0\},\) \({\bf T}^m x\to q\) as m → ∞.

Remark 1

From Proposition 4, this result holds for all orbits that reach the union of boxes composing \({\mathcal C}.\) A more global analysis should be the topic of some future work. As a first step towards this, let us mention that a discrete Lyapunov function can be constructed; with the notation of this paper, it is defined almost everywhere by \(x\,\mapsto\,\hbox{card}(I_{\rm out}({\mathsf d}(x))),\) and \({\mathcal C}\) is the lowest level set of this function.

Proof

The proof consists in checking the hypotheses of Theorem 1, which then yields precisely the above proposition.

First, the upper corner of \({\mathcal Z}^1\) is the point \(({\phi_2^+} \ldots \phi_{n}^+),\) which belongs to \({\mathring R}^{n-1}_+.\) Second, it is clear from expression (22) that each \(\widetilde{\mathcal T}^{a^i}\) is \(C^1\). Hence, T is \(C^1\) on the whole rectangle \({\mathcal Z}^1.\) It follows in particular that \(D{\bf T}(0)\) is well defined.

Since the proof is a little lengthy, it will be structured in three parts.

4.3.1 Preliminary Change of Variables

Before checking the hypotheses of Theorem 1, it will prove convenient to apply a local transformation \(\rho^i:{\mathcal Z}^i\to{\mathcal P}^i\) to each wall, such that \({\mathcal P}^i\subset{\bf R}_+^{n-1}.\) This will allow us to deal with nonnegative variables only.

Now we claim that \(\phi^i_j\) and \(x_j\) have the same sign, for any \(x \in {\mathcal Z}^i.\) Actually, on this zone we have, by definition,

$$ x_{j}\in \left\{ \begin{array}{ll} \left[0,\phi_j^i\right)\cup\left(\phi_j^i,\theta_j^+\right] & \hbox{if } \phi_{j}^{i} > 0, \\ \left[\theta_j^-,\phi_j^i\right)\cup\left(\phi_j^i,0\right] &\hbox{if }{\phi_{j}^{i}} < 0, \\ \end{array}\right. $$

and the condition \(\hbox{sign}(\phi^i_j - x_j) = \hbox{sign}(\phi^i_j)\) is easily seen to imply our claim.

Let us now simply define:

$$ \rho_j^{i}(x)= \sigma^i_jx_j, \quad \quad j \in \{1 \ldots n\} \setminus\{s_i\}. $$
(28)

In other words, \(\rho^i\) is just multiplication by a diagonal matrix, whose jth entry is \(\sigma^i_j \in \{\pm 1\}\). This implies that \(\rho^i\) is invertible, and equals its own inverse. Hence we also denote by \(\rho^i\) this inverse: \(\rho^i:{\mathcal P}^i\to{\mathcal Z}^i.\) We may also formulate \({\mathcal P}^i\) explicitly:

$$ {\mathcal P}^i=\prod\limits_{j\neq s_i} \left[0,|\phi_j^i| \right]. $$
(29)

It is tempting now to define maps \({\mathcal M}^{(i)},\) by the condition that the following diagram commutes, for any i ∈ {1…2n}:

$$ \begin{array}{ccc} {\mathcal Z}^{i-1} & \mathop{\longrightarrow}\limits^{{\mathcal T}^{a^i}} & {\mathcal Z}^{i}\\ \rho^{i-1}\Big\downarrow & & \Big\downarrow\rho^i\\ {\mathcal P}^{i-1} & \mathop{\longrightarrow}\limits_{{\mathcal M}^{(i)}} & {\mathcal P}^{i} \end{array} $$
(30)

With the previous remark on \(\rho^i\), this may also be written as \({\mathcal M}^{(i)}=\rho^i {\mathcal T}^{a^i}\rho^{i-1}.\) Composing the diagrams above for successive i, and since \(\rho^1 = \rho^{2n} = id,\) we obtain \({\mathcal P}^1={\mathcal Z}^1,\) and

$$ {\bf T}= {\mathcal M}^{(1)}{\mathcal M}^{(2n)}{\mathcal M}^{(2n-1)}\,\cdots\,{\mathcal M}^{(2)} $$
(31)

For any \(x^1 \in {\mathcal P}^1=[0,p],\) we write \(x^i = {\mathcal M}^{(i)}x^{i-1}, i\,\geq\,2.\) From Proposition 5 and the diagram (30), we obtain \(x^i \in {\mathcal P}^i.\) Each \({\mathcal M}^{(i)}\) can be expressed coordinate-wise:

$$ {\mathcal M}^{(i)}_j x = \sigma^i_j\phi^i_j + \sigma^i_j\left( \sigma^{i-1}_jx_j - \phi^i_j\right) \left(\frac{\phi^i_{s_i}} {\phi^i_{s_i}-\sigma^{i-1}_{s_i}x_{s_i}}\right)^ \frac{\gamma_j} {\gamma_{s_i}} \quad j\neq s_i. $$

From their definition, the \(\sigma^i_j\) clearly satisfy the alignment condition (20): \(\sigma^i_j = \sigma^{i-1}_j\) for \(j \neq s_i\). It also follows from (21) that \( \sigma^i_{s_i}=-\sigma^{i-1}_{s_i}.\) Hence, we also have:

$$ {\mathcal M}^{(i)}_j x = |\phi^i_j|+\left( x_j - |\phi^i_j|\right)\alpha_j\left(\sigma^{i-1}_{s_i}x_{s_i}\right) \quad j\neq s_i, $$
(32)

where

$$ \alpha_j\left(\sigma^{i-1}_{s_i}x_{s_i}\right) = \left(\frac{\phi^i_{s_i}} {\phi^i_{s_i}-\sigma^{i-1}_{s_i}x_{s_i}}\right)^ \frac{\gamma_j} {\gamma_{s_i}} = \left(\frac{|\phi^i_{s_i}|} {|\phi^i_{s_i}|+ x_{s_i}}\right)^ \frac{\gamma_j} {\gamma_{s_i}} $$

4.3.2 Monotonicity and Concavity

We compute now:

$$ \frac{\partial\mathcal M^{(i)}_j} {\partial x_k} (x) =\left\{ \begin{array}{lll} \alpha_j\left(\sigma^{i-1}_{s_i}x_{s_i}\right) & \hbox {if }\; j=k\\ \frac{\gamma_j} {\gamma_{s_i}} \frac{|\phi^i_j| - x_j} {|\phi^i_{s_i}|+ x_{s_i}} \alpha_j\left(\sigma^{i-1}_{s_i}x_{s_i}\right) & \hbox { if }\; k=s_i\cr 0 & \hbox {otherwise}. \end{array}\right. $$
(33)

and

$$ \frac{\partial^2{\mathcal M}^{(i)}_j} {\partial x_m\partial x_k} (x) =\left\{ \begin{array}{lll} -\frac{\gamma_j} {\gamma_{s_i}} \frac{\alpha_j\left(\sigma^{i-1}_{s_i}x_{s_i}\right)} {|\phi^i_{s_i}|\,+\, x_{s_i}} & \hbox {if }\; j=k, m=s_i\\ & \hbox {or } k=s_i, m=j\\ - \frac{\gamma_j} {\gamma_{s_i}} \left(1+\frac{\gamma_j} {\gamma_{s_i}}\right) \frac{|\phi^i_j|- x_j} {(|\phi^i_{s_i}|\,+\, x_{s_i})^2} \alpha_j\left(\sigma^{i-1}_{s_i}x_{s_i}\right) & \hbox {if }\; m=k=s_i\\ 0 & \hbox { otherwise}. \end{array}\right. $$
(34)

We see that \({\mathcal M}^{(i)}\) is very similar to \({\mathcal T}^{a^i},\) so that the equations above are strongly reminiscent of (10) and (11). Let us evaluate the sign of these quantities. From \(x \in {\mathcal P}^{i-1}\) and (29), it follows that \(|\phi^i_j|-x_j\geq0.\) It is clear also that \(|\phi^i_{s_i}|+x_{s_i}>0\). It follows that, for all maps \({\mathcal M}^{(i)},\) the nonzero terms of the Jacobian are positive, while the nonzero derivatives of order 2 are negative, at any point \(x \in {\mathcal P}^{i-1}.\)

Applied to (31), the Chain Rule gives

$$ D{\bf T}(x^1) =D{\mathcal M}^{(1)}(x^{2n}) D{\mathcal M}^{(2n)}\left(x^{2n-1}\right)\,\cdots\,D{\mathcal M}^{(2)}(x^1). $$
(35)

Moreover, it appears from (33) that the Jacobians in the product above are the sum of a diagonal matrix and a matrix whose only nonzero elements are in column number \(s_i\).

Some care must be taken, though, due to the particular indexing of the matrices. Actually, from (33) the diagonal entries of \(D{\mathcal M}^{(i)}\) are at first sight \(D{\mathcal M}^{(i)}_{jj} = \alpha_j \left(\sigma^{i-1}_{s_i}x^{i-1}_{s_i}\right).\) However, the row subscripts of \(D{\mathcal M}^{(i)}\) belong to \(\{1 \ldots n\} \setminus\{s_i\},\) while the columns are indexed by \(\{1 \ldots n\} \setminus\{s_{i-1}\}.\) This implies that neither \(D{\mathcal M}^{(i)}_{s_i,s_i}\) nor \(D{\mathcal M}^{(i)}_{s_{i-1},s_{i-1}}\) is defined. The exact shape of the nonzero entries in \(D{\mathcal M}^{(i)}\) depends on the order in which \(s_{i-1}\) and \(s_i\) are sorted. In our case, Eq. 18 shows that either \(s_i = s_{i-1} + 1\), or \(s_i = 1\) and \(s_{i-1} = n\), the latter occurring for i ∈ {1, n + 1}.

Let us represent the subscripts of nonzero entries in both cases. First, if \(i\not \in \{1,n+1\},\) then both \(D{\mathcal M}^{(i)}\) and \(D{\mathcal M}^{(n+i)}\) take the following form:

$$ \begin{aligned} \begin{array}{l} \mathop{s_{i-1}}\limits_{\downarrow}\quad\quad\quad\quad\quad\quad\quad\,\,\;\;\quad \\ \end{array} \\ \left[\begin{array}{cccccccc}(1,1) && &(1,s_i)&&&\\ & \ddots &&\vdots&&&\\ &&(s_{i-1}-1,s_{i-1}-1) & (s_{i-1}-1,s_i)&&&\\ &&&(s_{i-1},s_i)&&&\\ &&&(s_i+1,s_i) & (s_i+1,s_i+1)&&\\ &&&\vdots&& \ddots\\ &&&(n,s_i)&&&(n,n)\end{array}\right] \end{aligned} $$
(36)

Now, if i ∈ {1, n + 1}, which implies that row 1 and column n are removed, then:

$$ D{\mathcal M}^{(i)}\equiv \left[ \begin{array}{llll}(2,1) & (2,2) & &\\ \vdots & & \ddots & \\ \vdots & & & (n-1,n-1) \\ (n,1) & 0 &\,\cdots\,& 0 \end{array}\right] $$
(37)

Since \(s_i - 1\) ranges over {1…n − 1} when i or i − n varies in {2…n}, it is not hard to deduce from (36) that both products:

$$ \Pi^2=D{\mathcal M}^{(2n)}\left(x^{2n-1}\right)\,\cdots\,D{\mathcal M}^{(n+2)}\left(x^{n+1}\right)\quad\hbox {and}\quad \Pi^1=D{\mathcal M}^{(n)}\left(x^{n-1}\right)\,\cdots\,D{\mathcal M}^{(2)}(x^1) $$

have no zero entries, and are thus positive. Then, from (37), it follows that

$$ D{\bf T}(x^1) = D{\mathcal M}^{(1)}(x^{2n}) \Pi^2 D{\mathcal M}^{(n+1)}(x^n) \Pi^1 $$

is positive as well. Hence we have just shown (M):

$$ \forall x^1 \in {\mathcal P}^1,\quad D{\bf T}(x^1) > 0. $$

As for concavity, the negative terms in (34) and Lemma 1 imply that for \(x^{i-1} < y^{i-1}\) in \({\mathcal P}^{i-1}\)

$$ D\left({\mathcal M}^{(i+1)}{\mathcal M}^{(i)}\right)(y^{i-1})\,\lneq\, D\left({\mathcal M}^{(i+1)}{\mathcal M}^{(i)}\right)(x^{i-1}). $$

Then a simple induction shows that \(D{\bf T},\) as expressed in (35), satisfies condition (C):

$$ \forall x^1,y^1 \in {\mathcal P}^1,\;0 < x^1 < y^1,\quad D{\bf T}(y^1) \,\lneq\, D{\bf T}(x^1). $$

4.3.3 Behaviour at the Upper Corner and the Origin

Before studying the origin, let us show that \({\bf T} p < p,\) as required by Theorem 1. Here, \(p=\left(\phi_2^+\ldots \phi_{n}^+\right)\) is the upper corner of \({\mathcal Z}^1,\) i.e. \({\mathcal Z}^1=[0,\,p].\) Now, let us consider the last maps composing T, as in the proof of Proposition 4. Namely, from (24), we get:

$$ \forall x \in \widetilde W^{n+i-1}, \quad \widetilde{\mathcal T}^{a^{n+i}}_{i-1}x = \left(1- \alpha_{i-1}(x)\right)\phi^{n+i}_{i-1}, $$

As usual, we identify \(a^{2n+1}\) and \(a^1\). Then for i ∈ {2…n + 1}, \(\alpha_{i-1}(x) \in (0, 1]\) gives:

$$ \widetilde{\mathcal T}^{a^{n+i}}_{i-1}x = \left(1- \alpha_{i-1}(x)\right)\phi^{+}_{i-1} < \phi^{+}_{i-1}. $$

Now, from Eq. 38 in the proof of Proposition 4, we know that sign \(({\phi}^{n+1}_{s_{i-1}}-x_{s_{i-1}})\) is conserved through the last steps of the cycle, so that

$$ \forall i \in \{2 \ldots n\}, \forall x \in \widetilde W^{n+i-1}, \quad \left(\widetilde{\mathcal T}^{a^{1}} \widetilde {\mathcal T}^{a^{2n}}\,\cdots\,\widetilde{\mathcal T}^{a^{n+2}}\right)_i x< \phi_i^+ $$

Since in particular

$$ \widetilde{\mathcal T}^{a^{n+1}} \widetilde {\mathcal T}^{a^{n}}\,\cdots\,\widetilde{\mathcal T}^{a^{2}}p \in \widetilde W^{n+1}, $$

it follows that \({\bf T} p < p.\)

To complete the proof, it remains to check whether the spectral radius of \(D{\bf T}(0)\) is greater than 1. First, remark that \({\bf T}(0)=0\) implies that each \(D{\mathcal M}^{(i)}\) has to be evaluated at 0 in expression (35). Then, from the expression of \(\alpha_j\) in (33), for i ∈ {2…n} the diagonal terms of each \(D{\mathcal M}^{(i)}(0)\) equal 1, except at (i − 1, i − 1). There, it is given by the second line of (33), just as the rest of column i − 1. The latter is given by:

$$ \forall i \in \{2 \ldots n\} \quad D{\mathcal M}^{(i)}_{j, i-1}(0) = \frac{\gamma_j |\phi^i_j|} {\gamma_{i} |\phi^i_{i}|} = \left| \frac{\kappa^i_j} {\kappa^i_i}\right|, $$

with the notation \(\kappa^i_j = \kappa_j(a^i).\)

In the case n = 2, the matrices \(D{\mathcal M}^{(i)}(0)\) are in fact scalars of the form above. From Eqs. 13 and 16 we obtain

$$ \kappa^1_2=\kappa^4_2=\kappa^+_2,\quad \kappa^4_1=\kappa^3_1=\kappa^+_1,\quad \kappa^3_2=\kappa^2_2=\kappa^-_2,\quad\hbox { and }\quad\kappa^1_1=\kappa^2_1=\kappa^-_1. $$

See Fig. 4 for the state space of 2-dimensional feedback loops. Then, (35) gives

$$ D{\bf T}(0)=\left|\frac{\kappa^1_2} {\kappa^1_1} \,\frac{\kappa^4_1} {\kappa^4_2}\, \frac{\kappa^3_2} {\kappa^3_1} \,\frac{\kappa^2_1} {\kappa^2_2}\right| = 1. $$

This is essentially the expression given in (Mestl et al. 1995), though that was in the uniform decay case.

Fig. 4 The state space of a feedback loop with only two variables, schematically. The notation is explained in the text

Suppose now that \(n\geq3.\) Before going further, let us remark the following fact: if a matrix A is such that A − Id > 0, then the spectral radius satisfies ρ(A) > 1. This is a direct consequence of the Perron–Frobenius theorem. Actually, ρ(A) is the maximal eigenvalue of A, or equivalently the maximal root of its characteristic polynomial, which we denote \(\pi_A(X)\). We clearly have \(\pi_{A-Id}(X) = \pi_A(X + 1)\), so the roots of \(\pi_{A-Id}\) are exactly those of \(\pi_A\) shifted down by one. Applying the Perron–Frobenius theorem to the positive matrix A − Id thus yields a positive root of \(\pi_{A-Id}\), hence a root of \(\pi_A\) greater than 1, i.e. ρ(A) > 1.

Hence, to finish the proof, it suffices to show that \(D{\bf T}(0)-Id > 0,\) or, equivalently, that the diagonal terms of \(D{\bf T}(0)\) are strictly greater than 1, since we have shown (M) already. We use (35), from which it is possible to derive the general term of \(D{\bf T}(0)\). Then, we can explicitly exhibit the diagonal terms and check that they are greater than 1. Details are given in the Appendix.
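As a numerical cross-check of this last step (using the same illustrative parameter values as in the earlier iteration sketch; this is not part of the proof), one can assemble the matrices \(D{\mathcal M}^{(i)}(0)\) from (33) and verify that the spectral radius of their product exceeds 1:

```python
import numpy as np

# Same illustrative 3-gene loop as in the earlier sketch (assumed values).
gamma     = np.array([1.0, 2.0, 0.7])
phi_plus  = np.array([1.0, 0.8, 1.2])
phi_minus = np.array([-0.9, -0.6, -1.1])
n = 3

def focal(a):
    pred = np.roll(a, 1)
    positive = np.where(np.arange(n) == 0, pred == 0, pred == 1)
    return np.where(positive, phi_plus, phi_minus)

# Enumerate the cycle a^1, ..., a^{2n} and the escaping directions s_1, ..., s_{2n}.
doms, escs = [], []
a = np.ones(n, dtype=int)
for _ in range(2 * n):
    phi = focal(a)
    s = next(i for i in range(n) if (a[i] == 0) == (phi[i] > 0))
    doms.append(a.copy()); escs.append(s)
    a = a.copy(); a[s] = 1 - a[s]

def DM_at_zero(i):
    """Jacobian of M^(i) at 0 from Eq. (33), i = 1..2n; rows are indexed by the
    directions other than s_i, columns by those other than s_{i-1}, in increasing order."""
    s_i, s_prev = escs[i - 1], escs[(i - 2) % (2 * n)]
    phi_abs = np.abs(focal(doms[i - 1]))
    rows = [j for j in range(n) if j != s_i]
    cols = [k for k in range(n) if k != s_prev]
    M = np.zeros((n - 1, n - 1))
    for r, j in enumerate(rows):
        for c, k in enumerate(cols):
            if k == s_i:                 # the special column of (33)
                M[r, c] = gamma[j] * phi_abs[j] / (gamma[s_i] * phi_abs[s_i])
            elif k == j:
                M[r, c] = 1.0            # alpha_j(0) = 1
    return M

DT0 = DM_at_zero(1)
for i in range(2 * n, 1, -1):            # DM^(1) DM^(2n) ... DM^(2), as in (35)
    DT0 = DT0 @ DM_at_zero(i)
print(np.max(np.abs(np.linalg.eigvals(DT0))) > 1)    # -> True: rho(DT(0)) > 1 for n = 3
```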

5 Conclusion

The main result of this paper is Theorem 2, which is a direct application of a theorem of Smith (Smith 1986), referred to here as Theorem 1. The latter is a classical fixed point result for monotone and concave maps, and Sect. 3.2 indicates that it may be applied to piecewise affine systems satisfying a condition of alignment between successive focal points in sequences of boxes. This condition is in particular satisfied by negative feedback loop systems, leading to Theorem 2. This theorem implies a result of Snoussi (Snoussi 1989), since the latter states the existence of a limit cycle where Theorem 2 states existence and uniqueness. The improvement is entirely obtained from the use of Theorem 1, which is made possible by the natural occurrence of monotone, concave transformations in the context of piecewise affine gene network models of the form (1). It then seems promising to study systems of a more general form than a negative feedback loop, using the same tools as in the present paper, especially the indications in Sect. 3.2. This may open the way to various new results, since the existing work on periodic trajectories of Glass systems almost exclusively concerns uniform decay rates.