Introduction and Goals

System-theoretic metaphors have been pervasive in biology at least since the publication of Cybernetics by Wiener (1948), and even predating that—for example, the use of perception-action feedback loops by Lotka (1925) and von Uexküll (1926) or the notion of homeostasis due to Cannon (1932), to name just a few. According to the classical view of system theory embraced by Wiener, a system receives inputs (or stimuli) from its environment, reacts by generating outputs (or responses), and can be steered using output-to-input feedback.

This externalist viewpoint was dominant until about the late 1950s. However, a complementary internalist perspective arose in the 1960s (see Kalman et al. (1969) for an overview). Its main idea is to augment the input/output description of a system with an internal state that contains the system’s memory. In fact, the concept of internal state had been common in computer science at least since the seminal work of Turing (1937), and played a prominent role in the theory of automata (Shannon and McCarthy 1956).

The new state-space paradigm was still wedded to the perception/processing/action view and thus firmly grounded in the machine metaphor. Concurrent developments in artificial intelligence, cognitive science, molecular biology, and neuroscience were all instrumental in enshrining the “gestalt of the computer” (Rosenberg 1974) as the default view (Monod 1971; Bray 2009).

However, the machine metaphor for living systems is not universally accepted by biologists. In this essay, I will focus on the ideas of Varela (1979) and Maturana and Varela (1980), who argued that it is not sufficient for understanding biological autonomy; see Casti and Karlqvist (1989), Rosen (1991), Gaveau et al. (1994), Nicholson (2013, 2014, 2019), and Bongard and Levin (2021) for other critical views and alternatives. My perspective is that of a control theorist with a deep interest in structure, function, and organization of complex autonomous systems. These notions are central to biology as the study of “organized complexity” (Weaver 1948), with ramifications in control theory, machine learning, and artificial intelligence.

While I agree with many of the authors cited above that the traditional (cybernetic) control framework is insufficient for understanding autonomy, I believe that many of their objections can be addressed within the behavioral approach to systems theory (Willems 2007, and references therein). In particular, the key tenets of the theory of autonomy proposed by Varela and Maturana and elaborated by others, such as Moreno and Mossio (2015), are entirely natural within the behavioral approach due to its emphasis on describing systems by the patterns of interaction between them and their environments that are allowed by the dynamical law governing this interaction. Importantly, this stance does not presuppose any separation of system attributes into inputs or outputs, nor does it posit any internal representations. Rather, these features are deduced a posteriori if necessary for the modeling task at hand. My hope is to convince the researchers interested in questions of biological autonomy that modern theory of dynamical systems and control has much to offer even when one’s philosophical inclinations are to reject the notions of internal representations, processing, or inference. With this in mind, I intend to argue that the behavioral approach is congenial to the enactive view of biological autonomy and cognition (Thompson 2007; Varela et al. 2016; Di Paolo et al. 2017), which emphasizes the dynamic and reciprocal nature of structural coupling between an organism and its environment; at the same time, it can be recruited to establish some common ground between enactivism and so-called new mechanism (Bechtel 2007; Bechtel and Richardson 2010; Craver and Darden 2013; Glennan 2017; Lee 2023).

The Dimensions of Biological Autonomy

The following crisp characterization of autonomy was put forward by Gaveau et al. (1994, p. 2):

[Autonomous systems] must maintain themselves in their environment without the benefit or interference of external intervention. They must, accordingly, set their own ‘goals’, select the ‘problems’ to be solved, determine which external stimuli are relevant, and ‘decide’ on choices of action. They may need to adapt to changes in their environment (and, to some extent, choose their environment), though they may (and in fact they ultimately do) ‘fail’.

This formulation highlights the constitutive and the interactive dimensions of autonomy (Moreno and Mossio 2015): the former refers to the network of processes and/or constraints through which the system defines and asserts its identity (Varela 1979) or maintains its coherence (Gaveau et al. 1994), while the latter refers to the fact that the system is “structurally coupled with the environment, with which it exchanges matter, energy, and information” (Moreno and Mossio 2015, p. 6).

Varela contrasts autonomy (or self-law) with allonomy (or external law). The latter implies instruction and representation, i.e., correspondence between certain structures in the system and those in its environment (Conant and Ashby 1970; Wonham 1976; Godfrey-Smith 1996); the former emphasizes construction and the emergence of adequate behavior as a reflection of the viability of the organism in its environment (Varela et al. 2016). The input/state/output formulation is appropriate for the analysis of allonomy since it allows the external behavior of the system to be shaped by an appropriate control law, while the internal state emerges as a representation of the environment. However, according to Varela, the right framework for autonomy is nonrepresentational, involving structural coupling between the organism and its environment, i.e., a process by which “the continued interactions of a structurally plastic system in an environment with recurrent perturbations... produce a continual selection of the system’s structure” (Varela 1979, p. 33). Another key concept is operational (or organizational) closure, which is

characterized by processes such that (1) the processes are related as a network, so that they recursively depend on each other in the generation and realization of the processes themselves, and (2) they constitute the system as a unity recognizable in the space (domain) in which the processes exist. (1979, p. 55)

The notion of closure requires further elaboration to relate it to standard control-theoretic terminology, according to which a closed dynamical system is one that evolves without external inputs, although it may produce outputs. Such systems are described by first-order ordinary differential equations involving state variables only, i.e., \({\dot{x}}(t) = f(x(t),t)\). This formulation (Varela 1979, pp. 88–89) also coincides with the control-theoretic definition of autonomous systems (Sontag 1998). In such models, the environment is completely explained or eliminated. This, however, presents certain difficulties in view of the above discussion of structural coupling, which involves perturbations applied to the system by the environment. Varela attempts to resolve this by making a distinction between control inputs, which imply intentionality and purpose, and perturbations, which are treated as free and unexplained. However, strictly speaking, the presence of such free inputs that can influence the system implies loss of autonomy, at least at this level of description.
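The control-theoretic notion of a closed (autonomous) system can be made concrete in a few lines of code. The sketch below simulates \({\dot{x}}(t) = f(x(t),t)\) by forward-Euler integration; the particular vector field is an arbitrary illustrative choice, not one drawn from the text. The point is that once the initial condition is fixed, the trajectory unfolds with no reference whatsoever to an environment.

```python
def f(x, t):
    # Hypothetical autonomous dynamics (illustrative choice only):
    # exponential decay toward the origin.
    return -0.5 * x

def simulate_closed(x0, t_end, dt=0.01):
    """Forward-Euler integration of the closed system x' = f(x, t).

    Everything that happens is determined by x0 and f alone: the
    environment is, in the words of the text, completely explained
    or eliminated at this level of description."""
    x, t, traj = x0, 0.0, [x0]
    while t < t_end:
        x = x + dt * f(x, t)
        t += dt
        traj.append(x)
    return traj

traj = simulate_closed(x0=1.0, t_end=4.0)
# The state decays monotonically; no external input can redirect it.
```

Any "perturbation" in Varela's sense would have to appear as an extra argument to `f` that the model itself leaves unexplained, which is exactly the tension with autonomy discussed above.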

A more satisfactory resolution is offered by Moreno and Mossio (2015), who locate organizational closure not at the level of processes, but at the level of constraints. At the level of processes, biological organisms are open systems, both in thermodynamic and in control-theoretic senses. (In thermodynamics, open systems exchange both matter and energy with their environment; in control, they may also exchange information.) Indeed, various process-based minimal models of living systems, such as the chemoton of Gánti (2003) or the hypercycle of Eigen and Schuster (1977), are all open systems in one or both of these senses. By contrast, constraints refer to functional dependencies among processes that can be treated as invariant on some characteristic timescale (Pattee 1972, 1973; Juarrero 1999). As examples of constraints, Moreno and Mossio (2015) cite the role of the vascular system in channeling the flow of oxygen to cells on much shorter timescales than would be possible via uncontrolled diffusive transport, or the role of enzymes in catalyzing chemical reactions. In both cases, there is a characteristic timescale on which the relevant constraints persist (e.g., the preservation of the vascular structure of an organism while it channels the flow of oxygen or the conservation of enzymes while they catalyze chemical reactions), although this may not be true on longer timescales (e.g., the changes in the vascular system through neovascularization or the eventual degradation of enzymes). Organizational closure refers to a specific mode of mutual dependence in a closed network of constraints with their characteristic timescales (Moreno and Mossio 2015).

Dynamical Systems from the Behavioral Point of View

“The Behavior is all There is”

The behavioral approach to systems theory (see Willems 1989, 1991, 2007 and references therein) is concerned with mathematical modeling of open and interconnected systems. It adopts a radically externalist and empiricist stance:

instead of trying to understand, in the tradition of physics, how a device is put together and the detail of how its components work, we are told to concentrate on how it behaves, on the way in which it interacts with its environment. ... However, we will back off from the usual input/output setting, from the processor point of view, in which systems are seen as being influenced by inputs, acting as causes, and producing outputs through these inputs, the internal initial conditions, and the system dynamics. (Willems 1991, p. 259)

A system is defined extensionally by its behavior—by specifying exhaustively the set of all attributes that can be generated in the course of the system’s interaction with its environment. At this level of description, “the behavior is all there is” (Willems 2007, p. 52), which means that all variables are treated on an equal footing, so in particular there is no a priori designation of inputs or outputs, and thus no designation of causes and effects. While Gaveau et al. (1994) criticize the Willemsian notion of behavior as “some kind of Platonic, pre-existent set of potentialities” (p. 15), I prefer, following DeLanda (2016), to think of it as a virtual diagram of the system, consisting of “dispositions, tendencies and capacities that are virtual (real but not actual) when not being currently manifested or exercised” (p. 108).

In the context of dynamical systems, the behavior is a set of trajectories. For systems modeled by differential or difference equations, the extensional definition of the behavior can be compressed into an intensional one, which is furthermore local in time and in space (Carnap (1947, Chap. I) contains a discussion of extensional and intensional descriptions). Willems (1991) gives a nice illustration of these concepts in terms of planetary motion, where Kepler’s laws specify a planet’s orbit extensionally while Newton’s equations of motion provide the intensional description. Behavioral equations typically involve latent variables in addition to the manifest variables that constitute the external behavior of the system. Certain choices of such latent variables acquire the status of state variables; their role is to parametrize the memory of the system.
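The extensional/intensional distinction can be illustrated computationally. For the law \({\dot{x}} = -x\) (an arbitrary choice for this sketch), the intensional description is the local equation itself, while the extensional description is the set of all trajectories \(ce^{-t}\). The hypothetical membership test below checks, pointwise in time, whether a sampled trajectory belongs to the behavior:

```python
import math

def in_behavior(w, ts, tol=1e-3):
    """Approximate membership test for the behavior of x' = -x.

    The intensional law is local in time: we check it at each sampled
    instant via a central-difference estimate of the derivative."""
    for i in range(1, len(ts) - 1):
        dw = (w[i + 1] - w[i - 1]) / (ts[i + 1] - ts[i - 1])
        if abs(dw + w[i]) > tol:  # law violated at instant ts[i]
            return False
    return True

ts = [0.001 * k for k in range(1000)]
decay = [2.0 * math.exp(-t) for t in ts]  # c * e^{-t}: in the behavior
ramp = [1.0 + t for t in ts]              # not a solution of x' = -x
```

Here `decay` satisfies the law at every sampled instant while `ramp` does not; the extensional behavior is recovered as whatever passes the local test everywhere.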

Inputs and Outputs

The notion of inputs and outputs looms large in our discussion of autonomy. According to the behavioral approach, the “choice of what is the input should be deduced from the model, not imposed on it” (Willems 1991, p. 217).

An input is any attribute in the system’s behavior that is free (loosely speaking, there are no restrictions on its trajectories), while an output is any attribute that processes some input in a nonanticipating manner (Willems 1991, p. 271). In other words, “the input itself cannot be explained by the model: it is free, imposed from the outside by the environment” (1991, p. 217), and the output is completely determined once the input, the laws of the system, and the initial conditions (including the past of the output) are specified. Combining the input/output and the state-variable descriptions then gives rise to an open dynamical system. Such systems are described by input/state/output models of the form \({\dot{x}}(t) = f(x(t),u(t),t)\), \(y(t) = g(x(t),t)\), where u(t) are the inputs, x(t) is the internal state, and y(t) are the outputs (Sontag 1998).
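An open system of this form can be sketched as follows; the particular choices \(f(x,u,t) = -x + u\) and \(g(x,t) = x^2\) are hypothetical, made only to have something concrete to integrate. The essential point is that the input u is a free function handed in by the environment, on which the model places no restriction:

```python
import math

def simulate_open(x0, u, t_end, dt=0.01):
    """Forward-Euler simulation of the open system
        x'(t) = f(x(t), u(t), t),  y(t) = g(x(t), t).
    Illustrative choices: f(x, u, t) = -x + u and g(x, t) = x**2.
    The input u is supplied by the caller (the 'environment'); the
    model itself does not explain or constrain it."""
    f = lambda x, u_val, t: -x + u_val
    g = lambda x, t: x * x
    x, t, ys = x0, 0.0, []
    while t < t_end:
        ys.append(g(x, t))              # output: determined by the state
        x = x + dt * f(x, u(t), t)      # state update driven by free input
        t += dt
    return ys

# Two different environments (inputs) drive the same system differently.
ys_const = simulate_open(0.0, lambda t: 1.0, t_end=5.0)
ys_sine = simulate_open(0.0, lambda t: math.sin(t), t_end=5.0)
```

Given the input, the laws, and the initial condition, the output is completely determined, which is exactly the nonanticipating processing described above.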

Control as Interconnection

One of the innovations of the behavioral approach is a rather liberal view of control that goes beyond the classical feedback paradigm based on the processing of sensor outputs into actuator inputs. From the behavioral perspective, this notion of control is too narrow since “many practical control devices do not act as feedback controllers” (Willems 2007, p. 79)—for example, passive suspension systems in automobiles, heat sinks, pressure valves. Far from involving any manipulation of internal representations, “the action of such passive controllers is best understood through interconnection and variable sharing, rather than signal processing and feedback control” (2007, p. 80). As an example of interconnection and variable sharing, consider connecting two pipes in a system involving fluid transfer (2007, p. 47). Each pipe, \(i = 1,2\), is characterized by two variables: the flow rate \(f_i\) into the pipe and the pressure \(p_i\) in the pipe at the interconnection point. When the pipes are connected end-to-end, these variables are no longer free but are shared between the two pipes: the pressures must be equal (\(p_1 = p_2\)) and the flow rates must balance each other (\(f_1 = - f_2\)). These constraints are enforced by the physical nature of the coupling between the two subsystems (the pipes), not through any processing or feedback.
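The pipe example can be sketched as behaviors and their intersection. Assume, purely for illustration, that each pipe's admissible (pressure, flow) pairs obey a linear law (the specific coefficients below are invented for this sketch); interconnection then imposes the sharing constraints \(p_1 = p_2\) and \(f_1 = -f_2\), and the interconnected behavior is simply the set of variable assignments consistent with both pipes' laws and the sharing constraints, with no signal processing anywhere:

```python
# Each pipe's behavior: the set of (pressure, flow) pairs it allows.
# Hypothetical linear laws, chosen only for illustration.
def pipe1_behavior(p, f):
    return abs(p - (10.0 - 2.0 * f)) < 1e-9   # supply side: p1 = 10 - 2*f1

def pipe2_behavior(p, f):
    return abs(p - (-4.0 * f)) < 1e-9          # load side: p2 = -4*f2

def interconnect():
    """Restrict the joint behavior via variable sharing: p1 = p2 = p
    and f2 = -f1. Control by interconnection means finding the points
    common to both behaviors under these constraints; there is no
    sensing, processing, or feedback involved."""
    # Eliminating the shared variables: p = 10 - 2*f1 and p = 4*f1,
    # hence f1 = 10/6 and p = 20/3.
    f1 = 10.0 / 6.0
    p = 4.0 * f1
    assert pipe1_behavior(p, f1) and pipe2_behavior(p, -f1)
    return p, f1

p_star, f1_star = interconnect()
```

Before interconnection each pipe's pressure and flow were free; afterward they are pinned down jointly, which is precisely the restriction of behavior that the behavioral approach identifies with control.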

Thus, from the behavioral point of view, control amounts to restricting the behavior of one system by interconnecting it with another system, the overall effect being a reciprocal interaction between the two systems. While feedback control obviously satisfies this description, the entire range of possibilities is much wider, since one can exercise control by introducing constraints (Pattee 1972; Juarrero 1999), by equilibrium point regulation (Feldman and Levin 1995), or by dynamic switching among multiple attractors (Kelso 1995). The examples of the vascular system or of enzymes given by Moreno and Mossio (2015) in their discussion of constraints are also examples of control by interconnection in the sense of Willems; moreover, feedback mechanisms also have a role to play in their framework as a means of regulation through “second-order constraints that... exert their causal actions on changes of other constitutive constraints of the organization” (2015, p. 34; emphasis in original). The word “behavioral” may also evoke associations with the behavioral approach to robotics, pursued by Brooks (1991). Indeed, it shares many features with the Willemsian framework, for example, no global internal representation of the world; no separation into perception, processing, and action subsystems; and no central locus of control. The main idea behind the behavioral viewpoint is that these features are not predetermined, but can emerge as a consequence of interaction between the system and its environment or between various subsystems making up the system.

Biological Autonomy Through the Behavioral Lens: Reconciling Enactivism and Mechanism?

The notion of control by interconnection and variable sharing is not tied to internal representations, computation, or inference, and is therefore broad enough to accommodate the concept of structural coupling between the organism and its environment underlying the enactivist view, according to which “the system and the environment will have an interlocked history of structural transformations, selecting each other’s trajectories” (Varela 1979, p. 55).

In this regard, it is instructive to revisit Bittorio (Varela 1988), a simple model of structural coupling between a cellular automaton and a random external environment (see also recent work by Beer (2019) on structural coupling and autopoietic systems in the Game of Life). The automaton consists of finitely many binary units arranged on a ring, where the next state of each unit is a given function of its current state and the current states of its two neighbors. This update rule and the periodic boundary conditions imposed by the ring structure instantiate operational closure. The interaction between the automaton and its milieu takes place in discrete time; at each time instant, the state of one of the units is perturbed (replaced) by a random bit generated by the environment, and the configuration of the automaton is updated according to the automaton’s rule. For some choices of that rule, the automaton eventually begins to make distinctions among the patterns of external perturbations—some of these cause a global change in the automaton’s configuration, while others do not. One can see in this model all the features of the behavioral approach: the stream of random bits generated by the automaton’s milieu is an input in the sense of Willems; the automaton interacts with its milieu by variable sharing; and the update rule of the automaton, together with the periodic boundary conditions, furnishes the behavioral equations.
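The dynamics of Bittorio are easy to state in code. The sketch below implements the perturb-then-update loop described above: at each instant the milieu overwrites one randomly chosen unit with a random bit, after which the ring is updated synchronously by its local rule. The rule number, ring size, and random seed are arbitrary illustrative choices, not the ones Varela studied.

```python
import random

def step(config, rule):
    """One synchronous update of an elementary CA on a ring: the next
    state of each unit is a function (encoded by the 8-bit rule number,
    Wolfram-style) of its own state and those of its two neighbors."""
    n = len(config)
    return [
        (rule >> (config[(i - 1) % n] * 4
                  + config[i] * 2
                  + config[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run_bittorio(rule, n_units=16, n_steps=50, seed=0):
    """Structural coupling loop: perturb one unit with a random bit
    (the free input from the milieu, in Willems' sense), then apply
    the closed update rule (operational closure)."""
    rng = random.Random(seed)
    config = [0] * n_units
    history = [config]
    for _ in range(n_steps):
        config = list(config)
        config[rng.randrange(n_units)] = rng.randint(0, 1)  # perturbation
        config = step(config, rule)                          # closed update
        history.append(config)
    return history

history = run_bittorio(rule=110, n_units=16, n_steps=50)
```

Note how the behavioral reading falls out of the code: the stream of random bits is the free input, the overwrite of a unit is variable sharing between automaton and milieu, and `step` together with the ring's periodic boundary conditions furnishes the behavioral equations.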

Viewed through the behavioral lens, the concept of enaction as “a history of structural coupling [between an organism and its environment] that brings forth a world” (Varela et al. 2016, p. 206) can be thought of as selection or actualization of a particular trajectory in the organism’s behavior, which is a virtual diagram (DeLanda 2016) of all its possible interactions with its environment. This act of selection is an ongoing, reciprocal, constructive process (Whitehead 1978; Kampis 1992), which may (and does) induce structural changes both in the organism and its environment (Lewontin 1983). This view also agrees with the proposal of Waddington (1968) to model the phenotype mathematically as a “branching system of time-extended trajectories in [a multidimensional] phase space,” so that the organism’s ontogeny corresponds to an ongoing, self-directed selection of one such trajectory.

The behavioral view can also play a role in facilitating a rapprochement between the enactivist and the mechanist viewpoints, advocated recently by Lee (2023). The term “mechanist” here refers to the so-called new mechanical philosophy (Bechtel and Richardson 2010; Craver and Darden 2013; Glennan 2017), which emphasizes explanations based on decomposition and localization encompassing components on multiple interacting levels or scales. Mechanistic explanations are not a priori incompatible with the dynamical systems view (Kaplan 2018), and one can give a mechanistic account of biological autonomy (Bechtel 2007). My view is that the behavioral approach, with its simultaneous emphasis on dynamics, interaction, structure, and organization, is on a neutral ground between enactivism and mechanism. Its modeling philosophy is based on “tearing, zooming, and linking” (Willems 2007), i.e., decomposition of the overall system into components, modeling the unconstrained local behavior of each component, and then interconnecting the components while respecting all relevant local and global constraints. This makes it a natural framework for understanding enaction as operating “[t]hrough a network consisting of multiple levels of interconnected, sensorimotor subnetworks” (Varela et al. 2016, p. 206), similar to the framework of Brooks (1991). The behavioral view is also inherently pluralistic, able to accommodate a variety of descriptions of a given system in a context- and observer-dependent manner—cf. Willems’ insightful example of modeling the flight of a bird that traverses a hierarchy of descriptions, from a simple mechanical model to one involving intentionality, purpose, and other organisms, such as the bird’s prey (Willems 1989, pp. 193–194).

Conclusion

Biological autonomy is a fascinating subject of theoretical inquiry, encompassing and traversing a hierarchy of multiple descriptive levels, from nonequilibrium thermodynamics to integrative organization. The enactive view, originating with the theory of autopoietic, or self-constructing, systems (Maturana and Varela 1980), rejects the input/processing/output paradigm of cybernetics and instead emphasizes the notions of operational closure and structural coupling between an organism and its environment. Moreover, it is emphasized that autopoietic systems do not have inputs or outputs, but “can be perturbed by independent events and undergo internal structural changes which compensate these perturbations” (Varela 1979, p. 15).

This statement looks prima facie contradictory, but it reflects the distinction Varela makes between inputs (or controls), which connote intentionality and purpose, and perturbations, which do not carry such connotations. In this essay, I have argued that a proper way to understand these points is by appealing to the behavioral approach to system theory (Willems 2007), which aims to model open, interconnected dynamical systems “as they are,” without any pre-given designation of inputs or outputs. In this approach, a system is described by its behavior, which is defined as the set of all possible trajectories (past, present, and future) that can be generated through the interaction between the system and its environment. From the behavioral point of view, control involves interconnection and variable sharing, which is in accord with the notion of structural coupling in the enactivist framework. Moreover, I have suggested that enaction is synonymous with the process of selecting or actualizing a particular potentiality in the organism’s behavior, which is only possible through continued reciprocal interaction between the organism and its milieu. This reciprocal interaction entails agency, since the organism has wide latitude in how to actualize some of these potentialities in its interaction with the environment, which may involve (re)configuration of various constraints and effective coupling between the organism and the environment, adaptive selection of observables, and so on.

In conclusion, just as Bongard and Levin (2021) argue convincingly that the traditional mechanism metaphors for biological organisms are in need of revision based on modern developments in artificial intelligence, synthetic biology, and robotics, I believe that our understanding of biological autonomy and related questions will also benefit from a revision based on modern developments in systems and control, which have moved past the cybernetic paradigm based on inputs, outputs, feedback, and internal representations. A fruitful direction of future research is to further interpret and expand the theory of biological autonomy, adaptation, and agency in Willemsian terms.