Abstract

The concept of attractors is critical in the study of dynamical systems, as attractors represent the set of states that a system gravitates toward. However, attractors are generally difficult to analyze in complex systems for multiple reasons, including chaos, high dimensionality, and stochasticity. This paper explores a novel approach to analyzing attractors in complex systems by utilizing networks to represent phase spaces. We accomplish this by discretizing the phase space and defining node associations with attractors through sink strongly connected components (SSCCs) found within these networks. Moreover, the network representation of phase space facilitates the use of well-established techniques of network analysis to study the phase space of a complex system. We illustrate the latter by introducing a new node-based metric called attractivity, which can be used in conjunction with the SSCC as the two are highly correlated. We demonstrate the proposed method by applying it to several chaotic dynamical systems and a large-scale agent-based social simulation model.

1. Introduction

Practical applications of dynamical systems analysis can be found in various subjects of study, including transportation networks, food systems, cancer, and game theory [15]. A commonality across analyses in these subjects is an attempt to understand the evolution and long-term states of these systems. One way to do this is to study the trajectories of these systems toward attractors in order to understand their long-term behavior. However, several definitions exist for what constitutes an attractor, leading to several different ways to formulate an analysis [6]. Many of these formulations involve finding solutions through analytical computation, which makes it fundamentally difficult to analyze systems with nonlinear dynamics [7]. These problems are further amplified in high-dimensional systems with many variables. Additionally, little research has been done on analyzing the dynamics within an attractor that consists of many states.

In this paper, we offer a different approach to analyzing complex dynamical systems based on two significant attributes of attractors: the states of an attractor are reached after an extended period of time, and a system cannot escape an attractor once it has entered one. Networks are then used as the basis for algorithmically analyzing the trajectories of complex dynamical systems instead of relying on analytical methods. This allows networks to serve both as a dimension reduction technique and as a tool for analyzing the attractors. The result is two computational approaches to analyzing the long-term behavior of a system with a network-based phase space. One approach focuses on finding the set of states associated with an attractor of the system by finding sink strongly connected components in the network-based phase space. The other approach focuses on calculating a node-specific value that we refer to as “node attractivity.” This metric can be viewed as an indicator of how strongly a node in the network-based phase space is associated with an attractor of the system. We then present the results from applying this method to ODE-based systems as well as show its applicability to high-dimensional systems. The results are also visualized, with details on how modifications can be incorporated without changing the essence of the method.

2. Literature Review

System dynamics are important for understanding the long-term behavior of a system. Long-term behavior that leads to a single steady state, or fixed point attractor, has an extensive research history covering both its properties [8] and use cases such as ecological and neurophysiological models [9, 10]. This form of analysis is well suited for determining stability and the dynamics of the system in the region surrounding the fixed point. However, there are other types of attractors, including periodic, quasi-periodic, and chaotic cycles [11]. With the rise in popularity of complexity science, these other forms of attractors have been used to identify more complex structures and situations in areas such as the health sciences [12, 13], racial identity [14], thermodynamics [15], and human interaction. Because these attractors are more complex, the existing approaches for analyzing them are also more complex than those for fixed points. Along with the increased awareness of chaotic behavior in the real world, this has led to research into how information can be extracted from chaotic (strange) attractors [17, 18].

The embrace of complexity has fueled the adoption of networks for understanding complex structures and of computer simulations for modeling complex phenomena, both of which have been enabled by the rise of computing power [7]. Networks provide a nearly universal conceptual structure that can be applied to any complex system. Some quantities often calculated for complex systems, such as entropy and fractal dimension, can be calculated for networks as well [19, 20]. There is also network-specific information, such as closeness and harmonic centralities, clusters, and strongly/weakly connected components, that can be gathered algorithmically.

Attractors can be useful in a network representation of a system as well. In neural networks, attractors are identified through recurrent neural connections [21]. As an example, this definition has been used to study the dynamics of perceptual bistability without the assumptions used in other models [10]. To date, this combination of attractors and networks has mostly appeared in neuroscience and has been limited to fixed point attractors.

3. Methods

Recognizing that attractors are a critical component of complex systems, we provide two complementary methods for finding an attractor-like structure within a network representation of a system. These methods function regardless of attractor type and rely on algorithmic processes to avoid the potential complexity of analytical computation. Both approaches involve first computing a network representation of the phase space. We will show that only minimal adjustment is needed if the phase space is continuous rather than discrete. The first approach then leads to the discovery of a network substructure that we consider to be the attractor as represented in a network embedding. The second approach provides a node-specific metric that is expected to distinguish between attractor nodes and nonattractor nodes as time passes.

3.1. Network Representation of Phase Space

A continuous phase space of a system can be represented as a network by first dividing the phase space into nonoverlapping (disjoint) regions. Each region can then be represented by a unique node in a network. For discrete states in a phase space, each state can be directly represented by a node. Next, we can add directed edges between nodes, where the weight of an edge represents the frequency with which a transition occurred from a state represented by one node to a state represented by another. The resulting network can then be analyzed algorithmically for attractors. Note that the resulting network is the state transition graph, or state diagram, of the system when the states are discrete and each node uniquely represents a single state.
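As a minimal sketch of this construction (assuming Python with networkx; all names are illustrative and not part of the original method description), each trajectory is taken to be a sequence of node identifiers, i.e., discrete states or region labels obtained from a discretization such as the one described in Section 4:

```python
# Minimal sketch: build a weighted, directed phase-space network from observed
# trajectories, where each trajectory is a sequence of node identifiers.
import networkx as nx


def build_phase_space_network(trajectories):
    """Edge weights count how often a transition between two nodes was observed."""
    G = nx.DiGraph()
    for trajectory in trajectories:
        for source, target in zip(trajectory[:-1], trajectory[1:]):
            if G.has_edge(source, target):
                G[source][target]["weight"] += 1
            else:
                G.add_edge(source, target, weight=1)
    return G
```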

3.2. Identifying Attractors in Networks
3.2.1. Sink Strongly Connected Component

Consider two major properties of an attractor: (1) no exit exists once a system enters an attractor and (2) any position in an attractor should be approachable from any other position in the same attractor. The first property coincides with a sink in a network. The second property can be interpreted as a connectivity requirement for an attractor. In a directed network, this interpretation coincides with the definition of a strongly connected component (SCC). A set of nodes that meets both of these properties in a network is then a sink strongly connected component (SSCC) and can represent an attractor for the associated system. All of the nodes with a path leading to an SSCC then form a sink weakly connected component of the network, which corresponds to the basin of attraction. Stochastic state transitions are also accommodated, since a node may belong to multiple sink weakly connected components. The resulting analogous terms are shown in Table 1.
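In the strict case (before the relaxation introduced below), sink SCCs can be read directly off the condensation of the network. The following is a small sketch assuming Python with networkx; whether trivial singleton components should be kept is left to the application:

```python
# Sketch: each node of the condensation is one SCC of G; a sink SCC is a
# condensation node with no outgoing edges.
import networkx as nx


def strict_sink_sccs(G):
    C = nx.condensation(G)  # a DAG; node attribute "members" holds each SCC's nodes
    return [C.nodes[n]["members"] for n in C if C.out_degree(n) == 0]
```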

A simulation approach involves executing many simulations with different initial states. The observed network can be reconstructed after every time step across all simulations to incorporate all states and state transitions from the beginning of the simulation to the most recent time step. The result is a series of networks where each network has at least as many nodes as the previous one and the total number of state transitions increases by the number of simulations. An SSCC identified at one time step may temporarily stop being a sink when a new outgoing transition is observed; the SSCC may then reappear with another new transition soon after. To reduce the possibility of SSCCs incorrectly disappearing once identified, we introduce the concept of “SSCC efficiency” for a component and a corresponding “SSCC efficiency threshold.” The SSCC efficiency of a component is defined as the percentage of nodes in the component that are also part of an SCC. With this metric, we can define the algorithm for finding the SSCCs of a network as follows:
(1) Find all strongly connected components (SCCs) in the network. Each SCC becomes a candidate component.
(2) Check if the candidate component is a sink.
  (a) If the candidate component is a sink, then it is an SSCC.
  (b) Otherwise,
    (i) create a new candidate component from the set of candidate component nodes and their outgoing neighbors;
    (ii) if the new candidate component has an SSCC efficiency greater than the threshold, then go to step 2a with the new candidate component;
    (iii) otherwise, an SSCC is not found.
(3) Go back to step 2 for the next SCC.

This approach loosens the definition of an SSCC. In our case, we found it sufficient to use a threshold of 95% when one attractor was expected and a lower threshold of 65% when multiple attractors were expected. A visual example of the overall approach is shown in Figure 1, and a code sketch of the search procedure is given below.
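The following is a sketch of this procedure, assuming Python with networkx. The exact reading of “SSCC efficiency” is an assumption here: it is computed as the fraction of candidate nodes that lie in some nontrivial SCC of the network, and all names are illustrative:

```python
# Sketch of the SSCC search with an efficiency threshold. Singleton components
# count as SCCs only when they carry a self-loop (a fixed point).
import networkx as nx


def _is_nontrivial(G, component):
    if len(component) > 1:
        return True
    node = next(iter(component))
    return G.has_edge(node, node)


def _is_sink(G, component):
    # No edge may leave the component.
    return all(v in component for u in component for v in G.successors(u))


def find_ssccs(G, threshold=0.95):
    sccs = [set(c) for c in nx.strongly_connected_components(G) if _is_nontrivial(G, c)]
    scc_nodes = set().union(*sccs) if sccs else set()
    ssccs = []
    for seed in sccs:                      # step (1): each SCC is a candidate
        candidate = set(seed)
        while True:
            if _is_sink(G, candidate):     # step (2a)
                ssccs.append(candidate)
                break
            # step (2b-i): expand the candidate with its outgoing neighbors
            expanded = candidate | {v for u in candidate for v in G.successors(u)}
            efficiency = len(expanded & scc_nodes) / len(expanded)
            if efficiency > threshold:     # step (2b-ii)
                candidate = expanded
            else:                          # step (2b-iii): SSCC not found
                break
    return ssccs
```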

3.2.2. Node Attractivity

The SSCC approach provides a discrete, Boolean answer for whether or not a node should be associated with an attractor. To provide a continuous answer between 0 and 1, we offer an alternative approach utilizing harmonic centrality.

Harmonic centrality uses the harmonic mean in place of the arithmetic mean used for calculating closeness centrality. Given that $d(y, x)$ represents the distance from node $y$ to node $x$, the harmonic centrality of node $x$ is [22]

$$H(x) = \sum_{y \neq x} \frac{1}{d(y, x)}.$$

This has led to harmonic centrality being used in a few different ways, including calculating network topology scores in large social networks [23] and transportation networks [1], as well as a hybridization of node-weighted centralities [24]. It has two major properties for us to consider: (1) if no path exists from node $y$ to node $x$, then that pair contributes nothing to the harmonic centrality of $x$, and (2) as the shortest path from $y$ to $x$ shrinks, the contribution of this path to the harmonic centrality increases.

Now, consider our network-based phase space. If we use the inverse of each edge weight as the distance in the harmonic centrality calculation, the result is an inverse weighted harmonic centrality (IWHC). These values can also be easily normalized into the range of $[0, 1]$. As transition frequencies increase, the corresponding IWHC values (before normalization) of the affected nodes also increase, while nonattractor nodes maintain lower IWHC, creating a clear distinction between attractor nodes and nonattractor nodes. We expect these results to eventually match the SSCC approach; for this reason, we also call this metric “node attractivity.” The computational complexity of both methods is analyzed in Appendix A.
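A minimal sketch of this computation is given below, assuming Python with networkx; the min-max rescaling into $[0, 1]$ is one possible normalization and is an assumption here, as are the function and attribute names:

```python
# Sketch: inverse weighted harmonic centrality ("node attractivity"),
# normalized into [0, 1] with a simple min-max rescaling.
import networkx as nx


def node_attractivity(G, weight="weight"):
    # Frequent transitions (large weights) should correspond to short distances.
    for u, v, w in G.edges(data=weight):
        G[u][v]["inv_weight"] = 1.0 / w
    iwhc = nx.harmonic_centrality(G, distance="inv_weight")
    low, high = min(iwhc.values()), max(iwhc.values())
    span = (high - low) or 1.0
    return {node: (value - low) / span for node, value in iwhc.items()}
```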

Both the SSCC and IWHC approaches were used in Ref. [25] for analyzing the El Farol bar problem. Attractor (SSCC) nodes and nonattractor (non-SSCC) nodes were identified based on the SSCC approach. We then used logistic regression to compare a series of node-based metrics against the SSCC results. The results confirmed that SSCC nodes had high node attractivity while non-SSCC nodes had significantly lower node attractivity. Figure 2 provides a side-by-side comparison of the results from the chosen node-based metrics, where node attractivity was the best-performing metric. The F1 score is used instead of accuracy, precision, or recall due to the wide range of attractor sizes. More details related to this analysis can be found in Appendix B.

4. ODE System Results

For systems of differential equations, we can use a similar setup and simulate the system's movement across Euclidean space with a series of simulations starting from different initial states. Since these phase spaces are unbounded, we set a boundary for the initial positions of the simulations based on the known positions of the systems' attractors.

The phase space is also continuous, so we discretize it by establishing a unit size and rounding every element of a coordinate down to a multiple of that unit size. This determines the identity of the corresponding node in the network and the region of Euclidean space associated with it. We call this unit size the “granularity” of our nodes. For example, with a granularity of 1, every coordinate is rounded down to the nearest integer, so every node coincides with an integer coordinate and represents an n-cube with a side length of 1 unit along each of the n dimensions. With a granularity of 2, every coordinate is rounded down to the nearest even integer, and every node represents an n-cube with a side length of 2 units along each dimension. After determining the region for initial positions and the granularity to analyze, we proceed as we did for the El Farol bar problem simulation and then visualize the resulting network in both 3D Euclidean space and a unit-less 2D network-embedding space. All ODE-based systems use a step size of 0.01 during calculations. Metrics of the resulting networks can be found in Appendix C.
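As a small illustration of this rounding (assuming Python with numpy; the example state is made up for illustration and does not come from any of the simulations):

```python
# Illustration: map a continuous state to its node by rounding each coordinate
# down to the nearest multiple of the granularity.
import numpy as np


def node_of(state, granularity):
    return tuple(float(c) for c in np.floor(np.asarray(state) / granularity) * granularity)


state = (1.4, -2.7, 0.3)          # hypothetical continuous state
print(node_of(state, 1.0))        # (1.0, -3.0, 0.0)
print(node_of(state, 2.0))        # (0.0, -4.0, 0.0)
```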

4.1. Lorenz System

The Lorenz system is a system of equations presented by meteorologist Edward Lorenz as a simplified model of atmospheric convection flow [26]. A recognizable butterfly-like attractor arises from this system in Euclidean space. The system of equations is given in Equation (1). For our experiment, we use the same parameter values as the original paper: $\sigma = 10$, $\rho = 28$, and $\beta = 8/3$.

$$\frac{dx}{dt} = \sigma(y - x), \qquad \frac{dy}{dt} = x(\rho - z) - y, \qquad \frac{dz}{dt} = xy - \beta z \tag{1}$$

For the Lorenz system, we used the region defined in Equation (2) and generated initial coordinates at every even integer coordinate within it, using a granularity of 2 for node positions. We then ran the simulation for 20 steps with no transient period.
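A rough sketch of this setup is given below, assuming Python with numpy. Forward Euler integration is used for brevity (the text above only fixes the step size of 0.01, not the integration scheme), and the grid of initial states is illustrative rather than the exact region of Equation (2):

```python
# Sketch: simulate many short Lorenz trajectories from a grid of initial states.
import itertools
import numpy as np

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0   # original Lorenz parameters
DT, STEPS = 0.01, 20                        # step size and simulation length


def lorenz_step(state):
    """One forward-Euler step of the Lorenz equations."""
    x, y, z = state
    dxdt = np.array([SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z])
    return state + DT * dxdt


def trajectory(initial, steps=STEPS):
    states = [np.asarray(initial, dtype=float)]
    for _ in range(steps):
        states.append(lorenz_step(states[-1]))
    return states


# Illustrative grid of initial states at even integer coordinates.
grid = itertools.product(range(-20, 22, 2), range(-20, 22, 2), range(0, 42, 2))
trajectories = [trajectory(point) for point in grid]
```

Each trajectory can then be mapped to nodes with a granularity of 2 and fed into the network construction sketched in Section 3.1.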

Figure 3(a) shows the results of our proposed method in Euclidean space. Figure 3(b) shows the same results as Figure 3(a), except visualized as a network using the ForceAtlas 2 layout algorithm. We call this visualization of the attractor the “Lorenz attractor network.”

Nodes in both figures are colored using the same scheme: those with high X values are yellow, while nodes with low X values are blue/purple. This is most apparent at the tips of the two wings of the attractor, which are distinctly different colors. The wing on the positive X (X+) side is yellow, while the wing on the negative X (X−) side is mostly purple.

The X+ wing and X− wing are clearly distinguishable in the Lorenz attractor network based on color: the X+ wing contains yellow nodes and the X− wing contains blue/purple nodes. This visualization successfully projects the attractor into two dimensions by replacing the Euclidean spatial relationship between nodes with a layout based solely on the system dynamics. Nodes representing states that are visited in close succession are also placed in relatively close proximity to each other.

4.2. Yu’s Chaotic System with Multiple Attractors

Next, we test a system of equations known to produce multiple attractors. Equation (3) gives the system of differential equations for Yu's chaotic system [27], using a parameter value of 2.5. The region for initial states is defined in Equation (4), with initial positions at every integer coordinate and a granularity of 0.5. The simulation ran for 40 steps with no transient period.

Figure 4 shows the full generated SSCC network in 3D Euclidean space and in the network-embedding space. Both visuals show two attractors. In Figure 4(a), we can see that the attractors circle about the z-axis. Two components are visible in the network visualization in Figure 4(b); they overlap incidentally in the layout but are disconnected. We can also tell from Figure 4(b) that node attractivity is higher inside the attractor, which is not as clearly seen in Figure 4(a). We noticed that a large granularity combined with a long-running simulation can lead to merged attractors and basins due to nodes straddling multiple basins.

4.3. Four-Dimensional Chaotic System

Lastly, we test our method against a system with more than three dimensions. Equation (5) is the system of differential equations found in [28], which contains different kinds of attractors. However, based on our results, we argue that these different kinds of attractors are all part of one multidimensional attractor.

We set the parameter values to the same ones used in the aforementioned paper and define the region as shown in Equation (6). Since the region is rather large due to the additional dimension, the initial coordinates are placed at coordinates with X, Y, Z, and W values at multiples of 4. The resulting network uses a granularity of 1, just as we did for the Lorenz network. The simulation ran for 70 steps with no transient period.

Figure 5 shows the attractor nodes of the identified SSCC, with nodes colored by attractivity from low (blue) to high (yellow). The dynamics suggest a single two-winged attractor with reflection symmetry within the system. Visualizing the entire network, including nonattractor nodes, with the same ForceAtlas 2 layout algorithm leads to a noticeably different apparent shape for the SSCC subgraph. We discuss this finding and more related to this system in Appendix E.

5. Conclusion

In this paper, we provided a novel approach for discovering attractors in complex systems by utilizing a network representation of the phase space. Our approach focuses on two key properties of an attractor. First, an attractor should be a component with no (or exceptionally rare) outgoing edges. Second, each node should be reachable from every other node within the identified component. This definition leads us to the concept of a sink strongly connected component (SSCC) as the network-based entity associated with an attractor in a complex system. With this baseline, we proposed a novel metric named “node attractivity.” Although its computation is more time-consuming, node attractivity provides more detail about system behavior than the SSCC approach. The two approaches complement each other well, as node attractivity aligns with the SSCC results over time, as shown by our logistic regression results.

The generality of both approaches was demonstrated by analyzing simulations of the El Farol bar problem as well as three systems of differential equations with varying dimensionality and numbers of attractors. The results from the El Farol bar problem showed how node attractivity performed against in degree, weighted in degree, and weighted out degree in predicting the SSCC. Applying these approaches to the Lorenz system revealed the behavioral tendencies of the Lorenz attractor in a unit-agnostic space, exposing a cyclic nature in a tornado-like structure in the “Lorenz attractor network.” We also showed that these approaches can successfully identify the number of attractors in a system given a set of initial positions, and can clarify structural components of an attractor in higher-dimensional systems, by applying them to Yu's chaotic system and a four-dimensional chaotic system.

Future possibilities for research include testing this approach on systems with more than four dimensions. The network may become visually more difficult to understand, but the computational analysis should remain consistent, since the nature of this approach is to capture all of the behavior of system trajectories within a network structure, regardless of dimensionality. We can also find possible research avenues by focusing on node attractivity alone. One key benefit that could arise from node attractivity is a proxy for the probability of the long-term system state. It may be sufficient to sum-normalize the node attractivity, since node attractivity is already nonnegative.

Further research can be performed to see how well this works, or whether an intermediate function should be included to model the long-term system state. We previously mentioned potential issues that can appear when using our approach on systems with multiple attractors due to nodes straddling basins. It would be worthwhile to explore ways to mitigate this issue, such as condensing the space of initial coordinates while adding a transient period. Using different condensed initial regions would likely improve the identification of attractor nodes and of straddling nodes, since the latter would presumably appear separately in multiple SSCCs. Lastly, we can take these approaches and apply them to other domains, such as transportation systems, voting behavior, or other decision-making systems, in search of commonalities in the SSCC or node attractivity of different systems.

Appendix

A. Computational complexity

The computational cost of finding all SSCCs of a directed network $G = (V, E)$ is the combination of the cost of finding all strongly connected components of $G$, which can be executed with a time complexity of $O(|V| + |E|)$ using Kosaraju's algorithm, and the time to check whether each component is a sink, which can be executed with a time complexity of $O(|V| + |E|)$. Overall, this leads to a time complexity of $O(|V| + |E|)$ for finding all SSCCs of $G$.

Finding the attractivity of all nodes in $G$ requires solving the all-pairs shortest path problem. The weights are positive, so we can run Dijkstra's algorithm from each vertex in $V$. Since the time complexity of Dijkstra's algorithm is $O(|E| + |V| \log |V|)$ for a connected graph, the all-pairs shortest path problem can be solved in $O(|V||E| + |V|^2 \log |V|)$, which is already more expensive than finding the SSCCs. Once the all-pairs shortest path problem is solved, we only need to sum the reciprocals of the shortest path distances to each node, which can be completed in $O(|V|^2)$. The final time complexity for finding the node attractivity of all nodes is then $O(|V||E| + |V|^2 \log |V|)$.
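For reference, a direct realization of this computation is sketched below, assuming Python with networkx and the inverse-weight edge attribute from Section 3.2.2; in practice it yields the same quantity as the harmonic centrality call used there, but it is written out to mirror the complexity argument:

```python
# Sketch: Dijkstra from every node on the inverse-weighted graph, then a sum of
# reciprocal distances accumulated per target node.
import networkx as nx
from collections import defaultdict


def attractivity_all_pairs(G, distance="inv_weight"):
    scores = defaultdict(float)
    for source in G:
        lengths = nx.single_source_dijkstra_path_length(G, source, weight=distance)
        for target, d in lengths.items():
            if target != source and d > 0:
                scores[target] += 1.0 / d
    return dict(scores)
```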

B. El Farol bar problem

Table 2 contains the list of metrics used to predict whether or not a node should be associated with an attractor, based on the results from the SSCC approach. Predictions for each metric were generated by performing a logistic regression (Table 3) for each combination of metric and strategy set. The F1 score is used instead of accuracy, precision, or recall due to the wide range of attractor sizes. For PageRank, we found that increasing the damping factor did not have any effect on the results.

The metrics that recorded a nonzero F1 score for all experimented sets of strategies are marked in bold in Table 2. These were determined to be the better metrics for predicting the association of a node with an attractor, reducing our list of metrics to node attractivity, in degree, weighted in degree, and weighted out degree. Figure 2 provides a side-by-side comparison of the logistic regression results using each of these four metrics. Each data point for each metric represents the network results for one unique set of strategies.

C. ODE network data

Below is a breakdown of the data for each network generated for the ODE systems mentioned in this paper.

D. Lorenz system results

Figure 6 [29] is a visualization of the butterfly-like attractor resulting from the Lorenz system. A similar butterfly-like structure can be seen when mapping the trajectories of our results in Euclidean space, as shown in Figure 7.

The Euclidean space visualizations of the attractor in Figures 3(a) and 7 also show that the two wings of the attractor meet at lower Z values. A view of the same network colored to indicate Z coordinates is given in Figure 8, with the color representing relatively low (blue/purple) or high (yellow) Z coordinates. With this coloring, we can now see rotational symmetry: each band of nodes starts near the middle of the network, curves around the other counterclockwise, and eventually points toward an opposite end of the network. The inner half of each band is a mix of green and blue, indicating low-to-middle Z coordinates for these nodes. The outer halves of the bands are mostly yellow and blue, indicating nodes with either low or high Z coordinates. These nodes are also less connected to the rest of the network since they are near its edge, suggesting that they coincide with the disconnected portions of the wings seen in Figure 7.

Figure 9 highlights the nodes with high attractivity in yellow and low attractivity in blue in both the Euclidean and ForceAtlas 2 layouts. The nodes with high attractivity are located in the region where the two bands meet in the Euclidean space. This coincides with the nodes in the center of the network in the ForceAtlas 2 layout, further validating our previous assessment.

Lastly, looking at the network in the ForceAtlas 2 layout more closely, we see that the edges tend to point in the counterclockwise direction. However, the number of edges pointing in the clockwise direction increases as we move toward the center of the network, and much of this conflicting clockwise flow occurs in the center of the network between the two bands. This lines up with the known dynamics of the Lorenz attractor, since the two wings rotate in different directions in Euclidean space.

E. Four-dimensional chaotic system results

It was mentioned that this system “can generate chaotic butterfly attractors of two wings and four wings at the same time” [28] depending on the dimensions visualized, as seen in that paper and in Figure 10. In the figure from the paper, the X-Y, X-Z, Y-Z, and Z-W plots clearly show two wings of an attractor, while the X-W and Y-W plots show potentially four wings. In Figure 11, we visualize a single trajectory from two different angles: Figure 11(a) provides an angle that suggests two wings, while the angle in Figure 11(b) suggests four wings. In both visualizations, the trajectory is traced from light red to dark red.

Emphasizing different dimensions gives a different perspective on the attractor. Similarly, hidden dimensions can conceal critical information about the attractor and lead to a false perception. Our network approach instead aims to capture information across all dimensions when visualizing in 2D.

In Figure 5, we showed a network visualization of only the attractor nodes identified by our method. Below is a visualization of the full network (attractor and nonattractor nodes) using the same results and the same layout algorithm. The structure of the SSCC is significantly different due to the presence of nonattractor nodes. The full network contains 377,601 nodes, while the SSCC contains only 101,138 nodes (26.78%).

Data Availability

The code used in this study is available upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.