Introduction

In 1989, the Canadian philosopher and historian of science Ian Hacking described the astrophysical method as follows: “The technology of astronomy and astrophysics has changed radically since ancient times, but its method remains exactly the same. Observe the heavenly bodies. Construct models of the (macro)cosmos. Try to bring observations and models into line” (Hacking 1989: 577). His further diagnosis that there is no branch in the natural sciences in which modelling is more central than in astrophysics is certainly doubtful. However, the core of this statement is hard to dispute: numerical models are key to astrophysical research. They have become indispensable tools not only for the theoretical understanding of astrophysical processes but also for the interpretation of astronomical observations (see Anderl 2016 and references therein).

The astrophysical analysis of astronomical observations essentially consists in the application of physical theories to cosmic phenomena. Given the vast irreducible complexity of astrophysical phenomena together with the enormous spatial and temporal scales involved, this application today requires numerical models and simulations.Footnote 1 Computer technologies have been central to astrophysical research for the last 70 years. Early stellar-evolution computations performed by the German-born US astrophysicist Martin Schwarzschild occupied half of the time on John von Neumann’s pioneering MANIAC computer in the early 1950s.Footnote 2 Other stellar evolutionists joined Schwarzschild in using early digital computers for their computations (for instance, Rudolf Kippenhahn in Germany, who calculated stellar structure with “new, still home-made computers”; see the interview conducted by Owen Gingerich 1978). In the following years, simulations were used for the first time to understand the dynamical evolution of galaxy clusters—bound groups of hundreds to thousands of galaxies (Aarseth 1963)—and galactic encounters (Toomre & Toomre 1972), using an approach in which the cosmic phenomena are represented by test particles moving under the inverse-square force of gravity (also called “N-body calculations”, with N < 100 in the former and N = 121 in the latter study). While these simulations and many others motivated by these two pioneering studies accounted only for the influence of gravitational forces on point masses, hydrodynamic simulations describing interstellar processes and stellar evolution in the framework of computational fluid dynamics led to further breakthroughs in the late 1960s and 1970s (e.g. Larson 1969: star formation; Arnett 1967: core collapse supernova; Woodward 1976: interaction of an interstellar cloud and a shock wave).
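
To give a flavour of what such an N-body calculation involves, the following minimal sketch (in Python, with invented particle numbers and code units; it is not the code used by Aarseth or the Toomres) advances a small set of gravitating point masses with a direct-summation leapfrog step:

```python
import numpy as np

G = 1.0  # gravitational constant in illustrative code units

def accelerations(pos, mass, softening=0.05):
    """Direct-summation inverse-square accelerations for N bodies."""
    acc = np.zeros_like(pos)
    for i in range(len(pos)):
        diff = pos - pos[i]                                   # vectors from body i to all others
        dist3 = (np.sum(diff**2, axis=1) + softening**2) ** 1.5
        dist3[i] = np.inf                                     # exclude self-interaction
        acc[i] = G * np.sum((mass[:, None] * diff) / dist3[:, None], axis=0)
    return acc

def leapfrog(pos, vel, mass, dt, n_steps):
    """Advance the system with a kick-drift-kick leapfrog integrator."""
    acc = accelerations(pos, mass)
    for _ in range(n_steps):
        vel += 0.5 * dt * acc
        pos += dt * vel
        acc = accelerations(pos, mass)
        vel += 0.5 * dt * acc
    return pos, vel

# Toy run: N = 50 random particles, far fewer than in modern simulations
rng = np.random.default_rng(0)
N = 50
pos = rng.normal(size=(N, 3))
vel = np.zeros((N, 3))
mass = np.full(N, 1.0 / N)
pos, vel = leapfrog(pos, vel, mass, dt=0.01, n_steps=100)
```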

Since the 1980s, personal computers and workstations have made it possible for scientists to control their own computing environments; accordingly, the role of computational modelling has become increasingly central. This development occurred concomitantly with the opening of supercomputing facilities (e.g. the NSF Supercomputing Centers, established in 1985, offered the entire US research community access to state-of-the-art supercomputers). Today many astrophysics simulations are calculated on large computing grids and supercomputers, but “small”—i.e. computationally inexpensive—numerical models are still widely run on personal computers and workstations.

Numerical models are one backbone of astrophysical research. Astronomical data are the other, and electromagnetic radiation is the major astronomical information carrier—traditionally in the optical wavelength regime but today in all the other parts of the spectrum as well. The use of other information carriers reaching Earth, such as cosmic rays, neutrinos, meteorites and gravitational waves, only started within the last hundred years and still plays a relatively minor role in comparison to electromagnetic radiation. The quality of an astronomical observation based on the collection of electromagnetic photons is judged along several dimensions. The four most important are the data’s time resolution (how much time must separate two events for them to still be distinguishable in the data), its spatial/angular resolution (how far apart two point sources must be to still be distinguishable in the data), its spectral resolution (how well different spectral features in an electromagnetic spectrum can be resolved in the data), and its sensitivity (how dim a source can be and still be detectable). Improvements in data collection can be made by, for example, building larger telescopes or combining them (higher spatial/angular resolution); developing more sensitive detectors, enlarging collection areas, and securing better observing conditions (better sensitivity); and developing faster electronics (better time resolution).
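
To make one of these dimensions concrete: the angular resolution of a single telescope is limited by diffraction. A standard back-of-the-envelope estimate (the Rayleigh criterion, a general optics relation rather than a property of any specific instrument mentioned here) is

$$\theta_{\min} \approx 1.22\,\frac{\lambda}{D},$$

where $\lambda$ is the observing wavelength and $D$ the telescope diameter (or the baseline of a telescope combination), which is why larger apertures and interferometric combinations yield sharper images.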

The data quality, in turn, has a direct impact on the nature of the model used in the interpretation of the data since the model will have to reproduce the degree of observationally resolved detail in a direct model-target comparison. The better the data, the more complex and less idealized the models are permitted to be in order to allow for the most fruitful interpretation of the data.Footnote 3 Much of the observing technology currently used in astrophysics was only developed in the second half of the last century. The reason is that the Earth’s atmosphere only has windows in the optical and radio regime for the detection of electromagnetic radiation. Radiation in other parts of the spectrum is either partly shielded (e.g. UV and near-IR) or completely blocked (e.g. X‑ray) by the Earth’s atmosphere. Accordingly, observations in these regimes need to be done from high altitude or from outside the atmosphere. For instance, infrared astronomy only became an established branch of astronomy in the 1960s (see e.g. Rieke 2009 on additional possible reasons for this late start). In 1983, the first infrared satellite, the Infrared Astronomical Satellite (IRAS), which was a joint project of NASA, the Dutch NIVR, and the British SERC, was launched. ESA’s Infrared Space Observatory (ISO) followed in 1995. In 2003, NASA launched the Spitzer Space Telescope, which was followed by ESA’s Herschel Space Observatory in 2009. In the near future, NASA’s James Webb Space Telescope, a member of the next generation of infrared space telescopes, will be launched. With the growing quality of astronomical data and the increasing coverage of the electromagnetic spectrum, theoretical astrophysics and observational astronomy have grown together: today, most studies combine the presentation of new data and an interpretation with sophisticated numerical models. Purely theoretical studies without a central reference to observations have become an exception.

Many of the sophisticated numerical models in astronomy have a long history, being refined by generations of researchers while their numerical core structure remains basically unchanged. This makes them a promising target for a historical study of changing modelling and simulation practices under the assumption that changes occurring in the course of decades with respect to available empirical data and computing technology will leave a characteristic imprint on how this very model is used. In order to make such a study as fruitful as possible, it may be beneficial to draw on existing conceptual frameworks from the philosophy of science—the philosophy of scientific models and simulations, in particular—to guide understanding. Whenever I refer to philosophical concepts in this paper, it is this interaction between the history and the philosophy of science that I wish to put forward.Footnote 4

While the evolution of models in contemporary astrophysics has so far rarely been studied in the history (and philosophy) of science, there are at least two studies at the intersection of history and philosophy of science that focus on particular astrophysical models. Gerd Graßhoff (1998) explores the model building endeavor that took place in order to explain the nature of the astrophysical object SS433 between 1978 and 1981. He describes how models initially focus only on the main relevant features in the data before being refined and probed by new incoming data. In his study, the collective model building is regarded as an international effort by a number of research groups using many different models and modelling approaches. The common denominators are their shared attempt to understand SS433 and, more fundamentally, a common understanding of model construction. Graßhoff then introduces a formalization in order to structure the diverse collective modelling efforts as evolving epistemic systems in terms of model revision, suspension and expansion in the face of new data.

Daniela Bailer-Jones (2000) explores the nature, the power and the function of models by studying the modelling of extended extragalactic radio sources (EERS). She concentrates on the explanatory role of models: the proposal of a causal mechanism for how the modelled phenomenon comes about. Like Graßhoff, she summarizes the international efforts to understand one particular phenomenon after its discovery (which happened in 1946 for EERS). In particular, she describes the build-up and development of “sub-models” that cover various aspects of the modelled phenomenon by proposing a causal mechanism for them. These sub-models are finally integrated into an overall model. Bailer-Jones stresses the important role of visual displays of these sub-models, which rely on conventions agreed upon in the community of scientists and help illustrate that all sub-models represent aspects of one empirical phenomenon. One overarching message of her paper is that “[m]odelling is work in progress” (Bailer-Jones 2000: 72), i.e. models are continuously improved and changed. She illustrates this thesis with models that were developed by several different groups until the mid-1970s. However, unlike Graßhoff, she does not follow the development of one particular model in detail. Furthermore, neither study mentions any numerical implementation of the models discussed, or asks whether such an implementation, if it exists, has its own influence on the model building process, even though a numerical implementation is often unavoidable in astrophysical model building due to the high complexity of astrophysical phenomena.

Simon Portegies Zwart, astronomer at Leiden Observatory, recently highlighted in an article in Science the prevalence of astrophysical software that is decades old but still being used and expanded. He describes astronomical source code as “tiny by industrial standards, and its structure is characterized by developments during the “software crisis” of 1965 to 1985, when software was written as a long list of instructions without formal structure.” The software crisis was linked to a rapid increase in computing power at the time that was not yet met by a corresponding change in programming style. This problem concerned not only scientific software development but software engineering in all the various fields of software application. The term was coined at the first NATO Software Engineering Conference in 1968 in Garmisch, which was organized in order to discuss possible international action in the field of computer science (for a report see Naur & Randell 1969). In his article, Portegies Zwart goes on to write,

astronomical software development is organized in indigent “families” of researchers […]. The simple structure of astronomical software packages has enabled them to survive, some even since the 1970s. Although this dinosource code is often written in an ancient dialect, the underlying physics has hardly changed (Portegies Zwart 2018: 979).

Since the long tradition of existing astrophysical software codes means that their development happened in parallel with drastic changes in observational and computing technology, a number of interesting questions arise: How are these changes reflected in the way these models are built and used? Provided that the core structure of the numerical models often stays largely the same in the model-building process while only certain processes are added or refined, does the interpretation of the model structure exhibit changes over time? More specifically: Are there variations in the representational relationship between target and model? What happens to the standards of model evaluation in view of new data? I will explore these questions by studying the long-term development of one particular astrophysical numerical model, namely the Paris-Durham shock code, during the first 20 years (1985–2004) after it was first published. For the present study, all available publications in that time period were gathered and evaluated in order to describe the main lines of model development. Looking at only one particular model (in contrast to Graßhoff 1998, for example, who studies all available models for one particular phenomenon in a relatively short time span) allows a reconstruction of the history of a numerical code as the product of a “family”, in Portegies Zwart’s sense, of model builders and, most notably, a reconstruction of the changes in the interpretation of this particular model over a period of two decades.

In the following, I will first provide an introduction to shock modelling by focusing on interstellar shocks and then on one new type of shock in particular, which was not initially known in terrestrial contexts but was “discovered” through numerical modelling. It is this particular type of shock that will be the subject of the case study. I will then introduce a conceptual framework with which to analyze the changes a scientific model can undergo over time. Thus equipped, I will summarize the first 20 years of development of the Paris-Durham shock code by highlighting milestone publications that show changes in attitude towards the model. I will sum up by discussing the changing interpretations of the modelers before finally drawing my conclusions.

Interstellar Shocks

Shock waves created by strong explosions have been studied on Earth for a long time. In a medium like air there is a characteristic velocity at which sound waves propagate, the speed of sound, which depends on the medium’s temperature, density, and composition. If some event creates a disturbance of the medium travelling faster than the local speed of sound, the medium ahead of the supersonic compressive motion cannot be “warned” dynamically by sound waves preceding the disturbance. Hence, the medium ahead cannot dynamically adapt to the disturbance before it appears and is compressed, heated, and accelerated very suddenly within a thin shock front only when the shock arrives. A well-known example is the sonic boom, the audible consequence of the shock created by an airplane breaking the sound barrier. A very important property of a shock is its ability to create irreversible changes in the physical or chemical state of the medium through which it passes. In particular, kinetic energy is transformed very efficiently into heat, leading to an increase in entropy in the shock front. Hence, in order to understand the energy balance of a particular medium, including the impact of explosions as a practical example, the influence of shocks needs to be understood.
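
As a rough quantitative anchor (standard ideal-gas relations, not specific to any of the models discussed here), the adiabatic sound speed and the Mach number that decide whether a compressive disturbance steepens into a shock are

$$c_s = \sqrt{\frac{\gamma k_B T}{\mu m_H}}, \qquad \mathcal{M} = \frac{v}{c_s},$$

where $\gamma$ is the adiabatic index, $k_B$ the Boltzmann constant, $T$ the temperature, $\mu m_H$ the mean particle mass, and $v$ the speed of the disturbance; a shock forms when $\mathcal{M} > 1$.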

The numerical simulation of detonations and explosions became important after World War II in the United States in the context of the Manhattan Project, long after shocks had first been studied experimentally in the late nineteenth century (for a historical overview see Krehl 2007). However, the existence of a shock front as a very narrow transition region that practically appears as a discontinuity poses a major challenge for numerical solvers. The transition is governed by conservation principles, but the extreme gradients across the shock front create numerical instabilities, i.e. unphysical oscillations of the physical parameters. The earliest method for solving this problem was offered by von Neumann and Richtmyer (1950). They introduced an ad hoc term into their equations that damps the numerical instabilities. This approach was a key step in enabling the numerical modelling of shocks. Their “artificial viscosity term” has since become a prominent example of how numerical models may deviate from a straightforward numerical implementation of the underlying theory (e.g. Winsberg 2010).
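
Schematically, and in a commonly quoted one-dimensional form (not the exact notation of von Neumann and Richtmyer 1950), the artificial viscous pressure $q$ added to the ordinary gas pressure is

$$q = \ell^2 \rho \left(\frac{\partial u}{\partial x}\right)^2 \quad \text{if } \frac{\partial u}{\partial x} < 0 \ \text{(compression)}, \qquad q = 0 \quad \text{otherwise},$$

where $\rho$ is the density, $u$ the velocity, and $\ell$ a length of order a few grid cells; the term spreads the shock over a few numerical zones and damps the spurious oscillations without affecting smooth parts of the flow.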

Shocks are not found only on Earth; they are ubiquitous throughout the universe, and in galaxies in particular. For instance, the accretion of interstellar matter during the growth of young stars leads to the supersonic ejection of gas back into the interstellar medium, resulting in powerful shocks in the young star’s environment. When the most massive stars end their lives in supernova explosions, they cause violent shocks inside and outside of the star. In order to understand the dynamics and energy balance of the interstellar medium—the gas and dust between the stars in a galaxy—a theoretical understanding of shock waves is therefore essential. When astrophysicists started observing the interstellar medium, a field that began blossoming in the 1950s and 1960s with the advent of radio astronomy, they could build upon rich preliminary work on shocks from terrestrial physics (see e.g. the monographs Zel’dovich & Raizer 2002 and Tidman & Krall 1971, and the review by Chu & Gross 1969). It turned out that the universe is as rich in shock phenomena as Earth—indeed even richer, as eventually became apparent.

A New Type of Shock Wave—the Motivation for a New Shock Code

A special feature of the interstellar medium is the ubiquity of magnetic fields, the theoretical description of which adds substantial complication to the hydrodynamics and shock physics. If the shock waves are very fast relative to the speed of sound and their kinetic energy density greatly exceeds the medium’s magnetic energy density, the influence of magnetic fields can be neglected in the modelling of shocks; however, that is not necessarily the case in astrophysical contexts. The early works on interstellar shocks accounted for magnetic effects by simply adding magnetic terms, such as magnetic energy and pressure, to the initial hydrodynamic description (e.g. Wentzel 1966; Field et al. 1968). This modified the resulting structure of the shock waves because the magnetic pressure resists the compression of the shocked gas. However, this simple way of adding the influence of the magnetic field within a hydrodynamical framework is only justified if certain conditions are met, in particular, if the magnetic field follows the overall dynamics of the gas (this assumption is called the frozen field approximation). More precisely, this is the case if the magnetic field, which is coupled to the movement of the charged particles (electrons and ions), is also tightly coupled to the neutral fluid. If, however, the density of ions relative to neutral particles is very low (a low fractional ionization), the charged particles can slip through the neutral particles, and any magnetic field disturbance can thereby also move quickly through the neutral gas. In such a case, the dynamics of the charged and neutral particles need to be treated separately. Such a situation is, however, rarely relevant in terrestrial contexts since the gas density is typically much higher.
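
The breakdown of the frozen field approximation can be stated compactly (a textbook-level sketch, not the formulation of any particular paper discussed here): when the Lorentz force acting on the charged fluid is balanced by ion-neutral friction rather than being communicated instantaneously to the neutrals, the two fluids drift relative to each other at roughly

$$\mathbf{v}_i - \mathbf{v}_n \simeq \frac{(\nabla \times \mathbf{B}) \times \mathbf{B}}{4\pi\,\gamma_{in}\,\rho_i\,\rho_n},$$

where $\rho_i$ and $\rho_n$ are the ion and neutral mass densities and $\gamma_{in}$ the ion-neutral friction coefficient. The lower the fractional ionization (the smaller $\rho_i$), the larger this drift becomes and the worse the single-fluid, frozen-field description performs.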

A thorough understanding of the conditions under which the frozen field approximation breaks down was not achieved until 1969. Dermott Joseph Mullan, a young Irish astrophysicist who studied under the supervision of Donat G. Wentzel at the University of Maryland, showed in his PhD thesis “The Structure of Hydromagnetic Shocks in Regions of Very Low Ionization” (Mullan 1969) that the frozen field approximation, which had been a standard formulation until then, was not valid for slow shocks and low fractional ionization. Such conditions obtain in regions of atomic hydrogen and in molecular clouds—exactly those regions in which shocks are created, for instance, in the context of star formation. Furthermore, Mullan could show using numerical computations that under such conditions the shocks have a very different appearance: the shock front is not a discontinuity but is significantly broadened. The reason for the change in the shock structure is that the momentum transfer between the ions, which are coupled to the magnetic field, and the neutral particles is very inefficient, and the magnetic field can send a pressure wave ahead of the neutral gas discontinuity, thus “warning” the gas ahead of the arrival of a disturbance.

Due to the inefficient coupling, both fluids need individual theoretical descriptions, and a completely new type of shock emerges! Yet Mullan’s work, both his PhD thesis and its summary in a research article published in MNRAS in 1971, was widely ignored by the astrophysicists working on shocks. In the ten years that followed, only two papers (Aannestad & Field 1973; Hollenbach & McKee 1979) briefly referred to his work, and merely to justify the validity of the frozen field approximation for their particular cases of interest.

Bruce Draine, Professor at Princeton University, took up Mullan’s calculations again in 1980 and confirmed the earlier results using a new numerical shock model. In doing so, Draine explicitly introduced a new type of shock wave. These shocks constitute a particular class because they have a significantly extended shock front (see Fig. 1): they exhibit a magnetic precursor in which the fluid parameters (temperature, density, velocity) change smoothly. Draine called this class “C-type shocks” (“C” for “continuous”) in order to distinguish them from the known “J-type shocks” (“J” for “jump”). Unlike J‑type shocks, these C‑type shocks require a full two-fluid magnetohydrodynamic description that treats the ionized and neutral particles separately. Draine summarizes:

In an important but infrequently cited paper, Mullan (1971) demonstrated that the frozen field assumption breaks down for interstellar shock waves when the fractional ionization is low, as is often the case in HI regions and molecular clouds. Mullan showed that the structure of the shock wave could be significantly changed if the usual frozen field assumption is relaxed and replaced by a hydrodynamic description allowing the neutrals and the ion-electron fluid to have different flow velocities and temperatures (Draine 1980: 1,021)

Fig. 1 Schematic illustration of the transition (panels a to e) from a J- to a C-type shock in partially ionized gas with increasing transverse magnetic field B0 (from Draine 1980)

On the one hand, the theoretical description of these C‑type shocks is easier because the numerical difficulties in calculating the practically discontinuous shock transition disappear. Consequently, the introduction of an artificial viscosity to dampen numerical instabilities is not needed. On the other hand, the theoretical description is more complicated because many more interactions among the different fluids, such as exchange of particles, mass, momentum and energy, must be accounted for. Draine (1980) computed six exemplary models for different values of the magnetic field strength, in both mostly atomic and mostly molecular hydrogen gas. Draine stressed that he did not intend to explore the astrophysical implications but that the models “are intended only to provide illustrations of the general character of the hydrodynamic solutions, and a demonstration of the importance of magnetic field-driven ion-neutral slip under conditions which are thought to be representative of some of the components of the interstellar medium” (Draine 1980: 1,034). He points out that the new type of shock will be an interesting place to study the gas chemistry because charged and neutral particles stream relative to each other, leading to high-speed collisions that may overcome the energy barriers of certain endothermic ion-neutral chemical reactions, such as C+ + H2 ⇒ CH+ + H.
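
The reason such barriers can be overcome is often summarized in the later shock-chemistry literature (a standard expression, not necessarily the exact form used by Draine) by an effective temperature for ion-neutral collisions that folds in the drift speed $v_d = |\mathbf{v}_i - \mathbf{v}_n|$:

$$T_{\mathrm{eff}} \simeq \frac{m_i T_n + m_n T_i}{m_i + m_n} + \frac{\mu\, v_d^2}{3 k_B}, \qquad \mu = \frac{m_i m_n}{m_i + m_n},$$

so that drift velocities of order 10 km s$^{-1}$ can raise $T_{\mathrm{eff}}$ by several thousand kelvin and thereby open endothermic channels such as the CH$^+$-forming reaction above.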

In a follow-up paper, Bruce Draine et al. (1983) included a simple chemical network in their calculations for the first time, focusing on the chemistry of oxygen-bearing species. Calculating the gas chemistry is a crucial first step towards the comparison of modelling results with actual observations of electromagnetic radiation because each spectral line is created by a transition between two different energy states of a specific atom or molecule. Knowing the abundance of chemical species at each point in the shocked gas, together with the energetic state of the gas species (the excitation properties of the gas), is necessary to predict the radiative emission from the gas,Footnote 5 thus allowing detailed comparison with observations.
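
As a purely illustrative sketch of what “including a chemical network” means numerically (a toy two-reaction system with made-up rate coefficients, far simpler than the oxygen network of Draine et al. 1983), the species abundances obey coupled rate equations that are integrated alongside the dynamical variables:

```python
import numpy as np
from scipy.integrate import solve_ivp

def toy_network(t, n, T):
    """Toy rate equations for n = [n_O, n_OH, n_H2O] (cm^-3).

    Reactions (illustrative only, with hypothetical rate coefficients):
        O  + H2 -> OH  + H   (rate k1)
        OH + H2 -> H2O + H   (rate k2)
    H2 is treated as an abundant, constant reservoir.
    """
    n_H2 = 1.0e4                              # assumed fixed H2 density (cm^-3)
    k1 = 3.0e-14 * np.exp(-3000.0 / T)        # hypothetical Arrhenius-type rates
    k2 = 1.0e-13 * np.exp(-1700.0 / T)
    n_O, n_OH, n_H2O = n
    dn_O = -k1 * n_O * n_H2
    dn_OH = k1 * n_O * n_H2 - k2 * n_OH * n_H2
    dn_H2O = k2 * n_OH * n_H2
    return [dn_O, dn_OH, dn_H2O]

# Integrate the abundances through gas held at a shock-heated temperature of 1000 K
sol = solve_ivp(toy_network, t_span=(0.0, 1.0e10), y0=[1.0, 0.0, 0.0],
                args=(1000.0,), method="LSODA")
print(sol.y[:, -1])   # final abundances of O, OH, H2O
```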

Calculating a gridFootnote 6 of shock models with a range of pre-shock densities (the number density of the undisturbed gas ahead of the shock), shock velocities, magnetic field strengths and fractional ionizations, Draine and his colleagues predicted spectral line intensities for the main cooling lines of molecular hydrogen and CO. With these results, their publication solved an observational puzzle: in the Orion molecular cloud—a region of very active star formation—molecular gas had been observed to move at velocities spanning a range of about 100 km s−1. It seemed natural to assume that this gas had been accelerated by shocks created by new-born stars. However, hydrodynamic shock models ignoring magnetic fields predicted that shock waves faster than 25 km s−1 would be so vigorous that the molecular gas would be dissociated and should therefore not be observable at such high speeds. Thus, shocked molecular gas at relative velocities on the order of 100 km s−1 could not be explained with J‑type shocks. Draine et al. (1983) could show that C‑type shocks provide a solution. The broadening of the shock front makes these shocks less hot and less dissociative, and molecules can easily survive. Furthermore, their grid calculations indicated that C‑type shocks would dominate over J‑type shocks in the typical environments of star formation: “Our results demonstrate that in dense molecular clouds C‑type shock waves will be the norm, and J‑type shocks the exception” (Draine et al. 1983: 505). In a companion paper (Draine & Roberge 1982), they interpreted specific observations of the Orion molecular cloud in terms of one of their C‑type shock models. Through these papers, it became apparent that magnetohydrodynamic C‑type shocks should be investigated in greater detail.

The need for further exploration of the chemical properties of C‑type shocks motivated the development of a second numerical code, led by the French astrophysicist Guillaume Pineau des Forêts (Observatoire de Paris) and his British colleague David R. Flower (University of Durham). In contrast to Draine’s code, their code has been under constant development and in frequent use since its emergence. Over decades, Pineau des Forêts and Flower led revisions of the code, which became known as the “Paris-Durham shock code”. In this respect, their code represents an ideal example on which to study the historical process of long-term and collective model building. In the following sections, the first 20 years of this development (1985–2004) are presented and analyzed. In order to avoid redundancies, only articles published in peer-reviewed journals will be noted. A particular focus is placed on the relationship between the development and use of the code, on the one hand, and the referenced, related astronomical observations, on the other. Five distinct but not sharply delimited time periods can be identified in the development of the Paris-Durham shock code. They are marked by benchmark publications that opened new levels of code development and by novel astronomical data and observations.

Conceptual Framework

The historical character of scientific models—the changes a model experiences in its scientific contribution, function, and setup as it matures—has been noted by several authors. Numerical models are “work in progress” (Bailer-Jones 2000: 72). Bailer-Jones understands scientific models in light of their respective functions. Accordingly, the evolution of a model is reflected in the question of whether its function and/or the way it fulfils this function has evolved over time. She claims that modelling begins with the need for an explanatory account when certain features of a phenomenon or a process lack a known causal mechanism. The proposal of a particular model opens new explanatory gaps that ask to be filled. In this fashion, according to Bailer-Jones, more and more mechanisms are added, and eventually unity among the sub-models and completeness of the resulting overall model are sought. Graßhoff (1998), in contrast, focuses on the interplay between data gathering, model modification, and the exploration of theoretical implications. He sees the beginning of model development in a situation where data are scarce. An initial explanatory model concentrates on the main relevant features of the data before it is expanded with the “epistemic goal of reaching an empirically adequate model that accounts for all the data features.” Such an expansion could, for instance, consist in the introduction of a particular geometry. If no new data are added, model development can reach a standstill; as soon as additional data become available, however, competing explanatory models can be discriminated, and model development may continue.

In 1997, Michael L. Norman, an astronomer at the Laboratory for Computational Astrophysics at the University of Illinois, described astrophysical modelling practice from an astrophysicist’s point of view in similar terms. In his review of the role computer simulations play in astrophysical research, he gave names to the different phases of model maturation: “physical [i]nsight”, “[o]bservational contact”, “[p]ostdictions”, and “[p]redictions”. First, crude simulations provide insight into the nature of an astrophysical phenomenon and shape the way we think about it. According to Norman, this early phase does not depend much on the availability of observations: “More often than not, early models miss essential physics inherent in a phenomenon, or are of insufficient resolution to simulate it accurately. Such models require substantial maturation before observations are engaged in any meaningful way” (Norman 1997: 12). As soon as all relevant physics is included, however, the interplay of model and observations initiates rapid progress in computational astrophysics, according to Norman.

There appears to be agreement on the fundamental difference between initial model building in the face of scarce data and late model building: the model’s function, its scientific impact, and the way it is used evolve. For our analysis, it may be useful to refine the conceptual tools used to describe these changes. In Simulation and Similarity (Weisberg 2013), Michael Weisberg introduces a general and potentially rich conceptual framework for understanding scientific models. I choose this framework because, on the one hand, it is general enough not to impose a very specific and narrow understanding of scientific models and, on the other hand, it is more finely developed in its conceptual detail than the corresponding frameworks used in the papers of Graßhoff and Bailer-Jones.

Weisberg describes models as structure plus interpretation.Footnote 7 He distinguishes three different types of model structure: concrete (as in scale models), mathematical (as in the Lotka-Volterra model), and computational (as in any algorithm-based model). Models are specified by descriptions, such as pictures, equations, or source code. At least as important as the model’s structure are the modelers’ interpretations. Weisberg calls them “construals” (Weisberg 2013: 39), and he distinguishes four kinds. “Assignments” are explicit specifications that map the target system (or parts of it) onto the model. The modelers’ intended “scope” specifies “decisions about which aspects of their model are to be taken seriously” (Weisberg 2013: 40). The third and fourth are “fidelity criteria”: they define the degree of similarity expected between the model and the world for the model to count as an adequate representation, and they are based on output error tolerances (“dynamic fidelity”) and on the similarity between the structure of the model and that of the target system (“representational fidelity”).

According to Weisberg, the construction, analysis and evaluation of theoretical models is governed by “representational ideals”: “They regulate which factors are to be included in models, set up the standards theorists use to evaluate their models, and guide the direction of theoretical inquiry” (Weisberg 2013: 105). “Completeness”, for example, is such an ideal, and its goal is a complete representation of a phenomenon: “Each property of the target phenomenon must be included in the model”, and “the best model is one that represents every aspect of the target system […] with an arbitrarily high degree of precision and accuracy” (Weisberg 2013: 106). “Simplicity” is another, and its aim is to “include as little as possible, while still being consistent with the fidelity rules” (Weisberg 2013: 107). This ideal is either pedagogical or employed when general ideas are to be tested. There are other ideals, such as including only the core causal factors responsible for a certain phenomenon or maximizing the precision and accuracy of the model’s output. The idea of changing representational ideals may resonate with Bailer-Jones’s analysis of the model’s function: depending on the respective function the model is to fulfil, different ideals may govern model evolution. It also parallels Graßhoff’s (1998) and Norman’s (1997) descriptions, as the ideals also determine the influence and importance of observations in the model’s construction.

In the following case study, I will show that the maturation of the Paris-Durham model can indeed be described in terms of changes in the model’s interpretation in the face of the changing availability of observational data, and in terms of the applied representational ideals in particular. This stresses the important role the user’s interpretation plays in the understanding of scientific modelling and puts forward a dynamic picture of the epistemic status of scientific models and simulations between theory and observations.

1985–1994 Phase 1 of the Paris-Durham Shock Code: The Code as Theoretical Exploration Tool

The first period of development constitutes ten years of theoretical studies with only weak links to astronomical observations. In fact, observations are only mentioned as motivation for the exploration of certain physical/chemical processes and mechanisms in the context of interstellar shocks. In this context, the specific spatiotemporal origins of the observations in question play a minor role; they typically illustrate general correlations or trends that are to be explained.

The first version of the Paris-Durham shock code for magnetohydrodynamic C‑type shocks was presented by David Flower, Guillaume Pineau des Forêts and Tom W. Hartquist (Flower et al. 1985), the latter being a senior researcher from the University of Maryland who had experience with those aspects of the interstellar chemistry expected to be relevant in C‑type shock environments. “Theoretical studies of interstellar molecular shocks—I. General formulation and effects of the ion-molecule chemistry” is the title of this initial paper, and its aim was to answer the question Bruce Draine had raised concerning the possibility of new types of chemical reactions if ions and neutrals move relative to each other in a shock: “In the present paper, we investigate the importance of the ion-molecule chemistry. We find that the ion-neutral streaming can, indeed, drive endothermic reactions” (Flower et al. 1985: 776). In their paper, the first step toward this result is to reproduce qualitatively the numerical results of Draine (1980) and Draine et al. (1983) in order to test the functioning of their program.

The first paper of the series is particularly revealing because it describes the very basic theoretical structure implemented in the numerical code, which has remained the heart of the numerical model throughout its development until today. Its descriptions can be used to obtain a first conceptual assessment of the shock model’s nature. The Paris-Durham shock model is evidently a computational model. Its assignment is explained in this first paper, where features of the target phenomenon in question, and of its underlying physics and chemistry in particular, are mapped onto subparts of the numerical code (the exact denotations and implementations in the source code are not explicitly described, though). The model assumes plane-parallel symmetry, although this symmetry is not expected to be found for any shock in the real world. The limited geometry is relevant for the intended scope of the model, and it also impacts the fidelity criteria initially used: the focus is on exploring the causal mechanisms creating certain general features of the MHD shocks (“representational fidelity”, as Weisberg would call it). In this first paper and the subsequent ones, the shock model is seen as a means of exploring the general properties of shock waves rather than being understood as representing one specific spatiotemporally located cosmic target phenomenon. This is particularly visible in the fact that these papers do not calculate many different model instantiations with fixed parameters such as velocity and pre-shock density (for the difference between a model instantiation and a general model see Orzack and Sober 1993).

The series of theoretical studies continued with three papers (II, III, IV) in the following year (Pineau des Forêts et al. 1986a, 1986b, 1986c). Each paper studies another theoretical extension of the code in terms of its capacity to represent the interstellar chemistry. The stated goal of the papers is the exploration of the effects that certain physical or chemical processes or assumptions have on the resulting shock and on the abundances of chemical species in it. In each paper, the chemical network is enlarged, and the extensions are almost invariably motivated by the existence of corresponding observations. This raises the question: What observations were available at that time?

Radiation stemming from molecules is found in different parts of the electromagnetic spectrum, at wavelengths as short as the ultraviolet. Most molecular radiation originating from interstellar gas in which slow shocks occur is, however, emitted in the infrared and microwave domains of the spectrum. These wavelengths are hard to observe from Earth because the atmosphere is increasingly opaque to this radiation. Since the 1960s, observations in the infrared regime were attempted with ground-based telescopes. However, atmospheric absorption and infrared emission generated by the telescopes themselves degraded the quality of the available observations significantly.Footnote 8 A clear improvement was hoped for by putting telescopes on board airplanes to enable observations from high atmospheric altitudes. The Kuiper Airborne Observatory (KAO), operated by NASA between 1974 and 1995, consisted of a 91.5 cm telescope in a modified Lockheed C‑141A Starlifter, a jet transport aircraft with a normal operational altitude of 12.5 km. During its first years of operation, however, this instrument could not reach its envisioned angular resolution due to technical problems (e.g. Elliot et al. 1989). The first survey satellite operating in the infrared regime was the Infrared Astronomical Satellite (IRAS), launched in 1983. The first articles presenting its data were published in March 1984 in the Astrophysical Journal Letters. However, it took some time until the new data had any impact on theoretical studies. Thus, the first shock papers refer to studies presenting rather low-quality observations obtained with the KAO, with ground-based telescopes, or—where UV radiation of molecular hydrogen was used—with the Copernicus (OAO-3) satellite. The latter instrument was a UV and X‑ray telescope, launched in 1972 and operated by NASA until 1981.

The first direct connection to existing observations in the literature (in this case obtained with ground-based telescopes such as those at the McDonald Observatory or the Mount Hopkins 1.5 m telescopeFootnote 9) is established in Paper III. It tests whether the observed chemical abundances (actually “column densities”: the number of particles per unit area along the line of sight) of the molecules CH and CH+ toward a number of star formation sites can be qualitatively reproduced by C‑type shocks. Species formation like this relies on the aforementioned relative motion between ions and neutrals and thus serves as a characteristic tracer of C‑type shocks.

The series “Theoretical studies of interstellar molecular shocks” was continued by Pineau des Forêts and David Flower with varying co-authors until Paper X, which was published in 1989 (Pineau des Forêts et al. 1989). Like the first four publications, all these papers introduce new theoretical extensions of the code and present their respective consequences. The papers that followed this series (Flower et al. 1990; Pineau des Forêts et al. 1993; Flower & Pineau des Forêts 1994) continue the general philosophy of the first four papers: different physical and chemical mechanisms are explored (e.g. Flower et al. 1990 provides “the first quantitative study of this mechanism [carbon enrichment through destruction of interstellar grains]”).

During these first ten years of model development, the limited quality of the data—mostly in terms of the available angular resolution and signal-to-noise ratio—and the limited number of observable shock tracers focused attention on general chemical peculiarities. These unexplained features, in turn, called for “how-possibly explanations”: Is it possible that a shock wave is responsible for a certain kind of chemistry, for the high abundance of a certain molecule, for such a level of excitation? In order to answer questions like these, no particular model instantiation (a model with fixed environmental and shock parameters) is chosen as the one representation; instead, it is explored whether the general model is in principle able to generate the phenomena in question: “A good answer doesn’t depend on explaining how any specific system actually works, but rather on how they might work” (Weisberg 2013: 118). The representational ideals that seem influential here foster the study of specific physical or chemical causal mechanisms through comparisons of models with and without the newly implemented effects.

1995–1997 Phase 2 of the Paris-Durham Shock Code: The Route Toward Observations

In this second phase of shock model development, the shock model is applied in the interpretation of observations of specific interstellar regions. The increased status of observations for the shock studies is illustrated by the fact that now, for the first time, information derived from astronomical observations is displayed in a figure together with simulated data.

In 1995, ten years after the initial paper was published, a specific astronomical object appeared for the first time in the title of a paper related to the Paris-Durham shock code: “Hot shocked ammonia towards Sgr B2” (Flower et al. 1995). Sgr B2 is the name of a giant molecular cloud in the galactic center region, which shows strong star formation activity. Its gas appears very hot and moves at high velocities; shocks, therefore, seem to play an important role in that region. The paper aims to reproduce these conditions with C‑type shock models. The observations it takes as a reference are the ammonia lines published in 1993 and 1994 by researchers from the Max Planck Institute for Radio Astronomy working alongside the young astronomer Susanne Hüttemeister, which were obtained with the Effelsberg 100 m telescope.Footnote 10

The shock paper differs from all previous papers in two respects. Firstly, it is the first time that (strongly processed) observational data are shown in a figure and directly compared to model calculations. The figure takes the form of an excitation diagram. For each transition (each spectral line), it contains information on the column density and the respective excitation, information that can be calculated from the observed line intensities. This kind of diagram would become one of the most commonly used visual tools for direct model-target comparisons in subsequent years (see Fig. 2). Secondly, the paper is notable because its main content is not an update of the code but the interpretation of observations. It is clearly for the sake of this interpretation that the code is slightly modified. However, the modification itself is discussed only relatively briefly. The main result of the paper is that the shock model is indeed capable of explaining much of the spectral data. It is, however, worth noting that the paper does not attempt to say more about the specific nature of the shock and its environment, such as the gas density, the velocity or the magnetic field. In particular, it does not try to fit one particular model instantiation (i.e. a model with set parameters like the velocity, magnetic field or pre-shock density) to the observations.

Fig. 2 In 1995, observational and modelled data are compared in an excitation diagram for the first time (Flower et al. 1995)
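
The logic of such an excitation diagram can be stated compactly (standard molecular spectroscopy, not specific to this paper): if the level populations are close to thermal, the column density $N_u$ of molecules in an upper level $u$, divided by the level’s statistical weight $g_u$, satisfies

$$\ln\!\left(\frac{N_u}{g_u}\right) = \ln\!\left(\frac{N_{\mathrm{tot}}}{Z(T_{\mathrm{ex}})}\right) - \frac{E_u}{k_B T_{\mathrm{ex}}},$$

where $E_u$ is the level energy, $Z$ the partition function and $T_{\mathrm{ex}}$ the excitation temperature. Plotting $\ln(N_u/g_u)$ against $E_u$ therefore turns both observed line intensities and model output into sets of points whose slope encodes the excitation conditions, which is what makes the diagram such a convenient meeting ground for data and model.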

The next three papers (Flower & Pineau des Forêts 1995; Flower et al. 1996; Field et al. 1997) follow again the familiar theoretical build-up: a new effect is introduced into the code, and its consequences are studied. It is perhaps significant that the paper from 1995 contains a table of intensity predictions of infrared transitions for three different shock speeds as a service for the impending launch of ESA’s ISO infrared satellite (launch date: 17 November 1995), the successor of IRAS. The second paper refers to a type of astronomical object in the title: “The structure of MHD shocks in molecular outflows”, although the paper, once more, has its main emphasis on the evaluation of the theoretical extension to the C‑shock model. An order-of-magnitude comparison of the derived SiO column densities with outflow observations is mentioned rather briefly at the end: “It is gratifying that the present results derive some support from observations of molecular outflows” (Flower et al. 1996: 455). In the paper from 1997, C‑shock calculations are presented in the last part of the paper as an “illustration” without attempting “a detailed comparison with observations” (Field et al. 1997: 845).

In summary, this short period of model development was kicked off by a paper that, for the first time, has as its main goal the reproduction of the conditions in a particular spatiotemporal region of the interstellar medium. But there was still no specific interpretation of this region in terms of a single model instantiation; instead, a class of observations is identified with a class of model instantiations defined by a (narrow) range of parameter values. Nevertheless, the transition from purely exploratory, consciously incomplete and idealized models toward the modelling goal of representing individual shocks in a more and more complete fashion had begun.

1997–1999 Phase 3 of the Paris-Durham Shock Code: The Code as Service Tool for Observers I: Grid Calculations

This third phase contains the first calculation of synthetic observations that can directly be compared with real, i.e. relatively unprocessed, observations. The comparison of model calculations with observations is fostered by the start of extended grid calculations, i.e. calculations of a larger number of different models with varying input parameters covering many different possible physical and chemical conditions in interstellar environments.
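
Conceptually, such a grid calculation is nothing more than running the same shock model over the Cartesian product of its input parameters. A minimal sketch (with a hypothetical run_shock_model function standing in for the actual Paris-Durham code, which is not reproduced here, and illustrative parameter ranges) might look as follows:

```python
import itertools

# Illustrative parameter ranges, loosely inspired by the published grids
preshock_densities = [1e3, 1e4, 1e5, 1e6]   # n_H in cm^-3
shock_velocities = [10, 20, 30, 40, 50]     # v_s in km s^-1
magnetic_scalings = [0.5, 1.0, 2.0]         # b, with B = b * sqrt(n_H) microgauss

def run_shock_model(n_H, v_s, b):
    """Stand-in for one shock-code run; the real code integrates the MHD and
    chemical rate equations through the shock and returns line intensities."""
    return {"H2 S(1)": 1e-5 * v_s * b / n_H**0.5}   # dummy number, illustration only

# The "grid" is simply the model evaluated on every parameter combination
grid_results = {
    (n_H, v_s, b): run_shock_model(n_H, v_s, b)
    for n_H, v_s, b in itertools.product(preshock_densities,
                                         shock_velocities,
                                         magnetic_scalings)
}
```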

This milestone was reached in 1997. In the paper “SiO production in interstellar shocks” (Schilke et al. 1997), observed spectral line profiles (intensity plotted against wavelength/velocity, i.e. a data representation that closely imitates real and largely unprocessed observational data) appear for the first time together with synthetic profiles generated by the shock code. The data shown stem from the IRAM 30 m telescope in Spain. In order to compute these spectral lines with the shock code, the shock computation needs to be coupled with a numerical treatment of the radiative transfer (see Footnote 5): “To facilitate comparison with the observations, we calculate SiO line profiles, introducing the temperature, abundance, and velocity profiles of the shock in an LVG [large velocity gradient] program” (Schilke et al. 1997: 299). The underlying question answered by such an LVG approach is how the generated radiation propagates through the shock. The resulting data formally look like the unprocessed observational data, and a direct visual comparison is therefore easily possible (see Fig. 3).

Fig. 3 Observed and synthetic SiO line profiles as shown in Schilke et al. (1997). The right subfigure is truncated
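
The LVG (or Sobolev) approximation referred to above can be summarized by a single quantity (a standard textbook form; the exact implementation used in Schilke et al. 1997 is not reproduced here): each line photon escapes the locally expanding gas with probability

$$\beta(\tau) \simeq \frac{1 - e^{-\tau}}{\tau},$$

where $\tau$ is the line optical depth set by the local velocity gradient. Weighting the emission from each point in the shock by $\beta$ and binning it in velocity yields the synthetic line profiles that can be laid directly over the observed spectra.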

Furthermore, this paper includes the most extensive grid of shock models (see Footnote 6 on grids of models) calculated so far, covering four different pre-shock densities and five shock velocities for two different scenarios of how silicon is released in the shock. Another novelty is that associated data (sputtering parameters and chemical rates) were published in parallel in the French VizieR On-line Data Catalog, founded in 1996, in order to make the information available to other users.

Two subsequent studies again have a theoretical character. The first, Chieze et al. (1998), compares a time-dependent shock calculation with the results of the classical shock code, which operates under the assumption of a steady state. In the conclusions, the relevance of the paper’s findings for ISO observations is pointed out: “The interpretation of ISO observations of Herbig-Haro complexes (Wright et al. 1996), for example, is likely to be influenced by allowing for the time dependence” (Chieze et al. 1998: 681). Apart from that remark, the paper fully concentrates on the theoretical exploration of the effect. The second study, Flower and Pineau des Forêts (1998), tries to explain the observational fact that the emission lines of CH and CH+ seem to be emitted at different gas velocities. As in Schilke et al. (1997), they apply a treatment of the radiative transfer to obtain the expected synthetic spectral lines. They also compare observed and calculated column densities (number of particles per unit area) of molecular species in the same figure (the observations were taken with the Very Large Array in New Mexico, the 140 ft telescope in West Virginia, the Nançay telescopes, and the Plateau de Bure Interferometer in the French Alps). The paper refers to observational evidence more frequently than the earlier theoretical papers, but its focus is clearly on the more general question of whether shock models can reproduce certain observed trends and phenomena.

In summary, the inclusion of radiative transfer—the propagation of electromagnetic radiation through the shock—marks another milestone in shock model development, since theoretical and observed spectral lines can now be compared. For the first time, a direct comparison between synthetic model-generated data and largely unprocessed observed data is presented. Furthermore, the calculation of larger grids of models is an important precondition for singling out individual model instantiations that can represent individual cosmic shocks. Thus, the influence of the representational ideal that aims for a complete and faithful representation of a particular interstellar shock seems to be growing.

1999–2004 Phase 4 of the Paris-Durham Shock Code: The Code as Service Tool for Observers II: Extensive Grids of Models

The fourth phase is characterized by the first interpretation of a spatiotemporally located shock in terms of its properties, in this case its age. Accompanied by further grid calculations, the shock model is increasingly used as a tool for unveiling the underlying physics of specific observed interstellar regions.

In 1999, David R. Flower and Guillaume Pineau des Forêts implemented an approximation to time-dependent modelling (by combining stationary shock profiles) in the stationary code. This is found to be necessary because jets and outflows in the context of star formation are not expected to reach a steady state, as had been assumed before. In addition, a detailed treatment of the excitation of molecular hydrogen is implemented, allowing the calculation of the H2 rovibrational emission and infrared fine-structure lines that are observationally available. By calculating models for different shock ages, the authors are able to estimate the age of the outflow observed in Cepheus A West by the ISO satellite (1,500 years), based on observations published by Wright et al. (1996). This seems to be the first time that the model is used to derive a definite property of a specific spatiotemporally located shock.

As already in Flower et al. (1995), the practical means of this comparison is an excitation diagram, in which (essentially) column densities of H2 in different rotational states, calculated from observations or from the results of model computations, are plotted against the excitation energy. In the conclusions, the authors motivate their application of the shock model to the interpretation of molecular outflow observations:

The model incorporates grain erosion processes and a detailed numerical treatment of the H2 rovibrational level populations, using the most recent molecular data. In these respects, we believe that the present treatment of shocks in outflows is the most complete and sophisticated to date (Flower & Pineau des Forêts 1999: 279).

This statement seems to make the first reference to the completeness of the model.

The following papers mostly try to make the best possible use of the existing shock code for the interpretation of existing observations, with relatively few refinements of the code. Wilgenbus et al. (2000) calculate “extensive grids of models of both C‑ and J‑type planar shock waves, propagating in dark, cold molecular clouds, in order to study systematically the behaviour of the ortho-para‑H2 ratio” (Wilgenbus et al. 2000: 1010). This grid covers four different values of the pre-shock density, four to five different shock velocities, and different values of the H2 ortho-para ratio, for which the H2 line fluxes are also provided—a rich source of information and reference for observers. “As an illustration” (Wilgenbus et al. 2000: 1019), the modelling results are used in the interpretation of ISO and ground-based observations of the Herbig-Haro object HH54. The authors conclude that an application of the shock model to available observations can help to understand the characteristics of the underlying cosmic processes: “Observations of the ortho-para H2 ratio in various sets of lines seem to provide a useful way of determining the shock structures in protostellar outflows” (Wilgenbus et al. 2000: 1021).

A second paper in 2000 (May et al. 2000) provides another extensive grid of shock models, focusing this time on sputtering processes, which are relevant for the interpretation of emission lines of species that are bound in refractory grain material or in ices on grains in quiescent gas and that can be released by the action of shocks. No application to specific observations is made; only order-of-magnitude considerations are presented. In the subsequent short letter (Pineau des Forêts et al. 2001), the effect of collisions between hydrogen molecules and dust grains is studied; however, due to the brevity of the paper, only a small table of computed intensities is given: “The intensities of rovibrational transitions of H2 have been calculated for a series of models of C‑type shocks, considered to be relevant to molecular outflow sources” (Pineau des Forêts et al. 2001: L7).

Another paper in 2002 (Le Bourlot et al. 2002), after studying the transition from C‑ to J‑type shocks as a function of shock speed, ends with an application of shock models to existing ISO observations using excitation diagrams. This application is able to constrain shock velocities and pre-shock densities in the observed regions OMC‑1 and OMC-KL:

We have compared the predictions of the model with ISO-SWS and ISO-LWS observations of OMC‑1 and OMC-KL. A two-component model, comprising shocks with speeds of 60 and 40 km s−1, is found to yield good agreement with the observed column densities of rovibrational levels of H2 (Le Bourlot et al. 2002: 992).

Flower et al. (2003) use H2 excitation diagrams to interpret spectra of the outflow sources Cepheus A West and HH43. A second paper in 2003 (Flower & Pineau des Forêts 2003) is then purely theoretical, improving the implementation of grain physics and chemistry in the code.

In sum, the ability to compute larger grids of shock instantiations, covering the parameter space of possible individual models more densely, now clearly permitted reaching beyond “how-possibly explanations”. It became possible to represent specific, spatiotemporally located targets not only in terms of their causal structure but also in terms of their specific observational outputs, i.e. their radiation. This different use of the model is visible in the shrinking error tolerances applied when models and observations are compared; in the first decade of shock-code development, by contrast, the aim had been a rough recreation of certain observational features without stringent requirements for a precise match.

The method of finding the best fit within an excitation diagram strives for quantitative agreement between model and observations; the corresponding representational ideal is completeness. It becomes increasingly apparent, and is even stated explicitly (Flower & Pineau des Forêts 1999), that adding more and more physical and chemical detail to the model is regarded as the main motor of model improvement, measured in terms of its increasing applicability in the interpretation of observations.
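The best-fit procedure itself can be pictured as a search over such a grid for the model instantiation whose predicted level populations lie closest, in a chi-square sense, to the observed ones. The sketch below is again purely illustrative: a toy excitation-temperature relation stands in for the real model predictions, and all numbers are invented.

# Hypothetical illustration of best-fit selection in an excitation diagram:
# pick the model instantiation whose predicted ln(N_u/g_u) values minimize
# the chi-square distance to the observed ones. All numbers are invented.
import numpy as np

levels = np.array([1015.0, 1682.0, 2504.0])     # E_u/k in K (illustrative)
observed = np.array([40.3, 38.7, 36.6])         # observed ln(N_u/g_u)
sigma = 0.3                                     # assumed uncertainty in ln(N_u/g_u)

# Toy "grid": an excitation-temperature relation stands in for the shock model.
models = {(n_H, v): 42.8 - levels / (20.0 * v + 0.005 * n_H)
          for n_H in (1e3, 1e4, 1e5, 1e6)
          for v in (10.0, 20.0, 30.0, 40.0)}

def chi2(pred):
    return float(np.sum(((observed - pred) / sigma) ** 2))

best_params = min(models, key=lambda p: chi2(models[p]))
print("best-fitting model:", best_params,
      "chi2 =", round(chi2(models[best_params]), 2))

The shrinking error tolerances described above correspond, in this picture, to accepting only fits with small residuals rather than any model that roughly reproduces the observed trend.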

2004 Phase 5 of the Paris-Durham Shock Code: Observations First

The last period in the first 20 years of model development is characterized by an almost complete about-face in focus: (newly obtained) observations become the most important part of a publication, and the model-based interpretation takes a backseat.

This change in the character of shock publications took place in 2004. In Giannini et al. (2004), for the first time, observations newly obtained by the authors themselves are presented as the main purpose of the paper, and their interpretation using the shock code is only an add-on. The observations were obtained at the ESO-NTT 3.5 m telescope and consist of near-infrared spectra toward protostellar jets in three different star-formation regions. After these observations are presented and discussed, the last part of the paper is dedicated to comparing them with the predictions of shock models: “The H2 emission from representative HH objects (HH26A, HH72A and HH320A) has been successfully modelled by planar J‑shocks with magnetic precursors, for which the main parameters (pre-shock density, speed) are derived” (Giannini et al. 2004: 999). Once again, excitation diagrams of H2 rovibrational emission are the main tool for this comparison. As a result, shock characteristics can be estimated for all three regions.

This paper was the first of many similar observational papers, in which new observations are the main topic and their model-based interpretation is only part of the analysis-and-discussion section. This development was facilitated by the shock model becoming increasingly publicly available. Dissemination of the shock model began via astrophysicists trained as students and postdocs by Guillaume Pineau des Forêts and David Flower, who later transferred their expertise into other, often observational, fields of astrophysical research, and continued through the publication of the code as a webtool in the 2010s.

Changing Interpretations

During the first 20 years of shock-code development, the use of the Paris-Durham shock code slowly changed. This change is, however, difficult to trace back to a mere change of the code’s numerical structure, as this structure changed only gradually over the years, and mostly at the periphery of the code. Instead, it appears mostly to be a change in the code’s use and its relationship to observations, and thus a change in the code’s interpretation.

In the beginning, its purpose was mostly to ascertain whether certain general (observed) effects and phenomena could be traced back to interstellar shocks and the physical and chemical processes they cause. In doing so, the shock model was understood as a means of exploring the general properties of shock waves and the relevant causal mechanismsFootnote 11 rather than as representing one specific spatiotemporally located cosmic target phenomenon. This is particularly visible in the fact that the early papers calculate only a few different model instantiations (i.e. models with fixed parameters such as velocity and pre-shock density), and, when they do, none of them is confronted with a specific observed shock phenomenon. This happens for the first time in 1997, when a direct comparison of synthetic, model-generated data and observed data is presented. However, there is still no interpretation of one specific, individual, spatiotemporally located shock region using the shock code; instead, a class of observations is identified with a class of model instantiations defined by a (narrow) range of parameter values. Two years later, in 1999, this changes when the model is used to assign an age to an outflow in Cepheus A West by identifying the best-fitting model in a shared plot of observed and calculated data. In the following years, extended grid calculations foster this use of the model as a best-fit representation of particular spatiotemporally located observations. At this point, the completeness of the model in terms of implemented physical effects is stressed in a publication for the first time. Accordingly, it is perhaps not surprising that in 2004 the first paper appears that is no longer centered on the model but on observations performed by the authors themselves. Today, research articles without comparisons to specific observations (or the goal of enabling such comparisons) are practically non-existent, corroborating the shift toward target-directed modelling.

The changes in the way the model is construed over time, which I have set out above, are not unusual in scientific practice. Weisberg acknowledges such changes and lists several possible reasons for them: firstly, theorists may not yet “have a feel” for their models (Weisberg 2013: 77) and therefore have to adapt their initial expectations of the model, expressed in the fidelity criteria, after initial explorations. Secondly, the model structure can exhibit unexpected properties, so that the scope of the model may need to be adjusted. Thirdly, a model can be used in a completely different domain, so that the assignment has to change in its entirety. Notably, besides these influences, changes in the construal of the Paris-Durham shock model seem to be provoked both by increasing computational power and by improvements in the available data.

In summary, the model’s assignment has changed (from generalized targets to spatiotemporally located specific targets) in parallel with its scope (more and more aspects of the target system were included) over the twenty years of shock-model development presented above. In addition, its fidelity criteria changed: originally, a qualitative similarity between the model’s behavior and the observationally derived properties of the phenomenon had priority, whereas the quantitative similarity of output values and observations gained increasing importance later. These changes mirror an overall change in the representational ideals: a focus on the exploration of isolated causal mechanisms is progressively replaced by an aspiration to create an increasingly complete model in terms of the physical and chemical effects included. Today, at a time when high-quality observational data of shocked cosmic regions are more widespread than ever before in the history of astronomy, the Paris-Durham shock model has continued along its path toward becoming an interpretational tool for understanding specific, spatiotemporally located phenomena.

Conclusion

Numerical models and simulations are a central element of astrophysical research, bridging the gap between a combination of theories from different subfields, on the one hand, and observations, on the other. Where exactly between theory and empirical data they should be located—closer to theory or closer to observations—differs from case to case. Their epistemic position can even change in the course of the development of a single model.Footnote 12 The changing interpretations of a model may be brought about by variations in the quality and coverage of available data, on the one hand, and in computational capacities, on the other. These factors are often not explicitly addressed but are implicit in the corresponding publications.

This paper demonstrates such a changing interpretation during the first twenty years of the development of the Paris-Durham shock model. Whereas observations acted mostly as motivation to ask “how-possibly” questions in the beginning, their role in how the model is used became increasingly central. Accordingly, the shock model was used as an explorative model of generalized target phenomena during the first ten years, before individual instantiations of the general model were finally understood as representations of specific, spatiotemporally located shocks. Using the conceptual framework of Weisberg (2013), it can be stated that the model’s construals changed during this process at all levels: assignment, intended scope, and fidelity criteria. These changes were governed by evolving representational ideals: a focus on the exploration of isolated causal mechanisms was increasingly replaced by an aspiration to create an ever more complete model in terms of the physical and chemical effects included. This study shows that numerical modelling is a highly flexible and dynamic process, inheriting its flexibility and dynamics from the central role played by the modelers’ interpretation of the model.

Acknowledgements

I would like to thank Arianna Borrelli for the invitation to contribute to this Special Issue and for many helpful discussions that improved the paper significantly. Without her and Janina Wellmann’s work and persistence, this issue would not have been possible. I would also like to gratefully acknowledge the support of MECS (Centre for Advanced Study on Media Cultures of Computer Simulation) at Leuphana Universität Lüneburg, and of Martin Warnke in particular. The generous funding of two workshops and a working stay in Lüneburg was essential for writing this article. Finally, I am grateful to two anonymous referees for useful comments that helped strengthen the paper.