1 Introduction

Scientific models and computer simulations are indispensable to scientific practice.Footnote 1 Through their use, scientists can effectively learn about how the world works and discover new information. However, there is a challenge in understanding how scientists can generate knowledge from their use, stemming from the fact that models and computer simulations are necessarily incomplete representations and partial descriptions of their target systems (the real-world systems they aim to represent). In order to construct a model or simulation, scientists must make idealizations, approximations, and abstractions. But what is the nature of these idealizations? How are they justified by scientists? Why are scientists epistemically justified in drawing conclusions about the nature of the real world from models and simulations when they contain idealizations and are incomplete (and in some cases false) representations of real-world target systems?

This chapter examines the role of idealization in the context of astrophysical computer simulations. In astrophysics, the use of models and computer simulations to study systems is pervasive. They are used to obtain a better understanding of small-scale astronomical objects (such as the evolution of stars or individual black holes), to explore astronomical interactions (such as galaxy or galaxy-cluster collisions), and to model and attempt to better understand the large-scale structure of the entire universe. Due to the complexity of these systems, and other epistemic challenges connected to astrophysics more generally, astrophysics provides an excellent opportunity to study the precise ways that idealization and representational trade-offs enter into the construction of simulations, and how they may determine values for simulation parameters.

Our goals in this chapter are threefold. First, we aim to provide a survey of some of the existing philosophical literature connected to idealization. This, in part, will give those interested in exploring the role of idealizations in the context of astrophysics a sense of what literature and philosophical problems might be relevant to their work. Second, it allows us to conduct a philosophical analysis of a case study from astrophysics in which computer simulations play a central epistemic role, and to examine the role of idealizations in this context. Ultimately, we use this work to argue in favor of the importance of using a variety of de-idealization strategies in addressing epistemic challenges connected to the use of computer simulations in the context of astrophysics.

2 Epistemic Challenges in Astrophysical Methodology

It is important to briefly discuss some of the background epistemic challenges astrophysics faces more generally before examining the role of idealizations in astrophysical computer simulations more specifically. Doing so will help highlight why philosophical analysis of idealizations specifically can aid in developing a better understanding of how idealizations aid or hinder knowledge development in the field of astrophysics, especially in the presence of computer simulations. First, one of the key limits of astrophysical methodology is its restricted capacity to conduct direct experimentation on its objects of study (Jacquart 2020; Weisberg et al. 2018). When comparing experimental access in astrophysics to the kind of access other sciences (such as biology or chemistry) have to their objects of study, these other sciences more frequently have the capacity to experiment on their objects of study. Astrophysics, on the other hand, is generally not capable of experimenting on its objects of study (such as stars, galaxies, etc.) in such a direct or material manner (Jacquart 2020). Second, astrophysics also has a limited spatial-temporal vantage point; a significant amount of the phenomena of interest in astrophysics take place over vast timespans and are only observable from one vantage point (such as a telescope in space near Earth). While some cosmic events like the death of stars or black hole mergers happen over shorter timespans, observations of these too are frequently confined to a series of snapshots of cosmic phenomena. This limited spatial-temporal vantage point leads to a sparseness-of-data issue (Jacquart 2020).

In light of these challenges, one of the central strategies used in astrophysics is deploying computer simulations in order to better understand the systems of inquiry. Computer simulations allow scientists to explore how various systems might evolve over time (in a way akin to long-timescale observations), or allow for manipulation of a system (in a way akin to experimentation). In cases where there is little (to no) direct access to the system itself (i.e., direct access to the object of study), incorporation of information or data one does have access to is critical. In the context of astrophysics, most simulations are developed based on the observational data astrophysicists do have access to, as well as various background theories. In the research areas in astrophysics where computer simulations are frequently used, astrophysical methodology faces epistemic challenges connected to computer simulation construction and evaluation. This includes broader issues related to verification and validation, the relationship between simulation and theory, and the capacity of simulations to offer explanations (see, for example, Kadowaki forthcoming; Winsberg 2010). It also includes issues connected to developing a scientific representation as a computer simulation, as well as the role of idealizations and approximations.Footnote 2 This latter set of challenges is where this chapter will focus.

In the context of astrophysics, scientists are often trying to model systems ranging from individual stars, single galaxies, and galaxy interactions all the way up to the structure of the entire universe. These systems can rarely be simulated in their entirety, for reasons connected to their sheer complexity as well as computational tractability. As such, idealizations (and approximations) are made about these systems in order to develop computer simulations representing them. Idealizations are intentional distortions or misrepresentations of target systems, often representing the system in some way in which it is not. Idealizations are "assumptions made without regard for whether they are true, generally with the full knowledge that they are false" (Potochnik 2017, 2). A model or computer simulation, then, is an idealized representation with respect to its target "when it fails to represent some important aspects of the target" (Weisberg 2013, 98). This raises questions about how simulations, in light of their deployment of idealizations, can obtain meaningful epistemic status to offer predictions or explanations about the real-world systems they purport to represent.

Given this web of epistemic challenges, the role of idealization in astrophysical simulations is in need of attention. There is a need to consider not only what kinds of idealizations occur in astrophysical simulations and the role they play in representing their real-world target systems, but also which idealizations are warranted, as well as how they are handled and mediated. In order to examine these concerns in detail, in the next section we provide a basic case study: collisional ring galaxy simulations. After providing this context, Sect. 8.4 will introduce some key ideas and themes connected to idealization, and their instantiation in this case study. We then use this discussion as a backdrop for examining the role of idealizations in astrophysical computer simulations and their connection to epistemic claims.

3 Case Study: Collisional Ring Galaxies and Their Computer Simulations

Collisional ring galaxies are formed when a smaller galaxy passes through, or collides with, the center of a larger disk galaxy at relatively high speed. Through this gravitational disruption, the smaller galaxy essentially collapses, with its gas and dust generating star formation (young blue stars) at the outer edge of the larger galaxy. This interaction then also affects the orbits within the larger galaxy, producing the ring-like structure (Appleton and Struck-Marcell 1996). The central means by which astrophysicists investigated this system and learned about its formation was the use of computer simulations.Footnote 3

For these early simulations, the goal was simply to provide a general how-possibly account of how these galaxies got their ring shape. With gravitational interaction being a primary driver in galaxy collisions, simulators decided that the masses of the two galaxies, along with the impact velocity and angle of the collision, would be the critical features of the target systems. The masses of the two galaxies, as well as the angle of collision, were varied as a means of exploring how the two galaxies might interact and of determining what conditions are necessary for ring galaxies to obtain their ring shape. These simulations also simplified the system to point particles, with the masses, or number of point particles, of the two galaxies varied to explore the galaxy mass ratios (for instance, one galaxy having 600 particles and the other 150) that would result in the ring galaxy phenomenon. Through this process they determined that the ring shape occurs only in cases where a smaller compact companion galaxy and a larger disk system undergo a near head-on collision, with more pronounced rings occurring at higher impact speeds (Lynds and Toomre 1976; Appleton and Struck-Marcell 1996).
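To make this kind of setup concrete, the following is a minimal illustrative sketch in Python of a point-particle collision run of roughly this sort. It is our own reconstruction for exposition, not the 1976 code; all function names, numbers, and units are hypothetical choices. Two clumps of equal-mass particles stand in for the galaxies, the mass ratio is set by particle counts (e.g., 600 versus 150), and the impact angle can be swept to explore when a ring-like response appears.

```python
import numpy as np

G = 1.0  # gravitational constant in code units (illustrative choice)

def accelerations(pos, mass, eps=0.05):
    """Direct-sum softened Newtonian gravity: O(N^2) pairwise work,
    which is part of why early runs stayed at a few hundred particles."""
    acc = np.zeros_like(pos)
    for i in range(len(mass)):
        dr = pos - pos[i]                       # offsets to every particle
        r2 = (dr ** 2).sum(axis=1) + eps ** 2   # Plummer-softened distance^2
        r2[i] = np.inf                          # exclude self-interaction
        acc[i] = (G * mass[:, None] * dr / r2[:, None] ** 1.5).sum(axis=0)
    return acc

def make_clump(n, radius, center, velocity, rng):
    """A crude planar clump of particles standing in for a galaxy."""
    r = radius * np.sqrt(rng.uniform(size=n))
    phi = rng.uniform(0.0, 2.0 * np.pi, size=n)
    pos = np.c_[r * np.cos(phi), r * np.sin(phi), np.zeros(n)]
    vel = np.tile(np.asarray(velocity, dtype=float), (n, 1))
    return pos + np.asarray(center, dtype=float), vel

def run_collision(n_big=600, n_small=150, angle_deg=0.0, speed=1.5,
                  dt=0.01, steps=1500, seed=0):
    """Mass ratio via particle counts; angle_deg tilts the companion's
    approach away from the disk axis (0.0 = head-on through the center)."""
    rng = np.random.default_rng(seed)
    theta = np.radians(angle_deg)
    offset = 3.0 * np.array([np.sin(theta), 0.0, np.cos(theta)])
    p1, v1 = make_clump(n_big, 1.0, np.zeros(3), np.zeros(3), rng)
    p2, v2 = make_clump(n_small, 0.3, offset, -speed * offset / 3.0, rng)
    pos, vel = np.vstack([p1, p2]), np.vstack([v1, v2])
    mass = np.full(len(pos), 1.0 / len(pos))    # equal-mass particles
    acc = accelerations(pos, mass)
    for _ in range(steps):                      # kick-drift-kick leapfrog
        vel += 0.5 * dt * acc
        pos += dt * vel
        acc = accelerations(pos, mass)
        vel += 0.5 * dt * acc
    return pos

if __name__ == "__main__":
    for angle in (0.0, 15.0, 45.0):             # near head-on vs. oblique
        final = run_collision(angle_deg=angle)
        print(angle, np.round(final.std(axis=0), 3))  # crude spread measure
```

Sweeping angle_deg and the particle counts in this fashion mirrors, in miniature, the exploration of possibility space just described.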

As computational capacities have progressed, collisional ring galaxy simulations have been able to increase in complexity as well. Some contemporary collisional ring galaxy simulations, for instance, utilize GADGET, a code for cosmological N-body/smoothed-particle hydrodynamics simulations, or GIZMO, a massively parallel, multi-physics simulation code that builds on GADGET. Both allow simulators to move beyond simple point-particle simulations and include more refined physics and features such as hydrodynamics, magnetic fields, fluid dynamics, and cosmological integrations, to name a few.Footnote 4 Research groups focused on galaxy simulations have taken these codes and expanded on them for their own purposes as well. For example, the FIRE (Feedback In Realistic Environments) project builds on GIZMO and aims to improve the predictive power of individual galaxy formation simulations by including interstellar medium and star formation processes as critical drivers of single-galaxy evolution. In the case of ring galaxy simulations, GIZMO+FIRE has been deployed as a means to explore the role star formation might play in the evolution of galaxy collisions (Jacquart 2020). In future work, simulators working on collisional ring galaxies consider it necessary to model individual interacting galaxies such that the simulations include, at some level of approximation, the stellar and gas dynamics of the multi-component galaxies with self-gravity, pressure, and heating/cooling effects; eventually the simulations will need to include non-isothermal gas disks in both the primary and companion galaxies (Appleton and Struck-Marcell 1996). While past simulations justified omitting these attributes and features on grounds of computational tractability, when considering smaller-scale simulations of individual galaxies these attributes and features could have a significant impact on galaxy structure and evolution. As such, they are now flagged by the community as relevant features that may turn out to be causally important.

Though we discuss collisional ring galaxy simulations specifically, we believe this case study has notable features shared across the different kinds of simulations that occur in astrophysics. First, this case showcases a progression of computational capacities. The first simulations were developed in the 1970s, when astrophysical computer simulations were primarily simple, small-number point-particle simulations governed almost exclusively by gravity. As computational power advanced, so too did the simulations, to more complex N-body and hydrodynamical simulations. These later simulations also offer gravity treatments of increasing refinement (particle-mesh, to tree particle-mesh, to fast multipole), and similarly with their hydrodynamics treatments (moving from adaptive-mesh refinement to smoothed-particle hydrodynamics).Footnote 5 This progression is seen not only in galaxy simulations on small scales (i.e., individual galaxies) but also in large-scale simulations (such as large-scale structure formation simulations, e.g., Millennium-II).
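The computational stakes of these algorithmic choices can be made concrete with a rough operation count. The sketch below is a back-of-the-envelope illustration only (the tree-code constant c is an assumed placeholder, not a measured value): direct summation scales as O(N²), while tree-based methods scale roughly as O(N log N), which is part of what carried simulations from hundreds of particles to many millions.

```python
import math

def direct_sum_ops(n):
    """Direct summation: every particle-pair interaction is evaluated."""
    return n * (n - 1)

def tree_ops(n, c=20):
    """Rough Barnes-Hut-style tree-code scaling, O(N log N);
    c is a made-up bookkeeping constant for illustration."""
    return int(c * n * math.log2(n))

for n in (750, 100_000, 10_000_000):
    print(f"N={n:>12,}  direct~{direct_sum_ops(n):.1e}  tree~{tree_ops(n):.1e}")
```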

The collisional ring galaxy simulations also showcase variability in target-system representation, that is, in which features of the real-world target system the simulator chooses to include in the developed simulation. When modeling any galaxy formation there are several astrophysical processes that could be included: gas cooling, the interstellar medium, star formation, stellar feedback, supermassive black holes, active galactic nuclei, magnetic fields, radiation fields, cosmic rays, etc.Footnote 6 These kinds of features (as will be discussed in the following section) are also all potential contributors to ring galaxy structure evolution and development. Representing all of these in one simulation is (at present) not possible, and so various idealizations (and approximations) are introduced. All of these present challenges for modeling ordinary baryonic matter. Additional challenges are posed for the modeling of dark matter in galaxy simulations due to the lack of knowledge regarding dark matter's precise nature (for example, whether dark matter consists of weakly interacting massive particles (WIMPs), is self-interacting (SIDM), or is something else entirely).

4 Idealizations, De-idealizations, and Representation in Astrophysical Computer Simulations

We now turn to examine the role of idealization more closely in our case study. In Sect. 8.4.1 we provide an overview of the kinds of idealizations that occur in developing scientific representations, with examples of what each kind of idealization looks like in the context of astrophysical computer simulations. Such taxonomies can be extremely useful for thinking through the use of idealizations in science: specifying the kinds of idealizations present not only helps reveal nuances in scientists' conceptualizations of their representational system, but also offers insight into the epistemic challenges and justifications involved in introducing them. In Sect. 8.4.2 we examine the aims of idealizations in scientific practice and introduce a framework for conceptualizing the aims of idealizations in the context of astrophysical simulations specifically. In Sect. 8.4.3 we connect this with strategies of de-idealization so that in Sect. 8.4.4 we can discuss connections between idealization, de-idealization, and a common aim of models and simulations: developing more accurate representations of target systems in order to increase confidence in epistemic claims. Ultimately, we highlight how deploying de-idealization strategies is central to bolstering epistemic confidence in simulations.

4.1 Kinds of Idealizations in Astrophysical Computer Simulations

The importance of examining idealizations and their role in developing scientific representations has an extensive history within the philosophy of science and scientific modeling literature (see for example Nowak (1972), Cartwright (1983), McMullin (1985), Wimsatt (1987), and Giere (1988)). More recent analysis of this literature (such as Weisberg 2007, 2013; Elliott-Graves and Weisberg 2014; also discussed in Shech forthcoming) suggests that there are three kinds of idealizations common in scientific modeling and simulations—Galilean idealization, minimalist idealization, and multiple-models idealization. Studying idealization requires examination of what activity is characteristic of that form of idealization (that is, what the representational goals are) and how that activity is justified (Weisberg 2013, 98).

Galilean idealization is the simplified representation of a target system for the sake of mathematical or computational tractability, and as such is justified pragmatically. Characterized most fully by McMullin (1985), the practice includes selecting a target system of interest and then introducing distortions and simplifications (idealizations) that allow the scientist to simplify the system and represent it in such a way as to make progress on their problem of inquiry. These idealizations are meant to be temporary, with the expectation of future de-idealization.

Considering our case study, we see clear examples of Galilean idealization: distortions are introduced with the goal of simplifying the models and simulations to make them computationally tractable. Very common to early astrophysical computer simulations (and even those developed today) is the need to simulate highly complex systems, such as a galaxy (or even the large-scale structure of the universe). In these contexts, with past and current computational capacities, it is impossible to simulate the trajectory or interactions of every star, planet, and gas cloud. Instead, simplified point-particle simulations are developed, letting a large number of particles stand in for the system as a whole. For instance, the 1976 simulations were pared down to a few hundred particles so that the simulations could run. Even more contemporary simulations, such as those utilizing GIZMO+FIRE, have a limit on how many particles can be included due to computational capacities. Galilean idealizations such as these (especially in domains of science that rely on simulations) are not only present but prevalent. Over time, advances in computational power have allowed scientists to de-idealize, removing distortions and adding back in previously omitted details. As McMullin points out, the capacity and interest in doing so in fact "then serves as the basis for a continuing research program" (1985, 261). We will return to the topic of de-idealization in Sect. 8.4.3.

Let us turn next to another kind of idealization: minimalist idealization, which aims to understand the core causal relations that give rise to a phenomenon (Weisberg 2013; Elliott-Graves and Weisberg 2014). Rather than trying to include all the details and complexities of a target system, minimal models include only those factors understood to be the core causal factors, or "difference makers," for the phenomenon investigated. This strategy introduces idealizations to eliminate all but the most significant causal influences that give rise to a phenomenon. With minimalist idealizations, justification is related to scientific explanation, aiming to isolate the explanatorily relevant causal factors either directly (Cartwright 1989; Strevens 2011), asymptotically (Batterman 2002), or via counterfactual reasoning (Hartmann 1998) (see Weisberg 2013, 103 for extended discussion).

In connection to our case study, we also see minimalist idealizations deployed, with the 1976 ring galaxy simulations demonstrating this most clearly. These first simulators were interested in understanding the core causal relations that would allow a galaxy collision to produce the ring structure—they were interested in providing an explanation of how the rings may have gotten their particular shape. In this context, the simulators included only the factors that make a difference to the occurrence and character of the phenomenon in question: mass ratios and angle of collision. In later simulations, such as those deploying GIZMO+FIRE, we also see simulation development through idealizations aimed at exploring whether there are additional causal influences that could give rise to the phenomenon—that is, in what ways features like gas or stellar feedback might explain other structures or features in the rings.

We consider it worth noting at this point that simulations may not deploy one singular kind of idealization. There is a sense in which a simulation might deploy a Galilean idealization, in that it simplifies and distorts a system to make it more tractable, while also aiming to isolate causal factors (and thus also being motivated by aims akin to minimalist idealization practices). What does seem clear is that there is a close connection between the kinds of idealizations deployed and the purposes or aims for which they are introduced. Idealizations are thus closely tied to, and require reflection on, the wide range of purposes or aims a model or simulation may be intended to serve.

Finally, let us turn to a third kind of idealization, multiple-models idealization (MMI). MMI deploys several related but incompatible models together to shed light on a phenomenon. Each model "makes distinct claims about the nature and causal structure giving rise to a phenomenon," but with no expectation that a single best model will be generated, nor that de-idealization will occur (Weisberg 2013, 106). Central to the justification of MMI are the necessary tradeoffs between varying representational goals and desiderata such as accuracy, precision, generality, and/or simplicity. Multiple models are needed because no single model can achieve all representational goals while at the same time maximizing all possible desiderata. Within the philosophical literature, there has been some discussion regarding how to interpret Weisberg's understanding of MMI (see, for example, Potochnik 2017 but also Rohwer and Rice 2013): either narrowly, in which multiple models might be employed within a single research program (akin to robustness analysis), or more broadly, in which multiple models are employed across the scientific enterprise as a whole and often focus on different aspects of phenomena, i.e., causal patterns (Potochnik 2017, 45–6).

In the context of astrophysical computer simulations, one might be tempted to think of a simulation's ability to run with various different parameter settings as an instance of MMI. As mentioned in connection with the case study, in the process of exploring possibility space in order to determine the conditions under which the ring phenomenon occurs, various parameters in the simulations are changed. One could consider each of these parameter specifications to be its own model, and thus the collection of them an instance of MMI. However, under both a narrow and a broad reading of MMI, we do not consider this to be the sense in which "multiple models" is intended to apply, as the overall idealizations that are made are unchanged. That is, there are no new idealizations or tradeoffs of representational goals.

One might also consider MMI to occur when comparing the 1976 simulations to the more contemporary GIZMO+FIRE-based simulations.Footnote 7 In these instances, several simulations are employed together to shed light on a phenomenon—in this case, ring galaxies. These range from point-particle simulations to more-complex-but-still-idealized simulations that include feedback and fluid dynamics. The simulations test the hypothesis that the rings obtain their shape through these collisions, and whether the proposed cause is competent to produce it. Some simulations have more complexity, some have less. It is through different idealizing assumptions about the basic physical processes involved in ring galaxy formation that we determine under what conditions ring galaxies form, as well as some of their more subtle features. There is a sense in which, taken together, the simulations are not offering distinct claims about the nature and causal structure giving rise to a phenomenon. However, under both a narrow and a broad reading of MMI, the use of multiple models bolsters confidence in a more unified claim about the phenomenon and its structures.

In astrophysical computer simulations, instances of MMI practices may be more likely to occur when considering issues of scale. The idealizations made in the case of simulating a single galaxy will almost certainly be in tension with idealizations made for large-scale structure. Simulating single galaxies can help us understand what is occurring at the smaller scale, but it will be necessary to make different idealizations when examining how the interactions of single galaxies impact larger-scale structures.

4.2 Idealizations and the Aims of Astrophysical Computer Simulations

We have discussed three kinds of idealizations that can occur in developing scientific representations like computer simulations, the connected scientific goals and justifications for introducing those idealizations, and some examples of these kinds of idealizations in the context of astrophysical computer simulations. We turn next to discuss the aims of idealizations in scientific practice more broadly. Our intentions here are, first, to introduce a framework that may be of use for conceptualizing the aims of idealizations in the context of astrophysical simulations generally and, second, to discuss how this applies to our Sect. 8.3 case study specifically. For this discussion, we draw largely on Angela Potochnik's book, Idealization and the Aims of Science (2017), in which she explicitly examines the role of idealizations in scientific endeavors.

According to Potochnik, science is a human enterprise best characterized as the search for causal patterns in nature's complexity. By causal patterns she means dependencies between factors that are revealed under manipulation; which causal pattern emerges depends on our representational choices. The complexity of nature is, in part, what motivates science to make abstractions and idealizations. She describes abstractions as omissions "without consequence for the representation" (2017, 55). Idealizations, on the other hand, are not characterized as omissions or negative representational features; rather, idealizations play a positive representational role. She defines idealizations as "assumptions made without regard for whether they are true and often with full knowledge they are false" (ibid., 2, 42). For Potochnik, idealizations play an active role in scientific representations (such as models and computer simulations) of the world. By virtue of science being a human enterprise, causal patterns are identified in scientific representations, as opposed to taken directly from the highly complex world. Scientists must then make choices in their representations of the world. These choices may be driven by the research project, by tractability, or simply by the scientists' know-how. However the representational choices are made, they have a direct impact on what causal patterns are derived from the representation. This point, taken in tandem with Potochnik's commitment to idealizations as assumptions, makes it salient that idealizations will play some active role in whatever causal pattern is derived from any given representation. Idealizations are actively selected for in a similar fashion to other representational choices. Much of this discussion echoes points we have detailed already in this chapter, but it is worth noting the emphasis Potochnik places on how the deployment of idealizations positively contributes to the identification of causal patterns.

Yet despite the vital role of idealizations in science, Potochnik considers them to be "rampant and unchecked" (ibid., 57). By rampant she means to draw attention to their pervasive nature within science—scientists employ idealizations all the time. By unchecked she means there is (1) little focus on eliminating idealizations (namely, on conducting de-idealization), and (2) little focus on controlling their influence. Potochnik is careful to note that unchecked does not necessarily mean unprincipled. Rather, it is that idealizations reflect the scientists' interests. And since idealizations play a positive representational role, the nature of that role must be appropriate for the focal causal pattern, the causal details of the phenomena, and the aims and methods of the research (ibid., 60). What is less clear is the extent to which these features are reflected upon in practice. What we wish to do in this subsection is reflect on Potochnik's two components of "unchecked" idealizations in the context of astrophysical computer simulations.

With respect to (2), some philosophers (e.g., Batterman 2002; Strevens 2011; Weisberg 2007, 2013) see justification for these idealizations as applying only to insignificant features of a system, non-difference-makers, or details that, if wrong, are safely ignored, especially in instances when an idealization is permanent. Potochnik, on the other hand, "[permits idealizations] even of central causal influences, on a permanent basis, and without taking any steps to hold in check the resulting misrepresentation" (2017, 59). For Potochnik, however, even misrepresentation (representation as-if) positively contributes to the representation of actual systems. Her strong view of idealization allows for "the permanent use of idealizations in many roles, including a central role in representing actual phenomena, even when they stand in for significant causes and without measures taken to control their influence" (ibid.).

The initial idealizations in the 1976 simulations identified the causal patterns, and over time these causal patterns were better and better understood through a process of developing more and more detailed simulations of the target. The structure of even a single galaxy is highly complex: it consists of stars, stellar remnants, interstellar gas, dust, and dark matter. But even in the very simple 1976 simulations, where the system was idealized to just point particles with mass, astronomers had identified the causal pattern of ring galaxies. Even with radical idealizations, astronomers had captured the relevant causal dependencies. Thus far, we think the role of idealizations in this context is very similar to the analysis Potochnik provides.

With respect to (1), for those who consider science to aim at truth, idealized representations must be de-idealized to achieve this aim. Potochnik (ibid., 92) points to Odenbaugh and Alexandrova (2011), who argue that without the removal of all idealizations (complete de-idealization) we have "no ground, beyond that of our background knowledge that informed the model, for claiming that the model specifies a causal relation" (765). Others, like Wimsatt (2007), argue that idealized "false" models can be used to produce "truer" theories without recourse to de-idealization. Nevertheless, Potochnik points out that "when an idealization is present merely for temporary reasons, there may be a scientific benefit to de-idealization when those reasons no longer obtain. But this is uncommon" (2017, 60).

Two interesting lines of inquiry lie here. The first relates to whether one ought to consider the epistemic aim of science to be truth (Potochnik ultimately argues that science's epistemic aim is not truth but understanding). For those who consider science to aim at truth, idealizations (and their deliberate falsehoods) are likely to be seen as problematic, and as such they may place higher value on de-idealization. We are not going to consider this larger issue related to the scientific pursuit of truth in this chapter. What we wish to explore is a second line of inquiry connected to the role de-idealization might play more generally in the development of astrophysical computer simulations. While de-idealization is often brought up as a path to "truer" representations, we wish to explore what other possible roles de-idealization might play in scientific practice. To do so, we now introduce the reader to some further discussion of de-idealization.

4.3 De-idealizations and Astrophysical Computer Simulations

Tarja Knuuttila and Mary Morgan (2019) point out that the implicit view in the idealization literature is that idealization is, or potentially is, some kind of reversible process. That is to say, constructing a model or simulation is done through a process that includes making simplifying assumptions, introducing abstractions, and idealizing. In fact, in the case of Galilean and minimalist idealizations, their conceptualizations crucially depend on the possibility and desirability of de-idealization (Knuuttila and Morgan 2019, 643–645). As discussed above, the capacity for de-idealization is seen by some as a desirable feature. Others see the ability of a model or simulation to be de-idealized as a central way to distinguish between different kinds of idealizations. Yet despite the importance of de-idealization, there is little existing literature discussing this reversal or its desirability.

Knuuttila and Morgan argue that, when analyzed, it is clear that de-idealization is not just a simple reversal process; rather, there are four categories of de-idealization processes: (i) recomposing, (ii) reformulating, (iii) concretizing, and (iv) situating. They consider these four to provide a framework for more effectively analyzing the de-idealization that occurs in the scientific practice of model construction. Through discussion of these four distinct processes (and relevant examples) they illustrate that de-idealization may often involve several of these strategies at once, show that models are not simply decomposable, and argue that philosophers of science must pay closer attention to modeling heuristics. Thus, there is no easy "adding back in" or reversal of idealizations, and the notion of idealization as a simple, reversible process in science may be, in itself, an idealization (ibid., 657). Let us look at each of these strategies a bit more closely.

The first strategy is de-idealization via recomposing—reconfiguration of the parts of the model with respect to the causal structure of the world. Recomposing is perhaps most akin to the idea of "adding back in" features to a model that were at one point idealized away, previously ignored, or controlled for. That is, the de-idealization process is often considered in terms of the reversal of various ceteris paribus conditions. But Knuuttila and Morgan (following Boumans 1999) consider there to be three processes of de-idealization (concerning ceteris absentibus factors, ceteris neglectis factors, and true ceteris paribus factors) which, upon reflection, are more complex than a simple "adding" of a factor, and thus require more extensive recomposing of the model in order to de-idealize.

We want to attend to the details of these three further, since we suspect that de-idealization via recomposing is how de-idealization is commonly conceptualized. The first is the de-idealization process of adding back in factors that are normally assumed absent yet do have an influence (ceteris absentibus). These are likely to be causal factors, which may be quite significant, and adding such causal factors will significantly alter the existing model. Here a model can only be recomposed with knowledge of the rest of its elements (ibid., 647). Instances of this in our case study occur most notably through the inclusion of stellar feedback in modeling single-galaxy structure and evolution: a factor absent in early simulations, but included later (i.e., in FIRE-based simulations). Second is the de-idealization process of adding back in factors normally assumed to carry so little weight that they can be neglected (ceteris neglectis). Here Knuuttila and Morgan are concerned that even if individually these factors can safely be dismissed, jointly they could make a significant difference to the model. In our case study, this might be modeling both dark matter and baryonic matter as point particles—for some research goals, as long as the overall mass is accurate and proportional, idealizing both as point particles may not matter. However, this is also something contemporary simulations aim to de-idealize. Third is the de-idealization process of adding back variability in those factors that are present but whose effect in the model is neutral because they are assumed to be held constant (actual ceteris paribus factors). Knuuttila and Morgan explain why ceteris paribus conditions are so central to modeling: they "smooth out variety to create stability and so enforce homogeneity" (ibid., 648). However, it is unclear how to reconstitute these variable factors back into models that have previously held them constant. This is in part because there might not be evidence of a real (de-idealized) value, "either because of absence of knowledge or because there are no possible equivalent deidealized values" (ibid.). While there may be challenges to finding values that de-idealize ceteris paribus conditions, Knuuttila and Morgan point out that such de-idealization might sometimes be relatively easy, as in "replacing average values by probability distributions" (ibid.). In our case study, an example might be the current neglect of dark energy (something that has yet to be "de-idealized").
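Knuuttila and Morgan's example of replacing average values by probability distributions can be made concrete with a toy sketch. This is our own construction; the power-law exponent and mass bounds below are illustrative placeholders, not a calibrated stellar initial mass function.

```python
import numpy as np

rng = np.random.default_rng(42)
n_stars = 10_000

# De-idealized treatment: star-particle masses drawn from a power-law
# distribution via inverse-CDF sampling (alpha, m_min, m_max are
# placeholder values chosen only for illustration).
alpha, m_min, m_max = 2.35, 0.1, 50.0
a = 1.0 - alpha
u = rng.uniform(size=n_stars)
masses_varied = (m_min**a + u * (m_max**a - m_min**a)) ** (1.0 / a)

# Idealized (ceteris paribus) treatment: every particle carries the same
# average mass, smoothing the population's variety into homogeneity.
masses_constant = np.full(n_stars, masses_varied.mean())

print(masses_constant.sum(), masses_varied.sum())  # same total mass...
print(masses_constant.max(), masses_varied.max())  # ...but only the varied
# population contains the rare massive stars that dominate feedback
```

The two populations carry the same bulk mass, but only the de-idealized one exhibits the variability (e.g., rare massive stars) that a held-constant average smooths away.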

Moving on to the two other categories: de-idealization as reformulating and as concretizing each deal with issues of model representation, focusing on two different sides of the abstractness of models—their symbolic and their conceptual formation. Knuuttila and Morgan acknowledge that there are many different modes of representation scientists can choose for their model or simulation in order to convey its content. Each representational choice can provide advantages, but can also limit what can be represented. De-idealization as reformulating addresses the mathematical formalism used in models. An example is the difference between representing a model algebraically versus geometrically (ibid.). What starts to hint at de-idealization not being possible by simple reversal in this context is that once choices related to mathematical modeling are made, they are not as readily visible as other modeling choices. Given the integral nature of the mathematical formulation, de-idealization would then require a reformulating of the model. Since the mathematical construction bears on how the relevant set of elements is integrated, such reformulation, in an attempt to de-idealize, runs the risk of the model falling apart. In our case study this may be akin to simulations choosing to idealize gravity in non-relativistic ways. For the case of individual galaxies, Newtonian dynamics is generally permitted, even though it is considered not to accurately represent the actual causal structure of the world.Footnote 8 To de-idealize this component would require revising the very mathematical formulation of the simulations.
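To see why such de-idealization is structural rather than a matter of tuning, consider a toy contrast of our own (not drawn from the ring galaxy codes): a Newtonian point-mass acceleration versus a Paczyński–Wiita pseudo-Newtonian form sometimes used to mimic relativistic behavior near compact objects. No parameter setting turns the first function into the second; the formula itself must be rewritten.

```python
import numpy as np

G = 1.0  # gravitational constant in code units (illustrative)
C = 1.0  # speed of light in code units (illustrative)

def accel_newtonian(m, r):
    """Newtonian point-mass acceleration magnitude: G m / r^2."""
    return G * m / r**2

def accel_paczynski_wiita(m, r):
    """Pseudo-Newtonian acceleration G m / (r - r_s)^2, with r_s the
    Schwarzschild radius; reaching this from the Newtonian form requires
    reformulating the expression, not adjusting G or m."""
    r_s = 2.0 * G * m / C**2
    return G * m / (r - r_s)**2

r = np.linspace(3.0, 10.0, 4)   # radii safely outside r_s = 2 in these units
print(accel_newtonian(1.0, r))
print(accel_paczynski_wiita(1.0, r))
```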

De-idealization as concretizing relates to the representational choices made by scientists that embed theoretical or conceptual commitments about either the system or elements of that system (ibid., 651). The de-idealization of these conceptual abstractions partly means making them operational; it also means making explicit assumptions about the definitions of those abstractions. How a system or its elements are concretized will depend on "specific purposes in theorizing or in application" (ibid., 651). It is key to note that though concretization is posed by Knuuttila and Morgan as a sort of de-idealization, they also point out that concretization does not necessarily mean making a given model or its elements more realistic or even truer to observations about the target system. Rather, concretized versions of conceptual abstractions will still be "wedded to their conceptual framing" (ibid., 652). This kind of de-idealization may be more prevalent in the economic cases that concern Knuuttila and Morgan, where decisions must be made about how to represent, say, a utility maximizer. In the context of our case study, it may be seen in choices about how to model dark matter (most choose non-interacting dark matter, yet this embeds a theoretical commitment).

The final category is de-idealization as situating, which addresses the applicability of models to particular situations, and is concerned not just with how a model can be de-idealized to represent some determinable target situation, but with how such a process enhances its use in theorizing (ibid., 646). In situating, scientists might use a model in many different but similar specific instances, using either statistical work or experimental work in the lab or field. There is no "general" de-idealization that takes place, but rather a different de-idealization for every different situation (such as time, place, or topic) (ibid., 656). In the context of our case study, this seems to be what takes place when specific parameter values (masses, velocities, angle of impact, and other observation-based data from actual target systems) are entered into simulations, as sketched below. These are instances of de-idealization tied to specifics, rather than a kind of de-idealization applied to the model or simulation as a whole.
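As a schematic of situating, one might imagine reusing the generic collision sketch from Sect. 8.3, replacing its default parameters with values tied to one particular system. Everything here is hypothetical: the module name nbody_sketch and all parameter values are placeholders, not measurements of any actual ring galaxy.

```python
# Situating the generic model in one specific case: target-specific
# inputs replace the generic defaults of the illustrative sketch above.
from nbody_sketch import run_collision  # hypothetical module wrapping the sketch

target_specific = dict(   # placeholder observation-derived parameters
    n_big=600,            # primary disk mass, encoded as particle count
    n_small=150,          # companion mass, encoded as particle count
    angle_deg=5.0,        # near head-on approach inferred from morphology
    speed=1.8,            # relative impact speed in code units
)

final_state = run_collision(**target_specific)
```

A different target system would get a different set of inputs: the de-idealization is tied to the specific case, not performed on the model as a whole.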

4.4 Idealizations, De-idealizations, and Epistemic Status of Simulations

Having detailed Knuuttila and Morgan's conceptions of de-idealization, and having provided some examples of each strategy, we turn back to our larger goal of examining the aims and roles of de-idealization in astrophysics. If we take seriously Knuuttila and Morgan's conceptualization of de-idealization in this more complex manner (i.e., not as simple reversal), we see that the use of simulations and code flexible enough to de-idealize their representations plays a specific role in reasoning about results in the context of astrophysics. It is in these de-idealizations that much of a simulation's epistemic power lies: they allow simulations to connect a vast array of independent astronomical observations and phenomena to cosmologists' more global arguments.

More specifically, part of what is being done in the case study by deploying GIZMO+FIRE simulations is adding back into the simulations features of the target system that had originally been idealized away and that might actually be difference-making. GIZMO+FIRE simulations allow for exploration of these structures through de-idealizing, namely via the inclusion of stellar feedback. On the scale of individual galaxies, this is a kind of difference-maker that matters for the specific kinds, and complexity, of questions that can be posed by scientists. Prior to GIZMO+FIRE, scientific questions were restricted to those about general structures or general causal features. But as we have discussed, stellar feedback is critical to how individual galaxies develop and evolve over time; stellar feedback is a difference-maker. In the process of de-idealizing these minimal causal models with more details, including stellar feedback, refinement in structures occurred. For instance, simulations now allow scientists to see what kinds of stars are present (i.e., young hot blue stars vs. older cooler red stars). These features emerge in the simulations only once one has the complexity of stellar feedback. Such features also allow scientists to gain more refined temporal information about the age of the ring galaxy, because stellar structure contains this information.

Knuuttila and Morgan conceptualize the "menu" of de-idealization processes as consisting of recomposing, reformulating, concretizing, and situating. We think there is embedded in these a useful set of process-based dimensions to de-idealization worth highlighting more explicitly than Knuuttila and Morgan have done. The first is de-idealizing within one context—the kind of de-idealization that occurs fitted to a specific case, data set, or target system. The idea here is that some simulations, such as the GIZMO+FIRE simulations, allow for a basic setup—say, two galaxies of specific masses colliding, represented as point particles. Once a simulator has successfully set up this simulation, they can implement a variety of de-idealization strategies (adding in stellar feedback (FIRE), specifying a subset of the particles as stars, gas, etc.). The second is de-idealization that occurs across multiple projects, in the way commonly demonstrated via robustness analysis, cross-comparison, and a plurality of "tests" most directly targeted toward identifying difference-makers. This can occur within one simulation instance (say, the set of 1976 point-particle simulations) or across the history of simulation progress investigating a specific target system (such as investigations of ring galaxies comparing the 1976 simulations to contemporary GIZMO+FIRE simulations and to knowledge regarding causal processes and relevant difference-makers). The third is a de-idealization process that occurs over time, via progress on tractability. This requires taking models or simulations not individually but as a set—an ongoing de-idealization through rebuilding simulations in their entirety. It allows for much more expansive de-idealization than the others, because it lets simulators revise or return to representational choices and idealizations introduced into the system.

Knuuttila and Morgan emphasize that idealized models embed a scientist's theoretical or conceptual commitments about either the system or elements of the system. Part of what one does in the process of de-idealization is think about how conceptual elements can be de-idealized in different ways, for different sites, and for different purposes. This is just the kind of story at play with simulation codes in astrophysics: a group of simulators will develop a code, and different research groups will put it to different purposes. In this process they de-idealize it for their context and goals. With different research groups doing this, the code receives a plurality of tests. If it works out well for most groups, that adds to the power of the simulation code, connecting a vast array of independent astronomical observations and phenomena to cosmologists' more global arguments made or embedded in the code. But if it fails to work through this process of de-idealizing, that highlights instances in which some critical representation has perhaps been overlooked, or is perhaps overly flexible. Being too flexible is a worry in the context of astrophysics, partly because astrophysics, and cosmology in particular, is a case in which scientists do not have a full understanding of the real-world target systems of investigation, and thus of what might even need to be in their model or simulation.

This brings us back to the central point raised by Potochnik regarding the connection between idealization, de-idealization, and representational choices. When considering the aims of idealization, or even of de-idealization, which causal pattern emerges depends on representational choices. Some features emerge because of scientists' representational choices—their choice to include more features in the simulations than were included originally. But there also seems to be a process-based evolution to simulations: they often start as an idealized minimal model that, over time, undergoes de-idealization and an "adding back in" of features that may or may not be relevant to producing the phenomenon. This "adding back in" is not simple reversal; as highlighted by Knuuttila and Morgan, it is some kind of recomposing, reformulating, concretizing, or situating, ultimately informed by simulators' interests (their goals for the simulation) connected to representational choices. Often, astrophysicists are aiming for representations closer to the actual target system, and consequently greater explanatory strength is added to their models. The process highlights a need to capture only that which might be causal.Footnote 9 The complexity achieved through de-idealization provides some of a simulation's inferential power, and attending to the way in which the de-idealization strategies are utilized and justified provides that epistemic support. Yet a background concern remains computational tractability: there is give and take in what can be included. This highlights a central tradeoff at the core of de-idealization, between computational tractability and the inclusion of those aspects of the target system that make a difference for the scientists' goals for the simulation.

5 Conclusion

In this chapter we have provided a survey of philosophical literature connected to idealization as it relates to (1) the kinds of idealizations that occur in science, (2) the aims of idealization in science, and (3) various strategies for de-idealization in science. All of these topics and taxonomies can be deployed to obtain a better understanding of the relationship between simulators' representational choices in developing their simulations, the challenges posed by these representations necessarily being incomplete and partial descriptions of target systems, and the epistemic claims those simulators might then be merited in making. Throughout this discussion we have drawn on a simple case study of collisional ring galaxy simulations to help illustrate how these topics might connect to and apply in the context of astrophysical computer simulations. To this extent, our analysis has only skimmed the surface. We hope this chapter might inspire others to take a deeper dive.

Finally, let us consider the central themes discussed in Sect. 8.4, and the role of idealization in the context of astrophysical computer simulations more generally. First and foremost, the connection between the kinds of idealizations deployed in the development of models and computer simulations relies on a non-trivial awareness of the aims or purposes for which the model or simulation is being constructed. That is, the justification for, or kind of, idealization deployed in turn captures aspects of what the scientist views as the goal of the model or simulation more generally. As Potochnik points out, the introduction of idealizations can go unchecked, but "unchecked" does not necessarily mean unjustified. Rather, the introduction of idealizations does not always come with explicit justification by the scientist. But should this justification be reflected on, there is a connection to the aims of advancing human understanding and uncovering causal patterns. In the context of our ring galaxy simulation case study, we see that at least these astrophysical simulations aim to more accurately represent target systems (e.g., collisional ring galaxies), with the hope of having a resource or tool (the simulations) to aid better understanding of the system. Second, when an aim of the scientist is the development of a further understanding of the system, this may serve as an impetus to de-idealize. A central point to appreciate from Knuuttila and Morgan is that these de-idealizations cannot be done as a simple reversal; they must happen via a variety of strategies. In turn, these strategies also reflect various aims and understanding goals. Four of these strategies are delineated by Knuuttila and Morgan, and we have highlighted the different process-dimensions also at play. These process-dimensions work to unpack more explicitly some of these aims and goals. Third, by attending to the aims and goals of introducing idealizations or of attempts to de-idealize, we do not see a one-to-one correspondence between the kinds of idealizations originally made and specific de-idealization strategies. Finally, though we made our case by way of the ring galaxy case study, we suspect our argument generalizes, at least partially, to other astrophysical contexts deploying idealizations.