1 Introduction

In 1983, Milgrom proposed Modified Newtonian Dynamics, or MOND, as an alternative to the dark matter hypothesis for explaining the observed flat galaxy rotation curves. The cosmology community has, in its majority, remained sceptical of the MONDian approach. Yet a small group of scientists keeps defending MOND as preferable to dark matter (now included in the cosmological concordance model, \(\Lambda \)CDM). Despite the moderate number of exponents of MOND in scientific contexts, most philosophical analyses of the debate either explicitly defend the MONDian approach (McGaugh, 2015; Merritt, 2017, 2020, 2021b; Milgrom, 2020) or take a more neutral stance (Massimi, 2018; Jacquart, 2021). We, in contrast, accept the currently limited appeal of MOND within the cosmology community as our starting point. Obviously, the way in which a majority of cosmologists views MOND in the absence of conclusive empirical testing does not amount to a final verdict on the hypothesis’ viability. Still, it will provide the basis for using MOND as a test case for philosophical views of scientific confirmation.

The extent to which exponents of MOND have used philosophical arguments to defend their theory is remarkable and quite unusual by physics standards. From a philosophical perspective, this raises interesting questions regarding the role of philosophical reasoning in science. An instructive starting point for assessing the status of philosophical reasoning in the given context is the following: given that MONDians take philosophical arguments to strongly favor their theory, why is the MONDian defense so ineffective in the eyes of most cosmologists?

Three answers are available in principle. First, it could be that the philosophical reasoning relied upon by MONDians is sound and epistemically significant and its deployment by MONDians is adequate. This would mean that a vast majority of cosmologists has been wrong on MOND in recent decades and has handled theory assessment in an inadequate way due to their disregard for the relevance of philosophical reasoning. It goes without saying that the burden of proof for this option would be much higher than for the following two. Second, the philosophical arguments relied upon by MONDians could be conceptually flawed, inapplicable to real world science, or epistemically irrelevant. If so, MONDians would have been misled by overrating the significance of philosophical arguments in the context of assessing the status of a physical theory. Third, the MONDian application of the philosophical tools they deploy could be flawed. We will argue that the third answer is the correct one. As will emerge in our discussion, the issue is more multi-faceted, however, than one might think at first glance.

So, what are these philosophical arguments commonly offered in defense of MOND? At face value, many of them appeal to a Popperian or Lakatosian perspective on theory assessment. Our first goal is to argue that a literal reading of Popper or Lakatos does not offer the basis that a coherent epistemic defense of MOND requires.

But not all is yet lost for the MONDian. We submit that the defense of MOND’s viability often reveals an argumentative structure that, unlike Popperian views, does offer a basis for epistemic analysis. Specifically, the arguments in support of MOND emulate the lines of reasoning of meta-empirical theory assessment (Dawid, 2013). It is our second goal to show how the MONDian philosophical reasoning structurally amounts to meta-empirical theory assessment. This then gives us the basis to reevaluate the defense of MOND, now according to the framework of meta-empirical theory assessment. Our third goal, finally, is to show that the defense of MOND, though structurally amounting to meta-empirical assessment, is inadequate as proper meta-empirical assessment. For proponents of meta-empirical theory assessment, it will be instructive to understand why the MONDian arguments are epistemically deficient, and what this implies for applications of meta-empirical theory assessment in practice.

Here is how the paper will go. We begin with a brief outline of the evidence in favor of dark matter and MOND (Sect. 2). In Sect. 3, we introduce the MONDian defense as it is offered by proponents of MOND. We argue that it fails from the MONDian’s own philosophical perspective in Sect. 4. Instead, we argue in Sect. 5 that the most plausible reconstruction of the MONDian defense is in terms of meta-empirical theory assessment: we identify two arguments supporting a ‘No Alternatives Argument’ and one supporting an ‘Unexpected Explanation Argument’. But, as argued in Sect. 6, there are several reasons why this defense of MOND still fails.

2 MOND, dark matter and galactic scales

The first introduction of dark matter is often traced back to Zwicky’s (2009) work in the 1930s, but the dark matter hypothesis only became broadly accepted in the second half of the twentieth century.Footnote 1 Currently, the dark matter hypothesis is a central tenet of the concordance model \(\Lambda \)CDM and it is supported by observations on cosmological, cluster, and galaxy scales.

On cosmological scales,Footnote 2 evidence for the existence of non-baryonic dark matter comes from observations of the Cosmic Microwave Background (CMB), Baryonic Acoustic Oscillations (BAO), primordial element abundances, and large-scale structure. First, since dark matter and baryonic matter interact differently with radiation, their presence in the early universe has different effects on the power spectrum of the CMB anisotropies. In particular, the second and third peaks put tight constraints on the baryon density \(\Omega _b\) and the dark matter density \(\Omega _{DM}\), respectively (Aghanim et al., 2020). Second, BAOs are remnants of sound waves in the early universe, that is, oscillations in the matter density due to the counteracting influences of radiation pressure and gravitational collapse. They are detectable as a preference for galaxies to form at separations set by the sound horizon scale rather than at other length scales (Eisenstein et al., 2005). Again, dark matter and baryonic matter have a different influence on BAOs: where baryonic matter is subject both to gravitational collapse and to outward radiation pressure, dark matter only contributes to gravitational collapse. The BAO amplitude is too high to be generated by baryonic matter alone, thus, again, providing evidence for non-baryonic dark matter. Third, Big Bang Nucleosynthesis (BBN) is responsible for the formation of the lightest elements in the early universe (it is, for instance, the main source of deuterium). The primordial abundances depend on the baryon-to-photon ratio in the early universe; by using photon density estimates from the CMB, primordial element abundances can be used to constrain the baryon density. The baryon density estimated this way falls well short of other determinations of the matter density in the universe (which suggest a flat geometry). This is taken to imply the necessity of non-baryonic dark matter (Reeves et al., 1973; Schramm, 1993). Fourth, the large-scale structure in the universe is seeded by the density fluctuations that left an imprint on the CMB. Given the size of those density fluctuations, however, baryonic matter alone would be insufficient to account for the amount of structure formed through gravitational clustering that is observed in the universe today. Additional gravitating matter is needed, that is, dark matter (Blumenthal et al., 1984).

On cluster scales, Zwicky’s observations of the velocity dispersion of galaxies in the Coma Cluster were already referred to. More recently, the Bullet Cluster has been touted as “direct empirical evidence for the existence of dark matter” (Clowe et al., 2006). The Bullet Cluster is a merging event between two galaxy clusters. Gravitational lensing reveals that the gravitational potential is displaced compared to the distribution of baryonic matter, determined through X-ray maps. The displacement can be explained by the presence of additional non-baryonic matter.Footnote 3

Finally, on galactic scales, the prime source of evidence for dark matter are the galaxy rotation curves (Rubin & Ford, 1970). Based on Newtonian dynamics, it would be expected that the rotational velocity of stars around the galactic center would drop off with increased distance away from the galactic center. Observations reveal, however, that the rotational velocity remains more or less constant (the curves of rotational velocity as a function of distance from the center are, in other words, ‘flat’). One way of explaining these flat rotation curves is by introducing additional, ‘invisible’ matter affecting the galaxy’s rotation.

The galaxy rotation curves also gave rise to an alternative hypothesis: Modified Newtonian Dynamics, or MOND (Milgrom, 1983a, b, c; Bekenstein & Milgrom, 1984). The reasoning behind this alternative is quite straightforward. First, there is the concern about the ad hoc-nature of the dark matter hypothesis to explain flat galaxy rotation curves:

[I]n order to explain the observations in the framework of [the Hidden Mass Hypothesis (HMH)], one finds it necessary to make a large number of ad hoc assumptions concerning the nature of the hidden mass and its distribution in space. The large amounts of data on galaxies and galaxy systems which have been collected to date, and in particular the various regularities which have emerged from these data (each requiring new ad hoc assumptions about the hidden mass) make, I believe, the time ripe for considering alternatives to the HMH. (Milgrom, 1983a, p. 365)

The obvious alternative to changing the mass distribution is to change the laws governing its dynamics:

It must have occurred to many that there may, in fact, not be much hidden mass in the universe and that the dynamical masses determined on the basis of [the virial relation \(V^2 = MGr^{-1}\)] are gross overestimates of the true gravitational masses. (Milgrom, 1983a, p. 365)

In other words, as long as there is no definite evidence for the missing mass hypothesis, why assume that the deviations from the predictions of our theories of gravity (combined with mass estimates) are due to faulty mass estimates rather than to theories of gravity in need of revision?

The MOND hypothesis did not take off, and has largely been discredited by the majority of the cosmology community today. This rejection is based on a combination of factors: the lack of connection between MOND and standard physics (the lack of a relativistic version of MOND is particularly glaring), and the related failure to account for cluster- and cosmology-scale observations (where dark matter does succeed).

MOND fails to successfully describe the observed features of galaxy clusters. Other evidence, such as the cosmic microwave background anisotropies and large scale structure, are not generally able to be addressed by MOND, as MOND represents a phenomenological modification of Newtonian dynamics and thus is not applicable to questions addressed by general relativity, such as the expansion history of the universe. (Hooper, 2009, p. 3)

Of course, since modifying the dynamics and changing the mass estimates together exhaust the space of possibilities, it follows from the failure of MOND that accounting for the galaxy rotation curves requires dark matter.

Bekenstein’s TeVeS was long considered a plausible candidate for a relativistic version of MOND, although it was unclear how TeVeS could account for the Bullet Cluster (Hooper, 2009, p. 3). However, TeVeS has recently been rejected based on LIGO’s observations of the neutron star merger and the lack of time delay between gravitational and electromagnetic signals (see (Boran, Desai, Kahya, & Woodard, 2018) for the original scientific discussion, and (Abelson, 2022) for a philosophical analysis). Skordis and Zlosnik (2020)’s RMOND supposedly solves these problems.

One of the more charitable assessments of the MOND hypothesis comes from Jim Peebles:

On the length scales of cosmology, \(c/H_0 \approx 4000 Mpc\), the demanding tests [...] make a compelling case that general relativity with the hypothetical nonbaryonic dark matter is a good approximation to reality. If this is accepted, as most have done, why is Milgrom’s alternative theory so successful on the scale of galaxies? The community assessment is that this is an accident of the complexity of the application of standard physics to galaxy formation. Deciding whether we have adequate physics for analyses of the structures of galaxies [...] or whether we have missed something interesting, calls for more data analyzed in better ways, as usual. Meanwhile the community decision is appropriate: work with standard physics and the hypothetical subluminal/nonbaryonic matter applied to a cosmology that fits demanding tests–until or unless we run into trouble. (2020, p. 264)

It is telling that Peebles emphasizes the appropriateness of working with standard physics. Peebles continues to recognize the appeal of the phenomenological regularities on galactic scales identified by MOND (cf. infra), but accepts that, given the empirical support for dark matter in combination with standard physics, it is useful to continue this line of research “until or unless we run into trouble”.

Whether or not dark matter is merely a useful working hypothesis, a small group of staunch defenders of MOND over dark matter remains. As Massimi (2018) explains, their criticism is primarily focused on dark matter’s alleged failure to account for certain observations on galactic scales. These phenomena, collectively known as ‘MOND phenomenology’, are, apparently, naturally predicted by MOND and widely confirmed by observations.

Here, we will briefly explain the MONDian approach as it was first proposed by Milgrom, as well as the predictions that follow from that original proposal and that, following Merritt (2020), are touted as the most impressive empirical successes of MOND, aside from galaxy rotation curves: the Baryonic Tully-Fisher Relation (BTFR) and the Mass Discrepancy-Acceleration Relation (MDAR). According to defenders of MOND, the successes of MOND extend far beyond these three.Footnote 4 However, the argumentative structure tends to take the same form for all cases, and the three considered here are also recognized by defenders of \(\Lambda \)CDM as phenomena potentially in need of explanation (cf. Bullock and Boylan-Kolchin, 2017). We can therefore use galaxy rotation curves, BTFR and MDAR as a representative sample of MOND phenomenology without this affecting our philosophical argument.

Milgrom (1983a) proposed a modification of Newton’s second law for the low accelerations characteristic of galaxies:

$$\begin{aligned} {\varvec{F}} = M_{gal} \mu \left( a/a_{0}\right) {\varvec{a}} \end{aligned}$$
(1)

where \(M_{gal}\) is the baryonic mass of the galaxy (stars and gas), \(a_0\) is Milgrom’s constant, and \(\mu \) is an interpolation function satisfying \(\mu \left( x \gg 1 \right) \approx 1\) and \(\mu \left( x \ll 1 \right) \approx x\). For sufficiently large accelerations, Newton’s second law is recovered. The transition between the MONDian and the Newtonian regime is determined by the value of \(a_0\) (based on observations, Merritt (2020, p. 61) suggests \(a_{0} \approx 1.2 \times 10^{-10} \, \mathrm{m \, s^{-2}}\)).
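
To make the two regimes concrete, here is a minimal sketch in Python. It uses the so-called ‘simple’ interpolation function \(\mu (x) = x/(1+x)\) purely as an illustrative choice; Milgrom’s proposal fixes only the asymptotic behaviour of \(\mu \), not its precise form.

```python
A0 = 1.2e-10  # Milgrom's constant in m s^-2 (value quoted from Merritt 2020)

def mu_simple(x):
    """Illustrative 'simple' interpolation function mu(x) = x / (1 + x).

    Any mu with mu(x >> 1) ~ 1 and mu(x << 1) ~ x satisfies Milgrom's proposal.
    """
    return x / (1.0 + x)

# Newtonian regime (a >> a0): mu -> 1, so Eq. 1 reduces to F ~ M_gal * a
print(mu_simple(1e3))   # ~0.999
# Deep-MOND regime (a << a0): mu -> x, so Eq. 1 gives F ~ M_gal * a^2 / a0
print(mu_simple(1e-3))  # ~0.000999, i.e. approximately x itself
```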

For sufficiently small accelerations, the relation between the Newtonian gravitational acceleration \({\varvec{g}}_N\) and the actual acceleration \({\varvec{a}}\) of a test particle in a symmetric and stationary gravitational system becomes:

$$\begin{aligned} {\varvec{g}}_N \approx \left( a/a_{0}\right) {\varvec{a}} \end{aligned}$$
(2)

And for a system at distance R from the galactic center:

$$\begin{aligned} {\varvec{g}}_N = \frac{G M_{gal}}{R^2} {\varvec{e}}_R \end{aligned}$$
(3)

Combining Eqs. 2 and 3 with the usual formula for the centripetal acceleration of a test particle in uniform circular motion, \(a = V^2/R\), we get:

$$\begin{aligned} \frac{a^2}{a_0}&= g_N \end{aligned}$$
(4)
$$\begin{aligned} \frac{1}{a_0} \left( \frac{V^2}{R}\right) ^2&= \frac{G M_{gal}}{R^2} \end{aligned}$$
(5)
$$\begin{aligned} V^4&= G M_{gal} a_0 \equiv V_\infty ^4 \end{aligned}$$
(6)

This last equation recovers, as expected, the flat galaxy rotation curves: sufficiently far from the galactic center, the rotational speed shows no dependence on the distance from the galactic center.
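
As a rough numerical illustration of this flattening (our own sketch, with an assumed baryonic mass of about \(6 \times 10^{10}\) solar masses treated as a point mass, and the illustrative ‘simple’ interpolation function from above), one can solve \(\mu (a/a_0) \, a = g_N(R)\) for the circular speed at several radii and compare with the asymptotic value implied by Eq. 6:

```python
import numpy as np

G = 6.674e-11          # m^3 kg^-1 s^-2
A0 = 1.2e-10           # m s^-2, Milgrom's constant (Merritt 2020)
M_GAL = 6e10 * 2.0e30  # kg; illustrative baryonic mass (~6e10 solar masses)
KPC = 3.086e19         # m

def v_circ(R):
    """Circular speed from mu(a/a0) a = g_N with the 'simple' mu(x) = x/(1+x).

    With this mu the relation reduces to a^2/(a0 + a) = g_N, a quadratic in a.
    """
    g_N = G * M_GAL / R**2
    a = 0.5 * (g_N + np.sqrt(g_N**2 + 4.0 * g_N * A0))
    return np.sqrt(a * R)

for r_kpc in (5, 20, 50, 100):
    print(f"{r_kpc:3d} kpc: {v_circ(r_kpc * KPC) / 1e3:.0f} km/s")

# Asymptotic speed from Eq. 6: V_inf = (G M_gal a0)^(1/4) -- no R dependence
print(f"V_inf:   {(G * M_GAL * A0) ** 0.25 / 1e3:.0f} km/s")
```

The speeds at large radii level off near the asymptotic value of order 180 km/s; the exact numbers depend entirely on the assumed mass and interpolation function.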

From this last equation, we can also read off the Baryonic Tully-Fisher Relation, a tight and universal scaling relation between the asymptotic rotational speed \(V_\infty \) and the total baryonic mass of a disk galaxy \(M_{gal}\). Milgrom (1983b) derived the ordinary Tully-Fisher relation and described it as “a major prediction and an absolute relation independent of galaxy type or any other property of the galaxy” (377). McGaugh et al. (2000) and McGaugh (2012) showed that the scaling relation was tighter using the total baryonic mass rather than galaxy luminosities. BTFR is considered a surprising and interesting prediction of MOND because it holds between the baryonic mass of the galaxy and the asymptotic velocity, which, on the dark matter picture, scales with the total (baryonic + dark) mass. Non-gravitational interactions between dark and baryonic matter are supposed to be limited, however, so such a coincidentally tight scaling between baryonic and total mass seems too good to be true.
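
On the MOND reading, the BTFR is simply Eq. 6 rearranged: \(M_{gal} = V_\infty ^4 / (G a_0)\), a log-log relation with fixed slope 4 and a normalization set entirely by \(G a_0\). A minimal numerical check of that slope, using the same illustrative constants as above:

```python
G, A0 = 6.674e-11, 1.2e-10   # SI units; a0 as quoted from Merritt (2020)
M_SUN = 2.0e30               # kg

# Eq. 6 rearranged: M_gal = V_inf^4 / (G a0)
for v_kms in (80, 160, 320):
    v = v_kms * 1e3
    m_gal = v**4 / (G * A0) / M_SUN
    print(f"V_inf = {v_kms} km/s  ->  M_gal ~ {m_gal:.1e} solar masses")
# Doubling V_inf multiplies M_gal by 16: slope 4 in log-log space.
```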

Another consequence of Milgrom’s modifications is the Mass Discrepancy-Acceleration Relation (more recently reformulated as the Radial Acceleration Relation; see Merritt, 2020, p. 68). MDAR is a tight correlation between the so-called mass discrepancy \(a(R)/g_N(R)\) and the centripetal acceleration a(R). \(a(R)/g_N(R)\) is called the mass discrepancy because it can be re-described (through \(\left( V(R)/V_N(R)\right) ^2\), where V(R) is the observed velocity and \(V_N(R)\) the expected velocity based on Newtonian dynamics) as the ratio between the galaxy mass inferred observationally from galaxy rotation curves (which, according to standard model cosmology, would include dark matter mass) and the galaxy mass inferred from observations of stars and gas. Like BTFR, the reason why MOND-defenders take MDAR as an ‘anomaly’ for the dark matter hypothesis is that, under the dark matter hypothesis, MDAR indicates a tight correlation between the dark matter distribution in the galactic halo and the baryonic matter distribution contained mostly in the galactic disk, without a clear indication why such a tight correlation would be expected.
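
In MOND, the mass discrepancy is fixed by the acceleration alone: from \(\mu (a/a_0)\, a = g_N\) it follows that \(a/g_N = 1/\mu (a/a_0)\). A brief sketch of the predicted relation, again using the illustrative ‘simple’ interpolation function:

```python
A0 = 1.2e-10  # m s^-2, Milgrom's constant (Merritt 2020)

def mass_discrepancy(a):
    """Predicted mass discrepancy a/g_N = 1/mu(a/a0) for mu(x) = x/(1+x)."""
    x = a / A0
    return (1.0 + x) / x

for a in (1e-8, 1e-10, 1e-12):  # high, transitional, low acceleration in m s^-2
    print(f"a = {a:.0e}:  a/g_N = {mass_discrepancy(a):.1f}")
# ~1 at high accelerations (no discrepancy), rising steeply in the deep-MOND
# regime: one universal curve, regardless of the individual galaxy.
```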

Indeed, as Bullock and Boylan-Kolchin (2017, p. 368) recognize in a review of \(\Lambda \)CDM on galactic scales, the “real challenge [...] is to understand how galaxies can have so much diversity in their rotation curve shapes compared with naïve \(\Lambda \)CDM expectations while also yielding tight correlations with baryonic content”. The ‘naïve \(\Lambda \)CDM expectations’ are that galaxies with similar maximal rotational velocities would have similar central densities. Observations reveal, however, that there is a large scatter in central densities among galaxies with the same maximal rotational velocity. One potential way to account for the observations would be to consider feedback processes and gastrophysics. However, Bullock and Boylan-Kolchin recognize a challenge for the feedback-explanation: “[t]he fact that there is a tight correlation with baryonic mass and not stellar mass (which presumably correlates more closely with total feedback energy) makes the question all the more interesting” (ibid.).

To conclude our overview of successes and failures of dark matter and MOND, we want to draw attention to one important point. Some defenders of \(\Lambda \)CDM who reject MOND do recognize that \(\Lambda \)CDM ultimately needs to succeed in recovering the MOND phenomenology in some way or other. This is evidenced by the discussion by Bullock and Boylan-Kolchin (2017), but also by recent attempts to recover MOND phenomenology with hydrodynamical simulations implementing \(\Lambda \)CDM [see e.g. (Vogelsberger et al., 2014; Glowacki et al., 2020)]. Of course, part of the disagreement with defenders of MOND is what an adequate recovery could look like.

3 The MONDian philosophical defense

With the successes of dark matter and MOND on the table, we are now ready to start assessing the philosophical arguments offered by defenders of MOND. Defenders of MOND often frame their defense in terms of Popperian or Lakatosian demarcation criteria. On the one hand, they aim to show that \(\Lambda \)CDM, and particularly the (cold) dark matter hypothesis, is somehow ‘unscientific’ according to Popper’s falsifiability demarcation criterion.Footnote 5 On the other hand, they argue that MOND does provide a viable theory candidate for physics on galactic scales according to both Popperian and Lakatosian demarcation criteria.

The unfalsifiability charge against dark matter comprises two separate concerns. First, at the level of \(\Lambda \)CDM, there is the concern that predictions from the model can only be derived from computer simulations. And since hydrodynamic simulations come with a large number of free parameters that allow for tuning, this makes the concordance model as a whole, including its positing of large amounts of dark matter, unfalsifiable and therefore unscientific.

For instance, Merritt (2017, 2020, 2021b) argues that dark matter (and dark energy, for that matter), is an example of a Popperian ‘conventionalist stratagem’. Worse, \(\Lambda \)CDM has allegedly become unfalsifiable since the introduction of the dark matter- and dark energy-hypotheses:

When attempting to reproduce the rotation curve [...] of an individual galaxy, the parameters describing the putative dark-matter halo are typically varied, arbitrarily, in order to give the best fit to the data. In this limited sense, the dark matter hypothesis can be said to be non-falsifiable, since essentially any observed rotation curve can be fit by adjusting the assumed dark matter density appropriately. (Merritt, 2017, p. 44)

The concern is that simulations can be tuned at will in order to reproduce galaxy-scale observations, without these simulations therefore having physical meaning.Footnote 6

Second, at the level of the dark matter hypothesis itself, there is the concern that, although the existence of a specific type of dark matter particle might be confirmable, the dark matter hypothesis itself is not falsifiable. This part of the argument is sometimes strengthened by drawing historical parallels with the aether hypothesis:

If the concordance cosmology is correct, [non-baryonic CDM must exist]. Contrawise, the non-existence of CDM falsifies the concordance cosmology. The situation is somewhat reminiscent of that of aether in the nineteenth century. Given what we know of cosmology today, non-baryonic dark matter must exist. But does it? We know there must be new physics, but of what kind? The existence of the aether was at least falsifiable. It is not obvious that CDM meets this standard, and we teeter on the brink of the definition of science. The existence of CDM is confirmable: a clear laboratory detection of appropriate WIMPs would suffice. However, the existence of dark matter is not falsifiable. If we fail to find WIMPs, maybe it is axions. If not axions, we are free to invent another form of dark matter—and another, and another, and so on, ad infinitum. CDM was invented for very good reasons. But if this hypothesis happens to be wrong, how do we tell? (S. McGaugh, 2015, p. 17)

Interestingly, this conclusion comes at the end of a paper that characterizes \(\Lambda \)CDM and MOND as two different paradigms, successful on cosmological and galactic scales, respectively. Despite initially buying into a Kuhnian characterization of the situation, McGaugh still turns to Popper in the end.

Note that neither Merritt nor McGaugh takes the second type of unfalsifiability of the dark matter hypothesis to be irremediable. They admit that dark matter could be falsifiable if physicists were to settle on one dark matter particle model, such that they could derive specific predictions about its detection. If the particle could not be detected in the regime where the particle model predicts it, the entire dark matter hypothesis would subsequently be falsified. It seems fair to say that this approach to defending the scientificality of dark matter could only succeed if the state of the field underwent substantial changes. None of the current particle candidates are supported sufficiently to permit being adopted as ‘the’ dark matter particle (especially now that the WIMP hypothesis has come under increasing pressure). Indeed, the current reasoning in dark matter research appears to be exactly reversed: no particle model will be adopted without a convincing detection. This road to scientificality for dark matter is a nonstarter.

But perhaps there is another reply available to proponents of dark matter: why does the concern about ad hoc-ness not extend to MOND? This is a genuine worry. MOND itself was introduced as a response to the anomalous galaxy rotation curves. Why, then, should MOND not be rejected as a ‘conventionalist stratagem’? Merritt (2020, p. x) recognizes this concern, but argues that, despite being similarly motivated when introduced, the two hypotheses have seen quite different developments: where dark matter has failed to make successful predictions (particularly with regard to the dark matter particle properties), MOND has made various risky but successful predictions. MOND, in other words, has developed into a Lakatosian progressive research program.

Note that [the prediction of the Baryonic Tully-Fischer relation] of Milgrom’s is refutable: it could, in principle, have been found to be incorrect. By contrast, the standard-model prediction that dark matter particles are passing through an Earth-bound laboratory is not refutable, since nothing whatsoever is known about the properties of the putative dark particles. A failure to detect them might simply mean that their cross section for interaction with normal matter is very small (and that is, in fact, the explanation that standard-model cosmologists currently promote). On these grounds, as well, Milgrom’s hypothesis ‘wins’: it is epistemically the preferred explanation. (Merritt, 2020, pp. xi–xii)

Thus, although MOND initially was introduced in an ad hoc manner to account for galaxy rotation curves, MOND has redeemed itself by making novel predictions like BTFR or MDAR.

This development of MOND into a progressive research program not only counters the threat of ad hoc-ness for MOND, it also forms the basis for a positive argument in favor of MOND. In a book-length reconstruction of MOND as a progressive research program, Merritt (2020) identifies four theories in the MOND research program and, for each theory, its novel predictions and whether the predictions have been corroborated by observation. In the conclusion, Merritt writes about the original formulation of the theory:

Given the background knowledge that existed c. 1980 (e.g. the known, asymptotic flatness of galaxy rotation curves), the proposal that the kinematics of any disk galaxy could be predicted, with high accuracy, from the observed distribution of normal matter alone was amazingly bold: rather as if one had predicted that the gravitational field of a planet is determined by (say) its spin angular momentum, or its surface area. There was simply no basis, under the standard model, for believing any such thing, and yet it turned out to be correct. The prediction of a single, universal acceleration scale (\(a_0\)) was equally bold and its experimental confirmation equally impressive. (Merritt, 2020, p. 229)

Remarkably, Merritt considers later versions of MOND, including attempts at a relativistic version of MOND, to be less successful theories in the research program (the aforementioned RMOND seems to be an exception to this rule; see Merritt, 2021a). Nonetheless, Merritt touts the remarkable explanatory success of the original MOND proposal as sufficient for MOND’s progressiveness.

4 An initial assessment of the MONDian defense

The previous section introduced a Popperian rejection and Lakatosian defense of dark matter and MOND, respectively. Such arguments cannot succeed on a strict reading of either Popper or Lakatos, however: Popperian falsificationism rejects the epistemic significance of theory assessment. Popper’s point is normative: as scientists, we are only allowed to construct falsifiable theories.Footnote 7 As long as we don’t, we don’t have a scientific theory. Lakatos avoids Popper’s strong normative declarations and presents a descriptive, backward-looking characterization of the scientific research process.Footnote 8 Just like Popper’s, however, his analysis refrains from specifying any epistemic implications for ongoing scientific debates.Footnote 9

There is no doubt, however, that defenders of MOND (just like their counterparts who endorse \(\Lambda \)CDM) do address the question of their theories’ epistemic status. The defenders of \(\Lambda \)CDM endorse their theory on epistemic grounds. The MONDian response to those claims does not amount to making the canonical Popperian point that credence in a theory is, as a matter of principle, always misguided. Rather, as we have shown above, defenders of MOND respond by specifically questioning the epistemic credentials of \(\Lambda \)CDM and stressing those of MOND instead. Merritt (2020, p. xii) explicitly refers to MOND as being “epistemically the preferred explanation”. The philosophical defense of MOND thus aims to make an epistemic argument in favor of MOND based on a philosophical approach that forecloses such epistemic reasoning. This suggests that the philosophical tools deployed by defenders of MOND are ill-chosen or at least insufficient. They provide no basis for the epistemic claims the exponents of MOND are aiming for.

At this point, it should not go unnoticed that Popperian and Lakatosian views on theory assessment look a little dated today from a philosophy of science point of view. Most contemporary philosophical discussions of theory assessment do allow for epistemic evaluation of some kind. The most influential perspective in recent decades has been Bayesian confirmation theory, which models the updating of credence in a theory based on evidence. In this light, it is natural to ask the following question: is it possible to make sense of MONDian reasoning by representing the Popperian elements of their argumentation in a Bayesian framework that adds the epistemic level of analysis they aim to address? In other words, is there an implicit Bayesian substructure to their explicitly Popperian reasoning?

No such representation can be established by simply pointing at Bayesian updating under empirical data. The philosophical lines of reasoning offered in (McGaugh, 2015; Merritt, 2017, 2020, 2021b; Milgrom, 2020) go substantially beyond straightforward attestations of empirical confirmation. After all, as was shown in Sect. 2, the dark matter hypothesis has been empirically confirmed on multiple scales, with a broad variety of empirical probes. Yet, this empirical support is still insufficient, according to defenders of MOND.

There is a broader Bayesian take on theory assessment, however, that looks like a plausible framework for MONDian arguments in favor of their theory. Reading the defense of MOND as a form of meta-empirical theory assessment (Dawid, 2013, 2022), the arguments offered in favor of MOND amount to the thesis that only MOND can properly account for MOND phenomenology, suggesting that indicators of strong limitations to scientific underdetermination increase the credence in MOND. In the next section, we will argue that a detailed analysis of MONDian reasoning does indeed reveal argumentative strategies that are highly reminiscent of meta-empirical theory assessment.

5 The MONDian defense as an instance of meta-empirical theory assessment

Before looking into the MOND arguments in detail from that angle, we need to briefly introduce the main ideas behind meta-empirical theory assessment. Meta-empirical theory confirmation (Dawid, 2013, 2019, 2022) claims that a significant degree of trust in a theory’s viability can be generated in the absence of empirical confirmation based on certain meta-level observations. These observations are not of the kind that can be predicted by the theory under scrutiny. Therefore, they cannot empirically confirm the theory. But, it is argued, they do change the credence that the theory is viable in an indirect way.

While the role of the described meta-level observations can be seen most clearly in cases where theories are trusted in the absence of empirical confirmation, Dawid (2018) argues that their significance is not confined to the assessment of theories that lack empirical support; the trust invested in an empirically confirmed theory must be based on meta-empirical considerations as well. In straightforward cases of empirical testing, those considerations are not made explicit and remain uncontroversial. In cases where the nature and significance of empirical support is more difficult to evaluate, the meta-empirical aspects are sometimes addressed explicitly. To cover cases where meta-level observations are deployed in support of empirical confirmation, the broader concept of meta-empirical theory assessment has been introduced (Dawid, 2021). It will be our claim that discussing the epistemic significance of the MONDian’s reasoning in support of their theory requires an explication of these meta-empirical aspects.

According to meta-empirical assessment, there are specific characteristics of the research process that can be expected if very few or no possible alternatives to the given theory exist, but that are very improbable if there are many alternatives. Observing those characteristics can therefore serve as an indicator that the number of possible alternatives to the theory under scrutiny is very small or zero, or, in other words, that scientific underdetermination is strongly limited. If there are very few or no scientific alternatives to the theory, and if one assumes that a viable scientific theory in the given context exists at all, chances are good that the scientific theory one has found is actually viable. This conclusion provides an epistemic basis for trusting the given theory.

What are these meta-level characteristics? Three meta-level observations support three specific arguments of meta-empirical assessment: (i) The no alternatives argument (NAA): Scientists tend to trust a theory if they observe that, despite considerable efforts, no alternative theory that can account for the corresponding empirical regime is forthcoming. (ii) The unexpected explanation argument (UEA): Scientists tend to trust a theory if they observe that the theory turns out to be capable of explaining significantly more than what it was built to explain. (iii) The meta-inductive argument (MIA): Scientists tend to have increased trust in a theory that fulfills the first or the first two criteria if it is their understanding that previous theories in their research field that satisfied those criteria have usually turned out empirically successful once tested.
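
To make the inference pattern behind the NAA explicit, here is a minimal Bayesian toy model; the numbers are purely illustrative and are not drawn from the MOND debate. The observation ‘no alternative found despite serious effort’ is more likely if few alternatives exist, so conditioning on it shifts credence towards scenarios with few alternatives and, given the assumption that some viable theory exists, towards the viability of the one known theory.

```python
# Toy Bayesian rendering of the No Alternatives Argument (illustrative numbers only).
prior = {"few alternatives": 0.3, "many alternatives": 0.7}

# Likelihood of observing "no alternative found despite considerable effort":
p_no_alt = {"few alternatives": 0.9, "many alternatives": 0.2}

# P(known theory H is viable | scenario), assuming some viable theory exists:
p_viable = {"few alternatives": 0.8, "many alternatives": 0.2}

evidence = sum(prior[s] * p_no_alt[s] for s in prior)
posterior = {s: prior[s] * p_no_alt[s] / evidence for s in prior}

viability_before = sum(prior[s] * p_viable[s] for s in prior)
viability_after = sum(posterior[s] * p_viable[s] for s in posterior)
print(f"P(H viable): {viability_before:.2f} -> {viability_after:.2f}")  # 0.38 -> 0.60
```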

Importantly, none of the three can carry a high degree of significance in isolation. Each of the three meta-empirical observations in isolation could be explained without giving reason to assume a small number of possible alternatives to the theory to which meta-empirical assessment is applied. The fact that scientists don’t find alternatives could be explained by their limited capability or diligence. Unexpected explanation could be explained by the viability of some deeper underlying principle that is embedded in the theory under scrutiny and itself has no possible alternatives, rather than by the lack of alternatives to the theory under scrutiny itself. The fact that there has been a tendency of predictive success among theories that in some respect were similar to the theory under scrutiny could be countered by pointing at dissimilarities between those theories and the theory under scrutiny.

Note, however, that each specific argument of meta-empirical assessment requires a different alternative explanation to be countered. The force of these alternative explanations can be considerably weakened if more than one meta-empirical argument can be formulated. For example, a strong tendency of predictive success among theories that have been supported by a NAA renders the hypothesis that scientists are incapable of finding the possible theories less plausible. Thus, in order to be significant, meta-empirical assessment needs to be based on at least two if not all three arguments in conjunction. In the context of the MONDian defense, we identify two possible arguments in support of a NAA for MOND and one in support of a UEA for MOND.Footnote 10

5.1 MOND’s attempt at a no alternatives argument, part 1

Unlike other allegedFootnote 11 instances of successful meta-empirical confirmation or meta-empirical assessment in science (the Higgs particle before its discovery, string theory, inflation), MOND has a rival to reckon with that is supported by a majority in the discipline. Any NAA in favor of MOND can therefore only be effective if it can be argued that dark matter, despite its popularity, cannot constitute an adequate rival theory for galaxy phenomenology.Footnote 12 One strategy is to argue that \(\Lambda \)CDM is just unscientific given some acceptable demarcation criterion. We already rejected the strictly Popperian or Lakatosian readings of the MONDian arguments as being at odds with the goal of the defenders of MOND. Here, we read them through the lens of meta-empirical assessment, that is, with explicit epistemic import.

As described in Sect. 3, defenders of MOND aim to argue that dark matter cannot be a properly scientific rival to MOND because it is unfalsifiable, both at the level of \(\Lambda \)CDM and at the level of dark matter candidates. The rejection of dark matter as unscientific provides the foundation for a no alternatives argument in favor of MOND. Dark matter and MOND are currently the only two available theories of galaxy scales. If dark matter is rejected because it is unscientific, that leaves MOND as the only possible alternative.Footnote 13 Note that one might argue that the NAA in this case is even stronger: dark matter and MOND can be assumed to exhaust the space of possible theories of galaxy scales (cf. also Sect. 2). With the rejection of dark matter, MOND remains not just as the only developed, but the only possible theory of galaxy scales.

5.2 MOND’s attempt at a no alternatives argument, part 2

The previous NAA relies on the acceptance of specific scientificality conditions and the assessment that dark matter fails to satisfy them. These are two strong claims to make. The second line of argument in support of a NAA for MOND is more charitable towards dark matter, in that it does not reject dark matter as unscientific. Instead, it argues that, even if the dark matter-hypothesis were scientific, it cannot constitute a rival to MOND because it fails to adequately explain MOND phenomenology. At face value, this claim may sound wildly implausible. After all, dark matter was first introduced because of anomalous observations at galactic scales (the flat rotation curves). How can MOND claim that dark matter cannot be part of a rival explanation for galaxy phenomenology?

The key to the MONDian claim lies in how dark matter could figure in an explanation of galaxy phenomenology in the context of \(\Lambda \)CDM. The first concern is that, in and of itself, \(\Lambda \)CDM makes no clear predictions for galaxy scalesFootnote 14:

The observed mass discrepancy–acceleration relation does not occur naturally in \(\Lambda \)CDM. Indeed, \(\Lambda \)CDM makes no clear prediction for individual galaxies. One must resort to model building. The argument then comes down to what constitutes a plausible model. I have spent many years trying to construct plausible \(\Lambda \)CDM models. I have never published any, because none are satisfactory. All I can tell you so far is what does not work. (S McGaugh, 2015, p. 6)

What McGaugh refers to here is that the success of \(\Lambda \)CDM on cosmological scales, at which gravity is the dominant interaction, does not obviously extrapolate to galactic scales. When deriving predictions from simulations implementing \(\Lambda \)CDM for galactic and cluster scale phenomenology, non-gravitational interactions are no longer negligible. Thus, simulations need to include some representation of astrophysical processes (e.g. star formation, stellar evolution, supernova feedback, feedback from active galactic nuclei), but McGaugh submits that there is currently no plausible way of doing so.

In contrast to McGaugh’s skepticism, various hydrodynamical simulations have claimed success in recovering BTFR and MDAR. For instance, the Illustris simulation project (with the tagline “towards a predictive theory of galaxy formation”) reported some initial success in reproducing the BTFR within the available observational constraints, although the authors recognize that their results still show more scatter than the data compiled by McGaugh (2012) (Vogelsberger et al., 2014, pp. 1541–1542). Similarly, SIMBA, a different suite of galaxy formation simulations, proved capable of broadly reproducing the observed BTFR. The authors recognize that there are various possible sources of slight deviation from observations depending on what definition of the circular velocity is used. However, insofar as they aimed to show that SIMBA can be used to study the BTFR, they claim success (Glowacki et al., 2020).

Those successes do not make \(\Lambda \)CDM any more explanatory with respect to galaxy phenomenology, according to defenders of MOND, however:

The failure of the natural \(\Lambda \)CDM galaxy formation model drives simulators to consider feedback. Feedback in the context of galaxy formation invokes the energy created by baryonic processes like supernovae to rearrange the distribution of mass in model galaxies. This is an inherently chaotic process, so it does not naturally lead to the observed organization. [...] Such models are of necessity highly fine-tuned. Fine-tuning is always possible in dark matter models. There are many free parameters, and we are always free to add more. So I do not doubt that it is possible to mimic the data. [...] The question then becomes whether the real universe operates that way. My fear is that feedback has become a modern version of the epicycle. (S McGaugh, 2015, p. 7)

So, simulations implementing feedback cannot be explanatory because they are inevitably fine-tuned. And if \(\Lambda \)CDM fails to explain MOND phenomenology, it cannot constitute a proper rival to MOND, implying once again that there is no alternative to MOND.

Our reconstruction of this second argument in support of a NAA is in line with Massimi (2018)’s reconstruction of the debate. Massimi argues that, while \(\Lambda \)CDM is successful on cosmological scales (where MOND clearly fails), it fails to explain galaxy phenomenology:

In spite of its extraordinary success at explaining large-scale structure (i.e. structure formation, the matter power spectrum, galaxy clusters, and so on), \(\Lambda \)CDM is not equally well-equipped to explain phenomena such as [BTFR] and MDAR at the scale of individual galaxies [...]. This scale has been traditionally regarded as favoring alternative models, such as MOND, which naturally explains [BTFR] and MDAR because they are natural consequences of MOND formalism. (Massimi, 2018, p. 33)

The problem that \(\Lambda \)CDM faces at galactic scales is that, due to the complexity and so-called context-sensitivity of computer simulations, \(\Lambda \)CDM is incapable of offering satisfactory causal explanations of, e.g., BTFR.

Note, however, that Massimi does not fully buy into the MONDian assessment of \(\Lambda \)CDM’s failure to explain MOND phenomenology. She agrees that if computer simulations are able to retrieve MOND phenomenology, “this is success enough, and must count as success enough for \(\Lambda \)CDM” (Massimi, 2018, p. 34). Massimi’s caveat suggests that the MONDian argument that \(\Lambda \)CDM lacks explanatory power assumes certain standards of explanation that go beyond mere empirical adequacy. As will be discussed in detail in Sect. 6.1, it is this shifting of standards that makes this second argument for an NAA in support of MOND unwarranted.

5.3 MOND’s attempt at an unexpected explanation argument

The final argument we identify in the MONDian defense is spelled out as a novel confirmation argument. In a sense, this argument is almost an inverse of the second NAA’s rejection of \(\Lambda \)CDM: while \(\Lambda \)CDM is incapable of explaining MOND phenomenology, MOND itself provides a simple explanation of a wide range of phenomena on galactic scales, including galaxy rotation curves, BTFR, MDAR and more. It is surprising that MOND explains such a large set of observations since it had been developed to account for a much narrower class of phenomena. Indeed, a lot of the work written in defense of MOND uses the same argumentative structure: a long list of predictions is derived from Milgrom’s proposal. For each prediction, it is shown that (1) the prediction is a ‘natural consequence’ of the MOND formalism even though the MOND formalism was not developed with this prediction in mind (the exception, of course, being flat galaxy rotation curves); and, (2) the prediction is corroborated by observations.

Examples of this argumentative structure can be found in recent work from three of the most vocal defenders of MOND. Consider the following, from Milgrom:

Today one can ask: ‘Without the umbrella of MOND, why should the \(a_0\) that enters and determines the asymptotic rotational speed in massive disc galaxies be the same as the \(a_0\) that enters and determines the mean velocity dispersions in dwarf satellites of the Milky Way and Andromeda galaxies? And why should these be the same \(a_0\) that enters and determines the dynamics in galaxy groups, which are hundreds of times larger in size and millions of times more massive than the dwarfs [...]? And why should these appearances in local phenomena in small systems be related to the accelerated expansion of the Universe at large?’ (Milgrom, 2020, p. 175)

The obvious conclusion, according to Milgrom, is that this unexpected success of MOND must be due to the fact that MOND is getting ‘something’ right.

In a similar vein, McGaugh (2020) goes through fourteen different properties of galaxies and asks, for each of them, (1) whether the data corroborate the prediction from MOND; (2) whether the prediction was made a priori; and (3) what dark matter predicts. McGaugh concludes:

We have been surprised at every turn: these were startling facts, when new. Only one theory succeeded in predicting these phenomena in advance: MOND. It has met the gold standard of scientific prediction repeatedly for a wide variety of phenomena. [...] I do not see how this can be a fluke. (S. McGaugh 2020, pp. 22–23)

McGaugh recognizes that there are three possible conclusions one could draw from these findings: the data corroborates MOND because there is something to it, galaxy formation somehow mimics MOND, or some new yet undiscovered physics is responsible. McGaugh submits that the first of these three is the most plausible (ibid., p. 24).

Finally, Merritt (2021a) contrasts the novelty of the MONDian predictions with the mere accommodation by \(\Lambda \)CDM as well:

Several of Milgrom’s successful predictions [...] clearly satisfy both of Leplin’s conditions for novelty. Information about these observed regularities did not contribute in any way to the formulation of Milgrom’s theory: indeed they were not observationally established until some years after 1983. And, [...] the competing theory (the standard cosmological model) provides no “viable reason to expect” these regularities to exist. And at least since the addition (c. 1980) of the postulates relating to dark matter, the standard model can claim no comparable successes of novel prediction. Merritt (2021a, p. 204)

As discussed in Sect. 3, such successful novel prediction is part of Merritt’s argumentation for MOND’s progressiveness as a research program.

The argument is further strengthened, according to the MOND-defenders, by the fact that different observed correlations that were predicted by MOND, like BTFR or MDAR, lead to the same value for Milgrom’s constant \(a_0\)Footnote 15, as already suggested by the above quote from Milgrom. Merritt similarly draws explicit parallels between converging values for \(a_0\) providing support for MOND as a theory, and Perrin’s determination of Avogadro’s number or early measurements of Planck’s constant providing evidence for atomic theory or quantum mechanics, respectively. Now, Merritt admits that mere convergence of measurements of a specific parameter does not obviously lend confirmation to the theory in which that parameter plays a role. However, in certain cases (like those of Perrin and Planck, and, allegedly, MOND), such convergence can confirm the broader theory:

This, perhaps, is a basis for the intuitive judgments of Perrin and Planck: namely that the convergence of the measured value of a ‘constant of nature’ implies a tight connection between facts that would otherwise not have been considered related. (Merritt, 2020, p. 217)

So, the argument in support of MOND goes beyond the explanation of general regularity patterns. There is empirical convergence on a specific value for Milgrom’s constant between those different regularity patterns that, at face value, would not be expected to be obviously related to one another.

For this argument to work in favor of MOND, it is necessary that novel confirmation gives some additional confirmation value to a hypothesis, over and above mere accommodation of observations. This means that defenders of MOND need to rely on a philosophical perspective that can provide an epistemic foundation for acknowledging confirmation value that reaches beyond the formal comparison of a theory’s predictions and empirical data. Note that this is no trivial task. For instance, if defenders of MOND were to adopt a fully Popperian view (as they seem to do at face value), a novel confirmation argument would be meaningless since Popper rejects the concept of confirmation across the board. And even moving away from a strictly Popperian perspective, a wide range of philosophical positions (logical empiricism, empiricist readings of Bayesian confirmation theory) that do acknowledge the usefulness of the concept of confirmation nevertheless deny the extra confirmation value of novel confirmation over accommodation.

In line with this paper’s agenda, we will analyse an embedding within meta-empirical assessment, where, as we will show, novel confirmation does provide additional confirmation value over accommodation. We don’t deny that other philosophical embeddings could be possible, and that these may play out differently in the given case. But we take our analysis to demonstrate that some embedding must be provided, since the nature of such an embedding has strong effects on the epistemic significance of novel confirmation.

So where does the additional confirmation value come from in cases of novel confirmation, according to meta-empirical theory assessment? It comes from UEA. Recall that UEA claims that scientists tend to trust a theory if that theory can explain more than what it was built to explain. Consider the following scenario. Let us assume that a given number of scientific problems wait to be solved in a given scientific context. Let us further assume that the given scientific context (that is, scientific background knowledge and the scientifically well explained set of phenomena) only allows for a very small number of scientific theories to be constructed. In such a scenario, one can expect that a theory developed in order to solve one problem will solve other problems as well. The scarcity of possible alternatives ensures that the theories that can be built will, on average, solve more than one problem. If, on the contrary, far more theories can be developed in the given scientific context than there are problems to be solved, no such expectation is justified. Therefore, if a theory is found that solves one problem, and that theory then provides significant unexpected explanation, this increases the credence that only few theories can be constructed in the given context. This, in line with all meta-empirical assessment, increases the credence in the given theory’s viability.
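
This reasoning can be spelled out in a small simulation, with all numbers again purely illustrative. Assuming that every problem in a domain is solved by some theory drawn from the space of constructible theories, the probability that the theory built for one problem also solves another is high when that space is small and low when it is large; observing such unexpected success therefore shifts credence towards a sparse theory space.

```python
import random
random.seed(0)

def p_unexpected_success(n_theories, n_problems=5, trials=20000):
    """Estimate P(the theory solving problem 0 also solves problem 1),
    assuming each problem's solution is drawn from n_theories possible theories."""
    hits = 0
    for _ in range(trials):
        solver = [random.randrange(n_theories) for _ in range(n_problems)]
        hits += solver[0] == solver[1]
    return hits / trials

p_sparse = p_unexpected_success(2)    # only two constructible theories
p_dense = p_unexpected_success(50)    # fifty constructible theories
print(p_sparse, p_dense)              # ~0.5 vs ~0.02

# Bayes update on a 50/50 prior over 'sparse' vs 'dense' theory space,
# given that unexpected explanatory success has been observed:
posterior_sparse = 0.5 * p_sparse / (0.5 * p_sparse + 0.5 * p_dense)
print(round(posterior_sparse, 2))     # credence in a sparse theory space rises to ~0.96
```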

Initially, UEA was analysed for cases where unexpected explanations did not amount to agreement with novel empirical data. UEA can, however, also be applied to cases of novel empirical confirmation (Dawid, 2021). The argument of novel confirmation is based on the observation that a theory turns out to be capable of predicting or explaining significantly more empirical data than what it was built to explain. The reasoning described above can be fully applied in this case. From the perspective of meta-empirical theory assessment, the reason why a theory is ‘more confirmed’ if a novel empirical prediction is corroborated than if that theory post-hoc accommodates the same observation is based on UEA.

Returning to the case at hand, UEA provides exactly the conceptual basis needed for establishing the epistemic significance of novel confirmation that is asserted by MONDians. Ostensibly, MOND was a phenomenological theory, introduced for the sole purpose of explaining (some) galaxy rotation curves. But after its introduction, it has become clear that MOND can account for a broad range of phenomena at galactic scales in a ‘natural’ way. This is taken to provide significant support for MOND, as it would be according to a UEA.

6 Assessing MOND’s attempts at meta-empirical assessment

So far, we have shown that the MONDian defense can be structured to resemble meta-empirical assessment. But is the MONDian argumentation convincing from the perspective of meta-empirical assessment itself? If that were the case, it would imply that meta-empirical assessment was directly at variance with the large majority’s scientific assessment of MOND. It would, in other words, put meta-empirical assessment in a complicated position. In this section, we will demonstrate, however, that the MONDian implicit appeal to meta-empirical assessment is unconvincing on the latter approach’s own account. From this evaluation, we can draw some lessons for the proper scope of meta-empirical assessment, as well as for the way in which meta-empirical assessment should be developed further in the future.

6.1 Why the MONDian reasoning does not amount to a sound NAA, part 1

The MONDian’s first NAA-type argument uses a Popperian scientificality condition as a basis for a NAA. MONDians claim that \(\Lambda \)CDM or the dark matter hypothesis is not falsifiable, which makes it unscientific by Popperian standards. This leaves MOND as the only genuinely scientific option to explain e.g. galaxy rotation curves which, in turn, provides the starting point for a NAA in favor of MOND. Popper’s normative falsifiability condition, though itself not based on epistemic considerations, is thus deployed in a way that does generate epistemic implications in the end. If we assume that there exists a viable theory that satisfies the falsifiability condition, a NAA can be used to infer that the only known theory that does so (in our case, MOND) is probably viable.

This is a reasonable use of NAA if an epistemic basis for the given scientificality condition can be provided. That is, we need to have a basis for expecting with high confidence that the viable theory will satisfy the stated scientificality condition.

Now, it is important to point out that meta-empirical assessment indeed does rely on falsifiability. The meta-empirical assessment framework is based on the assumption that, most probably, a viable falsifiable theory about a given subject matter exists. Falsifiability is necessary for giving confirmation value to an argument of meta-empirical assessment. Confirmation in the context of meta-empirical assessment is defined in terms of the theory’s probability of being viable. Viability is defined as the agreement of a theory’s predictions with all possible evidence within a given empirical horizon. A theory is unfalsifiable if no imaginable data could contradict the theory’s predictions. This means, however, that an unfalsifiable theory is a priori known to be formally viable (though scientifically vacuous). Therefore, an unfalsifiable theory has P(V)=1 (where V is: “Theory H is viable.”) and no confirmation based on meta-empirical assessment can take place. Any meaningful application of meta-empirical assessment in this light must be based on the use of falsifiability as a scientificality condition and the understanding that there most probably exists a falsifiable viable theory on the given subject matter. The latter understanding is inferred (meta-inductively) from the observation that falsifiable empirically successful theories on well-specified scientific problems can normally be found. Falsifiability as understood above thus is fully endorsed and relied upon by meta-empirical assessment.
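
The point can be put in one line of Bayesian bookkeeping (our own rendering, using the notation just introduced). For a theory with \(P(V)=1\) and any meta-level or empirical observation F with \(P(F) > 0\):

$$\begin{aligned} P(V \mid F) = \frac{P(F \mid V)\, P(V)}{P(F)} = \frac{P(F \wedge V)}{P(F)} = \frac{P(F)}{P(F)} = 1 \end{aligned}$$

where the last step uses \(P(F \wedge \lnot V) \le P(\lnot V) = 0\). No observation can raise the credence in an unfalsifiable theory’s viability; it is maximal from the start, and confirmation becomes vacuous.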

A closer look reveals, however, that MONDians in their NAA-type reasoning rely on a different notion of unfalsifiability than the one described above. Unfalsifiability can come in two forms. First, a theory may provide a mere parametrization of the empirical data within the theory’s intended domain without any predictive import whatsoever. Let us call this feature absolute unfalsifiability. This is the form of unfalsifiability that has been discussed in the previous paragraphs and clearly plays the role of a scientificality condition in the context of meta-empirical assessment. A classic example of an absolutely unfalsifiable theory is Ptolemaic astronomy: the theory was both general and flexible enough to permit the accommodation of any observation of planetary motion by the introduction of the appropriate epicycles. As such, Ptolemaic astronomy did little more than (admittedly effectively!) parametrize planetary motion. No data-based specification of parameter values could have changed this. Any further, more precise observations could again have been modeled within the framework of Ptolemaic astronomy. Alternatively, a theory may presently be too poorly understood, too unspecific, or too unconstrained to make testable predictions, while better understanding, further specification, or further data-based constraints would lead towards a falsifiable theory. We may call this second feature transient unfalsifiability.

The problem is that MONDians can at most charge dark matter with transient unfalsifiability. Neither of their two arguments laid out in Sect. 5.1 can support a claim of absolute unfalsifiability. There are specific hypotheses on dark matter candidates that could be ruled out by specifiable future empirical testing. There is, at this point, just no sufficient empirical or conceptual basis for declaring any of those hypotheses essential to the dark matter hypothesis. Further data collection and an improved understanding of the phenomenological implications of specific models may change this situation in the future. The discovery of a dark matter candidate in collider experiments, or other kinds of precision experiments, might provide the basis for a more specific understanding that allows for further empirically testable predictions and therefore amounts to a perfectly falsifiable theory. Similarly, the complex set of variables to be set in simulations of large scale structure formation adds substantially to the difficulty of understanding the empirical implications of specific \(\Lambda \)CDM models. But achieving such an understanding is no doubt a long-term goal of \(\Lambda \)CDM research. There is a clear commitment in ongoing research to derive predictions for (sub-)galactic scales. MONDians may be more pessimistic about achieving that goal in the foreseeable future than the typical exponent of \(\Lambda \)CDM. But, as we have pointed out in Sect. 5.1, MONDians do not deny that specifications of \(\Lambda \)CDM can in principle become falsifiable. This means that their unfalsifiability claims fall into the category of transient unfalsifiability.

Transient unfalsifiability, even if taken to be so substantial that it may never be overcome by actual science, cannot play the role of a scientificality condition in the context of meta-empirical assessment, however. According to the logic of meta-empirical assessment, a possible theory does not amount to what we currently know about it. (In the case of an unconceived alternative, we know nothing about that theory today.) Rather, a possible theory amounts to the fully spelled out theory, including all well-specified empirical implications it has once it is fully formulated and fully understood. The mere fact that a theory’s empirical implications are insufficiently or not at all understood at a given point therefore does not disqualify it as a scientific alternative in the context of meta-empirical assessment. A theory’s transient unfalsifiability per se is exclusively a statement about the scientists’ current understanding. It is irrelevant for the spectrum of possible theories. Therefore, it neither decreases the probability that the theory is viable nor increases the probability of a known alternative’s viability. A probability increase for an alternative would only be achieved by arguments for the given theory’s absolute unfalsifiability, which are not provided by MONDians.

6.2 Why the MONDian reasoning does not amount to a sound NAA, part 2

The second level of a NAA in favor of MOND relies on explanatory quality rather than falsifiability. MONDians claim that, even if \(\Lambda \)CDM does satisfy the criterion of falsifiability, it cannot offer a genuine explanation of important characteristics of observed galaxy phenomenology, such as the BTFR and MDAR. MONDians concede that these features can be modelled in computer simulations based on \(\Lambda \)CDM, but emphasize that they can only be reached by tuning modeling parameters and do not arise generically from \(\Lambda \)CDM. The relations thus amount to fine-tuning (FT) in a \(\Lambda \)CDM framework and find a genuine explanation only in the context of MOND. Therefore, if one requires a satisfactory explanation of those features, MOND is without alternatives.

Stating a FT issue as a motivation for theory choice is not uncontroversial. An extensive debate has arisen in physics and the philosophy of physics on the question as to whether and to what extent FT should be treated as a genuine scientific problem [see (Friederich, 2018) for a review]. Even if one assumes that it should, however, it seems ill-advised to use the ability to solve existing FT problems as a requirement in a NAA. The reason is that FT problems often get solved at levels of description very different from the context where the FT first arises. A theory may well be perfectly viable in a given regime while associated FT problems get solved, if at all, at an entirely different level of description.

A prominent example is the fine-tuning of the cosmological constant. Physicists have tried to solve the problem at the level of fundamental high energy physics. Today, an anthropic explanation has gained substantial support. Others have argued that the problem should best be ignored. Whatever the correct solution, however, few physicists expect that it must be found at the level of a general relativistic representation of the dynamics of the universe, where the fine-tuned cosmological constant first needs to be introduced to account for the data. A recent example of a failed NAA based on a FT argument is provided by the arguments in favor of low-energy supersymmetry, which were based on its capability to solve the FT problems associated with the separation of the electroweak scale from the Planck scale.

This general worry is highly relevant in the given case. While it is clear that \(\Lambda \)CDM does not offer a satisfactory explanation of BTFR and MDAR as it stands, no one can predict whether, and if so in which way, \(\Lambda \)CDM can provide a framework for a natural explanation of those relations once further specific features of the universe and characteristics of dark matter are introduced in the relevant simulations. A NAA, however, is not an inference from the mere observation that no alternative explanation of a phenomenon has been found yet. A NAA must be based on assessing whether the extent of scientists’ so far unsuccessful search for alternative explanations justifies the inference from the observation that no alternative explanations have been found to the hypothesis that none exist. In line with what has been discussed before, this assessment then needs to be bolstered by a MIA that indicates that similar assessments tended to be reliable in the past.

In the case of BTFR and MDAR, none of this works. NAA itself fails because no one would consider it particularly unlikely that more satisfactory explanations could emerge within the \(\Lambda \)CDM framework due to further scientific progress. But if NAA itself is unconvincing, there is little hope for MIA-based support either. If the theory in question (that is, MOND) does not provide the basis for a convincing NAA, the requirement of a convincing NAA cannot be used for selecting the theories to which MIA is applied. Without that requirement, however, the chances of selecting a group of theories with a clear tendency towards predictive success are slim. So MIA won’t get off the ground.

Section 5.2 indicated one further MONDian argument against the existence of a satisfactory \(\Lambda \)CDM-based explanation of BTFR and MDAR that could constitute an alternative to MOND’s explanation. Even if \(\Lambda \)CDM or one of its specifications in the end managed to find arguments suggesting that BTFR and MDAR are natural features, this conclusion is unlikely to be deducible from fundamental equations in a straightforward way. Presumably, it would have to be extracted from some kind of analysis based on complex simulations (Smeenk & Gallagher, 2020; Gueguen, 2020). An analysis of that kind could never fully escape the suspicion, however, that some of the many parameters involved have been fixed in an arbitrary way in order to engineer the questionable impression that BTFR and MDAR arise naturally. Therefore, such an explanation could never be genuinely satisfactory. On this basis, MONDians may argue, it is possible after all to predict that no genuinely satisfactory explanation of BTFR and MDAR can ever be provided based on \(\Lambda \)CDM.

Exponents of \(\Lambda \)CDM tend to argue, once again, that MONDians are overly pessimistic with regard to the prospects of future \(\Lambda \)CDM-based simulations. But even if MONDians were right in their assessment, this would not provide a basis for a NAA in favor of MOND. The condition that observed regularities must be deducible from fundamental equations cannot play the role of a scientificality condition within the framework of meta-empirical assessment. The reason is the same as in Sect. 6.1: scientists have no epistemic basis for assuming that the viable theory satisfies this condition. If a set of fundamental equations leads to emergent regularity patterns at high levels of complexity, that provides a perfectly legitimate scientific explanation of the corresponding regularity. Whether or not the accurate explanation of BTFR and MDAR is of this kind must be tested empirically and surely cannot be decided based on allegedly insurmountable problems in demonstrating the solution’s robustness. Preference for a more straightforward deduction of predictions from fundamental equations thus amounts to an expression of personal taste but fails to generate implications for the prospects of a theory’s viability.

To conclude, the MONDian’s reasoning based on MOND’s capability to explain BTFR and MDAR structurally resembles a NAA. MONDians fail to make a convincing NAA case, however, because they fail to offer epistemically relevant scientificality conditions that exclude \(\Lambda \)CDM. MONDians may argue that their explanation, if viable, would be more appealing than a possible \(\Lambda \)CDM-based explanation, but they cannot infer from this assessment that MOND is more likely to be viable.

6.3 Why the MONDian reasoning does not amount to significant UEA

At first glance, it would seem that UEA can indeed be identified in the MOND context. MOND was developed to explain the flat galaxy rotation curves and then turned out to correctly predict other relations, such as BTFR and MDAR, as well. On the current understanding, BTFR and MDAR do not follow from flat galaxy rotation curves in the absence of MOND. This is a clear case of novel confirmation which, from the perspective of meta-empirical assessment, can provide the basis for UEA and, on that basis, can generate additional confirmation value.

On closer inspection, however, it turns out that no significant UEA is generated in this particular case. In a nutshell, the point is the following: while it is possible for novel confirmation to generate additional confirmation value based on UEA, this is not always the case. The predictive success of MOND is an example of novel confirmation that fails to generate additional confirmation value.

To see why, recall the structure of UEA described in Sect. 5.3. A theory that has been developed would be unlikely to find substantial novel confirmation if many other theories could be developed that give different empirical predictions. Therefore, finding substantial novel confirmation reduces the expected number of so far unconceived possible alternatives and, on that basis, increases the probability that the theory supported by novel confirmation is viable.
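
In schematic form, as a sketch under simplifying assumptions of our own rather than a formal model taken from the literature: let \(N\) be the number of possible alternative theories, conceived or unconceived, whose empirical predictions differ from those of the theory at hand, and suppose that \(P(V \mid N{=}k)\) decreases with \(k\), since the more genuine alternatives there are, the less likely it is that the one theory scientists happened to develop is the viable one. By the law of total probability,
\[
P(V \mid E)=\sum_{k} P(V \mid N{=}k,\,E)\,P(N{=}k \mid E),
\]
where \(E\) is the novel predictive success. Since such success is more probable when few alternatives exist, conditioning on \(E\) shifts weight towards small \(k\) and thereby, roughly, raises \(P(V \mid E)\) above \(P(V)\). On this reconstruction, the entire effect of novel confirmation runs through the distribution over the number of alternatives, which is why, as argued in the next paragraph, it cannot discriminate between theories that are already on the table.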

The role of novel confirmation on a meta-empirical assessment account is therefore confined to assessing the number of unconceived alternatives. The number of unconceived alternatives to the known rivalling theories, however, must obviously be the same for all those known rivalling theories. Novel confirmation therefore does not favor a given theory over another known theory beyond the extent to which that theory would also be favored by post-hoc accommodation. In other words, the fact that BTFR and other relations offer a basis for novel confirmation of MOND rather than just for accommodation does not give MOND an extra advantage over \(\Lambda \)CDM on a meta-empirical assessment account.

In this light, a case like Perrin’s convergence argument in favor of atoms, which has been referred to by Merritt, differs fundamentally from the MOND case. Perrin faced a situation where no serious contender to atomism was available. The only question to address, therefore, was the question of unconceived alternatives, which can be addressed by meta-empirical assessment. The novel confirmation element of convergence was therefore indeed of crucial importance in his case. In the MOND case, to the contrary, a strong alternative theory is known.

Moreover, as discussed above, \(\Lambda \)CDM and MOND, broadly construed, may actually be taken to exhaust the space of possible theories. In light of all this, the main issue, if not the only issue, is whether \(\Lambda \)CDM or MOND constitutes the viable theory. The question of unconceived alternatives besides those two broad options either does not arise at all or is only of secondary significance. Since UEA does not contribute to the comparison of \(\Lambda \)CDM and MOND, novel confirmation, as construed in terms of meta-empirical assessment, does not generate significant additional confirmation value in this particular context. Indeed, meta-empirical theory assessment demonstrates that novel confirmation fails to be an epistemically convincing argument in this particular context.

6.4 A side remark: the significance of empirical confirmation for MOND

The previous subsection has shown that the convergence of the various MONDian calculations of \(a_0\) amounts to straightforward theory confirmation that, on a meta-empirical assessment account, does not get additional confirmation value from the fact that it is novel confirmation. What remains is confirmation based on the agreement between data and a MONDian prediction (without reliance on the prediction’s novelty). We want to emphasize one point, however, that, while playing out at the level of empirical confirmation, involves meta-empirical assessment.

From the point of view of meta-empirical assessment, the agreement between a theory’s prediction and the known empirical data does not, on its own, generate significant trust in the theory’s viability. Such trust requires, in addition, a certain degree of confidence in the understanding that few if any other possible scientific theories could make the same successful predictions. This confidence is generated based on a conceptual analysis of the given scientific context in conjunction with meta-empirical assessment.

MOND has the known competitor \(\Lambda \)CDM. At this point, the successful MOND predictions cannot be extracted from \(\Lambda \)CDM. As discussed before, however, it remains unclear whether or not a deeper and more specific understanding of \(\Lambda \)CDM could offer comparable predictions in the end. From the perspective of meta-empirical assessment, it is of crucial importance to distinguish this scenario from a scenario where the known alternative theories are known to be incapable of providing comparable predictions.

In the latter scenario, the epistemic significance of substantial confirming data would be quite high. In the MOND scenario, if there is a significant chance that the competitor theory in the end does generate comparable predictions, this reduces the confirming data’s epistemic significance. It will retain some significance, to be sure, and the described confirming evidence clearly constitutes the strongest argument in favor of MOND. But it is important not to overstate its significance by conflating the given situation with situations where a comparable degree of agreement of a theory’s predictions with collected data would be epistemically more powerful.
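
The same point can be put in terms of a simple likelihood comparison; the following is a schematic Bayesian gloss of our own, not an argument drawn from the MOND literature. Write \(M\) for “MOND is viable”, \(L\) for “\(\Lambda \)CDM is viable”, and \(D\) for the observed agreement between the data and MOND’s predictions. Then
\[
\frac{P(M \mid D)}{P(L \mid D)}=\frac{P(D \mid M)}{P(D \mid L)}\cdot\frac{P(M)}{P(L)}.
\]
If \(\Lambda \)CDM were known to be incapable of reproducing the relations in question, \(P(D \mid L)\) would be small and the data would shift the odds strongly towards MOND. If, instead, there is a significant chance that a more fully developed \(\Lambda \)CDM would also reproduce them, \(P(D \mid L)\) is not small, the likelihood ratio is modest, and so is the epistemic boost MOND receives from the data.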

7 Conclusion

Showing that MONDian reasoning does not amount to significant meta-empirical assessment does more than just categorize the nature of those arguments. Meta-empirical assessment provides a natural framework for understanding the epistemic significance of the given MONDian arguments. The fact that those arguments fail within the framework of meta-empirical assessment thus demonstrates general epistemic limitations of MONDian reasoning against \(\Lambda \)CDM.

Let us once more remember the motivation for the MONDian attempt to deploy NAA-type reasoning. Exponents of \(\Lambda \)CDM have largely succeeded in convincing cosmologists of their theory’s viability by establishing a NAA in its support. This NAA is based on the judgement that MOND is unlikely to work as a consistent alternative. Exponents of MOND cannot retort with a NAA at the same level because there are no serious general consistency issues related to \(\Lambda \)CDM. Instead, MONDians have developed a NAA based on issues of falsifiability and explanation. In effect, the MONDian version of NAA-type reasoning amounts to saying: we demand strong conditions of falsifiability and explanation; those conditions are not met by \(\Lambda \)CDM; this leaves MOND as the only way to go.

This line of reasoning is unconvincing as it stands because MONDians offer no arguments for the epistemic relevance of their conditions of falsifiability and explanation. In the absence of epistemic justification for their requirements, however, it is not possible to understand the epistemic weight of their conclusions. Applying the apparatus of meta-empirical assessment to MONDian reasoning has offered an evaluation of the epistemic significance of MONDian theory assessment. It has led to the conclusion that the MONDian requirements, namely the exclusion of transiently unfalsifiable theories and the demand that explanations be straightforwardly deducible from fundamental equations, have no epistemic basis. Attempts at a NAA based on those conditions therefore carry no epistemic weight.

Support for MOND in comparison to \(\Lambda \)CDM cannot be based on UEA either. On a meta-empirical assessment account, novel confirmation generates epistemic support for a theory by reducing the probability of unconceived alternatives. It does not favor a theory over a known competitor.

To conclude, meta-empirical assessment generates no support for MOND against \(\Lambda \)CDM. In contrast, \(\Lambda \)CDM is supported by a conceptually valid NAA from the perspective of meta-empirical assessment. Of course, the strength of this NAA depends on the cogency of \(\Lambda \)CDM-exponents’ physical analysis of MOND’s conceptual problems. The present paper is not the place to evaluate that cogency. What can be said, however, is the following: to the extent that physical assessment is adequate, it does provide a basis for a sound NAA, which in turn also supports the epistemic significance of the novel confirmation of \(\Lambda \)CDM that was described in Sect. 2.

With regard to meta-empirical assessment itself, our analysis demonstrates that not every argument by scientists that structurally resembles meta-empirical assessment amounts to epistemically significant meta-empirical assessment. Proponents of meta-empirical assessment should recognize that its epistemic significance hinges on whether or not the scientificality conditions that provide the framework for meta-empirical assessment are epistemically relevant. They can be epistemically relevant only if scientists have very strong reasons to assume that there is a viable theory that satisfies the scientificality conditions they impose.

According to meta-empirical assessment, the epistemic relevance of scientificality conditions, or the lack thereof, can be analysed in a meaningful way. Carrying out such an analysis provides a rational basis for judging the epistemic significance of meta-empirical assessment in a given context. However, as the slightly misplaced use of arguments of meta-empirical assessment by defenders of MOND demonstrates, the epistemic significance of a meta-empirical assessment in a specific context is not always self-evident.