
Antimodularity: Pragmatic Consequences of Computational Complexity on Scientific Explanation

In: On the Cognitive, Ethical, and Scientific Dimensions of Artificial Intelligence

Part of the book series: Philosophical Studies Series (PSSP, volume 134)


Abstract

This work is concerned with hierarchical modular descriptions, their algorithmic production, and their importance for certain types of scientific explanation of the structure and dynamical behavior of complex systems. Networks are taken as paradigmatic representations of complex systems. It turns out that the algorithmic detection of hierarchical modularity in networks is, in certain cases, theoretically intractable (NP-hard) and, in most cases, burdened by the still high computational complexity of most approximate methods. A new notion, antimodularity, is then proposed: the impossibility of algorithmically obtaining a modular description that fits the explanatory purposes of the observer, for reasons tied to the computational cost of typical algorithmic methods of modularity detection, relative to the size of the system under assessment and to the required precision. The occurrence of antimodularity hinders both mechanistic and functional explanation by damaging their intelligibility. A second, more general notion, explanatory emergence, subsumes antimodularity under any case in which a system resists intelligible explanation because of the excessive computational cost of the algorithmic methods required to obtain the relevant explanatory descriptions from the raw data. The possible consequences, and the likelihood, of incurring antimodularity or explanatory emergence in actual scientific practice are finally assessed, with the conclusion that this eventuality is possible, at least in disciplines based on the algorithmic analysis of big data. The present work aims to be an example of how certain notions of theoretical computer science can be fruitfully imported into philosophy of science.
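The modularity detection the abstract refers to can be made concrete with a small sketch. Below is a minimal, illustrative implementation (not the chapter's own method) of Newman's modularity score Q, the quantity that typical community-detection algorithms attempt to maximize; the toy graph and partitions are hypothetical.

```python
def modularity(edges, partition):
    """Newman's modularity Q of a node partition of an undirected graph:
    Q = sum over communities c of (e_c / m - (d_c / 2m)^2), where m is
    the total number of edges, e_c the number of edges inside c, and
    d_c the total degree of c's nodes."""
    m = len(edges)
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    q = 0.0
    for c in set(partition.values()):
        nodes = {n for n, com in partition.items() if com == c}
        e_c = sum(1 for u, v in edges if u in nodes and v in nodes)
        d_c = sum(degree[n] for n in nodes)
        q += e_c / m - (d_c / (2 * m)) ** 2
    return q

# Two triangles joined by a single bridge: the "natural" two-community
# partition scores much higher than an arbitrary split.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
good = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
bad = {0: "A", 1: "B", 2: "A", 3: "B", 4: "A", 5: "B"}
print(modularity(edges, good))  # ≈ 0.357
print(modularity(edges, bad))   # ≈ -0.214
```

Maximizing Q exactly over all partitions is the NP-hard problem mentioned in the abstract; practical detection methods are heuristics that approximate this maximum.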


Notes

  1. See the seminal Simon (1962).

  2. This epistemic position is usually opposed to an ontic conception of causal explanations, which considers the actual mechanisms themselves as their explanations. See Bechtel and Abrahamsen (2005) and Wright (2012).

  3. With “system” here I mean a description of a system. In what follows, I will often use the term “system” simpliciter to mean its standard description, usually the basic description (see Sect. 6.2.1).

  4. Such a connection with the rest of the system is, of course, effected by individual links running from nodes internal to the module to nodes belonging to other modules.

  5. See Simon (1962).

  6. Exceptions to the common association between structural modularity and dynamical modularity can occur when the system’s parts have highly non-linear responses to inputs: in that case, even an interaction along a structurally weak connection between two parts can induce a disproportionately strong response in the receiving part, due to the non-linearity of its input-output function.
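The effect described in this note can be illustrated numerically. In the sketch below, a node's input-output function is a steep sigmoid (the gain and threshold values are purely illustrative assumptions): an input only slightly above the threshold, arriving over a structurally weak connection, already drives the node to most of its maximal response, while a linear node would respond only in proportion to the input.

```python
import math

def response(x, gain=50.0, threshold=0.1):
    """Steep sigmoid input-output function: nearly binary around the threshold."""
    return 1.0 / (1.0 + math.exp(-gain * (x - threshold)))

# A weak input just above the threshold already saturates the response.
print(round(response(0.0), 3))   # ≈ 0.007: negligible baseline response
print(round(response(0.12), 3))  # ≈ 0.731: strong response to a weak input
```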

  7. See Simon and Ando (1961).

  8. See also Kreinovich and Shpak (2008).

  9. See Hempel and Oppenheim (1948).

  10. The two corresponding seminal works are Bechtel and Richardson (1993) and Machamer et al. (2000). The line led by William Bechtel proposes the so-called epistemic view of mechanisms, which I also endorse (see Sect. 6.2). This is opposed to the ontic view of mechanisms, mainly supported by Carl Craver. See Wright (2012).

  11. Bechtel and Abrahamsen (2005, p. 423).

  12. See Sect. 6.2.6.

  13. See Newman (2003).

  14. Starting from Girvan and Newman (2002).

  15. The degree of a node is the number of links incident to it. In the randomized version, each node, even if possibly connected to different nodes than in the original network, has the same degree as the corresponding node in the original network.
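A randomized comparison network of the kind described here can be produced by degree-preserving edge rewiring. The sketch below uses the standard double-edge-swap construction, one common way to build such a null model (an illustrative assumption on my part, not necessarily the procedure used in the works cited here).

```python
import random

def degree_preserving_randomize(edges, n_swaps, seed=0, max_tries=1000):
    """Randomize an undirected simple graph while keeping every node's
    degree, via double edge swaps: replace edges (a, b) and (c, d) with
    (a, d) and (c, b), rejecting swaps that would create a self-loop or
    a duplicate edge."""
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    edge_set = {frozenset(e) for e in edges}
    swaps = tries = 0
    while swaps < n_swaps and tries < max_tries:
        tries += 1
        i, j = rng.sample(range(len(edges)), 2)
        (a, b), (c, d) = edges[i], edges[j]
        if len({a, b, c, d}) < 4:
            continue  # shared endpoint: swap would create a self-loop
        e1, e2 = frozenset((a, d)), frozenset((c, b))
        if e1 in edge_set or e2 in edge_set:
            continue  # would duplicate an existing edge
        edge_set -= {frozenset((a, b)), frozenset((c, d))}
        edge_set |= {e1, e2}
        edges[i], edges[j] = (a, d), (c, b)
        swaps += 1
    return edges

# Toy example: rewire a 7-edge graph; every node keeps its degree.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
randomized = degree_preserving_randomize(edges, n_swaps=3, seed=1)
```

Comparing a statistic (e.g. modularity) of the original network against its value on many such rewirings is what makes the randomized version a meaningful baseline.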

  16. As, e.g., in Shen-Orr et al. (2002).

  17. See Sect. 6.2.6.1.

  18. As surveyed in several articles, e.g. Danon et al. (2005), Orman et al. (2009), Yang et al. (2010), Papadopoulos et al. (2011), Orman et al. (2011), Plantié and Crampes (2013), and Chakraborty et al. (2016).

  19. See, for example, Papadopoulos et al. (2011).

  20. For example, Blondel et al. (2008).

  21. Table 1, p. 529.

  22. Good et al. (2010), p. 10.

  23. See Garey and Johnson (1979).

  24. See Wong et al. (2012), p. 9, Table 4, and Sect. 6.5.

  25. A check of the validity of a modular model would involve its use as the model for a computer simulation, in order to compare its behavior with that of the actual empirical system. But not every explanatorily useful description can be immediately used as a dynamical model for a simulation: in certain cases, a high-level description can elicit comprehension of a system’s functioning without providing the details necessary for its implementation as a dynamical model that can be directly tested in a simulation run. A typical example is the kind of high-level functional block diagram used in cognitive psychology to describe general mental functions: an actual implementation of such a diagram would constitute a computational explanation usable as a computer simulation of one or more of the main human mental functions, and we arguably still lack such an explicit implementation of an intelligent system. Even so, we could in certain cases still distinguish completely invalid functional diagrams from more plausible ones.

  26. Intrinsic functional antimodularity can occur even in the presence of apparent structural modularity, because in certain cases structural and functional modularity do not coincide. This can happen when the system is highly non-linear: the non-linearity of the nodes’ input-output functions can make even weak connections between them trigger intense responses, thereby preventing the temporal decoupling between what appear to be structural modules.

  27. The possibility of changing the basic description to avoid antimodularity seems in most cases precluded in real science, because each special science determines the basic description of its systems of interest: for example, molecular biology aims to describe a biological system in terms of molecules and their interactions. I think, however, that this question requires further philosophical reflection.

  28. See, again, Wright (2012).

  29. Other types of explanation, such as deductive-nomological and computational explanation, are affected too, as I intend to show in more detail in a forthcoming work. Philippe Huneman’s topological explanation (see Huneman 2015) is instead enabled by antimodularity, which under certain conditions is itself a topological property.

  30. Or descriptions of systems.

  31. I use the term “emergence” in a way akin to the conception, set out in Ronald et al. (1999), of emergence as the appearance of something unexpected: in this case, the unexpected is the fact that a system is not explainable in an intelligible way.

  32. There are many arguments, empirical and theoretical, favoring this conclusion, such as those by Herbert Simon (e.g. in Simon 1962) and Stuart Kauffman (e.g. in Kauffman 1993).

  33. Sales-Pardo et al. (2007, p. 15227).

  34. Orman et al. (2011, p. 273).

  35. Introduced in Kashani et al. (2009).

  36. As reported in Wong et al. (2012); see p. 9, Table 4, and p. 12, Table 5 in that paper.

  37. See, again, Wong et al. (2012).

  38. See, for example, Smaldino and McElreath (2016).


Copyright information

© 2019 Springer Nature Switzerland AG

About this chapter


Cite this chapter

Rivelli, L. (2019). Antimodularity: Pragmatic Consequences of Computational Complexity on Scientific Explanation. In: Berkich, D., d'Alfonso, M. (eds) On the Cognitive, Ethical, and Scientific Dimensions of Artificial Intelligence. Philosophical Studies Series, vol 134. Springer, Cham. https://doi.org/10.1007/978-3-030-01800-9_6


  • DOI: https://doi.org/10.1007/978-3-030-01800-9_6


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-01799-6

  • Online ISBN: 978-3-030-01800-9

  • eBook Packages: Computer Science (R0)
