A Revised Attack on Computational Ontology

An Erratum to this article was published on 01 November 2013

Abstract

There has been an ongoing conflict regarding whether reality is fundamentally digital or analogue. Recently, Floridi has argued that this dichotomy is misapplied. For any attempt to analyse noumenal reality independently of any level of abstraction at which the analysis is conducted is mistaken. In the pars destruens of this paper, we argue that Floridi does not establish that it is only levels of abstraction that are analogue or digital, rather than noumenal reality. In the pars construens of this paper, we reject a classification of noumenal reality as a deterministic discrete computational system. We show, based on considerations from classical physics, why a deterministic computational view of the universe faces problems (e.g., a reversible computational universe cannot be strictly deterministic).

Notes

  1. Some writers, for a variety of reasons some of which may be epistemological, are leery of using the qualifier ‘in itself’ when discussing reality. For a robust defence of this usage (in an epistemological context) see Strawson’s analysis (2008).

  2. Of course, if this set of input–output pairs is infinite, it is questionable whether it can be “given” in an effective sense.

  3. He focuses on digitality from Section 14.3 on. We bring the computational back in the sections “An Argument Against Irreversible Computational Ontology” and “An Attack on Deterministic Reversible Computational Ontology”.

  4. Floridi takes quantum mechanical wave-particle duality to be a less metaphysical example.

  5. An example where it might nevertheless be true is if the disjuncts are necessarily mutually exclusive.

  6. We follow Floridi here in omitting the third possibility that they be hybrid.

  7. This argument elaborates on and defends the following observation made by Pieter Adriaans and Peter van Emde Boas. “If deterministic computation is an information discarding process then it implies that the amount of information in the universe rapidly decreases. This contradicts the second law of thermodynamics” (Adriaans and Van Emde Boas 2011, p. 16).

  8. Of course, the destruction of information does not apply to all outputs of deterministic discrete computations; for example, as observed above, a logical conjunction that yields a ‘1’. (The sketch following these notes quantifies how much information a conjunction discards on average.)

  9. This principle states the minimum amount of entropy that must be released into the environment as the cost of erasing one bit of information.

  10. This is consistent with Calude’s result that discrete computation can only generate new information upper bounded by a constant (2009, pp. 84–85).

  11. Consider, for example, the original formulation of the second law of thermodynamics by Rudolf Clausius. According to his formulation, heat cannot flow spontaneously from a cold reservoir to a hot reservoir without external work being performed on the system. This is easily evident from everyday experience of refrigeration (Bais and Farmer 2008, pp. 613–614).

  12. We thank Ariel Caticha for suggesting this caveat.

  13. See footnote 12.

  14. A degree of freedom of a physical system is, roughly, a direction for potential action. A particle, for example, has three degrees of freedom, as it can move in any one of three independent possible directions in space.

  15. See the “Appendix” below or (Popper 1950a, b) for the details of this argument.

  16. This construction is merely meant to remove the constraint imposed on the standard TM of at most changing a single symbol on a scanned square at any given time. Parallel operations on different regions of the tape are disallowed. Strictly, the multi-tape extension is unnecessary. Importantly, neither the extra head(s) nor the extra tape(s) increases the computational power of the machine.

  17. There are certainly those who argue that accelerating TMs are not logically impossible [e.g., (Copeland 2002)]. But the physical possibility of accelerating TMs in either an atomic universe or a quantum mechanical universe is highly questionable (Davies 2001, pp. 677–679).

  18. Note that this paradox is the result of a thought experiment rather than of some observed phenomena. One approach that provides a quantitative resolution of this paradox is the Fluctuation Theorem (Evans and Searles 2002). This theorem quantifies the probability of observing violations of the second law in small systems observed for a short time. Still, it applies to small-scale systems, whereas here the whole universe is considered as one closed system.

  19. Using Lecerf’s method instead, once the computation is undone only the input remains, assuming that the TM accepts the input. Otherwise, the simulation process does not terminate (Sutner 2004, p. 319).

  20. ‘Miscomputation’ here means a computational malfunction. Such a malfunction may occur when a physical computational system fails to correctly follow some step of the algorithm, thereby possibly producing an incorrect output [see Fresco & Primiero (2013) for more details].
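
To complement footnotes 7 and 8, the following short sketch (our own illustrative Python example, not drawn from the sources cited) quantifies how much information a two-input logical conjunction discards when its inputs are uniformly random: the inputs carry two bits, the output distribution carries less, and each discarded bit must, by the principle cited in footnote 9 (Landauer 1961), ultimately be paid for in dissipated entropy.

    # A minimal illustration (ours, not the authors') of how an irreversible
    # operation such as logical conjunction (AND) discards information.
    from math import log2

    def shannon_entropy(probabilities):
        """Shannon entropy (in bits) of a discrete distribution."""
        return -sum(p * log2(p) for p in probabilities if p > 0)

    # All four equally likely input pairs and the corresponding AND outputs.
    inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
    outputs = [a & b for a, b in inputs]

    h_in = shannon_entropy([1 / len(inputs)] * len(inputs))
    h_out = shannon_entropy([outputs.count(v) / len(outputs) for v in set(outputs)])

    print(f"entropy of inputs : {h_in:.3f} bits")   # 2.000 bits
    print(f"entropy of output : {h_out:.3f} bits")  # about 0.811 bits
    print(f"discarded per AND : {h_in - h_out:.3f} bits")

    # Only the output '1' pins down its inputs uniquely (footnote 8);
    # an output '0' is compatible with three distinct input pairs.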

References

  • Adriaans, P., & Van Emde Boas, P. (2011). Computation, information, and the arrow of time. In S. B. Cooper & A. Sorbi (Eds.), Computability in context (pp. 1–17). Imperial College Press/World Scientific.

  • Baez, J. C., & Stay, M. (2010). Algorithmic thermodynamics. arXiv:1010.2067.

  • Bais, F. A., & Farmer, J. D. (2008). The physics of information. In P. Adriaans & J. van Benthem (Eds.), Handbook of the philosophy of information (pp. 609–683). Amsterdam: Elsevier.

  • Baker, H. (1992). NREVERSAL of fortune—The thermodynamics of garbage collection. In Y. Bekkers & J. Cohen (Eds.), Memory management (Vol. 637, pp. 507–524). Berlin, Heidelberg: Springer.

  • Bennett, C. H. (1973). Logical reversibility of computation. IBM Journal of Research and Development, 17(6), 525–532.

  • Blachowicz, J. (1997). Analog representation beyond mental imagery. The Journal of Philosophy, 94(2), 55–84.

  • Calude, C. S. (2009). Information: The algorithmic paradigm. In G. Sommaruga (Ed.), Formal theories of information (Vol. 5363, pp. 79–94). Berlin, Heidelberg: Springer-Verlag.

  • Calude, C., Campbell, D. I., Svozil, K., & Ştefănescu, D. (1995). Strong determinism vs. computability. In W. D. Schimanovich, E. Köhler, & P. Stadler (Eds.), The foundational debate, complexity and constructivity in mathematics and physics. Berlin: Springer.

  • Copeland, B. J. (2002). Accelerating Turing machines. Minds and Machines, 12(2), 281–300.

  • Davies, E. B. (2001). Building infinite machines. The British Journal for the Philosophy of Science, 52(4), 671–682. doi:10.1093/bjps/52.4.671.

  • Evans, D. J., & Searles, D. J. (2002). The fluctuation theorem. Advances in Physics, 51(7), 1529–1585.

  • Floridi, L. (2009). Against digital ontology. Synthese, 168(1), 151–178.

  • Floridi, L. (2011). The philosophy of information. Oxford: Oxford University Press.

  • Fredkin, E. (1990). An informational process based on reversible universal cellular automata. Physica D: Nonlinear Phenomena, 45(1–3), 254–270.

  • Fredkin, E. (1992). Finite nature. In G. Chardin (Ed.), Proceedings of the XXVIIth Rencontre De Moriond Series. France: Editions Frontieres.

  • Fredkin, E., & Toffoli, T. (1982). Conservative logic. International Journal of Theoretical Physics, 21(3–4), 219–253.

  • Fresco, N. (2010). Explaining computation without semantics: Keeping it simple. Minds and Machines, 20(2), 165–181.

  • Fresco, N., & Primiero, G. (2013). Miscomputation. Philosophy & Technology, 26(3), 253–272.

  • Jaynes, E. T. (1965). Gibbs vs Boltzmann entropies. American Journal of Physics, 33(5), 391–398.

  • Kant, I. (1996). Critique of pure reason (W. S. Pluhar, Trans.). Indianapolis, IN: Hackett Publishing Company.

  • Koupelis, T. (2011). In quest of the universe. Sudbury, MA: Jones and Bartlett Publishers.

  • Landauer, R. (1961). Irreversibility and heat generation in the computing process. IBM Journal of Research and Development, 5(3), 183–191.

  • Lange, K.-J., McKenzie, P., & Tapp, A. (2000). Reversible space equals deterministic space. Journal of Computer and System Sciences, 60(2), 354–367.

  • Laraudogoitia, J. P. (2011). Supertasks. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Spring 2011 edition). http://plato.stanford.edu/archives/spr2011/entries/spacetime-supertasks/.

  • Lewis, D. (1971). Analog and digital. Noûs, 5(3), 321–327.

  • Li, M., & Vitányi, P. M. B. (2008). An introduction to Kolmogorov complexity and its applications. New York: Springer.

  • Maley, C. J. (2010). Analog and digital, continuous and discrete. Philosophical Studies, 155(1), 117–131.

  • Maroney, O. J. E. (2009). Does a computer have an arrow of time? Foundations of Physics, 40(2), 205–238.

  • Modgil, M. S. (2009). Loschmidt’s paradox, entropy and the topology of spacetime. arXiv:0907.3165.

  • O’Brien, G., & Opie, J. (2006). How do connectionist networks compute? Cognitive Processing, 7(1), 30–41.

  • Piccinini, G. (2007). Computation without representation. Philosophical Studies, 137(2), 205–241.

  • Popper, K. R. (1950a). Indeterminism in quantum physics and in classical physics. Part I. British Journal for the Philosophy of Science, 1(2), 117–133.

  • Popper, K. R. (1950b). Indeterminism in quantum physics and in classical physics. Part II. British Journal for the Philosophy of Science, 1(3), 173–195.

  • Pylyshyn, Z. W. (1984). Computation and cognition: Toward a foundation for cognitive science. Cambridge, MA: The MIT Press.

  • Rapaport, W. J. (1998). How minds can be computational systems. Journal of Experimental & Theoretical Artificial Intelligence, 10(4), 403–419.

  • Schulman, L. S. (2005). A computer’s arrow of time. Entropy, 7(4), 221–233.

  • Steinhart, E. (1998). Digital metaphysics. In T. W. Bynum & J. H. Moor (Eds.), The digital phoenix (pp. 117–134). Cambridge: Blackwell.

  • Strawson, G. (2008). Can we know the nature of reality as it is in itself? In G. Strawson (Ed.), Real materialism: And other essays (pp. 75–100). Oxford: Oxford University Press.

  • Sutner, K. (2004). The complexity of reversible cellular automata. Theoretical Computer Science, 325(2), 317–328.

  • Teixeira, A., Matos, A., Souto, A., & Antunes, L. (2011). Entropy measures vs. Kolmogorov complexity. Entropy, 13(12), 595–611.

  • Vitányi, P. (2005). Time, space, and energy in reversible computing. In Proceedings of the 2nd conference on computing frontiers (pp. 435–444). ACM Press.

  • Wheeler, J. (1982). The computer and the universe. International Journal of Theoretical Physics, 21(6–7), 557–572.

  • Wolfram, S. (2002). A new kind of science. Champaign, IL: Wolfram Media.

  • Wolpert, D. H. (2001). Computational capabilities of physical systems. Physical Review E, 65(1), 016128.

  • Zuse, K. (1970). Calculating space. Cambridge, MA: Massachusetts Institute of Technology, Project MAC.

  • Zuse, K. (1993). The computer—My life. Berlin: Springer-Verlag.

Author information

Corresponding author

Correspondence to Nir Fresco.

Appendix

In order to make our analysis above more self-contained and accessible to readers of different backgrounds, we provide below a brief summary of some of Popper’s arguments that are relevant to our discussion. He argued that most physical systems are indeterministic (Popper 1950a, b). Popper equated (physical) indeterminism with the doctrine that “not all events are ‘determined’ in every detail” (1950a, p. 120). Conversely, determinism was taken to be the doctrine that all events are determined, without exception, whether future, present or past. By “determined events” he meant events that are “predictable in accordance with the methods of science” (ibid). The unpredictability of the events under consideration is such that it cannot be “mitigated by the predictability of their frequencies” (ibid, p. 117, italics added). This account of determinism makes it scientifically refutable.

Moreover, the predictability of such events is, according to Popper, a physical impossibility. An unpredictable observable event may still, fortuitously, be described correctly in advance. So the predictability of such an event is not logically impossible, but rather physically impossible by means of the rational methods of prediction in physics. These methods include the acquisition of initial information by observation (ibid, pp. 117–118). Popper showed that, in an important sense, all scientific predictions are deficient even from the perspective of classical physics.

Furthermore, Popper suggested that the deterministic character of classical Newtonian mechanics is illustrated by the story of the Laplacean demon, a superhuman omniscient entity (ibid, p. 122). If all the natural laws were in the form of equations, which uniquely determine the future from the present state, then by having a perfect knowledge of the initial state of the world (the initial information) and using mathematical deduction this demon would be able to predict every future state of the world. This kind of predictability is arguably deterministic, for it implies that given the foreknowledge of any future state (based on the initial information), all future states must be determined now, because past, present and future states are all necessarily connected.

To ground this nonphysical demon in a physical realm, Popper proposed to replace it with a calculating predicting machine—Predictor (ibid, p. 118). Predictor (which is in fact a computer) was designed according to the laws of classical physics so as to produce permanent records of some type (say, a write-once TM tape) that can be interpreted as predictions of the positions, velocities, and masses of physical particles. Popper argued that Predictor could never fully predict every one of its own future states, nor those of the part of the world with which it interacts. He showed that either no such Predictor could exist in the physical world or its future states could not be predicted by any existing Predictor (ibid, p. 119).

The crux of Popper’s Predictor argument is that just as indeterminism in quantum physics is related to the measurement problem, in classical physics the interaction of Predictor with the system it measures (possibly itself) results in a similar indeterminism. If Predictor B measures another Predictor A, then B amplifies the signals from A. When another Predictor C measures the system A + B, C must also interact and amplify the signals from A + B. It is further assumed that B must also measure C and that, Popper argued, leads to the breakdown of the “one way membrane” between B and C and with it the conditions for successful predictions. None of these Predictors can have knowledge of its own state before that state has passed. Each Predictor can obtain information about its own state only either by studying the results obtained by another Predictor or by being given these results (ibid, pp. 129–130). Also, he showed that there could not be an infinite series of Predictors, such that the nth Predictor is superior to its predecessors. Only on the assumption that for every Predictor P there exists some P+ that is not only superior to P but also undetectable by P does the finite determinist doctrine hold (ibid, pp. 131–133).

Another version of the Predictor argument was based on a variation on Russell’s Tristram Shandy paradox. Tristram Shandy attempts to narrate his full autobiography and in so doing he spends more time on the description of the details of every event than the time it took him to live through it. His autobiography, accordingly, rather than reaching a state of being “up to date” with present time, becomes more and more out of date. Even if Tristram Shandy is arbitrarily fast in narrating the full description of his history, he must be incapable of bringing it completely up to date (Popper 1950b, p. 174).

Popper proposed another Predictor (call it TP) that is endowed with a memory in which the results of its calculated predictions and the initial information received are stored. TP receives accurate and complete information about its state at time T0 and is tasked to predict some future state at time Tn. For every physical machine there is a maximum running speed and, as a result, a minimum length of time needed for completing even the shortest description of which that machine is capable. Therefore, it cannot simply be assumed that the series of time intervals between T0 and Tn, during which TP attempts to perform its prediction task, converges. TP must retain in memory not only records of the final predictions, but also intermediate partial results of its calculations. Any description will take at least as much memory space as the description of the state to be predicted. Since the memory space that can be used before Tn has elapsed is finite, the description of TP’s memory cannot be completed before Tn, regardless of TP’s speed (Popper 1950b, pp. 175–177).
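
A toy model may help to see why the backlog never shrinks. The sketch below is our own hypothetical construction, not Popper’s formalism: it models a predictor that must append a complete description of its current memory to that same memory, and since every such description is at least as long as what it describes, the amount still to be recorded grows at every step.

    # A toy model (ours, not Popper's) of a predictor required to store a
    # complete description of its own memory. Each snapshot is at least as
    # long as the memory it describes, so the record never catches up.

    def snapshot(memory):
        """Return a complete, exact description of the current memory contents."""
        return "|".join(memory)

    memory = ["initial information received at T0"]
    for step in range(1, 6):
        description = snapshot(memory)   # describing what is stored so far...
        memory.append(description)       # ...adds at least that much again
        total = sum(len(cell) for cell in memory)
        print(f"step {step}: memory now holds {total} characters")

    # The total roughly doubles at every step, so any finite memory available
    # before the target time Tn is exhausted long before the description is done.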

The last argument presented here is Popper’s Oedipus Effect, according to which prediction can influence the predicted event (ibid, pp. 188–190). The point is that the receipt of complete information about its immediate past by a Predictor C will ultimately change its future state, since C is designed to act upon the informative signals received. This self-information qualifies as a strong interference with the working of C. Still, some very superior C+ may foresee the future state change caused by C receiving the information, and give C inaccurate information about C’s state ingeniously designed to induce C to make correct predictions about itself. However, no finite piece of information can be precise self-information. For such finite self-information must contain a description of itself, and this is impossible, as there cannot be a bijection from a finite data set S to a smaller subset of S.
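
The counting fact invoked in the last sentence is an instance of the pigeonhole principle, and the following brute-force sketch (our own, purely illustrative) verifies it for a small case: no map from a finite set into a strictly smaller subset of it is injective, so in particular none is a bijection.

    # Brute-force check (illustrative only): no function from a finite set S
    # into a strictly smaller subset of S is injective, hence no bijection
    # from S onto a proper subset of S exists.

    from itertools import product

    S = [0, 1, 2, 3]        # a small finite "data set"
    subset = S[:-1]         # a strictly smaller subset of S

    injective_maps = 0
    for images in product(subset, repeat=len(S)):   # every map S -> subset
        if len(set(images)) == len(S):              # injective iff images all differ
            injective_maps += 1

    print(f"maps from S into the smaller subset: {len(subset) ** len(S)}")
    print(f"injective maps among them          : {injective_maps}")   # prints 0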

Cite this article

Fresco, N., Staines, P.J. A Revised Attack on Computational Ontology. Minds & Machines 24, 101–122 (2014). https://doi.org/10.1007/s11023-013-9327-1
