The simulation hypothesis is the hypothesis that we live in a simulation. It is a metaphysical hypothesis, not an epistemic one, but some argue that careful consideration of the metaphysical hypothesis can teach valuable epistemic lessons. The simulation hypothesis is related to the digital physics hypothesis, i.e., the hypothesis that physical reality (or at least that portion of it with which we are in causal contact) is ultimately computational or 'digital'. But the simulation hypothesis further states that there is some kind of higher reality, presumably including a creator, existing outside of the simulation. Moreover, not all simulations are digital.
Observable consequences of the hypothesis that the observed universe is a numerical simulation performed on a cubic space-time lattice or grid are explored. The simulation scenario is first motivated by extrapolating current trends in computational resource requirements for lattice QCD into the future. Using the historical development of lattice gauge theory technology as a guide, we assume that our universe is an early numerical simulation with unimproved Wilson fermion discretization and investigate potentially observable consequences. Among the observables considered are the muon g-2 and the current differences between determinations of $\alpha$, but the most stringent bound on the inverse lattice spacing of the universe, $b^{-1} \gtrsim 10^{11}$ GeV, is derived from the high-energy cutoff of the cosmic ray spectrum. The numerical simulation scenario could reveal itself in the distributions of the highest energy cosmic rays exhibiting a degree of rotational symmetry breaking that reflects the structure of the underlying lattice.
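The quoted bound is, at root, a unit conversion worth making explicit: the high-energy cutoff of the cosmic-ray spectrum sits near $10^{20}$ eV, which is $10^{11}$ GeV, the scale quoted for the inverse lattice spacing. A minimal sketch of that arithmetic (an editorial illustration, not code from the paper; the cutoff value is approximate):

```python
# Back-of-envelope check: express the approximate cosmic-ray cutoff
# energy (~1e20 eV) in GeV, the scale quoted for the bound on b^-1.
EV_PER_GEV = 1e9                         # 1 GeV = 10^9 eV (exact)
cutoff_ev = 1e20                         # approximate high-energy cutoff, in eV
cutoff_gev = cutoff_ev / EV_PER_GEV
print(f"cutoff ~ {cutoff_gev:.0e} GeV")  # ~ 1e+11 GeV
```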
Can the theory that reality is a simulation be tested? We investigate this question based on the assumption that if the system performing the simulation is finite (i.e. has limited resources), then to achieve low computational complexity, such a system would, as in a video game, render content (reality) only at the moment that information becomes available for observation by a player and not at the moment of detection by a machine (that would be part of the simulation and whose detection would also be part of the internal computation performed by the Virtual Reality server before rendering content to the player). Guided by this principle we describe conceptual wave/particle duality experiments aimed at testing the simulation theory.
Those who believe suitably programmed computers could enjoy conscious experience of the sort we enjoy must accept the possibility that their own experience is being generated as part of a computerized simulation. It would be a mistake to dismiss this as just one more radical sceptical possibility: for as Bostrom has recently noted, if advances in computer technology were to continue at close to present rates, there would be a strong probability that we are each living in a computer simulation. The first part of this paper is devoted to broadening the scope of the argument: even if computers cannot sustain consciousness (as many dualists and materialists believe), there may still be a strong likelihood that we are living simulated lives. The implications of this result are the focus of the second part of the paper. The topics discussed include: the Doomsday argument, scepticism, the different modes of virtual life, transcendental idealism, the Problem of Evil, and simulation ethics.
The Simulation Hypothesis proposes that all of reality is in fact an artificial simulation, analogous to a computer simulation. Outlined here is a method for programming relativistic mass, space and time at the Planck level as applicable for use in the Planck Universe-as-a-Simulation Hypothesis. For the virtual universe the model uses a 4-axis hyper-sphere that expands in incremental steps (the simulation clock-rate). Virtual particles that oscillate between an electric wave-state and a mass point-state are mapped within this hyper-sphere, the oscillation driven by this expansion. Particles are assigned an N-S axis which determines the direction in which they are pulled along by the expansion; thus an independent particle motion may be dispensed with. Only in the mass point-state do particles have fixed hyper-sphere co-ordinates. The rate of expansion translates to the speed of light, and so in terms of the hyper-sphere co-ordinates all particles (and objects) travel at the speed of light; time (as the clock-rate) and velocity (as the rate of expansion) are therefore constant. However, photons, as the means of information exchange, are restricted to lateral movement across the hyper-sphere, thus giving the appearance of a 3-D space. Lorentz formulas are used to translate between this 3-D space and the hyper-sphere co-ordinates, relativity resembling the mathematics of perspective.
Gravitational formulas are defined in terms of Planck units and units of $\hbar c$. Mass is not assigned as a constant property but is instead treated as a discrete event defined by units of Planck mass, with gravity as an interaction between these units, the gravitational orbit as the sum of these mass-mass interactions, and the gravitational coupling constant as a measure of the frequency of these interactions and not the magnitude of the gravitational force itself. Each particle that is in the mass-state (defined by a unit of Planck mass) per unit of Planck time is directly linked to every other particle also in the mass-state by a discrete unit of $m_P v^2 r = \hbar c$; the velocity of a gravitational orbit is summed from these individual $v^2$. As this approach presumes a digital time, it is suitable for use in programming Simulation Hypothesis models. As this link is responsible for the particle-particle interaction, it is analogous to the graviton. The orbital angular momentum of the planetary orbits derives from the sum of the planet-sun particle-particle orbital angular momentum, irrespective of the angular momentum of the sun itself, and the rotational angular momentum of a planet includes particle-particle rotational angular momentum.
Outlined here is a simulation hypothesis approach that uses an expanding 4-axis hyper-sphere (the simulation clock-rate measured in units of Planck time) and mathematical particles that oscillate between an electric wave-state and a mass (unit of Planck mass per unit of Planck time) point-state. Particles are assigned a spin axis which determines the direction in which they are pulled by this (hyper-sphere pilot wave) expansion; thus all particles travel at, and only at, the velocity of expansion (the origin of $c$), however only the particle point-state has definable co-ordinates within the hyper-sphere. Photons are the mechanism of information exchange; as they lack a mass state they can only travel laterally (in hyper-sphere co-ordinate terms) between particles, and so this hyper-sphere expansion cannot be directly observed. Relativity then becomes the mathematics of perspective, translating between the absolute (hyper-sphere) and the relative motion (3-D space) co-ordinate systems. A discrete 'pixel' lattice geometry is assigned as the gravitational space. Units of $\hbar c$ 'physically' link particles into orbital pairs. As these are direct particle-to-particle links, a gravitational force between macro objects is not required; the gravitational orbit is the sum of these individual orbiting pairs. A 14.6-billion-year-old hyper-sphere (the sum of Planck black-hole units) has similar parameters to the cosmic microwave background. The Casimir force is a measure of the background radiation density.
According to the most common interpretation of the simulation argument, we are very likely to live in an ancestor simulation. It is interesting to ask if some families of simulations are more likely than others inside the space of all simulations. We argue that a natural probability measure is given by computational complexity: easier simulations are more likely to be run. Remarkably, this allows us to extract experimental predictions from the fact that we live in a simulation. For instance, we show that it is very likely that humanity will not achieve interstellar travel and that humanity will not meet other intelligent species in the universe, in turn explaining the Fermi Paradox. Conversely, experimental falsification of any of these predictions would constitute evidence against our reality being a simulation.
Both patched versions of the Bostrom/Kulczycki simulation argument contain serious objective errors, discovered while attempting to formalize them in predicate logic. The English glosses of both versions involve badly misleading meanings of vague magnitude terms, from which their impressiveness benefits. We fix the errors, prove optimal versions of the arguments, and argue that both are much less impressive than they originally appeared. Finally, we provide a guide for readers to evaluate the simulation argument for themselves, using well-justified settings of the argument parameters that have simple, accurate statements in English, which are easier to understand and critique than the statements in the original paper.
I present a new argument that we are much more likely to be living in a computer simulation than in the ground-level of reality. (Similar arguments can be marshalled for the view that we are more likely to be Boltzmann brains than ordinary people, but I focus on the case of simulations.) I explain how this argument overcomes some objections to Bostrom’s classic argument for the same conclusion. I also consider to what extent the argument depends upon an internalist conception of evidence, and I refute the common line of thought that finding many simulations being run—or running them ourselves—must increase the odds that we are in a simulation. GPI Working Paper No. 16-2021.
In the last decade, an urban legend about “glitches in the matrix” has become popular. As is typical of urban legends, there is no evidence for most such stories, and the phenomenon can be explained as resulting from hoaxes, creepypasta, coincidence, and different forms of cognitive bias. In addition, the folk understanding of probability does not bear much resemblance to actual probability distributions, resulting in the illusion of improbable events, as in the “birthday paradox”. Moreover, many such stories, even if they were true, could not be considered evidence of glitches in a linear-time computer simulation, as the reported “glitches” often assume non-linearity of time and space—like premonitions or changes to the past. Different types of simulations assume different types of glitches; for example, dreams are often very glitchy. Here, we explore the theoretical conditions necessary for such glitches to occur and then create a typology of so-called “GITM” reports. One interesting hypothetical subtype is “viruses in the matrix”, that is, self-replicating units which consume computational resources in a manner similar to transposons in the genome, biological and computer viruses, and memes.
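The “birthday paradox” mentioned here is the standard illustration of how folk probability misjudges coincidence: among only 23 people, a shared birthday is more likely than not. A short sketch of the exact calculation (an editorial illustration, not from the paper; birthdays assumed independent and uniform over 365 days):

```python
def shared_birthday_probability(n: int, days: int = 365) -> float:
    """Probability that at least two of n people share a birthday,
    assuming independent, uniformly distributed birthdays."""
    p_all_distinct = 1.0
    for k in range(n):
        p_all_distinct *= (days - k) / days
    return 1.0 - p_all_distinct

# With just 23 people the probability already exceeds 1/2 (~0.507),
# which most people find counterintuitive.
print(round(shared_birthday_probability(23), 3))  # → 0.507
```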
The goal of the article is to explore the most probable type of simulation in which humanity lives (if any) and how this affects simulation termination risks. We first explore what kind of simulation humanity is most likely located in, based on pure theoretical reasoning. We suggest a new patch to the classical simulation argument, showing that we are likely simulated not by our own descendants, but by alien civilizations. Based on this, we provide a classification of different possible simulations, and we find that simpler, less expensive and one-person-centered simulations, resurrectional simulations, or simulations of the first artificial general intelligence’s (AGI’s) origin (singularity simulations) should dominate. Also, simulations which simulate the 21st century and global catastrophic risks are probable. We then explore whether the simulation could collapse or be terminated. Most simulations must be terminated after they model the singularity or after they model a global catastrophe before the singularity. Undeniably observed glitches, but not philosophical speculations, could result in simulation termination. The simulation could collapse if it is overwhelmed by glitches. The Doomsday Argument in simulations implies termination soon. We conclude that all types of the most probable simulations, except resurrectional simulations, are prone to termination risks in a relatively short time frame of hundreds of years or less from now.
Preston Greene (2020) argues that we should not conduct simulation investigations because of the risk that we might be terminated if our world is a simulation designed to research various counterfactuals about the world of the simulators. In response, we propose a sequence of arguments, most of which have the form of an “even if” response to anyone unmoved by our previous arguments. It runs thus: (i) if simulation is possible, then simulators are as likely to care about simulating simulations as they are to care about simulating basement (i.e. non-simulated) worlds. But (ii) even if simulators are interested only in simulating basement worlds, the discovery that we are in a simulation will have little or no impact on the evolution of ordinary events. But (iii) even if discovering that we are in a simulation impacts the evolution of ordinary events, the effects of seeming to do so could also happen in a basement world, and might be the subject of interesting counterfactuals in the basement world. Finally, (iv) there is little reason to think that there is a catastrophic effect from successful simulation probes, and no argument from the precautionary principle can be used to leverage the negligible credence one ought to have in this. Thus, if we do develop a simulation probe, then let’s do it.
Many philosophers have been attracted to a restricted version of the principle of indifference in the case of self-locating belief. Roughly speaking, this principle states that, within any given possible world, one should be indifferent between different hypotheses concerning who one is within that possible world, so long as those hypotheses are compatible with one’s evidence. My first goal is to defend a more precise version of this principle. After responding to several existing criticisms of such a principle, I argue that existing formulations of the principle are crucially ambiguous, and I go on to defend a particular disambiguation of the principle. According to the disambiguation I defend, how we should apply this restricted principle of indifference sensitively depends on our background metaphysical beliefs. My second goal is to apply this disambiguated principle to classical skeptical problems in epistemology. In particular, I argue that Eternalism threatens to lead us to external world skepticism, and Modal Realism threatens to lead us to inductive skepticism.
(Draft of Feb 2023, see upcoming issue for Chalmers' reply) In Reality+: Virtual Worlds and the Problems of Philosophy, David Chalmers argues, among other things, that: if we are living in a full-scale simulation, we would still enjoy broad swathes of knowledge about non-psychological entities, such as atoms and shrubs; and our lives might still be deeply meaningful. Chalmers views these claims as at least weakly connected: the former claim helps forestall a concern that if objects in the simulation are not genuine (and so not knowable), then life in the simulation is illusory and therefore not as valuable as a non-simulated life. Taking up these questions, I argue that in general, the value of social knowledge for a meaningful life dramatically swamps the value of non-social knowledge for a meaningful life. Along the way, I propose a non-additive model of the meaningfulness of life, according to which the overall effect of some potential contributor of value to a life depends in part on what is already in a life. One upshot is that the vindication of non-social knowledge, absent a correlative vindication of social knowledge, contributes either not at all or scarcely at all to the claim that our lives in the simulation might be deeply meaningful. This is so even though the vindication of non-social knowledge does forestall the concern that in the simulation, our lives might be wholly meaningless.
According to veridicalism, your beliefs about the existence of ordinary objects are typically true, and can constitute knowledge, even if you are in some global sceptical scenario. Even if you are a victim of Descartes’ demon, you can still know that there are tables, for example. Accordingly, even if you don’t know whether you are in some such scenario, you still know that there are tables. This refutes the standard sceptical argument. But does it solve the sceptical problem posed by that argument? I argue that it does not, because we do not know substantively more about the external world according to veridicalism than we do according to the sceptical argument. Rather, veridicalism merely reformulates what little knowledge we have. I then draw some general conclusions about the nature of the sceptical problem, the formulation of the standard argument, and the significance of this for some other, non-veridicalist strategies.
The simulation hypothesis can reinforce a cynical dismissal of human potential. This attitude can allow online platform designers to rationalize employing manipulative neuromarketing techniques to control user decisions. We point to cognitive boosting techniques at both user and designer levels to build critical reflection and mindfulness.
This book, the first of its kind, puts forward a novel, unified cognitive account of skeptical doubt. Historically, most philosophers have tried to tackle this difficult topic by directly arguing that skeptical doubt is false. But N. Ángel Pinillos does something different. He begins by trying to uncover the hidden mental rule which, for better or worse, motivates our skeptical inclinations. He then gives an account of the broader cognitive purpose of having and applying this rule. Based on these ideas, he shows how we can give a new response to the traditional problem of global skepticism. He also argues that philosophical skepticism is not just something that comes up during philosophical reflection, as David Hume, Charles Sanders Peirce and other philosophers have urged. Instead, it is of great practical significance. The rule which produces skepticism may itself be operative in certain pathologies such as obsessive-compulsive disorder, in creative endeavors, and in conspiratorial thinking. The rule can also explain some of our reluctance to trust statistical evidence, especially in legal settings. More broadly, this volume aims to breathe new life into a classic problem in philosophy by tackling it from a new perspective and exploring new areas of application. The book will be of interest to philosophers, psychologists and anyone interested in the human capacity to doubt and to question our beliefs.
Various theorists contend that we may live in a computer simulation. David Chalmers in turn argues that the simulation hypothesis is a metaphysical hypothesis about the nature of our reality, rather than a sceptical scenario. We use recent work on consciousness to motivate new doubts about both sets of arguments. First, we argue that if either panpsychism or panqualityism is true, then the only way to live in a simulation may be as brains-in-vats, in which case it is unlikely that we live in a simulation. We then argue that if panpsychism or panqualityism is true, then viable simulation hypotheses are substantially sceptical scenarios. We conclude that the nature of consciousness has wide-ranging implications for simulation arguments.
This article develops a logical (or semantic) response to scepticism about the existence of an external world. Specifically, it is argued that any doubt about the existence of an external world can be proved to be false, but whatever appears to be doubt about the existence of an external world that _cannot_ be proved to be false is nonsense, insofar as it must rely on the assertion of something that is logically impossible. The article further suggests that both G. E. Moore and Ludwig Wittgenstein worked towards the same solution but left their work unfinished.
I introduce the implantation argument, a new argument for the existence of God. Spatiotemporal extensions believed to exist outside of the mind, composing an external physical reality, cannot be composed either of atomlessness or of Democritean atoms, and therefore the inner experience of an external reality containing spatiotemporal extensions believed to exist outside of the mind does not represent the external reality; the mind is a mere cinematic-like mindscreen, implanted into the mind by a creator-God. It will be shown that only a creator-God can be the implanting creator of the mindscreen simulation, and that other simulation theories, such as Bostrom’s famous account, that do not involve a creator-God as the mindscreen simulation creator involve a reification fallacy.
‘Simulation Hypotheses’ are imaginative scenarios that are typically employed in philosophy to speculate on how likely it is that we are currently living within a simulated universe as well as on our prospects of ever discerning whether we do in fact inhabit one. These philosophical questions in particular have overshadowed other aspects and potential uses of simulation hypotheses, some of which are foregrounded in this article. More specifically, “A Theodicy for Artificial Universes” focuses on the moral implications of simulation hypotheses with the objective of speculatively answering questions concerning computer simulations such as: If we are indeed living in a computer simulation, what might be its purpose? What aspirations and values could be inferentially attributed to its alleged creators? And would living in a simulated universe affect the value and meaning we attribute to our existence?
I show that some of the most initially attractive routes of refuting epistemological solipsism face serious obstacles. I also argue that for creatures like ourselves, solipsism is a genuine form of external world skepticism. Together, these claims support the following morals: no proposed solution to external world skepticism can succeed which does not also solve the problem of epistemological solipsism. And, more tentatively: in assessing proposed solutions to external world skepticism, epistemologists should explicitly consider whether those solutions extend to knowledge of other minds. Finally, and also tentatively: epistemological solipsism warrants more philosophical attention than it currently enjoys.
Is imagination a source of knowledge? Timothy Williamson has recently argued that our imaginative capacities can yield knowledge of a variety of matters, spanning from everyday practical matters to logic and set theory. Furthermore, imagination for Williamson plays a similar epistemic role in cognitive processes that we would traditionally classify as either a priori or a posteriori, which he takes to indicate that the distinction itself is shallow and epistemologically fruitless. In this chapter, I aim to defend the a priori-a posteriori distinction from Williamson’s challenge by questioning his account of imagination. I distinguish two notions of imagination at play in Williamson’s account – sensory vs. belief-like imagination – and show that both face empirical and normative issues. Sensory imagination seems neither necessary nor sufficient for knowledge, whereas belief-like imagination isn’t adequately disentangled from inference. Additionally, Williamson’s examples are ad hoc and don’t generalize. I conclude that Williamson’s case against the a priori-a posteriori distinction is unconvincing, and so is the thesis that imagination is an epistemic source.
A popular positivistic line of thinking seems to be cropping up again, declaring that the sciences are on the verge of a paradigmatic shift. One that will merge science and philosophy to finally answer all the great big questions once and for all. Questions such as: What is life? What is consciousness? What makes individuals who they are? Why does our universe seem fine-tuned for our existence? How did it all begin? While such questions are undoubtedly important, the truth is, they are essentially philosophical. That is to say, they escape the kind of exactness required of the hard sciences. The upshot is that they are at best only answerable to a limited extent, if they are even answerable at all.
Historically, the hypothesis that our world is a computer simulation has struck many as just another improbable-but-possible “skeptical hypothesis” about the nature of reality. Recently, however, the simulation hypothesis has received significant attention from philosophers, physicists, and the popular press. This is due to the discovery of an epistemic dependency: if we believe that our civilization will one day run many simulations concerning its ancestry, then we should believe that we are probably in an ancestor simulation right now. This essay examines a troubling but underexplored feature of the ancestor-simulation hypothesis: the termination risk posed by both ancestor-simulation technology and experimental probes into whether our world is an ancestor simulation. This essay evaluates the termination risk by using extrapolations from current computing practices and simulation technology. The conclusions, while provisional, have great implications for debates concerning the fundamental nature of reality and the safety of contemporary physics.
I defend a how-possibly argument for Kantian (or Kant*-ian) transcendental idealism, drawing on concepts from David Chalmers, Nick Bostrom, and the cyberpunk subgenre of science fiction. If we are artificial intelligences living in a virtual reality instantiated on a giant computer, then the fundamental structure of reality might be very different than we suppose. Indeed, since computation does not require spatial properties, spatiality might not be a feature of things as they are in themselves but instead only the way that things necessarily appear to us. It might seem unlikely that we are living in a virtual reality instantiated on a non-spatial computer. However, understanding this possibility can help us appreciate the merits of transcendental idealism in general, as well as transcendental idealism’s underappreciated skeptical consequences.
Do we live in a computer simulation? I will present an argument that the results of a certain experiment constitute empirical evidence that we do not live in, at least, one type of simulation. The type of simulation ruled out is very specific. Perhaps that is the price one must pay to make any kind of Popperian progress.
Cartesian arguments for global skepticism about the external world start from the premise that we cannot know that we are not in a Cartesian scenario such as an evil-demon scenario, and infer that because most of our empirical beliefs are false in such a scenario, these beliefs do not constitute knowledge. Veridicalist responses to global skepticism reply that these arguments fail because in Cartesian scenarios, many or most of our empirical beliefs are true. Some veridicalist responses have been motivated using verificationism, externalism, and coherentism. I argue that a more powerful veridicalist response to global skepticism can be motivated by structuralism, on which physical entities are understood as those that play a certain structural role. I develop the structuralist response and address objections.
The Simulation Hypothesis proposes that all of reality, including the earth and the universe, is in fact an artificial simulation, analogous to a computer simulation, and as such our reality is an illusion. In this essay I describe a method for programming mass, length, time and charge (MLTA) as geometrical objects derived from the formula for a virtual electron, $f_e = 4\pi^2r^3$ ($r = 2^6 3 \pi^2 \alpha \Omega^5$), where the (inverse) fine structure constant $\alpha$ = 137.03599... and $\Omega$ = 2.00713494... are mathematical constants, and the MLTA geometries are: M = (1), T = ($2\pi$), L = ($2\pi^2\Omega^2$), A = ($4\pi \Omega)^3/\alpha$. As objects they are independent of any set of units and also of any numbering system, terrestrial or alien. As the geometries are interrelated according to $f_e$, we can replace designations such as ($kg, m, s, A$) with a rule set: mass = $u^{15}$, length = $u^{-13}$, time = $u^{-30}$, ampere = $u^{3}$. The formula $f_e$ is unit-less ($u^0$) and combines these geometries in the ratios M$^9$T$^{11}$/L$^{15}$ and (AL)$^3$/T; as such these ratios are unit-less. To translate MLTA to their respective SI Planck units requires an additional 2 unit-dependent scalars. We may thereby derive the CODATA 2014 physical constants via the 2 (fixed) mathematical constants ($\alpha, \Omega$), 2 dimensioned scalars and the rule set $u$. As all constants can be defined geometrically, the least precise constants ($G, h, e, m_e, k_B$...) can also be solved via the most precise ($c, \mu_0, R_\infty, \alpha$), numerical precision then being limited by the precision of the fine structure constant $\alpha$.
The simulation hypothesis proposes that all of reality is an artificial simulation. In this article I describe a simulation model that derives Planck level units as geometrical forms from a virtual (dimensionless) electron formula $f_e$ that is constructed from 2 unit-less mathematical constants, the fine structure constant $\alpha$ and $\Omega$ = 2.00713494... ($f_e = 4\pi^2r^3$, $r = 2^6 3 \pi^2 \alpha \Omega^5$). The mass, space, time, charge units are embedded in $f_e$ according to the ratio ${M^9T^{11}/L^{15}} = (AL)^3/T$ (units = 1), giving mass M = 1, time T = $2\pi$, length L = $2\pi^2\Omega^2$, ampere A = $(4\pi \Omega)^3/\alpha$. We can thus, for example, create as much mass M as we wish, but with the proviso that we create an equivalent space L and time T to balance the above. The 5 SI units $kg, m, s, A, K$ are derived from a single unit $u$ = sqrt(velocity/mass) that also defines the relationships between the SI units: kg = $u^{15}$, m = $u^{-13}$, s = $u^{-30}$, A = $u^{3}$, $k_B = u^{29}$. To convert MLTA from the above $\alpha, \Omega$ geometries to their respective SI Planck unit numerical values (and thus solve the dimensioned physical constants $G, h, e, c, m_e, k_B$) requires an additional 2 unit-dependent scalars. Results are consistent with CODATA 2014. The rationale for the virtual electron was derived using the square root of momentum P and a black-hole electron model as a function of magnetic monopoles AL (ampere-meters) and time T.
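One claim in this abstract can be checked directly: under the stated rule set (kg = $u^{15}$, m = $u^{-13}$, s = $u^{-30}$, A = $u^{3}$), both M$^9$T$^{11}$/L$^{15}$ and (AL)$^3$/T should reduce to $u^0$, i.e. be unit-less. A sketch verifying the exponent arithmetic (the exponents are taken from the abstract; the variable names are mine):

```python
# Exponents of the base unit u for each SI unit, per the abstract's rule set.
KG, M, S, A = 15, -13, -30, 3  # kg = u^15, m = u^-13, s = u^-30, A = u^3

# M^9 T^11 / L^15: mass carries kg, time carries s, length carries m.
lhs = 9 * KG + 11 * S - 15 * M
# (A L)^3 / T: an ampere-meter cubed, divided by time.
rhs = 3 * (A + M) - S

print(lhs, rhs)  # → 0 0 : both ratios are indeed unit-less (u^0)
```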
The orthodox position in epistemology, for both externalists and internalists, is that a subject in a ‘bad case’—a sceptical scenario—is so epistemically badly off that they cannot know how badly off they are. Ofra Magidor contends that externalists should break ranks on this question, and that doing so is liberating when it comes time to confront a number of central issues in epistemology, including scepticism and the new evil demon problem for process reliabilism. In this reply, I will question whether Magidor’s argument should persuade externalists, whether it really engages with the orthodox view on what subjects in bad cases can know, and whether the dispute is, as Magidor insists, a significant one for contemporary epistemology.
Digital physics claims that the entire universe is, at the very bottom, made out of bits; as a result, all physical processes are intrinsically computational. For that reason, many digital physicists go further and affirm that the universe is indeed a giant computer. The aim of this article is to make explicit the ontological assumptions underlying such a view. Our main concern is to clarify what kind of properties the universe must instantiate in order to perform computations. We analyse the logical form of the two models of computation traditionally adopted in digital physics, namely, cellular automata and Turing machines. These models are computationally equivalent, but we show that they support different ontological commitments about the fundamental properties of the universe. In fact, cellular automata are compatible with a rather traditional form of physicalism, whereas Turing machines support a dualistic ontology, which could be understood as a realism about the laws of nature or, alternatively, as a kind of panpsychism.
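The cellular-automaton side of this contrast can be made concrete with a minimal example. The sketch below is purely illustrative (it is not drawn from the article): it steps Rule 110, a one-dimensional cellular automaton known to be Turing-complete, so that the "universe" at each tick is nothing but a row of bits updated by a purely local rule.

```python
def step_rule110(cells):
    """Advance one generation of Rule 110 with a periodic boundary."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        pattern = (left << 2) | (center << 1) | right
        # Rule 110 = 0b01101110: the new cell is bit `pattern` of 110.
        out.append((110 >> pattern) & 1)
    return out

row = [0] * 16 + [1]   # a single live cell on the right edge
history = [row]
for _ in range(8):
    row = step_rule110(row)
    history.append(row)
```

Each update consults only a cell and its two neighbours, which is the locality property that makes cellular automata sit comfortably with the physicalist reading the article describes.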
In this paper, I propose that, alongside the multiverse hypothesis (commonly taken to be an alternative to the design hypothesis as an explanation for fine-tuning), the simulation hypothesis is a further explanation for fine-tuning. I then argue that the simulation hypothesis undercuts the alleged evidential connection between ‘designer’ and ‘supernatural designer of immense power and knowledge’ in much the same way that the multiverse hypothesis undercuts the alleged evidential connection between ‘fine-tuning’ and ‘fine-tuner’ (or ‘designer’). If this is correct, then the fine-tuning argument is a weak argument for the existence of God.
A 1% skeptic is someone who has about a 99% credence in non-skeptical realism and about a 1% credence in the disjunction of all radically skeptical scenarios combined. The first half of this essay defends the epistemic rationality of 1% skepticism, appealing to dream skepticism, simulation skepticism, cosmological skepticism, and wildcard skepticism. The second half of the essay explores the practical behavioral consequences of 1% skepticism, arguing that 1% skepticism need not be behaviorally inert.
Nick Bostrom’s recently patched ‘‘simulation argument’’ (Bostrom in Philos Q 53:243–255, 2003; Bostrom and Kulczycki in Analysis 71:54–61, 2011) purports to demonstrate the probability that we ‘‘live’’ now in an ‘‘ancestor simulation’’—that is, a simulation of a period prior to that in which a civilization more advanced than our own—‘‘post-human’’—becomes able to simulate such a state of affairs as ours. As the simulations under consideration resemble ‘‘brains in vats’’ (BIVs) and may appear open to similar objections, the paper begins by reviewing objections to BIV-type proposals, specifically those due to a presumed mad envatter. As a counterexample, we explore the motivating rationale behind current work in the development of psychologically realistic social simulations. Further concerns about rendering human cognition in a computational medium are confronted through a review of current dynamic systems models of cognitive agency. In these models, aspects of the human condition are reproduced that may in other forms be considered incomputable, i.e., political voice, predictive planning, and consciousness. The paper then argues that simulations afford a unique potential to secure a post-human future, and may be necessary for a pre-post-human civilization like our own to achieve and to maintain a post-human situation. Long-standing philosophical interest in tools of this nature for Aristotle’s ‘‘statesman’’ and more recently for E.O. Wilson in the 1990s is observed. Self-extinction-level threats from State and individual levels of organization are compared, and a likely dependence on large-scale psychologically realistic simulations to get past self-extinction-level threats is projected. In the end, Bostrom’s basic argument for the conviction that we exist now in a simulation is reaffirmed.
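The quantitative core of the simulation argument is a simple bookkeeping identity over observers. The sketch below follows the standard presentation of Bostrom's 2003 paper; the parameter names are mine, and the illustrative numbers are assumptions for demonstration, not estimates from either paper.

```python
def fraction_simulated(f_p, f_i, n_sims):
    """Fraction of all human-type experiences that are simulated,
    in the standard presentation of Bostrom's simulation argument.

    f_p    : fraction of human-level civilizations that reach a
             post-human stage
    f_i    : fraction of post-human civilizations interested in
             running ancestor simulations
    n_sims : average number of ancestor simulations run by an
             interested civilization
    """
    product = f_p * f_i * n_sims
    return product / (product + 1)

# Illustrative (assumed) numbers: even tiny fractions of simulating
# civilizations dominate once n_sims is large.
share = fraction_simulated(0.01, 0.01, 10**6)
```

With these assumed inputs the product is 100, so the simulated share is 100/101 ≈ 0.99; the trilemma arises because the only ways to keep this fraction small are for `f_p`, `f_i`, or `n_sims` to be (near) zero.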
An overview of my work arguing that peer-to-peer computer networking (the Peer-to-Peer Simulation Hypothesis) may be the best explanation of quantum phenomena and a number of perennial philosophical problems.
In this paper, a metaphysics is proposed that includes everything that can be represented by a well-founded multiset. It is shown that this metaphysics, apart from being self-explanatory, is also benevolent. Paradoxically, it turns out that the probability that we were born in another life than our own is zero. More insights are gained by inducing properties from a metaphysics that is not self-explanatory. In particular, digital metaphysics is analyzed, which claims that only computable things exist. First of all, it is shown that digital metaphysics contradicts itself by leading to the conclusion that the shortest computer program that computes the world is infinitely long. This means that the Church-Turing conjecture must be false. Secondly, the applicability of Occam’s razor is explained by evolution: in an evolving physics it can appear at each moment as if the world is caused by only finitely many things. Thirdly and most importantly, this metaphysics is benevolent in the sense that it organizes itself to fulfill the deepest wishes of its observers. Fourthly, universal computers with an infinite memory capacity cannot be built in the world. And finally, all the properties of the world, both good and bad, can be explained by evolutionary conservation.
In my 2013 article, “A New Theory of Free Will”, I argued that several serious hypotheses in philosophy and modern physics jointly entail that our reality is structurally identical to a peer-to-peer (P2P) networked computer simulation. The present paper outlines how quantum phenomena emerge naturally from the computational structure of a P2P simulation. §1 explains the P2P Hypothesis. §2 then sketches how the structure of any P2P simulation realizes quantum superposition and wave-function collapse (§2.1.), quantum indeterminacy (§2.2.), wave-particle duality (§2.3.), and quantum entanglement (§2.4.). Finally, §3 argues that although this is by no means a philosophical proof that our reality is a P2P simulation, it provides ample reasons to investigate the hypothesis further using the methods of computer science, physics, philosophy, and mathematics.
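The superposition-and-collapse idea can be illustrated in a few lines of code. The toy below is a hypothetical sketch of my own, not taken from the paper: a shared variable remains a list of possibilities until any peer first observes it, at which point one value is fixed for all peers, giving a crude analogue of collapse and of the agreement between observers that entanglement-style scenarios require.

```python
import random

class Network:
    """Toy 'server-side' state shared by all peers."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.superposed = {}   # key -> list of possible values
        self.resolved = {}     # key -> definite value, once observed

    def superpose(self, key, possibilities):
        self.superposed[key] = list(possibilities)

    def resolve(self, key):
        # The FIRST observation by any peer fixes the value; every
        # later observation, by any peer, returns the same value.
        if key not in self.resolved:
            self.resolved[key] = self.rng.choice(self.superposed[key])
        return self.resolved[key]

class Peer:
    """Toy peer that lazily resolves shared state on observation."""
    def __init__(self, network):
        self.network = network

    def observe(self, key):
        return self.network.resolve(key)

net = Network(seed=42)
net.superpose("spin", ["up", "down"])
alice, bob = Peer(net), Peer(net)
a = alice.observe("spin")
b = bob.observe("spin")
```

The design choice doing the work is lazy resolution: indeterminacy is just "not yet computed", and determinacy propagates to all peers the moment any one of them looks.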
There has been an ongoing conflict regarding whether reality is fundamentally digital or analogue. Recently, Floridi has argued that this dichotomy is misapplied. For any attempt to analyse noumenal reality independently of any level of abstraction at which the analysis is conducted is mistaken. In the pars destruens of this paper, we argue that Floridi does not establish that it is only levels of abstraction that are analogue or digital, rather than noumenal reality. In the pars construens of this paper, we reject a classification of noumenal reality as a deterministic discrete computational system. We show, based on considerations from classical physics, why a deterministic computational view of the universe faces problems (e.g., a reversible computational universe cannot be strictly deterministic).
Two luminaries of 20th century astrophysics were Sir James Jeans and Sir Arthur Eddington. Both took seriously the view that there is more to reality than the physical universe and more to consciousness than simply brain activity. In his Science and the Unseen World Eddington speculated about a spiritual world, writing that "consciousness is not wholly, nor even primarily a device for receiving sense impressions." Jeans also speculated on the existence of a universal mind and a non-mechanical reality, writing in his The Mysterious Universe that "the universe begins to look more like a great thought than like a great machine." In his book QED Feynman discusses the situation of photons being partially transmitted and partially reflected by a sheet of glass, with reflection amounting to four percent. In other words, one out of every 25 photons will be reflected on average, and this holds true even for a "one at a time" flux. The four percent cannot be explained by statistical differences among the photons nor by random variations in the glass. Something is "telling" every 25th photon on average that it should be reflected back instead of being transmitted. Other quantum experiments lead to similar paradoxes. To explain how a single photon in the two-slit experiment can "know" whether there is one slit or two, Hawking and Mlodinow write: In the double-slit experiment Feynman's ideas mean the particles take paths that thread through the first slit, back out through the second slit, and then through the first again; paths that visit the restaurant that serves that great curried shrimp, and then circle Jupiter a few times before heading home; even paths that go across the universe and back. This, in Feynman's view, explains how the particle acquires the information about which slits are open… It is hard to imagine a more absurd physical explanation. We can think of no way to hardwire the behavior of photons in the glass reflection or the two-slit experiments into a physical law.
On the other hand, writing a software algorithm that would yield the desired result is really simple. A digital reality whose laws are software is an idea that has started to gain traction, in large part thanks to an influential paper in Philosophical Quarterly by Oxford professor Nick Bostrom. Writing in the New York Times, John Tierney had this to say: Until I talked to Nick Bostrom, a philosopher at Oxford University, it never occurred to me that our universe might be somebody else's hobby. But now it seems quite possible. In fact, if you accept a pretty reasonable assumption of Dr. Bostrom's, it is almost a mathematical certainty that we are living in someone else's computer simulation. An alternate view is that there exists a great consciousness whose mind is the hardware, and whose thoughts are the software creating a virtual universe in which we as beings of consciousness live.
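The "really simple" algorithm alluded to above can be made explicit. The sketch below is a hypothetical illustration, not code from the text: it reflects each photon independently with probability 0.04, the figure Feynman quotes, and recovers the one-in-25 average without appealing to any hidden property of the photons or the glass.

```python
import random

def simulate_photons(n, p_reflect=0.04, seed=0):
    """Monte Carlo sketch of the glass-reflection statistic: each
    photon is independently reflected with probability p_reflect.
    Returns the number of reflected photons."""
    rng = random.Random(seed)
    return sum(1 for _ in range(n) if rng.random() < p_reflect)

n = 1_000_000
reflected = simulate_photons(n)
fraction = reflected / n   # close to 0.04, i.e. about 1 photon in 25
```

A four-line stochastic rule reproduces the statistics, which is precisely the contrast the passage draws against hardwiring the behavior into a physical law.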
Today, the binary understanding of reality is increasingly significant. It is also the starting point for many theoretical considerations (mainly in the area of digital physics) describing the structure of the universe. What is lacking is an experimental confirmation of the binary nature of reality. This article proposes an idea for an experiment that possibly would confirm the following hypothesis: Electromagnetic waves in the form of binary signals of appropriate complexity and other parameters are capable of creating observable, material objects. Also suggested is the use of an Abstract Complexity Definition (derived from the aesthetic field), presented in the Supplementary Material section.
Our digital technologies have inspired new ways of thinking about old religious topics. Digitalists include computer scientists, transhumanists, singularitarians, and futurists. Digitalists have worked out novel and entirely naturalistic ways of thinking about bodies, minds, souls, universes, gods, and life after death. Your Digital Afterlives starts with three digitalist theories of life after death. It examines personality capture, body uploading, and promotion to higher levels of simulation. It then examines the idea that reality itself is ultimately a system of self-surpassing computations. On that view, you will have infinitely many digital lives across infinitely many digital worlds. Your Digital Afterlives looks at superhuman bodies and infinite bodies. Thinking of nature in purely computational terms has the potential to radically and positively change our understanding of life after death.
This paper shows that several live philosophical and scientific hypotheses – including the holographic principle and multiverse theory in quantum physics, and eternalism and mind-body dualism in philosophy – jointly imply an audacious new theory of free will. This new theory, "Libertarian Compatibilism", holds that the physical world is an eternally existing array of two-dimensional information – a vast number of possible pasts, presents, and futures – and the mind a nonphysical entity or set of properties that "read" that physical information off to subjective conscious awareness (in much the same way that a song written on an ordinary compact disc is only played when read by an outside medium, i.e. a CD player). According to this theory, every possible physical "timeline" in the multiverse may be fully physically deterministic or physically-causally closed, but each person's consciousness is still entirely free to choose, ex nihilo, outside of the physical order, which physically-closed timeline is experienced by conscious observers. Although Libertarian Compatibilism is admittedly fantastic, I show that it not only follows from several live scientific and philosophical hypotheses but also (A) is a far more explanatorily powerful model of quantum mechanics than more traditional interpretations (e.g. the Copenhagen, Everett, and Bohmian interpretations), (B) makes determinate, testable empirical predictions in quantum theory, and finally, (C) predicts and explains the very existence of a number of philosophical debates and positions in the philosophy of mind, time, personal identity, and free will.
First, I show that whereas traditional interpretations of quantum mechanics are all philosophically problematic and roughly as ontologically "extravagant" as Libertarian Compatibilism – in that they all posit "unseen" processes – Libertarian Compatibilism is nearly identical in structure to the only working simulation that human beings have ever constructed capable of reproducing (and so explaining) every general feature of quantum mechanics we perceive: namely, massive-multiplayer-online-roleplaying videogames (or MMORPGs). Although I am not the first to suggest that our world is akin to a computer simulation, I show that existing MMORPGs (online simulations we have already created) actually reproduce every general feature of quantum mechanics within their simulated-world reference-frames. Second, I show that existing MMORPGs also replicate (and so explain) many philosophical problems we face in the philosophy of mind, time, personal identity, and free will – all while conforming to the Libertarian Compatibilist model of reality. I conclude, as such, that as fantastic and metaphysically extravagant as Libertarian Compatibilism may initially seem, it may well be true. It explains a number of features of our reality that no other physical or metaphysical theory does.
This special interactive interdisciplinary issue of JCS on the singularity and the future relationship of humanity and AI is the first of two issues centered on David Chalmers’ 2010 JCS article ‘The Singularity: A Philosophical Analysis’. These issues include more than 20 solicited commentaries to which Chalmers responds. To quote Chalmers: "One might think that the singularity would be of great interest to academic philosophers, cognitive scientists, and artificial intelligence researchers. In practice, this has not been the case. Good was an eminent academic, but his article was largely unappreciated at the time. The subsequent discussion of the singularity has largely taken place in non-academic circles, including Internet forums, popular media and books, and workshops organized by the independent Singularity Institute. Perhaps the highly speculative flavour of the singularity idea has been responsible for academic resistance to it. I think this resistance is a shame, as the singularity idea is clearly an important one. The argument for a singularity is one that we should take seriously. And the questions surrounding the singularity are of enormous practical and philosophical concern". It is fair to say that Chalmers is the first to provide a detailed comprehensive philosophical analysis of the idea of the singularity that brings into focus not only questions about the nature of intelligence and the prospects for an intelligence explosion but also important philosophical questions about consciousness, identity and the relationship between facts and values.
Nico Silins has proposed and defended a form of Liberalism about perception that, he thinks, is a good compromise between the Dogmatism of Jim Pryor and others, and the Conservatism of Roger White, Crispin Wright, and others. In particular, Silins argues that his theory can explain why having justification to believe the negation of skeptical hypotheses is a necessary condition for having justification to believe ordinary propositions, even though (contra the Conservative) the latter is not had in virtue of the former. I argue that Silins's explanation is unsuccessful, and hence that we should prefer either White/Wright-style Conservatism (which can explain this necessary condition) or Pryor-style Dogmatism (which denies that this is a necessary condition).
We frame the question of what kind of subjective experience a brain simulation would have, in contrast to a biological brain. We discuss the brain prosthesis thought experiment. We then identify finer questions relating to the original inquiry, and set out to answer them from both a general physicalist perspective and pan-experientialism. We propose that the brain simulation is likely to have subjective experience; however, it may differ significantly from human experience. Additionally, we discuss the relevance of quantum properties, digital physics, the theory of relativity, and information theory to the question.
According to the scenario of cosmological artificial selection and artificial cosmogenesis, our universe was created and possibly even fine-tuned by cosmic engineers in another universe. This approach shall be compared to other explanations, and some of its far-reaching problems shall be discussed.
Jan Greben criticized fine-tuning by taking seriously the idea that “nature is quantum mechanical”. I argue that this quantum view is limited, and that fine-tuning is real, in the sense that our current physical models require fine-tuning. Second, I examine and clarify many difficult and fundamental issues raised by Rüdiger Vaas’ comments on Cosmological Artificial Selection.
Some theists maintain that they need not answer the threat posed to theistic belief by natural evil; they have reason enough to believe that God exists, and this renders impotent any threat that natural evil poses to theism. Explicating how God and natural evil coexist is not necessary, since they already know both exist. I will argue that, even granting theists the knowledge they claim, this does not leave them in an agreeable position. It commits the theist to a very unpalatable position: our universe was not designed by God and is instead, most likely, a computer simulation.