An overview of my work arguing that peer-to-peer computer networking (the Peer-to-Peer Simulation Hypothesis) may be the best explanation of quantum phenomena and a number of perennial philosophical problems.
David J. Chalmers examines eleven possible solutions to the meta-problem of consciousness, ‘the problem of explaining why we think that there is a problem of consciousness.’ The present paper argues that Chalmers overlooks an explanation that he has otherwise taken seriously, and which a number of philosophers, physicists, and computer scientists have taken seriously as well: the hypothesis that we are living in a computer simulation. This paper argues that a particular version of the simulation hypothesis is at least as good a solution to the meta-problem of consciousness as many explanations Chalmers considers, and may even be a better one—as it may be the best solution to a much broader meta-philosophical problem: the ‘meta-problem of everything’, the problem of explaining why our world has the quantum-mechanical, relativistic, and philosophical features it does.
In this paper, I propose that, in addition to the multiverse hypothesis, which is commonly taken to be an alternative to the design hypothesis as an explanation of fine-tuning, the simulation hypothesis is another such explanation. I then argue that the simulation hypothesis undercuts the alleged evidential connection between ‘designer’ and ‘supernatural designer of immense power and knowledge’ in much the same way that the multiverse hypothesis undercuts the alleged evidential connection between ‘fine-tuning’ and ‘fine-tuner’ (or ‘designer’). If this is correct, then the fine-tuning argument is a weak argument for the existence of God.
Some theists maintain that they need not answer the threat posed to theistic belief by natural evil; they have reason enough to believe that God exists, and this renders impotent any threat that natural evil poses to theism. Explicating how God and natural evil coexist is not necessary, since they already know both exist. I will argue that, even granting theists the knowledge they claim, this does not leave them in an agreeable position. It commits the theist to a very unpalatable position: our universe was not designed by God and is instead, most likely, a computer simulation.
Computational imagination (CI) conceives imagination as an agent’s simulated sensorimotor interaction with the environment in the absence of sensory feedback, predicting consequences based on this interaction (Marques and Holland in Neurocomputing 72:743–759, 2009). Its bedrock is the simulation hypothesis, whereby imagination resembles seeing or doing something in reality, as both involve similar neural structures in the brain (Hesslow in Trends Cogn Sci 6(6):242–247, 2002). This paper raises twofold doubts: (1) neural-level equivalence is escalated into phenomenological equivalence. Even at an abstract level, many imagined and real actions turn out to be dissimilar. Moreover, some imagined actions have no corresponding real actions and vice versa, even though the neural regions involved in imagining and in real action-perception are the same (Sect. 1). (2) At the implementation level, the hypothesis presents a mutually exclusive view of imagination and perception, whereby imagination functions in the absence of sensory feedback and is action based. Both these claims are contested here: imagination does not function in the absence of perception, nor are all forms of imagining action based; imagination is, rather, about conceiving possibilities, which emerge during the perceptual stage itself (Sect. 2). For this modal aspect to arise, it is submitted that an integrative framework is required, which Kant, for whom imagination is an indispensable part of perception, can provide. Kant’s views on concept-formation are presented to illustrate this aspect (Sect. 3). The paper concludes by emphasizing the relevance of Kant’s views to the problems identified in the two sections.
In my 2013 article, “A New Theory of Free Will”, I argued that several serious hypotheses in philosophy and modern physics jointly entail that our reality is structurally identical to a peer-to-peer (P2P) networked computer simulation. The present paper outlines how quantum phenomena emerge naturally from the computational structure of a P2P simulation. §1 explains the P2P Hypothesis. §2 then sketches how the structure of any P2P simulation realizes quantum superposition and wave-function collapse (§2.1), quantum indeterminacy (§2.2), wave-particle duality (§2.3), and quantum entanglement (§2.4). Finally, §3 argues that although this is by no means a philosophical proof that our reality is a P2P simulation, it provides ample reasons to investigate the hypothesis further using the methods of computer science, physics, philosophy, and mathematics.
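Purely as an illustration of the general idea in this abstract, and not of the paper's own formalism, one can picture each peer in a network holding its own local value for a shared variable, with "measurement" forcing the network to reconcile on a single value. The class names and the random reconciliation rule below are my own assumptions, a minimal toy sketch:

```python
import random

class Peer:
    """One node in a toy P2P network. Each peer holds its own local
    value for a shared variable, so the network as a whole carries
    several candidate values at once (a crude stand-in for
    superposition on the paper's picture)."""
    def __init__(self, local_value):
        self.local_value = local_value

def measure(peers):
    """'Observation' forces reconciliation: one peer's local value is
    selected (here uniformly at random) and propagated to every node,
    a crude stand-in for wave-function collapse."""
    outcome = random.choice([p.local_value for p in peers])
    for p in peers:
        p.local_value = outcome
    return outcome

# The shared variable is spread over candidate values 0 and 1 until measured.
network = [Peer(0), Peer(1), Peer(0), Peer(1)]
print(measure(network))                   # e.g. 1
print({p.local_value for p in network})   # {1}: all peers now agree
```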
People are minded creatures; we have thoughts, feelings and emotions. More intriguingly, we grasp our own mental states, and conduct the business of ascribing them to ourselves and others without instruction in formal psychology. How do we do this? And what are the dimensions of our grasp of the mental realm? In this book, Alvin I. Goldman explores these questions with the tools of philosophy, developmental psychology, social psychology and cognitive neuroscience. He refines an approach called simulation theory, which starts from the familiar idea that we understand others by putting ourselves in their mental shoes. Can this intuitive idea be rendered precise in a philosophically respectable manner, without allowing simulation to collapse into theorizing? Given a suitable definition, do empirical results support the notion that minds literally create surrogates of other people's mental states in the process of mindreading? Goldman amasses a surprising array of evidence from psychology and neuroscience that supports this hypothesis.
This paper is concerned with the problem of self-identification in the domain of action. We claim that this problem can arise not just for the self as object, but also for the self as subject in the ascription of agency. We discuss and evaluate some proposals concerning the mechanisms involved in self-identification and in agency-ascription, and their possible impairments in pathological cases. We argue in favor of a simulation hypothesis that claims that actions, whether overt or covert, are centrally simulated by the neural network, and that this simulation provides the basis for action recognition and attribution.
Preston Greene (2020) argues that we should not conduct simulation investigations because of the risk that we might be terminated if our world is a simulation designed to research various counterfactuals about the world of the simulators. In response, we propose a sequence of arguments, most of which have the form of an “even if” response to anyone unmoved by our previous arguments. It runs thus: (i) if simulation is possible, then simulators are as likely to care about simulating simulations as they are likely to care about simulating basement (i.e. non-simulated) worlds. But (ii) even if simulators are interested only in simulating basement worlds, the discovery that we are in a simulation will have little or no impact on the evolution of ordinary events. But (iii) even if discovering that we are in a simulation impacts the evolution of ordinary events, the effects of seeming to do so could also happen in a basement world, and might be the subject of interesting counterfactuals in the basement world. Finally, (iv) there is little reason to think that there is a catastrophic effect from successful simulation probes, and no argument from the precautionary principle can be used to leverage the negligible credence one ought to have in this. Thus, if we do develop a simulation probe, then let’s do it.
Historically, the hypothesis that our world is a computer simulation has struck many as just another improbable-but-possible “skeptical hypothesis” about the nature of reality. Recently, however, the simulation hypothesis has received significant attention from philosophers, physicists, and the popular press. This is due to the discovery of an epistemic dependency: If we believe that our civilization will one day run many simulations concerning its ancestry, then we should believe that we are probably in an ancestor simulation right now. This essay examines a troubling but underexplored feature of the ancestor-simulation hypothesis: the termination risk posed by both ancestor-simulation technology and experimental probes into whether our world is an ancestor simulation. This essay evaluates the termination risk by using extrapolations from current computing practices and simulation technology. The conclusions, while provisional, have great implications for debates concerning the fundamental nature of reality and the safety of contemporary physics.
Can the theory that reality is a simulation be tested? We investigate this question based on the assumption that if the system performing the simulation is finite (i.e. has limited resources), then to achieve low computational complexity, such a system would, as in a video game, render content (reality) only at the moment that information becomes available for observation by a player and not at the moment of detection by a machine (that would be part of the simulation and whose detection would also be part of the internal computation performed by the Virtual Reality server before rendering content to the player). Guided by this principle we describe conceptual wave/particle duality experiments aimed at testing the simulation theory.
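The rendering principle described here is essentially lazy, on-demand computation with caching: content is generated only at the moment of player observation, and never recomputed. A minimal sketch of that principle, with the function names and the stand-in generation rule invented for illustration:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def render_region(region_id):
    """Stand-in for an expensive world-generation computation; the
    cache means a region is rendered at most once."""
    print(f"rendering {region_id}")
    return hash(region_id) % 100  # deterministic placeholder 'content'

def observe(region_id):
    """Only a player observation triggers rendering; on this picture a
    detection by an in-simulation machine would be deferred until a
    player actually looks."""
    return render_region(region_id)

observe("lab-7")  # first observation: content is rendered now
observe("lab-7")  # repeat observation: served from cache, no recomputation
```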
In this contribution I outline some ideas on what the pragmatist model of habit ontology could offer us as regards the appreciation of the constitutive role that imagery plays for social action and cognition. Accordingly, a Deweyan understanding of habit would allow for an understanding of imagery in terms of embodied cognition rather than in representational terms. I first underline the motor character of imagery, and the role its embodiment in habit plays for the anticipation of action. Secondly, I reconstruct Dewey's notion of imaginative rehearsal in light of contemporary, competing models of intersubjectivity such as embodied simulation theory and the narrative practice hypothesis, and argue that the Deweyan model offers us a more encompassing framework which can be useful for reconciling these approaches. In this text I am mainly concerned with sketching a broad picture of the lines along which such a project could be developed. For this reason not all questions are given equal attention, and I shall concentrate mainly on the basic ideas, without going directly into the details of many of them.
It is often claimed that scientists can obtain new knowledge about nature by running computer simulations. How is this possible? I answer this question by arguing that computer simulations are arguments. This view parallels Norton’s argument view about thought experiments. I show that computer simulations can be reconstructed as arguments that fully capture the epistemic power of the simulations. Assuming the extended mind hypothesis, I furthermore argue that to run the computer simulation is to execute the reconstructing argument. I discuss some objections and reject the view that computer simulations produce knowledge because they are experiments. I conclude by comparing thought experiments and computer simulations, assuming that both are arguments.
The simulation hypothesis claims that the whole observable universe, including us, is a computer simulation implemented by technologically advanced beings for an unknown purpose. The simulation argument (as I reconstruct it) is an argument for this hypothesis with moderately plausible premises. I develop two lines of objection to the simulation argument. The first takes the form of a structurally similar argument for a conflicting conclusion, the claim that I am a so-called freak observer, formed spontaneously in a quantum or thermodynamic fluctuation rather than through ordinary processes of evolution and growth. The second rejects the basic line of reasoning of both arguments: the sort of evidence they cite is not capable of supporting either the claim that I am a simulant or the claim that I am a freak observer. The evidence that simulants or freak observers exist is not a reason to think that I am one of them.
I introduce the implantation argument, a new argument for the existence of God. Spatiotemporal extensions believed to exist outside of the mind, composing an external physical reality, cannot be composed either of atomlessness or of Democritean atoms; therefore the inner experience of an external reality containing spatiotemporal extensions believed to exist outside of the mind does not represent the external reality, and the mind is a mere cinematic-like mindscreen, implanted into the mind by a creator-God. It will be shown that only a creator-God can be the implanting creator of the mindscreen simulation, and that other simulation theories, such as Bostrom’s famous account, that do not involve a creator-God as the mindscreen simulation creator involve a reification fallacy.
In her Behavioral and Brain Sciences target article, Greenfield (1991) proposed that early in a child's development Broca's area may serve the dual function of coordinating object assembly and organizing the production of structured utterances. As development progresses, the upper and lower regions of Broca's area become increasingly specialized for motor coordination and speech, respectively. This commentary presents a connectionist simulation of aspects of this proposal. The results of the simulation confirm the main thrust of Greenfield's argument and suggest that an important impetus for the developmental differentiation in Broca's area may be the increasing complexity of the computational demands made upon it.
Nick Bostrom’s ‘Simulation Argument’ purports to show that, unless we are confident that advanced ‘posthuman’ civilizations are either extremely rare or extremely rarely interested in running simulations of their own ancestors, we should assign significant credence to the hypothesis that we are simulated. I argue that Bostrom does not succeed in grounding this constraint on credence. I first show that the Simulation Argument requires a curious form of selective scepticism, for it presupposes that we possess good evidence for claims about the physical limits of computation and yet lack good evidence for claims about our own physical constitution. I then show that two ways of modifying the argument so as to remove the need for this presupposition fail to preserve the original conclusion. Finally, I argue that, while there are unusual circumstances in which Bostrom’s selective scepticism might be reasonable, we do not currently find ourselves in such circumstances. There is no good reason to uphold the selective scepticism the Simulation Argument presupposes. There is thus no good reason to believe its conclusion.
We consider the relation between past and future events from the perspective of the constructive episodic simulation hypothesis, which holds that episodic simulation of future events requires a memory system that allows the flexible recombination of details from past events into novel scenarios. We discuss recent neuroimaging and behavioral evidence that supports this hypothesis in relation to the theater production metaphor.
Competition between scientific hypotheses is not always a matter of mutual exclusivity. Consistent hypotheses can compete to varying degrees, either directly or indirectly via a body of evidence. We motivate and defend a particular account of hypothesis competition by showing how it captures these features. Computer simulations of Bayesian inference are used to highlight the limitations of adopting mutual exclusivity as a simplifying assumption to model scientific reasoning, particularly due to the exclusion of hypotheses that may be true. We end with a case study demonstrating the subtleties involved in hypothesis competition in scientific practice.
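The modeling point can be made concrete with a toy Bayesian update over two compatible hypotheses: keeping all four cells of the joint hypothesis space shows how evidence can support both hypotheses at once, whereas imposing mutual exclusivity would delete the possibly true "both" cell. The priors and likelihoods below are illustrative numbers of my own, not the paper's simulations:

```python
import numpy as np

# Four cells of the joint space over compatible hypotheses H1 and H2.
cells = ["H1&H2", "H1&~H2", "~H1&H2", "~H1&~H2"]
prior = np.array([0.25, 0.25, 0.25, 0.25])
# P(E | cell): the evidence is made likelier by each hypothesis alone,
# and likelier still when both hold (illustrative numbers only).
likelihood = np.array([0.9, 0.6, 0.6, 0.1])

posterior = prior * likelihood
posterior /= posterior.sum()
for cell, p in zip(cells, posterior):
    print(f"P({cell} | E) = {p:.3f}")

# Both marginals rise above 0.5, yet neither hypothesis excludes the
# other; treating H1 and H2 as mutually exclusive would have removed
# the possibly true H1&H2 cell from the analysis.
print("P(H1 | E) =", round(posterior[0] + posterior[1], 3))
print("P(H2 | E) =", round(posterior[0] + posterior[2], 3))
```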
Astrophysics faces methodological challenges as a result of being a predominantly observation-based science without access to traditional experiments. In light of these challenges, astrophysicists frequently rely on computer simulations. Using collisional ring galaxies as a case study, I argue that computer simulations play three roles in reasoning in astrophysics: (1) hypothesis testing, (2) exploring possibility space, and (3) amplifying observations.
Various theorists contend that we may live in a computer simulation. David Chalmers in turn argues that the simulation hypothesis is a metaphysical hypothesis about the nature of our reality, rather than a sceptical scenario. We use recent work on consciousness to motivate new doubts about both sets of arguments. First, we argue that if either panpsychism or panqualityism is true, then the only way to live in a simulation may be as brains-in-vats, in which case it is unlikely that we live in a simulation. We then argue that if panpsychism or panqualityism is true, then viable simulation hypotheses are substantially sceptical scenarios. We conclude that the nature of consciousness has wide-ranging implications for simulation arguments.
Nick Bostrom’s “simulation argument” purports to show that if it is possible to create and run a vast number of computer simulations indistinguishable from the reality we are living in, then it is highly probable that we are already living in a computer simulation. However, the simulation argument requires a modification to escape the undermining implications of the scepticism it implies, as argued by Birch. The present paper shows that, even if the modified simulation argument is valid, it is still unsound, since it relies on an indistinguishability assumption that cannot be tested even in principle. To account for the unsoundness of the simulation argument, the present paper draws on John Woods' theory of fiction to expose structural similarities between general fiction and the simulation argument. Though the simulation argument is unsound, it seems persuasive, because it immerses the reader in a fictive world with the help of tacit assumptions, leveraging just enough common sense to remain compelling while covering over an untestable premise. Alongside the critique of Bostrom’s argument, Chalmers' argument for the matrix hypothesis is assessed on similar criteria. In either case, both arguments rely on an accumulation of assumptions, both implicit and explicit, hiding the premises that are untestable in principle.
Until recently, philosophers debating the rationality of time-biases have supposed that people exhibit a first-person hedonic bias toward the future, but that their non-hedonic and third-person preferences are time-neutral. Recent empirical work, however, suggests that our preferences are more nuanced. First, there is evidence that our third-person preferences exhibit time-neutrality only when the individual with respect to whom we have preferences—the preference target—is a random stranger about whom we know nothing; given access to some information about the preference target, third-person preferences mirror first-person preferences. As a result, the simulation hypothesis has been proposed, according to which third-person preferences will mirror first-person preferences when we can simulate the mental states of the preference target. Second, there is evidence that we prefer negative hedonic events to be in our past (we are first-person negatively hedonically future-biased) only when we view future events as fixed and in no way under our control. By contrast, when we perceive it to be within our power to mitigate the badness of future events, we are first-person negatively hedonically past-biased. This is the mitigation hypothesis. We distinguish two versions of the mitigation hypothesis, the squirrelling version and the heuristic version. We ran a study which tested the simulation hypothesis, and which aimed to determine whether the squirrelling or the heuristic version of the mitigation hypothesis enjoys more empirical support. We found support for the heuristic version of the hypothesis, but no support for the squirrelling version.
The Narrative Practice Hypothesis (NPH) is a recently conceived, late entrant into the contest of trying to understand the basis of our mature folk psychological abilities, those involving our capacity to explain ourselves and comprehend others in terms of reasons. This paper aims to clarify its content, importance and scientific plausibility by: distinguishing its conceptual features from those of its rivals, articulating its philosophical significance, and commenting on its empirical prospects. I begin by clarifying the NPH's target explanandum and the challenge it presents to theory theory (TT), simulation theory (ST) and hybrid combinations of these theories. The NPH competes with them directly for the same explanatory space insofar as these theories purport to explain the core structural basis of our folk psychological (FP) competence (of the sort famously but not exclusively deployed in acts of third-personal mindreading).
Several theories claim that dreaming is a random by-product of REM sleep physiology and that it does not serve any natural function. Phenomenal dream content, however, is not as disorganized as such views imply. The form and content of dreams is not random but organized and selective: during dreaming, the brain constructs a complex model of the world in which certain types of elements, when compared to waking life, are underrepresented whereas others are overrepresented. Furthermore, dream content is consistently and powerfully modulated by certain types of waking experiences. On the basis of this evidence, I put forward the hypothesis that the biological function of dreaming is to simulate threatening events, and to rehearse threat perception and threat avoidance. To evaluate this hypothesis, we need to consider the original evolutionary context of dreaming and the possible traces it has left in the dream content of the present human population. In the ancestral environment human life was short and full of threats. Any behavioral advantage in dealing with highly dangerous events would have increased the probability of reproductive success. A dream-production mechanism that tends to select threatening waking events and simulate them over and over again in various combinations would have been valuable for the development and maintenance of threat-avoidance skills. Empirical evidence from normative dream content, children's dreams, recurrent dreams, nightmares, post-traumatic dreams, and the dreams of hunter-gatherers indicates that our dream-production mechanisms are in fact specialized in the simulation of threatening events, and thus provides support to the threat simulation hypothesis of the function of dreaming. Key Words: dream content; dream function; evolution of consciousness; evolutionary psychology; fear; implicit learning; nightmares; rehearsal; REM; sleep; threat perception.
The threat simulation theory (TST) of dreaming states that dream consciousness is essentially an ancient biological defence mechanism, evolutionarily selected for its capacity to repeatedly simulate threatening events. Threat simulation during dreaming rehearses the cognitive mechanisms required for efficient threat perception and threat avoidance, leading to increased probability of reproductive success during human evolution. One hypothesis drawn from TST is that real threatening events encountered by the individual during wakefulness should lead to an increased activation of the system, a threat simulation response, and therefore to an increased frequency and severity of threatening events in dreams. Consequently, children who live in an environment in which their physical and psychological well-being is constantly threatened should have a highly activated dream production and threat simulation system, whereas children living in a safe environment that is relatively free of such threat cues should have a weakly activated system. We tested this hypothesis by analysing the content of dream reports from severely traumatized and less traumatized Kurdish children and ordinary, non-traumatized Finnish children. Our results give support for most of the predictions drawn from TST. The severely traumatized children reported a significantly greater number of dreams, and their dreams included a higher number of threatening dream events. The dream threats of traumatized children were also more severe in nature than the threats of less traumatized or non-traumatized children.
The Epistemology of Computer Simulation (EOCS) has developed as an epistemological and methodological analysis of simulative sciences using quantitative computational models to represent and predict empirical phenomena of interest. In this paper, Executable Cell Biology (ECB) and Agent-Based Modelling (ABM) are examined to show how one may take advantage of qualitative computational models to evaluate reachability properties of reactive systems. In contrast to the thesis, advanced by EOCS, that computational models are not adequate representations of the simulated empirical systems, it is shown how the representational adequacy of qualitative models is essential to evaluate reachability properties. Justification theory, even if it plays no essential role in EOCS, is shown to be involved in the process of advancing and corroborating model-based hypotheses about empirical systems in ECB and ABM. Finally, the practice of evaluating model-based hypotheses by testing the simulated systems is shown to constitute an argument in favour of the thesis that computer simulations in ECB and ABM can be put on a par with scientific experiments.
‘Simulation Hypotheses’ are imaginative scenarios that are typically employed in philosophy to speculate on how likely it is that we are currently living within a simulated universe, as well as on our possibility for ever discerning whether we do in fact inhabit one. These philosophical questions in particular have overshadowed other aspects and potential uses of simulation hypotheses, some of which are foregrounded in this article. More specifically, “A Theodicy for Artificial Universes” focuses on the moral implications of simulation hypotheses, with the objective of speculatively answering questions concerning computer simulations such as: If we are indeed living in a computer simulation, what might be its purpose? What aspirations and values could be inferentially attributed to its alleged creators? And would living in a simulated universe affect the value and meaning we attribute to our existence?
Nick Bostrom has famously defended the credibility of the simulation hypothesis – the hypothesis that we live in a computer simulation. Barry Dainton has recently employed the simulation hypothesis to defend the ‘simulation solution’ to the problem of natural evil. The simulation solution claims that apparently natural evils are in fact the result of wrong actions on the part of the people who create our simulation. In this way, it treats apparently natural evils as actually being moral evils, allowing them to be explained via the free will theodicy. Other theodicies which assimilate apparently natural evils to moral ones include Fall theodicies, which attribute apparently natural evils to the biblical Fall, and diabolical theodicies, which attribute them to the activity of demons. Unfortunately, Dainton fails to give compelling reasons for preferring the simulation solution to Fall or diabolical theodicies. He gives one argument against diabolical theodicies, but it has no force against their best version, and he does not discuss Fall theodicies at all. In this article, I attempt to rectify this. I discuss several problems faced by Fall and diabolical theodicies which the simulation solution avoids. These provide some reason to prefer the simulation solution to these alternatives.
Threat themes are clearly over-represented in dreams. Threat is, however, not the only theme with potential evolutionary significance. Even for hypnagogic and hypnopompic hallucinations during sleep paralysis, for which threat themes are far commoner than for ordinary dreaming, consistent non-threat themes have been reported. Revonsuo's simulation hypothesis represents an encouraging initiative to develop an evolutionary functional approach to dream-related experiences, but it could be broadened to include evolutionarily relevant themes beyond threat. It is also suggested that Revonsuo's evolutionary re-interpretation of dreams might profitably be compared to arguments for, and models of, evolutionary functions of play. [Revonsuo].
According to the most common interpretation of the simulation argument, we are very likely to live in an ancestor simulation. It is interesting to ask if some families of simulations are more likely than others inside the space of all simulations. We argue that a natural probability measure is given by computational complexity: easier simulations are more likely to be run. Remarkably, this allows us to extract experimental predictions from the fact that we live in a simulation. For instance, we show that it is very likely that humanity will not achieve interstellar travel and that humanity will not meet other intelligent species in the universe, in turn explaining the Fermi Paradox. On the opposite side, experimental falsification of any of these predictions would constitute evidence against our reality being a simulation.
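The proposed measure can be sketched directly: assign each candidate simulation a weight that decays exponentially with its computational cost, then normalize. The cost figures and scenario labels below are invented for illustration; the paper's actual complexity assignments may differ:

```python
# Candidate simulated histories with invented computational costs (in
# arbitrary units); cheaper histories get exponentially more weight,
# in the spirit of a complexity-based prior.
costs = {
    "no interstellar travel, no alien contact": 10,
    "interstellar travel, no alien contact": 25,
    "interstellar travel and alien contact": 40,
}

weights = {s: 2.0 ** -c for s, c in costs.items()}
total = sum(weights.values())
for s, w in weights.items():
    print(f"P({s}) = {w / total:.8f}")
# The cheapest-to-simulate history dominates the measure, which is how
# the argument turns 'we live in a simulation' into testable predictions.
```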
According to Revonsuo, dreams are the output of an evolved “threat simulation mechanism.” The author marshals a diverse and comprehensive array of empirical and theoretical support for this hypothesis. We propose that the hypothesized threat simulation mechanism might be more domain-specific in design than the author implies. To illustrate, we discuss the possible sex-differentiated design of the hypothesized threat simulation mechanism. [Revonsuo].
Can computer simulation results be evidence for hypotheses about real-world systems and phenomena? If so, what sort of evidence? Can we gain genuinely new knowledge of the world via simulation? I argue that evidence from computer simulation is aptly characterized as higher-order evidence: it is evidence that other evidence regarding a hypothesis about the world has been collected. Insofar as particular epistemic agents do not have this other evidence, it is possible that they will gain genuinely new knowledge of the world via simulation. I illustrate with examples inspired by uses of simulation in meteorology and astrophysics.
Based on a framework that distinguishes several types, roles and functions of values in science, we discuss legitimate applications of values in the validation of computer simulations. We argue that, first, epistemic values, such as empirical accuracy and coherence with background knowledge, serve to assess the credibility of simulation results, whereas, second, cognitive values, such as the comprehensiveness of a conceptual model or the easy handling of a numerical model, serve to assess the usefulness of a model for investigating a hypothesis. In both roles, values perform what we call first-order functions. In addition, cognitive values may also serve an auxiliary function by facilitating the assessment of credibility. As for a third type of values, i.e. social values, their legitimate role consists in specifying and weighing epistemic and cognitive values with respect to practical uses of a simulation, which is considered a second-order function. Rational intersubjective agreement on how to specify and weigh the different values is supposed to ensure objectivity in simulation validation.
Gravitational interactions allowed astronomers to conclude that dark matter rings all luminous galaxies in gigantic halos, but this only accounts for a fraction of the total mass of dark matter believed to exist. Where is the rest? We hypothesize that some of it resides in dark galaxies, pure dark matter halos that either never possessed or have totally lost their baryonic matter. This article explores methodological challenges that arise because of the nature of observation in astrophysics and examines how the blend of observation, simulation, and theory we call the Observing the Invisible approach might make detecting such dark objects possible.
Using four examples of models and computer simulations from the history of psychology, I discuss some of the methodological aspects involved in their construction and use, and I illustrate how the existence of a model can demonstrate the viability of a hypothesis that had previously been deemed impossible on a priori grounds. This shows a new way in which scientists can learn from models that extends the analysis of Morgan (1999), who has identified the construction and manipulation of models as those phases in which learning from models takes place.
This paper specifies two hypotheses that are intimated in recent research on empathy and mindreading. The first, the phenomenal simulation hypothesis, holds that those attributing mental states (i.e., mindreaders) sometimes simulate the phenomenal states of those to whom they are making attributions (i.e., targets). The second, the phenomenal mindreading hypothesis, holds that this phenomenal simulation plays an important role in some mental state attributions. After explicating these hypotheses, the paper focuses on the first. It argues that neuropsychological experiments on empathy and behavioral experiments on imitation provide good reason to think that mindreaders sometimes simulate targets' phenomenal states. Accordingly, the paper concludes, the phenomenal mindreading hypothesis merits consideration.
Recent computer simulations of evolving neural networks have shown that population-level behavioral asymmetries can arise without social interactions. Although these models are quite limited at present, they support the hypothesis that social pressures can be sufficient but are not necessary for population lateralization to occur, and they provide a framework for further theoretical investigation of this issue.
The present paper introduces "ontomimetic simulation" and argues that this class of models has enabled the investigation of hypotheses about complex systems in new ways that have epistemological relevance. Ontomimetic simulation can be differentiated from other types of modeling by its reliance on causal similarity in addition to representation. Phenomena are modeled not directly but via mimesis of the ontology (i.e. the "underlying physics", microlevel, etc.) of systems and a subsequent animation of the resulting model ontology as a dynamical system. While the ontology is clearly used for computing system states, what is epistemologically important is that it is viewed as a hypothesis about the makeup of the studied system. This type of simulation, where model ontologies are used as hypotheses, is here called inverse ontomimetic simulation since it reverses the typical informational path from the target to the model system. It links experimental and analytical techniques in being explicitly dynamical while at the same time capable of abstraction. Inverse ontomimetic simulation is argued to have a great impact on science and to be the tool for hypothesis-testing that has made systematic theory development for complex systems possible.
Partial lying denotes the cases where we partially believe something to be false but nevertheless assert it with the intent to deceive the addressee. We investigate how the severity of partial lying may be determined and how partial lies can be classified. We also study how much epistemic damage an agent suffers depending on the level of trust that she invests in the liar and the severity of the lies she is told. Our analysis is based on the results from exploratory computer simulations of an arguably rational Bayesian agent who is trying to determine how biased a coin is while observing the coin tosses and listening to a liar’s misleading predictions about the outcomes. Our results provide an interesting testable hypothesis at the intersection of epistemology and ethics, namely that in the longer term partial lies lead to more epistemic damage than outright lies.
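A toy reconstruction of the reported setup, with all parameters and the trust-damped update rule assumed rather than taken from the authors' simulations: a Bayesian agent tracks a coin's bias over a discrete grid, updating both on genuine tosses and, with reduced weight, on a liar's misleading predictions:

```python
import random

random.seed(0)
grid = [i / 10 for i in range(11)]           # candidate coin biases
belief = [1 / len(grid)] * len(grid)         # uniform prior over the grid
true_bias, lie_level, trust = 0.7, 0.8, 0.6  # liar flips the prediction 80% of the time

def update(belief, outcome, weight=1.0):
    """Likelihood update for one heads/tails report, damped by 'weight'
    (1.0 = a fully trusted genuine observation)."""
    post = [b * ((p if outcome else 1 - p) ** weight)
            for b, p in zip(belief, grid)]
    total = sum(post)
    return [x / total for x in post]

for _ in range(100):
    toss = random.random() < true_bias                     # genuine toss
    lie = (not toss) if random.random() < lie_level else toss
    belief = update(belief, lie, weight=trust)             # misleading prediction
    belief = update(belief, toss)                          # the toss itself

print("posterior mode:", grid[belief.index(max(belief))])
```

On this sketch, raising `trust` while keeping `lie_level` moderate lets the partial lies drag the posterior away from the true bias for longer, which is one way to picture the paper's hypothesis that partial lies can do more long-run epistemic damage than outright ones.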
It has been proposed that the design of robots might benefit from interactions that are similar to caregiver-child interactions, which are tailored to children's respective capacities to a high degree. However, so far little is known about how people adapt their tutoring behaviour to robots and whether robots can evoke input that is similar to child-directed interaction. The paper presents detailed analyses of speakers' linguistic behaviour and non-linguistic behaviour, such as action demonstration, in two comparable situations: In one experiment, parents described and explained to their nonverbal infants the use of certain everyday objects; in the other experiment, participants tutored a simulated robot on the same objects. The results, which show considerable differences between the two situations on almost all measures, are discussed in the light of the computer-as-social-actor paradigm and the register hypothesis. Keywords: child-directed speech (CDS); motherese; robotese; motionese; register theory; social communication; human-robot interaction (HRI); computers-as-social-actors; mindless transfer.
This commentary discusses Oatley's proposal that literary works, considered as simulations that run on minds, can fulfill similar epistemic functions as computer simulations of mental processes. Whereas in computer simulation both the input data and the computations to be performed on these data are explicit, only the input is explicitly known in the case of mental simulation. For this reason, literary simulations cannot play exactly the same epistemic role as computer simulations. Still, literary simulations can provide knowledge (e.g., about the phenomenal quality of emotions or about possible emotional dynamics) that is relevant for emotion science: it adds to the corpus of facts about emotions that need to be explained, and it may suggest hypotheses about the constitution of the mechanisms that generate emotions. In addition, the hypotheses suggested by a literary simulation can be tested in new mental simulations. However, at least for the purpose of hypothesis testing, the simulation of a multiplicity of experimentally manipulated scenarios should be more revealing than that of a single literary work describing only one possible course of events.
According to a dominant interpretation of the simulation hypothesis, in recognizing an emotion we use the same neural processes used in experiencing that emotion. This paper argues that the view is fundamentally misguided. I will examine the simulational arguments for the three basic emotions of fear, disgust, and anger and argue that the simulational account relies strongly on a narrow sense of emotion processing which hardly squares with evidence on how, in fact, emotion recognition is processed. I contend that the current body of empirical evidence suggests that emotion recognition is processed in an integrative system involving multiple cross-regional interactions in the brain, a view which squares with understanding emotion recognition as an information-rich, rather than simulational, process. In the final section, I discuss possible objections.
Malcolm-Smith, Solms, Turnbull and Tredoux [Malcolm-Smith, S., Solms, M., Turnbull, O., & Tredoux, C. (2008). Threat in dreams: An adaptation? Consciousness and Cognition, 17, 1281–1291.] have made an attempt to test the Threat-Simulation Theory (TST), a theory offering an evolutionary psychological explanation for the function of dreaming [Revonsuo, A. (2000). The reinterpretation of dreams: An evolutionary hypothesis of the function of dreaming. Behavioral and Brain Sciences, 23, 877–901]. Malcolm-Smith et al. argue that empirical evidence from their own study as well as from some other studies in the literature does not support the main predictions of the TST: that threatening events are frequent and overrepresented in dreams, that exposure to real threats activates the threat-simulation system, and that dream threats contain realistic rehearsals of threat avoidance responses. Other studies, including our own, have come up with results and conclusions that are in conflict with those of Malcolm-Smith et al. In this commentary, we provide an analysis of the sources of these disagreements, and their implications for the TST. Much of the disagreement seems to stem from differing interpretations of the theory and, consequently, from differing methods of testing it.