A new theory is taking hold in neuroscience. It is the theory that the brain is essentially a hypothesis-testing mechanism, one that attempts to minimise the error of its predictions about the sensory input it receives from the world. It is an attractive theory because powerful theoretical arguments support it, and yet it is at heart stunningly simple. Jakob Hohwy explains and explores this theory from the perspective of cognitive science and philosophy. The key argument throughout The Predictive Mind is that the mechanism explains the rich, deep, and multifaceted character of our conscious perception. It also gives a unified account of how perception is sculpted by attention, and how it depends on action. The mind is revealed as having a fragile and indirect relation to the world. Though we are deeply in tune with the world, we are also strangely distanced from it.
Are predictions about how one will freely and intentionally behave in the future ever relevant to how one ought to behave? There is good reason to think they are. As imperfect agents, we have responsibilities of self-management, which seem to require that we take account of the predictable ways we're liable to go wrong. I defend this conclusion against certain objections to the effect that incorporating predictions concerning one's voluntary conduct into one's practical reasoning amounts to evading responsibility for that conduct. There is, however, some truth to this sort of objection. To understand the legitimate role of self-prediction in practical reasoning, we need to distinguish instances of coping responsibly with an anticipated failure to behave as one ought, on the one hand, from mere acquiescence in one's flaws, on the other. I argue that, to draw this distinction, we must recognize certain limits on the use of self-prediction as a ground of choice.
Recent work in computational and cognitive neuroscience depicts the brain as an ever‐active prediction machine: an inner engine continuously striving to anticipate the incoming sensory barrage. I briefly introduce this class of models before contrasting two ways of understanding the implied vision of mind. One way (Conservative Predictive Processing) depicts the predictive mind as an insulated inner arena populated by representations so rich and reconstructive as to enable the organism to ‘throw away the world’. The other (Radical Predictive Processing) stresses the use of fast and frugal, action‐involving solutions of the kind highlighted by much work in robotics and embodied cognition. But it goes further, by showing how predictive schemes can combine frugal and more knowledge‐intensive strategies, switching between them fluently and continuously as task and context dictate. I end by exploring some parallels with work in enactivism, and by noting a certain ambivalence concerning internal representations and their role in the predictive mind.
Diabetes is one of the most common diseases worldwide, and no cure has yet been found for it. Caring for people with diabetes costs a great deal of money annually. It is therefore important that predictions be very accurate and made with a reliable method. One such method is the use of artificial intelligence systems, in particular Artificial Neural Networks (ANNs). In this paper, we used artificial neural networks to predict whether a person is diabetic or not. The training criterion was to minimize the error function of the neural network model. After training the ANN model, the average error function of the neural network was 0.01, and the accuracy of predicting whether a person is diabetic or not was 87.3%.
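The abstract above does not specify the network architecture, features, or dataset, so the following is only a minimal sketch of the error-minimisation training it describes: a single-neuron classifier trained by gradient descent on squared prediction error, using invented synthetic features and labels.

```python
import math
import random

random.seed(0)

# Synthetic stand-in data: two invented features, binary diabetic/non-diabetic label.
# The real study's dataset and architecture are not given in the abstract.
def make_data(n=200):
    data = []
    for _ in range(n):
        f1 = random.gauss(0, 1)
        f2 = random.gauss(0, 1)
        label = 1 if f1 + 0.5 * f2 + random.gauss(0, 0.5) > 0 else 0
        data.append(((f1, f2), label))
    return data

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One-neuron "network" trained to minimise squared prediction error,
# mirroring the error-minimisation criterion described in the abstract.
def train(data, lr=0.5, epochs=200):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x, y) in data:
            p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
            grad = (p - y) * p * (1 - p)  # d(0.5 * (p - y)^2) / dz
            w[0] -= lr * grad * x[0]
            w[1] -= lr * grad * x[1]
            b -= lr * grad
    return w, b

data = make_data()
w, b = train(data)
acc = sum(1 for (x, y) in data
          if (sigmoid(w[0] * x[0] + w[1] * x[1] + b) > 0.5) == (y == 1)) / len(data)
print(f"training accuracy: {acc:.2f}")
```

A real model of this kind would be evaluated on a held-out test set, as the abstract's 87.3% figure presumably was.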
According to the predictive coding theory of cognition (PCT), brains are predictive machines that use perception and action to minimize prediction error, i.e. the discrepancy between bottom–up, externally-generated sensory signals and top–down, internally-generated sensory predictions. Many consider PCT to have an explanatory scope that is unparalleled in contemporary cognitive science and see in it a framework that could potentially provide us with a unified account of cognition. It is also commonly assumed that PCT is a representational theory of sorts, in the sense that it postulates that our cognitive contact with the world is mediated by internal representations. However, the exact sense in which PCT is representational remains unclear; neither is it clear that it deserves such status—that is, whether it really invokes structures that are truly and nontrivially representational in nature. In the present article, I argue that the representational pretensions of PCT are completely justified. This is because the theory postulates cognitive structures—namely action-guiding, detachable, structural models that afford representational error detection—that play genuinely representational functions within the cognitive system.
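The error-minimisation dynamic PCT describes can be illustrated with a toy single-level estimator: an internal estimate generates a top-down prediction and is nudged by the bottom-up prediction error. All quantities below are invented for illustration; this is a sketch of the idea, not any specific PCT model.

```python
import random

random.seed(1)

# Toy single-level predictive coding: an internal estimate `mu` generates a
# top-down prediction of the sensory signal, and is updated to reduce the
# bottom-up prediction error (signal minus prediction).
true_source = 4.0        # hidden cause of the sensory signal
mu = 0.0                 # internal estimate (the "hypothesis")
learning_rate = 0.1

for step in range(200):
    sensory_sample = true_source + random.gauss(0, 0.5)  # noisy input
    prediction_error = sensory_sample - mu               # bottom-up error
    mu += learning_rate * prediction_error               # error minimisation

print(f"estimate after learning: {mu:.2f}")
```

After repeated updates the estimate settles near the hidden cause, so the average prediction error shrinks, which is the sense of "minimizing prediction error" at issue.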
The American justice system, from police departments to the courts, is increasingly turning to information technology for help identifying potential offenders, determining where, geographically, to allocate enforcement resources, assessing flight risk and the potential for recidivism amongst arrestees, and making other judgments about when, where, and how to manage crime. In particular, there is a focus on machine learning and other data analytics tools, which promise to accurately predict where crime will occur and who will perpetrate it. Activists and academics have begun to raise critical questions about the use of these tools in policing contexts. In this chapter, I review the emerging critical literature on predictive policing and contribute to it by raising ethical questions about the use of predictive analytics tools to identify potential offenders. Drawing from work on the ethics of profiling, I argue that the much-lauded move from reactive to preemptive policing can mean wrongfully generalizing about individuals, making harmful assumptions about them, instrumentalizing them, and failing to respect them as full ethical persons. I suggest that these problems stem both from the nature of predictive policing tools and from the sociotechnical contexts in which they are implemented...
I propose an account of the speech act of prediction that denies that the contents of prediction must be about the future and illuminates the relation between prediction and assertion. My account is a synthesis of two ideas: (i) that what is in the future in prediction is the time of discovery and (ii) that, as Benton and Turri recently argued, prediction is best characterized in terms of its constitutive norms.
In earlier work (Cleland), I sketched an account of the structure and justification of ‘prototypical’ historical natural science that distinguishes it from ‘classical’ experimental science. This article expands upon this work, focusing upon the close connection between explanation and justification in the historical natural sciences. I argue that confirmation and disconfirmation in these fields depends primarily upon the explanatory (versus predictive or retrodictive) success or failure of hypotheses vis-à-vis empirical evidence. The account of historical explanation that I develop is a version of common cause explanation. Common cause explanation has long been vindicated by appealing to the principle of the common cause. Many philosophers of science (e.g., Sober and Tucker) find this principle problematic, however, because they believe that it is either purely methodological or strictly metaphysical. I defend a third possibility: the principle of the common cause derives its justification from a physically pervasive time asymmetry of causation (a.k.a. the asymmetry of overdetermination). I argue that explicating the principle of the common cause in terms of the asymmetry of overdetermination illuminates some otherwise puzzling features of the practices of historical natural scientists.
Predictive Processing theory, hotly debated in neuroscience, psychology and philosophy, promises to explain a number of perceptual and cognitive phenomena in a simple and elegant manner. In some of its versions, the theory is ambitiously advertised as a new theory of conscious perception. The task of this paper is to assess whether this claim is realistic. We will be arguing that the Predictive Processing theory cannot explain the transition from unconscious to conscious perception in its proprietary terms. The explanations offered by PP theorists mostly concern the preconditions of conscious perception, leaving the genuine material substrate of consciousness untouched.
Brains, it has recently been argued, are essentially prediction machines. They are bundles of cells that support perception and action by constantly attempting to match incoming sensory inputs with top-down expectations or predictions. This is achieved using a hierarchical generative model that aims to minimize prediction error within a bidirectional cascade of cortical processing. Such accounts offer a unifying model of perception and action, illuminate the functional role of attention, and may neatly capture the special contribution of cortical processing to adaptive success. This target article critically examines this approach, concluding that it offers the best clue yet to the shape of a unified science of mind and action. Sections 1 and 2 lay out the key elements and implications of the approach. Section 3 explores a variety of pitfalls and challenges, spanning the evidential, the methodological, and the more properly conceptual. The paper ends (sections 4 and 5) by asking how such approaches might impact our more general vision of mind, experience, and agency.
Predictive processing (PP) accounts of perception are unique not merely in that they postulate a unity between perception and imagination. Rather, they are unique in claiming that perception should be conceptualised in terms of imagination and that the two involve an identity of neural implementation. This paper argues against this postulated unity, on both conceptual and empirical grounds. Conceptually, the manner in which PP theorists link perception and imagination belies an impoverished account of imagery as cloistered from the external world in its intentionality, akin to a virtual reality, as well as endogenously generated. Yet this ignores a whole class of imagery whose intentionality is directed on the actual environment—projected mental imagery—and also ignores the fact that imagery may be triggered crossmodally in a bottom-up, stimulus-driven way. Empirically, claiming that imagery and perception share neural circuitry ignores relevant clinical results in this area. These results evidence substantial perception/imagery neural dissociations, most notably in the case of aphantasia. Taken together, the arguments here suggest that PP theorists should substantially temper, if not outright abandon, their claim to a perception/imagination unity.
This paper examines the relationship between perceiving and imagining on the basis of predictive processing models in neuroscience. Contrary to the received view in philosophy of mind, which holds that perceiving and imagining are essentially distinct, these models depict perceiving and imagining as deeply unified and overlapping. It is argued that there are two mutually exclusive implications of taking perception and imagination to be fundamentally unified. The view defended is what I dub the ecological–enactive view given that it does not succumb to internalism about the mind-world relation, and allows one to keep a version of the received view in play.
When a scientist uses an observation to formulate a theory, it is no surprise that the resulting theory accurately captures that observation. However, when the theory makes a novel prediction—when it predicts an observation that was not used in its formulation—this seems to provide more substantial confirmation of the theory. This paper presents a new approach to the vexed problem of understanding the epistemic difference between prediction and accommodation. In fact, there are several problems that need to be disentangled; in all of them, the key is the concept of overfitting. We float the hypothesis that accommodation is a defective methodology only when the methods used to accommodate the data fail to guard against the risk of overfitting. We connect our analysis with the proposals that other philosophers have made. We also discuss its bearing on the conflict between instrumentalism and scientific realism. Contents: Introduction; Predictivisms—a taxonomy; Observations; Formulating the problem; What might Annie be doing wrong?; Solutions; Observations explained; Mayo on severe tests; The miracle argument and scientific realism; Concluding comments.
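The overfitting hypothesis can be made concrete with a toy contrast (data, noise values, and models all invented for illustration): a flexible model that perfectly accommodates noisy data drawn from a simple linear law fails badly at novel prediction, while a frugal model that tolerates some misfit predicts well.

```python
# Data generated by a simple linear law plus fixed, invented "noise" terms.
def truth(x):
    return 2.0 * x + 1.0

train_x = [0.0, 1.0, 2.0, 3.0, 4.0]
noise = [0.3, -0.2, 0.4, -0.3, 0.1]
train_y = [truth(x) + e for x, e in zip(train_x, noise)]

# Perfect accommodation: a degree-4 Lagrange polynomial through every point.
def interpolate(x):
    total = 0.0
    for i, xi in enumerate(train_x):
        term = train_y[i]
        for j, xj in enumerate(train_x):
            if i != j:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# A guarded model: least-squares straight line through the same data.
n = len(train_x)
mx, my = sum(train_x) / n, sum(train_y) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(train_x, train_y))
         / sum((x - mx) ** 2 for x in train_x))
intercept = my - slope * mx

# Novel prediction: evaluate both models outside the training range.
x_new = 5.0
err_overfit = abs(interpolate(x_new) - truth(x_new))
err_line = abs(slope * x_new + intercept - truth(x_new))
print(f"accommodating model error: {err_overfit:.2f}")  # large
print(f"frugal model error: {err_line:.2f}")            # small
```

The interpolating polynomial fits the training data perfectly yet amplifies the noise when extrapolated; the straight line, which did not chase every data point, stays close to the underlying law.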
This paper examines the famous doctrine that independent prediction garners more support than accommodation. The standard arguments for the doctrine are found to be invalid, and a more realistic position is put forward: whether or not evidence supports a hypothesis depends on the prior probability of the hypothesis, and is independent of whether the hypothesis was proposed before or after the evidence. This position is implicit in the subjective Bayesian theory of confirmation, and the paper ends with a brief account of this theory, and an answer to the principal objections to it.
Predictive processing has recently been advanced as a global cognitive architecture for the brain. I argue that its commitments concerning the nature and format of cognitive representation are inadequate to account for two basic characteristics of conceptual thought: first, its generality—the fact that we can think and flexibly reason about phenomena at any level of spatial and temporal scale and abstraction; second, its rich compositionality—the specific way in which concepts productively combine to yield our thoughts. I consider two strategies for avoiding these objections and I argue that both confront formidable challenges.
What individuates the speech act of prediction? The standard view is that prediction is individuated by the fact that it is the unique speech act that requires future-directed content. We argue against this view and two successor views. We then lay out several other potential strategies for individuating prediction, including the sort of view we favor. We suggest that prediction is individuated normatively and has a special connection to the epistemic standards of expectation. In the process, we advocate some constraints that we think a good theory of prediction should respect.
A widely endorsed thesis in the philosophy of science holds that if evidence for a hypothesis was not known when the hypothesis was proposed, then that evidence confirms the hypothesis more strongly than would otherwise be the case. The thesis has been thought to be inconsistent with Bayesian confirmation theory, but the arguments offered for that view are fallacious. This paper shows how the special value of prediction can in fact be given a Bayesian explanation. The explanation involves consideration of the reliability of the method by which the hypothesis was discovered, and thus reveals an intimate connection between the 'logic of discovery' and confirmation theory.
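The Bayesian shape of this kind of explanation can be sketched numerically. All probabilities below are invented for illustration, and P(E) is held fixed across the comparison for simplicity: a hypothesis produced by a reliable discovery method warrants a higher prior, and the same evidence then yields a higher posterior.

```python
# Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E).
def posterior(prior, likelihood, evidence_prob):
    return likelihood * prior / evidence_prob

likelihood = 0.9     # P(E | H): the hypothesis strongly implies the evidence
evidence_prob = 0.3  # P(E): the evidence is fairly surprising

# Same evidence, same likelihood -- only the prior differs, reflecting
# the reliability of the method that produced the hypothesis.
p_reliable = posterior(prior=0.2, likelihood=likelihood, evidence_prob=evidence_prob)
p_unreliable = posterior(prior=0.05, likelihood=likelihood, evidence_prob=evidence_prob)

print(f"posterior (reliable method): {p_reliable:.2f}")
print(f"posterior (unreliable method): {p_unreliable:.2f}")
```

The same evidential input confirms more strongly, in absolute terms, when the hypothesis came from a reliable method; this is one hedged way to render the paper's qualitative point in numbers.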
In this research, an Artificial Neural Network (ANN) model was developed and tested to predict birth weight. A number of factors were identified that may affect birth weight. Factors such as smoking, race, age, weight (lbs) at the last menstrual period, hypertension, uterine irritability, and number of physician visits in the first trimester, among others, were used as input variables for the ANN model. A model based on a multi-layer topology was developed and trained using data from birth cases in hospitals. Evaluation on the test dataset shows that the ANN model is capable of correctly predicting birth weight with 100% accuracy.
Gregor Betz explores the following questions: Where are the limits of economics, in particular the limits of economic foreknowledge? Are macroeconomic forecasts credible predictions or mere prophecies, and what would this imply for the way economic policy decisions are taken? Is rational economic decision making possible without forecasting at all?
This chapter seeks to recover an approach to consciousness from a general theory of brain function, namely the prediction error minimization theory. The way this theory applies to mental and developmental disorder demonstrates its relevance to consciousness. The resulting view is discussed in relation to a contemporary theory of consciousness, namely the idea that conscious perception depends on Bayesian metacognition; this theory is also supported by considerations of psychopathology. This Bayesian theory is first disconnected from the higher-order thought theory, and then, via a prediction error conception of action, connected instead to the global neuronal workspace theory. Considerations of mental and developmental disorder therefore show that a very general theory of brain function is relevant to explaining the structure of conscious perception; furthermore, this theory can subsume and unify two contemporary approaches to consciousness, in a move that seeks to elucidate the fundamental mechanism for selection of representational content into consciousness.
A major disagreement between different views about the foundations of quantum mechanics concerns whether for a theory to be intelligible as a fundamental physical theory it must involve a ‘primitive ontology’ (PO), i.e. variables describing the distribution of matter in four-dimensional space–time. In this article, we illustrate the value of having a PO. We do so by focusing on the role that the PO plays for extracting predictions from a given theory and discuss valid and invalid derivations of predictions. To this end, we investigate a number of examples based on toy models built from the elements of familiar interpretations of quantum theory.
The use of forward models is well established in cognitive and computational neuroscience. We compare and contrast two recent, but interestingly divergent, accounts of the place of forward models in the human cognitive architecture. On the Auxiliary Forward Model account, forward models are special-purpose prediction mechanisms implemented by additional circuitry distinct from core mechanisms of perception and action. On the Integral Forward Model account, forward models lie at the heart of all forms of perception and action. We compare these neighbouring but importantly different visions and consider their implications for the cognitive sciences. We end by asking what kinds of empirical research might offer evidence favouring one or the other of these approaches.
In a recent Analysis piece, John Shand (2014) argues that the Predictive Theory of Mind (PTM) provides a unique explanation for why one cannot play chess against oneself. On the basis of this purported explanatory power, Shand concludes that we have an extra reason to believe that PTM is correct. In this reply, we first rectify the claim that one cannot play chess against oneself; then we move on to argue that even if this were the case, Shand’s argument does not give extra weight to the Predictive Theory of Mind.
We investigated the relationship between guilt proneness and counterproductive work behavior (CWB) using a diverse sample of employed adults working in a variety of different industries at various levels in their organizations. CWB refers to behaviors that harm or are intended to harm organizations or people in organizations. Guilt proneness is a personality trait characterized by a predisposition to experience negative feelings about personal wrongdoing. CWB was engaged in less frequently by individuals high in guilt proneness compared to those low in guilt proneness, controlling for other known correlates of CWB. CWB was also predicted by gender, age, intention to turnover, interpersonal conflict at work, and negative affect at work. Given the detrimental impact of CWB on people and organizations, it may be wise for employers to consider guilt proneness when making hiring decisions.
Clark has recently suggested that predictive processing advances a theory of neural function with the resources to put an ecumenical end to the “representation wars” of recent cognitive science. In this paper I defend and develop this suggestion. First, I broaden the representation wars to include three foundational challenges to representational cognitive science. Second, I articulate three features of predictive processing’s account of internal representation that distinguish it from more orthodox representationalist frameworks. Specifically, I argue that it posits a resemblance-based representational architecture with organism-relative contents that functions in the service of pragmatic success, not veridical representation. Finally, I argue that internal representation so understood is either impervious to the three anti-representationalist challenges I outline or can actively embrace them.
Contents: 1. Introduction; 2. Reward-Guided Decision Making; 3. Content in the Model; 4. How to Deflate a Metarepresentational Reading; Proust and Carruthers on metacognitive feelings; 5. A Deflationary Treatment of RPEs?; 5.1 Dispensing with prediction errors; 5.2 What is the use of the RPE focused on?; 5.3 Alternative explanations—worldly correlates; 5.4 Contrast cases; 6. Conclusion; Appendix: Temporal Difference Learning Algorithms.
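The temporal-difference algorithms the appendix refers to are driven by a reward prediction error (RPE). A minimal TD(0) sketch follows; the states, rewards, and parameters are invented for illustration.

```python
# TD(0) value learning: each update is driven by the reward prediction error
#   RPE = reward + gamma * V(next_state) - V(current_state).
gamma = 0.9   # discount factor
alpha = 0.5   # learning rate
V = {"cue": 0.0, "reward_state": 0.0}

# Repeated episodes: "cue" is always followed by "reward_state" (reward 1.0),
# which is terminal (value 0 afterwards).
for _ in range(50):
    # Transition: reward_state -> end, reward 1.0.
    rpe_terminal = 1.0 + gamma * 0.0 - V["reward_state"]
    V["reward_state"] += alpha * rpe_terminal
    # Transition: cue -> reward_state, reward 0.0; the RPE propagates
    # the learned value backwards to the predictive cue.
    rpe_cue = 0.0 + gamma * V["reward_state"] - V["cue"]
    V["cue"] += alpha * rpe_cue

print(f'V(cue) = {V["cue"]:.2f}, V(reward_state) = {V["reward_state"]:.2f}')
```

As predictions improve the RPE shrinks toward zero, and the cue comes to carry the (discounted) value of the reward it predicts, which is the pattern the deflationary-versus-representational debate is about.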
This chapter explores to what extent some core ideas of predictive processing can be applied to the phenomenology of time consciousness. The focus is on the experienced continuity of consciously perceived, temporally extended phenomena (such as enduring processes and successions of events). The main claim is that the hierarchy of representations posited by hierarchical predictive processing models can contribute to a deepened understanding of the continuity of consciousness. Computationally, such models show that sequences of events can be represented as states of a hierarchy of dynamical systems. Phenomenologically, they suggest a more fine-grained analysis of the perceptual contents of the specious present, in terms of a hierarchy of temporal wholes. Visual perception of static scenes not only contains perceived objects and regions but also spatial gist; similarly, auditory perception of temporal sequences, such as melodies, involves not only perceiving individual notes but also slightly more abstract features (temporal gist), which have longer temporal durations (e.g., emotional character or rhythm). Further investigations into these elusive contents of conscious perception may be facilitated by findings regarding its neural underpinnings. Predictive processing models suggest that sensorimotor areas may influence these contents.
Butz, Achimova, Bilkey, and Knott provide a topic overview and discuss whether the special issue contributions may imply that event‐predictive abilities constitute a root for conceptual human thought, because they enable complex, mutually beneficial, but also intricately competitive, social interactions and language communication.
Letheby’s "Philosophy of Psychedelics" relies on Predictive Processing to try to find unifying explanations relevant to understanding how serotonergic psychedelics work in psychiatric therapy, what subjective experiences are associated with their use, and whether such experiences are epistemically defective. But if Predictive Processing lacks genuinely explanatory unifying power, Letheby’s account of psychedelic therapy risks being unwarranted. In this commentary, I motivate this worry and sketch an alternative interpretation of psychedelic therapy within the Reinforcement Learning framework.
Although prediction has been largely absent from discussions of explanation for the past 40 years, theories of explanation can gain much from a reintroduction. I review the history that divorced prediction from explanation, examine the proliferation of models of explanation that followed, and argue that accounts of explanation have been impoverished by the neglect of prediction. Instead of a revival of the symmetry thesis, I suggest that explanation should be understood as a cognitive tool that assists us in generating new predictions. This view of explanation and prediction clarifies what makes an explanation scientific and why inference to the best explanation makes sense in science. Received August 2009; revised September 2009.
Many philosophers claim that the neurocomputational framework of predictive processing entails a globally inferentialist and representationalist view of cognition. Here, I contend that this is not correct. I argue that, given the theoretical commitments these philosophers endorse, no structure within predictive processing systems can be rightfully identified as a representational vehicle. To do so, I first examine some of the theoretical commitments these philosophers share, and show that these commitments provide a set of necessary conditions the satisfaction of which allows us to identify representational vehicles. Having done so, I introduce a predictive processing system capable of active inference, in the form of a simple robotic “brain”. I examine it thoroughly, and show that, given the necessary conditions highlighted above, none of its components qualifies as a representational vehicle. I then consider and allay some worries my claim could raise. I consider whether the anti-representationalist verdict thus obtained could be generalized, and provide some reasons favoring a positive answer. I further consider whether my arguments here could be blocked by allowing the same representational vehicle to possess multiple contents, and whether my arguments entail some extreme form of revisionism, answering in the negative in both cases. A quick conclusion follows.
Despite their popularity, relatively scant attention has been paid to the upshot of Bayesian and predictive processing models of cognition for views of overall cognitive architecture. Many of these models are hierarchical; they posit generative models at multiple distinct "levels," whose job is to predict the consequences of sensory input at lower levels. I articulate one possible position that could be implied by these models, namely, that there is a continuous hierarchy of perception, cognition, and action control comprising levels of generative models. I argue that this view is not entailed by a general Bayesian/predictive processing outlook. Bayesian approaches are compatible with distinct formats of mental representation. Focusing on Bayesian approaches to motor control, I argue that the junctures between different types of mental representation are places where the transitivity of hierarchical prediction may be broken, and I consider the upshot of this conclusion for broader discussions of cognitive architecture.
Recent neo-Humean theories of laws of nature have placed substantial emphasis on the characteristic epistemic roles played by laws in scientific practice. In particular, these theories seek to understand laws in terms of their optimal predictive utility to creatures in our epistemic situation. In contrast to other approaches, this view has the distinct advantage that it is able to account for a number of pervasive features possessed by putative actual laws of nature. However, it also faces some unique challenges. First, since the view tries to characterize the laws in terms of their predictive utility, any respects in which putative actual laws are sub-optimally predictively useful are inherently problematic. Such "predictive infelicities" can easily be found among our best physical theories. Second, in tying the laws to our epistemic situation, neo-Humeanism raises the possibility that by changing our epistemic situation, we can change the laws themselves, though it is hard to believe that we have such influence over the laws. This paper aims to address both of these challenges, first by presenting a variety of strategies for explaining away predictive infelicities, and then by developing a version of Lewis's "rigidification" strategy to preclude the possibility of our changing the laws.
Recent scholarship in philosophy of science and technology has shown that scientific and technological decision making are laden with values, including values of a social, political, and/or ethical character. This paper examines the role of value judgments in the design of machine-learning systems generally and in recidivism-prediction algorithms specifically. Drawing on work on inductive and epistemic risk, the paper argues that ML systems are value laden in ways similar to human decision making, because the development and design of ML systems requires human decisions that involve tradeoffs that reflect values. In many cases, these decisions have significant—and, in some cases, disparate—downstream impacts on human lives. After examining an influential court decision regarding the use of proprietary recidivism-prediction algorithms in criminal sentencing, Wisconsin v. Loomis, the paper provides three recommendations for the use of ML in penal systems.
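One of the design tradeoffs at issue can be made concrete: choosing the decision threshold of a risk score trades false positives against false negatives, and that choice reflects values rather than statistics alone. The scores and outcomes below are invented and are not drawn from any real system.

```python
# Invented risk scores and observed outcomes (1 = reoffended, 0 = did not).
scores =     [0.1, 0.2, 0.35, 0.4, 0.55, 0.6, 0.7, 0.8, 0.9]
reoffended = [0,   0,   0,    1,   0,    1,   1,   1,   1]

def error_rates(threshold):
    """False positive rate and false negative rate at a given threshold."""
    fp = sum(1 for s, y in zip(scores, reoffended) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, reoffended) if s < threshold and y == 1)
    negatives = reoffended.count(0)
    positives = reoffended.count(1)
    return fp / negatives, fn / positives

# Lower thresholds flag more people wrongly; higher thresholds miss more
# genuine risks. Which error matters more is a value judgment, not a fact
# the data can settle.
for t in (0.3, 0.5, 0.7):
    fpr, fnr = error_rates(t)
    print(f"threshold {t}: false positive rate {fpr:.2f}, false negative rate {fnr:.2f}")
```

Every threshold is defensible by some weighting of the two harms; the point of the sketch is that selecting the weighting is exactly the kind of value-laden design decision the paper describes.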
The use of machine learning, or “artificial intelligence” (AI) in medicine is widespread and growing. In this paper, I focus on a specific proposed clinical application of AI: using models to predict incapacitated patients’ treatment preferences. Drawing on results from machine learning, I argue this proposal faces a special moral problem. Machine learning researchers owe us assurance on this front before experimental research can proceed. In my conclusion I connect this concern to broader issues in AI safety.
In the last two decades, philosophy of neuroscience has predominantly focused on explanation. Indeed, it has been argued that mechanistic models are the standards of explanatory success in neuroscience over, among other things, topological models. However, explanatory power is only one virtue of a scientific model. Another is its predictive power. Unfortunately, the notion of prediction has received comparatively little attention in the philosophy of neuroscience, in part because predictions seem disconnected from interventions. In contrast, we argue that topological predictions can and do guide interventions in science, both inside and outside of neuroscience. Topological models allow researchers to predict many phenomena, including diseases, treatment outcomes, aging, and cognition, among others. Moreover, we argue that these predictions also offer strategies for useful interventions. Topology-based predictions play this role regardless of whether they do or can receive a mechanistic interpretation. We conclude by making a case for philosophers to focus on prediction in neuroscience in addition to explanation alone.
Predictive success as an aim of science -- On the very possibility of prediction in the social sciences -- Empirical facts about social prediction: its mode, object and performance -- Understanding poor forecast performance.
This paper examines racial discrimination and algorithmic bias in predictive policing algorithms (PPAs), an emerging technology designed to predict threats and suggest solutions in law enforcement. We first describe what discrimination is in a case study of Chicago’s PPA. We then explain the causes of such discrimination and bias with Broadbent’s contrastive model of causation and causal diagrams. Based on the cognitive science literature, we also explain why fairness is not an objective truth discoverable in laboratories but has context-sensitive social meanings that need to be negotiated through democratic processes. With the above analysis, we next predict why some recommendations given in the bias reduction literature are not as effective as expected. Unlike the cliché highlighting equal participation for all stakeholders in predictive policing, we emphasize power structures to avoid hermeneutical lacunae. Finally, we aim to control PPA discrimination by proposing a governance solution—a framework of a social safety net.
In this paper, I discuss the relationship between bodily experiences in dreams and the sleeping, physical body. I question the popular view that dreaming is a naturally and frequently occurring real-world example of cranial envatment. This view states that dreams are functionally disembodied states: in a majority of dreams, phenomenal experience, including the phenomenology of embodied selfhood, unfolds completely independently of external and peripheral stimuli and outward movement. I advance an alternative and more empirically plausible view of dreams as weakly phenomenally-functionally embodied states. The view predicts that bodily experiences in dreams can be placed on a continuum with bodily illusions in wakefulness. It also acknowledges that there is a high degree of variation across dreams and different sleep stages in the degree of causal coupling between dream imagery, sensory input, and outward motor activity. Furthermore, I use the example of movement sensations in dreams and their relation to outward muscular activity to develop a predictive processing account. I propose that movement sensations in dreams are associated with a basic and developmentally early kind of bodily self-sampling. This account, which affords a central role to active inference, can then be broadened to explain other aspects of self- and world-simulation in dreams. Dreams are world-simulations centered on the self, and important aspects of both self- and world-simulation in dreams are closely linked to bodily self-sampling, including muscular activity, illusory own-body perception, and vestibular orienting in sleep. This is consistent with cognitive accounts of dream generation, in which long-term beliefs and expectations, as well as waking concerns and memories play an important role. What I add to this picture is an emphasis on the real-body basis of dream imagery. This offers a novel perspective on the formation of dream imagery and suggests new lines of research.
The two-factor theory (Davies, Coltheart, Langdon & Breen 2001; Coltheart 2007; Coltheart, Menzies & Sutton 2010) is an influential account of delusion formation. According to the theory, there are two distinct factors that are causally responsible for delusion formation. The first factor is supposed to explain the content of the delusion, while the second factor is supposed to explain why the delusion is adopted and maintained. Recently, another remarkable account of delusion formation has been proposed, in which the notion of “prediction error” plays the central role (Fletcher & Frith 2009; Corlett, Krystal, Taylor & Fletcher 2009; Corlett, Taylor, Wang, Fletcher & Krystal 2010). According to this account, the prediction-error theory, delusions are formed in response to aberrant prediction-error signals, those signals that indicate a mismatch between expectation and actual experience. In this chapter, we examine the relationship between the two-factor theory and the prediction-error theory in some detail. Our view is that the prediction-error theory does not have to be understood as a rival to the two-factor theory. We do not deny that there are some important differences between them. However, those differences are not as significant as they have been presented in the literature. Moreover, the core ideas of the prediction-error theory may be incorporated into the two-factor framework. For instance, the aberrant prediction-error signal that is posited by prediction-error theorists can be (or underlie) the first factor contributing to the formation of some delusions, and help explain the content of those delusions. Alternatively, the aberrant prediction-error signal can be (or underlie) the second factor, and help explain why the delusion is adopted and maintained.
The paper develops an account of minimal traces devoid of representational content and exploits an analogy to a predictive processing framework of perception. As perception can be regarded as a prediction of the present on the basis of sparse sensory inputs without any representational content, episodic memory can be conceived of as a “prediction of the past” on the basis of a minimal trace, i.e., an informationally sparse, merely causal link to a previous experience. The resulting notion of episodic memory will be validated as a natural kind distinct from imagination. This trace minimalist view contrasts with two theory camps dominating the philosophical debate on memory. On one side, we face versions of the Causal Theory that hold on to the idea that episodic remembering requires a memory trace that causally links the event of remembering to the event of experience and carries over representational content from the content of experience to the content of remembering. The Causal Theory, however, fails to account for the epistemic generativity of episodic memory and is psychologically and information-theoretically implausible. On the other side, a new camp of simulationists is currently forming. Motivated by empirical and conceptual deficits of the Causal Theory, they reject not only the necessity of preserving representational content, but also the necessity of a causal link between experience and memory. They argue that remembering is nothing but a peculiar form of imagination, peculiar only in that it has been reliably produced and is directed towards an episode of one’s personal past. While sharing their criticism of the Causal Theory and, in particular, rejecting its demand for an intermediary carrier of representational content, the paper argues that a causal connection to experience is still necessary to fulfill even the minimal requirements of past-directedness and reliability.
We introduce the predictive processing account of body representation, according to which body representation emerges via a domain-general scheme of (long-term) prediction error minimisation. We contrast this account against one where body representation is underpinned by domain-specific systems, whose exclusive function is to track the body. We illustrate how the predictive processing account offers considerable advantages in explaining various empirical findings, and we draw out some implications for body representation research.
In this paper I argue that, by combining eliminativist and fictionalist approaches toward the sub-personal representational posits of predictive processing, we arrive at an empirically robust and yet metaphysically innocuous cognitive scientific framework. I begin the paper by providing a non-representational account of the five key posits of predictive processing. Then, I motivate a fictionalist approach toward the remaining indispensable representational posits of predictive processing, and explain how representation can play an epistemologically indispensable role within predictive processing explanations without thereby requiring that representation metaphysically exists. Finally, I outline four consequences of accepting this approach and explain why they are beneficial: we arrive at a victory for metaphysical eliminativism in the ‘representation wars’; my account fits with extant empirical practice; my account provides guidance for future research; and my account provides the beginnings of a response to Mark Sprevak’s IBE problem for fictionalist approaches toward sub-personal representation.
Should we insist on prediction, i.e. on correctly forecasting the future? Or can we rest content with accommodation, i.e. empirical success only with respect to the past? I apply general considerations about this issue to the case of economics. In particular, I examine various ways in which mere accommodation can be sufficient, in order to see whether those ways apply to economics. Two conclusions result. First, an entanglement thesis: the need for prediction is entangled with the methodological role of orthodox economic theory. Second, a conditional predictivism: if we are not committed to orthodox economic theory, then we should demand prediction rather than accommodation – against most current practice.
Two decades ago, the introduction of the Implicit Association Test (IAT) sparked enthusiastic reactions. With implicit measures like the IAT, researchers hoped to finally be able to bridge the gap between self-reported attitudes on one hand and behavior on the other. Twenty years of research and several meta-analyses later, however, we have to conclude that neither the IAT nor its derivatives have fulfilled these expectations. Their predictive value for behavioral criteria is weak and their incremental validity over and above self-report measures is negligible. In our review, we present an overview of explanations for these unsatisfactory findings and delineate promising ways forward. Over the years, several reasons for the IAT’s weak predictive validity have been proposed. They point to four potentially problematic features: First, the IAT is by no means a pure measure of individual differences in associations but suffers from extraneous influences like recoding. Hence, the predictive validity of IAT-scores should not be confused with the predictive validity of associations. Second, with the IAT, we usually aim to measure evaluation (“liking”) instead of motivation (“wanting”). Yet, behavior might be determined much more often by the latter than the former. Third, the IAT focuses on measuring associations instead of propositional beliefs and thus taps into a construct that might be too unspecific to account for behavior. Finally, studies on predictive validity are often characterized by a mismatch between predictor and criterion (e.g., while behavior is highly context-specific, the IAT usually takes into account neither situation nor domain).
Recent research, however, has also revealed advances addressing each of these problems, namely (1) procedural and analytical advances to control for recoding in the IAT, (2) measurement procedures to assess implicit wanting, (3) measurement procedures to assess implicit beliefs, and (4) approaches to increase the fit between implicit measures and behavioral criteria (e.g., by incorporating contextual information). Implicit measures like the IAT hold an enormous potential. In order to allow them to fulfill this potential, however, we have to refine our understanding of these measures, and we should incorporate recent conceptual and methodological advancements. This review provides specific recommendations on how to do so.
Cognitive niche construction is the process whereby organisms create and maintain cause–effect models of their niche as guides for fitness influencing behavior. Extended mind theory claims that cognitive processes extend beyond the brain to include predictable states of the world. Active inference and predictive processing in cognitive science assume that organisms embody predictive (i.e., generative) models of the world optimized by standard cognitive functions (e.g., perception, action, learning). This paper presents an active inference formulation that views cognitive niche construction as a cognitive function aimed at optimizing organisms' generative models. We call that process of optimization extended active inference.
According to Hempel, all scientific explanations and predictions which are produced exclusively with deterministic laws must be deductive, in the sense that the explanandum or the prediction must be a logical consequence of the laws and the initial conditions in the explanans. This deducibility thesis (DT) has been attacked from several quarters. Some time ago Canfield and Lehrer presented a “refutation” of DT as applied to predictions, in which they tried to prove that “if the deductive reconstruction [DT for predictions] were an adequate reconstruction, then scientific prediction would be impossible”. Their argument seems to have been uncontested except for an inconclusive rejoinder by Beard. Moreover, Stegmüller has recently argued that “it may turn out that all or at least most of the so-called deductive-nomological explanations are in truth inductive and not deductive arguments, in view of the difficulty which has been pointed out by Canfield and Lehrer”. It seems it would be worth investigating whether Canfield and Lehrer's argument is, indeed, correct.
The search for the neural correlates of consciousness is in need of a systematic, principled foundation that can endow putative neural correlates with greater predictive and explanatory value. Here, we propose the predictive processing framework for brain function as a promising candidate for providing this systematic foundation. The proposal is motivated by that framework’s ability to address three general challenges to identifying the neural correlates of consciousness, and to satisfy two constraints common to many theories of consciousness. Implementing the search for neural correlates of consciousness through the lens of predictive processing delivers strong potential for predictive and explanatory value through detailed, systematic mappings between neural substrates and phenomenological structure. We conclude that the predictive processing framework, precisely because it is not, at the outset, itself a theory of consciousness, has significant potential for advancing the neuroscience of consciousness.