Predictive processing (PP) approaches to the mind are increasingly popular in the cognitive sciences. This surge of interest is accompanied by a proliferation of philosophical arguments, which seek to either extend or oppose various aspects of the emerging framework. In particular, the question of how to position predictive processing with respect to enactive and embodied cognition has become a topic of intense debate. While these arguments certainly have scientific and philosophical merit, they risk underestimating the variety of approaches gathered under the predictive label. Here, we first present a basic review of neuroscientific, cognitive, and philosophical approaches to PP, to illustrate how these range from solidly cognitivist applications—with a firm commitment to modular, internalistic mental representation—to more moderate views emphasizing the importance of ‘body-representations’, and finally to those which fit comfortably with radically enactive, embodied, and dynamic theories of mind. Any nascent predictive processing theory must take into account this continuum of views, and associated theoretical commitments. As a final point, we illustrate how the Free Energy Principle (FEP) attempts to dissolve tension between internalist and externalist accounts of cognition, by providing a formal synthetic account of how internal ‘representations’ arise from autopoietic self-organization. The FEP thus furnishes empirically productive process theories by which to guide discovery through the formal modelling of the embodied mind.
If one formulates Helmholtz's ideas about perception in terms of modern-day theories, one arrives at a model of perceptual inference and learning that can explain a remarkable range of neurobiological facts. Using constructs from statistical physics, it can be shown that the problems of inferring the causes of our sensory inputs and learning causal regularities in the sensorium can be resolved using exactly the same principles. Furthermore, inference and learning can proceed in a biologically plausible fashion. The ensuing scheme rests on Empirical Bayes and hierarchical models of how sensory information is generated. The use of hierarchical models enables the brain to construct prior expectations in a dynamic and context-sensitive fashion. This scheme provides a principled way to understand many aspects of the brain's organisation and responses. In this paper, we suggest that these perceptual processes are just one emergent property of systems that conform to a free-energy principle. The free-energy considered here represents a bound on the surprise inherent in any exchange with the environment, under expectations encoded by the system's state or configuration. A system can minimise free-energy by changing its configuration, either to change the way it samples the environment or to change its expectations. These changes correspond to action and perception, respectively, and lead to an adaptive exchange with the environment that is characteristic of biological systems. This treatment implies that the system's state and structure encode an implicit and probabilistic model of the environment. We will look at models entailed by the brain and how minimisation of free-energy can explain its dynamics and structure.
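To make the notion of a bound concrete, the variational free energy can be written in a standard form (a schematic rendering in our notation, not the paper's own): with sensory input s, model m, and a recognition density q(θ) over causes θ,

\[
F \;=\; \underbrace{-\ln p(s \mid m)}_{\text{surprise}} \;+\; \underbrace{D_{\mathrm{KL}}\!\big[\,q(\theta)\,\|\,p(\theta \mid s, m)\,\big]}_{\;\geq\; 0} \;\;\geq\;\; -\ln p(s \mid m).
\]

Because the Kullback-Leibler term is non-negative, free energy upper-bounds surprise: perception tightens the bound by optimising q(θ) (expectations), while action reduces surprise itself by changing how s is sampled.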
We present a multiscale integrationist interpretation of the boundaries of cognitive systems, using the Markov blanket formalism of the variational free energy principle. This interpretation is intended as a corrective for the philosophical debate over internalist and externalist interpretations of cognitive boundaries; we stake out a compromise position. We first survey key principles of new radical views of cognition. We then describe an internalist interpretation premised on the Markov blanket formalism. Having reviewed these accounts, we develop our positive multiscale account. We argue that the statistical seclusion of internal from external states of the system—entailed by the existence of a Markov blanket—can coexist happily with the multiscale integration of the system through its dynamics. Our approach does not privilege any given boundary, nor does it argue that all boundaries are equally salient. We argue that the relevant boundaries of cognition depend on the level being characterised and the explanatory interests that guide investigation. We approach the issue of how and where to draw the boundaries of cognitive systems through a multiscale ontology of cognitive systems, which offers a multidisciplinary research heuristic for cognitive science.
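The statistical seclusion invoked here has a compact formal expression (sketched in our notation rather than the authors' own). Writing μ for internal states, η for external states, and b for the blanket states (sensory and active states), a Markov blanket entails the conditional independence

\[
p(\mu, \eta \mid b) \;=\; p(\mu \mid b)\, p(\eta \mid b).
\]

Nothing in this factorisation privileges a single spatial boundary: blankets can be nested within blankets across scales, which is what allows statistical seclusion at one level to coexist with dynamical integration across levels.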
This paper considers the Cartesian theatre as a metaphor for the virtual reality models that the brain uses to make inferences about the world. This treatment derives from our attempts to understand dreaming and waking consciousness in terms of free energy minimization. The idea here is that the Cartesian theatre is not observed by an internal audience but furnishes a theatre in which fictive narratives and fantasies can be rehearsed and tested against sensory evidence. We suppose the brain is driven by the imperative to infer the causes of its sensory samples, in much the same way that scientists are compelled to test hypotheses about experimental data. This recapitulates Helmholtz's notion of unconscious inference and Gregory's treatment of perception as hypothesis testing. However, we take this further and consider the active sampling of the world as the gathering of confirmatory evidence for hypotheses based on our virtual reality. The ensuing picture of consciousness resolves a number of seemingly hard problems in consciousness research and is internally consistent with current thinking in systems neuroscience and theoretical neurobiology. In this formalism, there is a dualism that distinguishes between the process of inference and the process that entails inference. This separation is reflected by the distinction between beliefs and the physical brain states that encode them. This formal approach allows us to appeal to simple but fundamental theorems in information theory and statistical thermodynamics that dissolve some of the mysterious aspects of consciousness.
The processes underwriting the acquisition of culture remain unclear. How are shared habits, norms, and expectations learned and maintained with precision and reliability across large-scale sociocultural ensembles? Is there a unifying account of the mechanisms involved in the acquisition of culture? Notions such as “shared expectations,” the “selective patterning of attention and behaviour,” “cultural evolution,” “cultural inheritance,” and “implicit learning” are the main candidates to underpin a unifying account of cognition and the acquisition of culture; however, their interactions require greater specification and clarification. In this article, we integrate these candidates using the variational approach to human cognition and culture in theoretical neuroscience. We describe the construction by humans of social niches that afford epistemic resources called cultural affordances. We argue that human agents learn the shared habits, norms, and expectations of their culture through immersive participation in patterned cultural practices that selectively pattern attention and behaviour. We call this process “thinking through other minds” – in effect, the process of inferring other agents’ expectations about the world and how to behave in social contexts. We argue that, for humans, information from and about other people's expectations constitutes the primary domain of statistical regularities that they leverage to predict and organize behaviour. The integrative model we offer has implications that can advance theories of cognition, enculturation, adaptation, and psychopathology. Crucially, this formal treatment seeks to resolve key debates in current cognitive science, such as the distinction between internalist and externalist accounts of theory of mind abilities and the more fundamental distinction between dynamical and representational accounts of enactivism.
Viewing the brain as an organ of approximate Bayesian inference can help us understand how it represents the self. We suggest that inferred representations of the self have a normative function: to predict and optimise the likely outcomes of social interactions. Technically, we cast this predict-and-optimise as maximising the chance of favourable outcomes through active inference. Here the utility of outcomes can be conceptualised as prior beliefs about final states. Actions based on interpersonal representations can therefore be understood as minimising surprise – under the prior belief that one will end up in states with high utility. Interpersonal representations thus serve to render interactions more predictable, while the affective valence of interpersonal inference renders self-perception evaluative. Distortions of self-representation contribute to major psychiatric disorders such as depression, personality disorder and paranoia. The approach we review may therefore operationalise the study of interpersonal representations in pathological states.
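The identification of utility with prior beliefs about final states admits a schematic formal reading (a simplified sketch in our notation, not the authors' own): let p(o | C) be a prior preference distribution over outcomes o and q(o | π) the outcomes expected under a policy π; then selecting

\[
\pi^{*} \;=\; \arg\min_{\pi} \; \mathbb{E}_{q(o \mid \pi)}\big[-\ln p(o \mid C)\big]
\]

makes the actions that render high-utility (preferred) outcomes likely exactly the actions that minimise expected surprise, so that maximising utility and minimising surprise under prior preferences coincide.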
The target article “Thinking Through Other Minds” offered an account of the distinctively human capacity to acquire cultural knowledge, norms, and practices. To this end, we leveraged recent ideas from theoretical neurobiology to understand the human mind in social and cultural contexts. Our aim was both synthetic – building an integrative model adequate to account for key features of cultural learning and adaptation – and prescriptive – showing how the tools developed to explain brain dynamics can be applied to the emergence of social and cultural ecologies of mind. In this reply to commentators, we address key issues, including: refining the concept of culture to show how TTOM and the free-energy principle can capture essential elements of human adaptation and functioning; addressing cognition as an embodied, enactive, affective process involving cultural affordances; clarifying the significance of the FEP formalism related to entropy minimization, Bayesian inference, Markov blankets, and enactivist views; developing empirical tests and applications of the TTOM model; incorporating cultural diversity and context at the level of intra-cultural variation, individual differences, and the transition to digital niches; and considering some implications for psychiatry. The commentators’ critiques and suggestions point to useful refinements and applications of the model. In ongoing collaborations, we are exploring how to augment the theory with affective valence, take into account individual differences and historicity, and apply the model to specific domains, including epistemic bias.
This commentary takes a closer look at how “constructive models of subjective perception,” referred to by Collerton et al. (sect. 2), might contribute to the Perception and Attention Deficit (PAD) model. It focuses on the neuronal mechanisms that could mediate hallucinations, or false inference – in particular, the role of cholinergic systems in encoding uncertainty in the context of hierarchical Bayesian models of perceptual inference (Friston 2002b; Yu & Dayan 2002).
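One way to unpack “encoding uncertainty” is in terms of precision-weighted prediction errors in hierarchical predictive coding (a schematic formulation in our notation, not the commentary's own): if sensory input s is predicted by g(μ) from current estimates μ, then

\[
\varepsilon \;=\; \Pi\,\big(s - g(\mu)\big), \qquad \dot{\mu} \;\propto\; \Big(\tfrac{\partial g}{\partial \mu}\Big)^{\!\top} \varepsilon,
\]

where Π is the precision (inverse variance) assigned to the sensory evidence. On this reading, cholinergic neuromodulation is taken to set Π, and misestimated precision (for instance, sensory evidence down-weighted relative to top-down predictions) can yield exactly the sort of false inference associated with hallucinations.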
Over the last 30 years, proponents of representationalist and dynamicist positions in the philosophy of cognitive science have argued over whether neurocognitive processes should be viewed as representational or not. Major scientific and technological developments over the years have furnished both parties with ever more sophisticated conceptual weaponry. In recent years, an enactive generalization of predictive processing – known as active inference – has been proposed as a unifying theory of brain functions. Since then, active inference has fueled both representationalist and dynamicist campaigns. However, we believe that when one dives into the formal details of active inference, one should be able to find a solution to the war; if not a peace treaty, surely an armistice of sorts. Based on an analysis of these formal details, this paper shows how both representationalist and dynamicist sensibilities can peacefully coexist within the new territory of active inference.
This commentary considers how far one can go in making inferences about functional modularity or segregation in the underlying neuronal infrastructure, on the basis of the sorts of analyses used by Caplan & Waters. Specifically, an attempt is made to relate the “functionalist” approach adopted in the target article to “neuroreductionist” perspectives on the same issue.