A new theory is taking hold in neuroscience. It is the theory that the brain is essentially a hypothesis-testing mechanism, one that attempts to minimise the error of its predictions about the sensory input it receives from the world. It is an attractive theory because powerful theoretical arguments support it, and yet it is at heart stunningly simple. Jakob Hohwy explains and explores this theory from the perspective of cognitive science and philosophy. The key argument throughout The Predictive Mind is that the mechanism explains the rich, deep, and multifaceted character of our conscious perception. It also gives a unified account of how perception is sculpted by attention, and how it depends on action. The mind is revealed as having a fragile and indirect relation to the world. Though we are deeply in tune with the world we are also strangely distanced from it.
An exciting theory in neuroscience is that the brain is an organ for prediction error minimization (PEM). This theory is rapidly gaining influence and is set to dominate the science of mind and brain in the years to come. PEM has extreme explanatory ambition, and profound philosophical implications. Here, I assume the theory, briefly explain it, and then I argue that PEM implies that the brain is essentially self-evidencing. This means it is imperative to identify an evidentiary boundary between the brain and its environment. This boundary defines the mind-world relation, opens the door to skepticism, and makes the mind transpire as more inferentially secluded and neurocentrically skull-bound than many would nowadays think. Therefore, PEM somewhat deflates contemporary hypotheses that cognition is extended, embodied and enactive; however, it can nevertheless accommodate the kinds of cases that fuel these hypotheses.
The Predictive Mind by Jakob Hohwy is the first monograph to address the philosophical significance of what Hohwy calls the prediction error minimization framework. The central claim of the book is that, on a conceptual level, perception, action, and cognition can be understood by reference to a single principle: prediction error minimization. The corresponding empirical hypothesis is that the brain implements a hierarchical generative model that generates predictions about sensory inputs and their hidden causes. When sensory signals arrive, only their divergence from the predictions has to be further processed. The general strategy of using predictions derived from generative models to compress and transmit information is also known as predictive coding. Perception is thus not construed as a purely bottom–up process.
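The compression strategy mentioned above can be sketched in a few lines. This is a toy illustration with invented numbers, not an example from the book: the "generative model" here is simply the prediction that each sample equals the previous one, so only the residuals (the divergence from prediction) need to be transmitted.

```python
import numpy as np

# Toy predictive coder: predict each sample as the previous one and
# transmit only the prediction errors (residuals).
signal = np.array([10.0, 10.5, 11.0, 11.2, 11.1, 11.3])

predictions = np.concatenate(([0.0], signal[:-1]))  # "same as last sample"
prediction_errors = signal - predictions            # only this is passed on

# The receiver reconstructs the signal from the errors alone.
reconstructed = np.cumsum(prediction_errors)

assert np.allclose(reconstructed, signal)
# The residuals after the first sample are small, so they are cheap to encode.
```

Because the residuals carry less variance than the raw signal, the same information can be encoded more compactly, which is the point the abstract gestures at.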
The notion that the brain is a prediction error minimizer entails, via the notion of Markov blankets and self-evidencing, a form of global scepticism — an inability to rule out evil demon scenarios. This type of scepticism is viewed by some as a sign of a fatally flawed conception of mind and cognition. Here I discuss whether this scepticism is ameliorated by acknowledging the role of action in the most ambitious approach to prediction error minimization, namely under the free energy principle. I argue that the scepticism remains but that the role of action in the free energy principle constrains the demon’s work. This yields new insights about the free energy principle, epistemology, and the place of mind in nature.
The article gives an account of the life and work of Jakob von Uexküll, together with a description of his impact on theoretical biology, behavioural studies, and semiotics. It includes the complete bibliography of Uexküll's published works, as well as an extensive list of publications about him.
It appears that consciousness science is progressing soundly, in particular in its search for the neural correlates of consciousness. There are two main approaches to this search, one is content-based (focusing on the contrast between conscious perception of, e.g., faces vs. houses), the other is state-based (focusing on overall conscious states, e.g., the contrast between dreamless sleep vs. the awake state). Methodological and conceptual considerations of a number of concrete studies show that both approaches are problematic: the content-based approach seems to set aside crucial aspects of consciousness; and the state-based approach seems over-inclusive in a way that is hard to rectify without losing sight of the crucial conscious-unconscious contrast. Consequently, the search for the neural correlates of consciousness is in need of new experimental paradigms.
The phenomenology of agency and perception is probably underpinned by a common cognitive system based on generative models and predictive coding. I defend the hypothesis that this cognitive system explains core aspects of the sense of having a self in agency and perception. In particular, this cognitive model explains the phenomenological notion of a minimal self as well as a notion of the narrative self. The proposal is related to some influential studies of overall brain function, and to psychopathology. These elusive notions of the self are shown to be the natural upshots of general cognitive mechanisms whose fundamental purpose is to enable agents to represent the world and act in it.
This study examines determinants of retail chains’ corporate social responsibility communication on their web pages. The theoretical foundation for the study is signaling theory, which suggests that firms will communicate about their CSR efforts when this is profitable for them and when such communication makes it possible for outsiders to distinguish good from bad performers. Based on this theory, I develop hypotheses about retail chains’ CSR signaling. The hypotheses are tested in a sample of 208 retail chains in the Norwegian market. As hypothesized, I find that foreign chains, chains using private brands, and vertically integrated chains are more likely to signal, but I find no relationship between pricing and signaling. In further analysis using chains’ CSR memberships and certifications as the measure of signals, only the relationship between organizational form and signaling is replicated. In total, the findings give partial support to signaling theory.
Different cognitive functions recruit a number of different, often overlapping, areas of the brain. Theories in cognitive and computational neuroscience are beginning to take this kind of functional integration into account. The contributions to this special issue consider what functional integration tells us about various aspects of the mind such as perception, language, volition, agency, and reward. Here, I consider how and why functional integration may matter for the mind; I discuss a general theoretical framework, based on generative models, that may unify many of the debates surrounding functional integration and the mind; and I briefly introduce each of the contributions.
Shaun Gallagher calls for a radical rethinking of the concept of nature and he resists reduction of phenomenology to computational-neural science. However, classic, reductionist science, at least in contemporary computational guise, has the resources to accommodate insights from transcendental phenomenology. Reductionism should be embraced, not feared.
We use the hierarchical nature of Bayesian perceptual inference to explain a fundamental aspect of the temporality of experience, namely the phenomenology of temporal flow. The explanation says that the sense of temporal flow in conscious perception stems from probabilistic inference that the present cannot be trusted. The account begins by describing hierarchical inference under the notion of prediction error minimization, and exemplifies distrust of the present within bistable visual perception and action initiation. Distrust of the present is then discussed in relation to previous research on temporal phenomenology. Finally, we discuss how there may be individual differences in the experience of temporal flow, in particular along the autism spectrum. The resulting view is that the sense of temporal flow in conscious perception results from an internal, inferential process.
In rubber hand illusions and full body illusions, touch sensations are projected to non-body objects such as rubber hands, dolls or virtual bodies. The robustness, limits and further perceptual consequences of such illusions are not yet fully explored or understood. A number of experiments are reported that test the limits of a variant of the rubber hand illusion. Methodology/Principal Findings: A variant of the rubber hand illusion is explored, in which the real and foreign hands are aligned in personal space. The presence of the illusion is ascertained with participants' scores and temperature changes of the real arm. This generates a basic illusion of touch projected to a foreign arm. Participants are presented with further, unusual visuotactile stimuli subsequent to onset of the basic illusion. Such further visuotactile stimulation is found to generate very unusual experiences of supernatural touch and touch on a non-hand object. The finding of touch on a non-hand object conflicts with prior findings, and to resolve this conflict a further hypothesis is successfully tested: that without prior onset of the basic illusion this unusual experience does not occur. Conclusions/Significance: A rubber hand illusion is found that can arise when the real and the foreign arm are aligned in personal space. This illusion persists through periods of no tactile stimulation and is strong enough to allow very unusual experiences of touch felt on a cardboard box and experiences of touch produced at a distance, as if by supernatural causation. These findings suggest that one's visual body image is explained away during experience of the illusion and they may be of further importance to understanding the role of experience in delusion formation. The findings of touch on non-hand objects may help reconcile conflicting results in this area of research. In addition, new evidence is provided that relates to the recently discovered psychologically induced temperature changes that occur during the illusion.
Bortolotti’s Delusions and Other Irrational Beliefs defends the view that delusions are beliefs on a continuum with other beliefs. A different view is that delusions are more like illusions, that is, they arise from faulty perception. This view, which is not targeted by the book, makes it easier to explain why delusions are so alien and disabling but needs to appeal to forensic aspects of functioning.
There is surprising evidence that introspection of our phenomenal states varies greatly between individuals and within the same individual over time. This puts pressure on the notion that introspection gives reliable access to our own phenomenology: introspective unreliability would explain the variability, while assuming that the underlying phenomenology is stable. I appeal to a body of neurocomputational, Bayesian theory and neuroimaging findings to provide an alternative explanation of the evidence: though some limited testing conditions can cause introspection to be unreliable, mostly it is our phenomenology itself that is variable. With this account of phenomenal variability, the occurrence of the surprising evidence can be explained while generally retaining introspective reliability.
Most consciousness researchers, almost no matter what their views of the metaphysics of consciousness, can agree that the first step in a science of consciousness is the search for the neural correlate of consciousness (the NCC). The reason for this agreement is that the notion of ‘correlation’ doesn’t by itself commit one to any particular metaphysical view about the relation between (neural) matter and consciousness. For example, some might treat the correlates as causally related, while others might view the correlation as evidence for identity between conscious states and brain states. The common ground therefore seems to be that the scientific search for the NCC is largely independent of the metaphysics of consciousness.
It is common in moral philosophy to test the validity of moral principles by proposing counter-examples in the form of cases where the application of the principle does not give the conclusion we intuitively find valid. These cases are often imaginary and sometimes rather ‘outlandish’, involving ray guns, non-existent creatures, etc. I discuss whether we can test moral principles with the help of outlandish cases, or if only realistic cases are admissible. I consider two types of argument against outlandish cases: 1) Since moral principles are meant for guiding action in this world, cases drawn from other worlds are irrelevant. 2) We lack the capacity to apply our intuitive moral competence to outlandish cases. I argue that while the first approach is importantly flawed, the second approach is plausible, not because our moral competence per se is limited to cases from this world, but because we lack the capacity to imagine outlandish cases, and we cannot apply our moral competence to a case we fail to imagine properly.
Three challenges to a unified understanding of delusions emerge from Radden's On Delusion (2011). Here, I propose that in order to respond to these challenges, and to work towards a unifying framework for delusions, we should see delusions as arising in inference under uncertainty. This proposal is based on the observation that delusions in key respects are surprisingly like perceptual illusions, and it is developed further by focusing particularly on individual differences in uncertainty expectations.
What, if anything, is problematic about gentrification? This article addresses this question from the perspective of normative political theory. We argue that gentrification is problematic insofar as it involves a violation of city-dwellers’ occupancy rights. We distinguish these rights from other forms of territorial rights and discuss the different implications of the argument for urban governance. If we agree on the ultimate importance of being able to pursue one’s located life plans, the argument goes, we must also agree on limiting the impact of gentrification on peoples’ lives. Limiting gentrification’s impact, however, does not entail halting processes of gentrification once and for all.
According to one theory, the brain is a sophisticated hypothesis tester: perception is Bayesian unconscious inference where the brain actively uses predictions to test, and then refine, models about what the causes of its sensory input might be. The brain’s task is simply continually to minimise prediction error. This theory, which is getting increasingly popular, holds great explanatory promise for a number of central areas of research at the intersection of philosophy and cognitive neuroscience. I show how the theory can help us understand striking phenomena at three cognitive levels: vision, sensory integration, and belief. First, I illustrate central aspects of the theory by showing how it provides a nice explanation of why binocular rivalry occurs. Then I suggest how the theory may explain the role of the unified sense of self in rubber hand and full body illusions driven by visuotactile conflict. Finally, I show how it provides an approach to delusion formation that is consistent with one-deficit accounts of monothematic delusions.
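The idea that perception minimises prediction error can be made concrete in the simplest Gaussian case. The sketch below uses invented numbers and is not drawn from the paper: it shows that gradient descent on precision-weighted squared prediction errors converges to the exact Bayesian posterior mean, so "minimising prediction error" and "Bayesian inference" coincide here.

```python
# Toy model: a hidden cause with a Gaussian prior, observed through
# Gaussian sensory noise. The hypothesis mu is refined by descending
# the sum of precision-weighted squared prediction errors.
prior_mean, prior_precision = 0.0, 1.0     # what the brain expects
observation, sensory_precision = 2.0, 4.0  # what the senses report

mu = prior_mean  # current hypothesis about the hidden cause
for _ in range(1000):
    sensory_error = observation - mu       # bottom-up prediction error
    prior_error = prior_mean - mu          # top-down prediction error
    # Gradient step on F = 0.5*(sens_prec*sens_err**2 + prior_prec*prior_err**2)
    mu += 0.01 * (sensory_precision * sensory_error + prior_precision * prior_error)

# Exact Bayesian posterior mean for comparison (precision-weighted average).
analytic_posterior = (prior_precision * prior_mean
                      + sensory_precision * observation) / (prior_precision
                                                            + sensory_precision)
assert abs(mu - analytic_posterior) < 1e-6  # both are 1.6 here
```

Note how the precise sensory signal (precision 4) pulls the estimate much further from the prior than a noisy one would, which is the precision-weighting at work in such accounts.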
Some monothematic types of delusions may arise because subjects have unusual experiences. The role of this experiential component in the pathogenesis of delusion is still not understood. Focussing on delusions of alien control, we outline a model for reality testing competence on unusual experiences. We propose that nascent delusions arise when there are local failures of reality testing performance, and that monothematic delusions arise as normal responses to these. In the course of this we address questions concerning the tenacity with which delusions are maintained, their often bizarre content, the patients' inability to dismiss them, and their often circumscribed character.
How can we determine the adequacy of a probabilistic coherence measure? A widely accepted approach to this question, besides formulating adequacy constraints, is to employ paradigmatic test cases consisting of a scenario providing a joint probability distribution over some specified set of propositions coupled with a normative coherence assessment for this set. However, despite the popularity of the test case approach, a systematic evaluation of the proposed test cases is still missing. This paper’s aim is to change this. Using a custom-written computer program for the necessary probabilistic calculations, a large number of coherence measures is examined in an extensive collection of test cases. The result is a detailed overview of the test case performance of the probabilistic coherence measures proposed so far. It turns out that it is not the popular coherence measures, such as Shogenji’s, Glass’ and Olsson’s, Fitelson’s, or Douven and Meijs’, but two rather unnoticed measures that perform best. This, however, does not mean that the other measures can be rejected straightforwardly. Instead, the results presented here are to be understood as a contribution, among others, to the project of finding adequate probabilistic coherence measures.
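Two of the measures named above have simple closed forms, under their standard definitions: Shogenji's ratio measure P(A∧B)/(P(A)·P(B)) and the Glass/Olsson relative-overlap measure P(A∧B)/P(A∨B). The joint distribution below is an invented toy example, not one of the paper's test cases.

```python
# Toy joint distribution over two propositions A and B.
p_a, p_b, p_ab = 0.5, 0.5, 0.4  # P(A), P(B), P(A and B)

# Shogenji's measure: ratio of joint probability to what independence
# would predict; values above 1 indicate mutual support.
shogenji = p_ab / (p_a * p_b)            # = 1.6 here

# Glass/Olsson relative overlap: proportion of the "union" probability
# mass that lies in the intersection; ranges over [0, 1].
overlap = p_ab / (p_a + p_b - p_ab)      # = 2/3 here

assert shogenji > 1   # A and B support each other
assert 0 <= overlap <= 1
```

That the two measures can rank the same scenario differently is exactly why test cases with normative coherence assessments are needed to adjudicate between them.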
Cognitive neuroscience aspires to explain how the brain produces conscious states. Many people think this aspiration is threatened by the subjective nature of introspective reports, as well as by certain philosophical arguments. We propose that good neuroscientific explanations of conscious states can consolidate an interpretation of introspective reports, in spite of their subjective nature. This is because the relative quality of explanations can be evaluated on independent, methodological grounds. To illustrate, we review studies that suggest that aspects of the feeling of being in control of one's bodily movement can be explained in terms of the complex and surprising way the brain predicts movement. This is a modest type of functional, contrastive explanation. Though we do not refute the threatening philosophical arguments, we show that they do not apply to this type of explanation.
Any position that promises genuine progress on the mind-body problem deserves attention. Recently, Daniel Stoljar has identified a physicalist version of Russell's notion of neutral monism; he elegantly argues that with this type of physicalism it is possible to disambiguate the notion of physicalism in such a way that the problem is resolved. The further issue then arises of whether we have reason to believe that this type of physicalism is in fact true. Ultimately, one needs to argue for this position by inference to the best explanation, and I show that this new type of physicalism does not hold promise of more explanatory prowess than its relevant rivals, and that, whether it is better than its rivals or not, it is doubtful whether it would furnish us with genuine explanations of the phenomenal at all.
Recently, Julian Savulescu and Guy Kahane have defended the Principle of Procreative Beneficence (PB), according to which prospective parents ought to select children with the view that their future child has ‘the best chance of the best life’. I argue that the arguments Savulescu and Kahane adduce in favour of PB equally well support what I call the Principle of General Procreative Beneficence (GPB). GPB states that couples ought to select children in view of maximizing the overall expected value in the world, not just the welfare of their future child. I further argue that Savulescu and Kahane's claim that PB has significantly more weight than competing moral principles, such as GPB, lacks justification. A possible argument for PB having significant weight builds on a principle of parental partiality towards one's own children. But this principle does not support PB; it supports a Principle of Sibling-Oriented Procreative Beneficence (SPB), according to which parents selecting a child should maximize the benefit of all their children. Indeed, PB itself will in some cases be self-effacing in favour of SPB.
The paper contrasts Kant's conception of original common possession of the earth with Hugo Grotius's superficially similar notion. The aim is not only to elucidate how much Kant departs from his natural law predecessors; given that Grotius's needs-based framework very much lines up with contemporary theorists' tendency to reduce issues of global concern to questions of how to divide the world up, it also seeks to advocate Kant's global thinking as an alternative for current debates. Crucially, it is Kant's radical shift in perspective, from an Archimedean 'view from nowhere' to a first-personal standpoint through which agents reflexively recognise their systematic interdependence in a world of limited space, that provides him with the more thorough and ultimately convincing global standpoint. This standpoint does not come with ready-made solutions to shared global problems, but provides a promising perspective from which to theorise them.
Coherence is the property of propositions hanging or fitting together. Intuitively, adding a proposition to a set of propositions should be compatible with either increasing or decreasing the set’s degree of coherence. In this paper we show that probabilistic coherence measures based on relative overlap are in conflict with this intuitive verdict. More precisely, we prove that according to the naive overlap measure it is impossible to increase a set’s degree of coherence by adding propositions and that according to the refined overlap measure no set’s degree of coherence exceeds the degree of coherence of its maximally coherent subset. We also show that this result carries over to all other subset-sensitive refinements of the naive overlap measure. As both results stand in sharp contrast to elementary coherence intuitions, we conclude that extant relative overlap measures of coherence are inadequate.
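The first result is easy to see once propositions are modelled as sets of possible worlds, with the naive overlap measure defined as P(intersection)/P(union). The worlds and probabilities below are an invented toy model, used only to illustrate why adding a proposition can never increase this measure: the intersection can only shrink and the union can only grow.

```python
# Toy possibility space: world -> probability (sums to 1).
worlds = {1: 0.2, 2: 0.3, 3: 0.3, 4: 0.2}

def naive_overlap(propositions):
    """Naive overlap coherence: P(intersection) / P(union)."""
    inter = set.intersection(*propositions)
    union = set.union(*propositions)
    return sum(worlds[w] for w in inter) / sum(worlds[w] for w in union)

a = {1, 2}  # proposition A, as the set of worlds where it is true
b = {2, 3}
c = {2, 4}

before = naive_overlap([a, b])     # P({2}) / P({1,2,3}) = 0.3 / 0.8
after = naive_overlap([a, b, c])   # P({2}) / P({1,2,3,4}) = 0.3 / 1.0

# Adding c monotonically lowers (or at best preserves) the measure.
assert after <= before
```

This monotonicity is exactly what the paper proves in general, and why it clashes with the intuition that added propositions can sometimes raise a set's coherence.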
This chapter explores the idea that the need to establish common knowledge is one feature that makes social cognition stand apart in important ways from cognition in general. We develop this idea on the background of the claim that social cognition is nothing but a type of causal inference. We focus on autism as our test-case, and propose that a specific type of problem with common knowledge processing is implicated in challenges to social cognition in autism spectrum disorder (ASD). This problem has to do with the individual’s assessment of the reliability of messages that are passed between people as common knowledge emerges. The proposal is developed on the background of our own empirical studies and outlines different ways common knowledge might be compromised. We discuss what these issues may tell us about ASD, about the relation between social and non-social cognition, about social objects, and about the dynamics of social networks.
Based on an empirical study of the British think tank Demos, the article deliberates on the nature of current political ideas. The key argument is that such a deliberation must take into account not only ideas of production but also ideas of mediation. The article argues that the ability to disseminate, brand, and market political ideas in the public sphere through the mass media is a crucial part of the activities of modern idea producers such as think tanks. Ideas are normally conceptualized as statements. As an analytical tool, the article makes a distinction between the two components of a statement. The two components are utterance (or impartation) and proposition (or semantic unit more generally). By so doing, it is possible to focus on (1) the necessity for think tanks to be an 'impartational node' in communicative networks and (2) the importance of attributing certain meanings and values to the political ideas (to brand ideas). From this theoretical outset, the article then describes the logic or nature of the mediation, marketing and branding of political ideas.
Over the years several non-equivalent probabilistic measures of coherence have been discussed in the philosophical literature. In this paper we examine these measures with respect to their empirical adequacy. Using test cases from the coherence literature as vignettes for psychological experiments we investigate whether the measures can predict the subjective coherence assessments of the participants. It turns out that the participants’ coherence assessments are best described by Roche’s coherence measure based on Douven and Meijs’ average mutual support approach and the conditional probability.
The period from Plato's birth to Aristotle's death (427-322 BC) is one of the most influential and formative in the history of Western philosophy. The developments of logic, metaphysics, epistemology, ethics and science in this period have been investigated, controversies have arisen and many new theories have been produced. But this is the first book to give detailed scholarly attention to the development of dialectic during this decisive period. It includes chapters on topics such as: dialectic as interpersonal debate between a questioner and a respondent; dialectic and the dialogue form; dialectical methodology; the dialectical context of certain forms of arguments; the role of the respondent in guaranteeing good argument; dialectic and presentation of knowledge; the interrelations between written dialogues and spoken dialectic; and definition, induction and refutation from Plato to Aristotle. The book contributes to the history of philosophy and also to the contemporary debate about what philosophy is.
In this paper, we consider how certain longstanding philosophical questions about mental representation may be answered on the assumption that cognitive and perceptual systems implement hierarchical generative models, such as those discussed within the prediction error minimization framework. We build on existing treatments of representation via structural resemblance, such as those in Gładziejewski (2016) and Gładziejewski and Miłkowski, to argue for a representationalist interpretation of the PEM framework. We further motivate the proposed approach to content by arguing that it is consistent with approaches implicit in theories of unsupervised learning in neural networks. In the course of this discussion, we argue that the structural representation proposal, properly understood, has more in common with functional-role than with causal/informational or teleosemantic theories. In the remainder of the paper, we describe the PEM framework for approximate Bayesian inference in some detail, and discuss how structural representations might arise within the proposed Bayesian hierarchies. After explicating the notion of variational inference, we define a subjectively accessible measure of misrepresentation for hierarchical Bayesian networks by appeal to the Kullback–Leibler divergence between posterior generative and approximate recognition densities, and discuss a related measure of objective misrepresentation in terms of correspondence with the facts.
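The divergence the abstract appeals to has a standard definition for discrete distributions: KL(q‖p) = Σᵢ qᵢ·log(qᵢ/pᵢ). The sketch below uses two invented distributions as stand-ins for an approximate recognition density q and a posterior generative density p; it is an illustration of the measure, not of the paper's hierarchical networks.

```python
import math

def kl_divergence(q, p):
    """Kullback-Leibler divergence KL(q || p) for discrete distributions."""
    return sum(qi * math.log(qi / pi) for qi, pi in zip(q, p) if qi > 0)

q = [0.7, 0.2, 0.1]  # stand-in approximate recognition density
p = [0.5, 0.3, 0.2]  # stand-in posterior generative density

d = kl_divergence(q, p)
assert d >= 0                    # KL is always non-negative ...
assert kl_divergence(q, q) == 0  # ... and zero when the densities match
```

The point of using KL here is that it vanishes exactly when the recognition density matches the posterior, so its magnitude can serve as a graded, subjectively accessible measure of misrepresentation.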