When the brain engages a task, motivating electrical energy is generated at a source and this motivation is defined by free will. If the task becomes too complex, we are at risk of overloading the supply. In this situation, endorphins intervene to sedate the source, effectively ending the task. When endorphins don’t break the connection to the source in a timely fashion, we experience a seizure, which again effectively ends the endeavour. Whichever the solution, the involvement of endorphins ensures that we forget that these events took place, over and over again.
This model is intended to provide a description of the organization of physical processes in a biological organism that could produce a rudimentary function of cognition. The model is based on the formulation of relational biology developed by Rosen. The model is used to explore the underlying logic and foundational principles of Peircean biosemiotics. The relational approach provides insight into the synchronization of developmental processes. Specifically, the synchronization of the functions of pattern recognition and categorical attribution results in an organic learning system that has properties similar to learning systems from the field of artificial intelligence (i.e. reservoir computing with generalized synchronization) and is stabilized by a hermeneutical circle of return.
What does it mean to hold a belief? Some of our ways of speaking in English suggest that to hold a belief is to have something in your mind: beliefs are things we acquire, defend, recover, and so on (Abelson, 1986). That is, believing is a matter of being in a state of having a thing. In this paper, I will argue for an alternative: believing is something we do. This is not a new suggestion. For instance, Matthew Boyle (2011) defends a theory of belief as an activity, which he traces back to Aristotle. This paper, however, makes two new contributions: first, I argue for an analogy between belief and planning that fleshes out what it would mean to understand belief as an activity, and second, I aim to show how the resulting view can help make sense of a variety of theories in cognitive psychology that suggest cognitive information storage is dynamic and reconstructive.
Comparative cognitive science often involves asking questions like ‘Do nonhumans have C?’ where C is a capacity we take humans to have. These questions frequently generate unproductive disagreements, in which one party affirms and the other denies that nonhumans have the relevant capacity on the basis of the same evidence. I argue that these questions can be productively understood as questions about natural kinds: do nonhuman capacities fall into the same natural kinds as our own? Understanding such questions in this way has several advantages: it preserves the intuition that these are substantive empirical questions worth asking; it helps us to understand why they so frequently give rise to disagreements of the kind described; and it provides clues about how to diagnose and resolve them.
The traditional approach to explanation in cognitive neuroscience is realist about psychological constructs, and treats them as explanatory. On the “standard framework,” cognitive neuroscientists explain behavior as the result of the instantiation of psychological functions in brain activity. This strategy is questioned by results suggesting the distribution of function in the brain, the multifunctionality of individual parts of the brain, and the overlap in neural realization of purportedly distinct psychological constructs. One response to this in the field has been to employ the tools of databasing and machine learning to attempt to find and quantify specific correlations between psychological kinds such as ‘memory’ or ‘attention’ (or sub-kinds thereof) and patterns of activity in the brain. I assess the status and prospects of these projects. I argue that current proponents of the project are vague about their aims, vis-à-vis the standard framework, sometimes suggesting substantiation of the framework, sometimes suggesting retaining the framework but revising the ontology of mental constructs, and sometimes suggesting abandonment of the framework. I argue that extant results from within the projects fail to substantiate the standard framework, and propose an alternative. On my view, psychological constructs should not be viewed as explanantia, but instead as heuristic concepts that help us uncover ways that behaviors can vary and the ways that the brain implements those distinctions. I then discuss the normative upshot of these views for databasing and brain mapping projects.
This paper offers a new argument in favour of experiential pluralism about visual experience – the view that the nature of successful visual experience is different from the nature of unsuccessful visual experience. The argument appeals to the role of experience in explaining possession of ordinary abilities. In addition, the paper makes a methodological point about philosophical debates concerning the nature of perceptual experience: whether a given view about the nature of experience amounts to an interesting and substantive thesis about our own minds depends on the significance of the psychological or mental kind claim made by it. This means that an adequate defence of a given view of the nature of experience must include articulation of the latter's significance qua psychological or mental kind. The argument advanced provides the material to meet this demand. In turn, this constitutes further support for the argument itself.
These ten lectures articulate a distinctive vision of the structure and workings of the human mind, drawing from research on embodied cognition as well as from historically more entrenched approaches to the study of human thought. On the author’s view, multifarious materials co-contribute to the production of virtually all forms of human behavior, rendering implausible the idea that human action is best explained by processes taking place in an autonomous mental arena – those in the conscious mind or occurring at the so-called personal level. Rather, human behavior issues from a widely varied, though nevertheless integrated, collection of states and mechanisms, the integrated nature of which is determined by a form of clustering in the components’ contributions to the production of intelligent behavior. This package of resources, the cognitive system, is the human self. Among its elements, the cognitive system includes a vast number of representations, many subsets of which share their content. On the author’s view, redundancy of content itself constitutes an important explanatory quantity; the greater the extent of content-redundancy among representations that co-contribute to the production of an instance of behavior, the more fluid the behavior. In the course of developing and applying these views, the author addresses questions about the content of mental representations, extended cognition, the value of knowledge, and group minds.
Muhammad Ali Khalidi contends that because cognitive science casts a wider net than neuroscience in searching for the causes of cognition, it is in the superior position to discover “real” cognitive kinds. I argue that while Khalidi identifies appropriate norms for individuating cognitive kinds, these norms ground his characterization of taxonomic practices in cognitive science, rather than the other way around. If we instead treat Khalidi's norms not as descriptively accurate characterizations of taxonomic practices in cognitive science, but as a set of best practices for kinding cognition, is cognitive science in and neuroscience definitively out of the cognitive kinding game?
This article argues that the notion known as “the mark of the cognitive” is better characterized as a process that performs the function of generating intelligent behavior in a flexible and adaptive way, capable of adapting to circumstances, given that it is a context-sensitive process. To that end, some relevant definitions of cognition are examined. In the end, it is pointed out that defining the mark of the cognitive as a context-sensitive process takes into account several factors that were added as constitutive parts of cognitive phenomena over the years, especially 4E Cognition, which cannot be satisfactorily accommodated alongside the preceding notions of cognition. Accordingly, this essay should not be mistaken for a mere exercise in intellectual history, but read as a brief and accessible attempt to advance the debate.
Foundational ontologies, central constructs in ontological investigations and engineering alike, are based on ontological categories. First proposed by Aristotle as the very ur-elements from which the whole of reality can be derived, they are not easy to identify, let alone partition and/or hierarchize; in particular, the question of their number poses serious challenges. The late medieval philosopher Dietrich of Freiberg wrote around 1286 a tutorial that can help us today with this exceedingly difficult task. In this paper, I discuss ontological categories and their importance for foundational ontologies from both the contemporary perspective and the original Aristotelian viewpoint, I provide a translation from the Latin into English of Dietrich's De origine II with an introductory elaboration, and I extract a foundational ontology (in fact a single-category one) from this text, rooted in Dietrich's specification of types of subjecthood and his conception of intentionality as causal operation.
dolce, the first top-level ontology to be axiomatized, has remained stable for twenty years and today is broadly used in a variety of domains. dolce is inspired by cognitive and linguistic considerations and aims to model a commonsense view of reality, like the one human beings exploit in everyday life in areas as diverse as socio-technical systems, manufacturing, financial transactions and cultural heritage. dolce clearly lists the ontological choices it is based upon, relies on philosophical principles, is richly formalized, and is built according to well-established ontological methodologies, e.g. OntoClean. Because of these features, it has inspired most of the existing top-level ontologies and has been used to develop or improve standards and public domain resources. Being a foundational ontology, dolce is not directly concerned with domain knowledge. Its purpose is to provide the general categories and relations needed to give a coherent view of reality, to integrate domain knowledge, and to mediate across domains. In these 20 years dolce has shown that applied ontologies can be stable and that interoperability across reference and domain ontologies is a reality. This paper briefly introduces the ontology and shows how to use it on a few modeling cases.
Our understanding of implicit bias and how to measure it has yet to be settled. Various debates between cognitive scientists are unresolved. Moreover, the public’s understanding of implicit bias tests continues to lag behind cognitive scientists’. These discrepancies pose potential problems. After all, a great deal of implicit bias research has been publicly funded. Further, implicit bias tests continue to feature in discourse about public- and private-sector policies surrounding discrimination, inequality, and even the purpose of science. We aim to do our part by reconstructing some of the recent arguments in ordinary language and then revealing some of the operative norms or values that are often hidden beneath the surface of these arguments. This may help the public learn more about the science of implicit bias. It may also help both laypeople and scientists reflect on the values, interests, and stakeholders involved in establishing, justifying, and communicating scientific research.
After introducing the new field of cultural evolution, we review a growing body of empirical evidence suggesting that culture shapes what people attend to, perceive and remember as well as how they think, feel and reason. Focusing on perception, spatial navigation, mentalizing, thinking styles, reasoning (epistemic norms) and language, we discuss not only important variation in these domains, but emphasize that most researchers (including philosophers) and research participants are psychologically peculiar within a global and historical context. This rising tide of evidence recommends caution in relying on one’s intuitions or even in generalizing from reliable psychological findings to the species, Homo sapiens. Our evolutionary approach suggests that humans have evolved a suite of reliably developing cognitive abilities that adapt our minds, information-processing abilities and emotions ontogenetically to the diverse culturally-constructed worlds we confront.
Despite the recent upsurge in research on abstract concepts, there remain puzzles at the foundation of their empirical study. These are most evident when we consider what is required to assess a person’s abstract conceptual abilities without using language as a prompt or requiring it as a response—as in classic non-verbal categorization tasks, which are standardly considered tests of conceptual understanding. After distinguishing two divergent strands in the most common conception of what it is for a concept to be abstract, we argue that neither reliably captures the kind of abstraction required to successfully categorize in non-verbal tasks. We then present a new conception of concept abstractness—termed Trial Concreteness—that is keyed to individual categorization trials. It has advantages in capturing the context-relativity of the degree of abstraction required for the application of a concept and fittingly correlates with participant success in recent experiments.
The aim of this paper is to propose a new conceptualization of the distinction between realism and anti-realism about beliefs that is based on the division between natural and non-natural properties, as defined by Lewis. It will be argued that although the traditional form of anti-realism about beliefs, namely eliminative materialism, has failed, there is a possibility to reformulate the division in question. The background assumption of the proposal is the framework of deflationism about truth and existence: it will be assumed that beliefs can be said to exist and their attributions can be said to be true. The aim is to show that even when we buy into such assumptions we can meaningfully distinguish between the realist and anti-realist approach to belief. According to the proposal, the paradigmatic anti-realist view on beliefs should be seen as a conjunction of three claims: that belief attributions do not track objective similarities, that beliefs are not causally active, and that there is no viable way of naturalizing content. It will be shown that seeing the debate in the proposed way has important advantages as it allows the issue of belief realism to be made non-trivial and tractable, and it introduces theoretical unity into contemporary metaphysics of beliefs.
When people combine concepts, the results are often characterised as “hybrid”, “impossible”, or “humorous”. However, when simply considered in terms of extensional logic, a novel concept understood as a conjunctive concept will often lack meaning, having an empty extension (consider “a tooth that is a chair”, “a pet flower”, etc.). Still, people use different strategies to produce new non-empty concepts: additive or integrative combination of features, alignment of features, instantiation, etc. All these strategies involve the ability to deal with conflicting attributes and the creation of new (combinations of) properties. We here consider in particular the case where a Head concept has superior ‘asymmetric’ control over steering the resulting concept combination (or hybridisation) with a Modifier concept. Specifically, we propose a dialogical approach to concept combination and discuss an implementation based on axiom weakening, which models the cognitive and logical mechanics of this asymmetric form of hybridisation.
Wilfrid Sellars’ denunciation of the Myth of the Given was meant to clarify, against empiricism, that perceptual episodes alone are insufficient to ground and justify perceptual knowledge. Sellars showed that in order to accomplish such epistemic tasks, more resources and capacities, such as those involved in using concepts, are needed. Perceptual knowledge belongs to the space of reasons and not to an independent realm of experience. Dan Hutto and Eric Myin have recently presented the Hard Problem of Content as an ensemble of reasons against naturalistic accounts of content. In a nutshell, it states that covariance relations—even though they are naturalistically acceptable explanatory resources—do not constitute content. The authors exploit this move in order to promote their preferred radical enactivist and anti-representationalist option, according to which basic minds—the lower stratum of cognition—do not involve content. Although it is controversial to argue that the Hard Problem of Content effectively dismisses naturalistic theories of representation, a central aspect of it—the idea that information as covariance does not suffice to explain content—finds support among the defenders of classical cognitive representationalism, such as Marcin Miłkowski. This support—together with the acknowledgment that this remark about covariance is a point already made by Sellars in his criticism of the Myth of the Given—has a number of interesting implications. Not only is it of interest for the debates about representationalism in cognitive science, where it can be understood as an anticipatory move, but it also offers some clues and insights for reconsidering some issues along Sellarsian lines—a conflation between two concepts of representation that is often assumed in cognitive science, a distinction between two types of relevant normativities, and a reconsideration of the naturalism involved in such explanations.
The purpose of this paper is to analyze the theoretical commitments of autopoietic enactivism in relation to Errol E. Harris’s dialectical holism in the interest of establishing a common metaphysical ground. This will be undertaken in three stages. First, it is argued that Harris’s reasoning provides a means of developing enactivist ontology beyond discussions limited to cognitive science and into domains of metaphysics that have traditionally been avoided by phenomenologists. Here, I maintain that enactivist commitments are consistent with Harris’s reasoning from certain synthetic a priori first principles, to his derivation of a teleological anthropic principle, which asserts the necessity of consciousness within the cosmos. Second, it is proposed that Steven Rosen’s long-standing proposal for a topology of phenomenology may provide a common logical foundation for both Harris and enactivists regarding anthropic reasoning. Third, it is argued that a pragmatic approach to process ontology is the most rigorous way of responding to the realism/anti-realism concerns that inevitably follow. If successful, this work will update Harris’s arguments with contemporary scientific and philosophical terminology and extend enactivism from philosophy of mind into a general phenomenological ontology.
Since Tolman’s paper in 1948, psychologists and neuroscientists have argued that cartographic representations play an important role in cognition. These empirical findings align with some theoretical works developed by philosophers who promote a pluralist view of representational vehicles, stating that cognitive processes involve representations with different formats. However, the inferential relations between maps and representations with different formats have not been sufficiently explored. Thus, this paper is focused on the inferential relations between cartographic and linguistic representations. To that effect, we appeal to heterogeneous inference with ordinary maps and sentences. In doing so, we aim to build a model to bridge the gap between cartographic and linguistic thought.
Despite their popularity, relatively scant attention has been paid to the upshot of Bayesian and predictive processing models of cognition for views of overall cognitive architecture. Many of these models are hierarchical; they posit generative models at multiple distinct "levels," whose job is to predict the consequences of sensory input at lower levels. I articulate one possible position that could be implied by these models, namely, that there is a continuous hierarchy of perception, cognition, and action control comprising levels of generative models. I argue that this view is not entailed by a general Bayesian/predictive processing outlook. Bayesian approaches are compatible with distinct formats of mental representation. Focusing on Bayesian approaches to motor control, I argue that the junctures between different types of mental representation are places where the transitivity of hierarchical prediction may be broken, and I consider the upshot of this conclusion for broader discussions of cognitive architecture.
Within the field of neuroscience, it is assumed that the central nervous system is divided into two functionally distinct components: the brain, which does the cognizing, and the spinal cord, which is a conduit of information enabling the brain to do its job. We dub this the “Cinderella view” of the spinal cord. Here, we suggest it should be abandoned. Marshalling recent empirical findings, we claim that the spinal cord is best conceived as an intrabodily cognitive extension: a piece of biological circuitry that, together with the brain, constitutes our cognitive engine. To do so, after a brief introduction to the anatomy of the spinal cord, we briefly present a number of empirical studies highlighting the role played by the spinal cord in cognitive processing. Having done so, we claim that the spinal cord satisfies two popular and often endorsed criteria used to adjudicate cases of cognitive extension; namely, the parity principle and the so-called “trust and glue” criteria. This, we argue, is sufficient to vindicate the role of the spinal cord as an intrabodily mental extension. We then steel our case by considering a sizable number of prominent anti-extension arguments, showing that none of them poses a serious threat to our main claim. We conclude the essay by spelling out a number of far-from-trivial implications of our view.
In his “Bridging mainstream and formal ontology”, Augusto (2021) gives an excellent analysis of Dietrich von Freiberg’s idea of using causality as a partitioning principle for upper ontologies. For this, Dietrich’s notion of extrinsic principles is crucial. The question of whether causation can, and indeed should, be used as a partitioning principle for ontologies is discussed using mathematics and physics as examples.
Monothematic delusions involve a single theme, and often occur in the absence of a more general delusional belief system. They are cognitively atypical insofar as they are said to be held in the absence of evidence, are resistant to correction, and have bizarre contents. Empiricism about delusions has it that anomalous experience is causally implicated in their formation, whilst rationalism has it that delusions result from top-down malfunctions from which anomalous experiences can follow. Within empiricism, two approaches to the nature of the abnormality/abnormalities involved have been touted by philosophers and psychologists. One-factor approaches have it that monothematic delusions are a normal response to anomalous experiences, whilst two-factor approaches seek to identify a clinically abnormal pattern of reasoning in addition to anomalous experience to explain the resultant delusion. In this paper we defend a one-factor approach. We begin by making clear what we mean by atypical, abnormal, and factor. We then identify the phenomenon of interest and overview one- and two-factor empiricism about its formation. We critically evaluate the cases for various second factors, and find them all wanting. In light of this we turn to our one-factor account, identifying two ways in which ‘normal response’ may be understood, and how this bears on the discussion of one-factor theories up until this point. We then conjecture that what is at stake is a certain view about the epistemic responsibility of subjects with delusions, and the role of experience, in the context of familiar psychodynamic features. After responding to two objections, we conclude that the onus is on two-factor theorists to show that the one-factor account is inadequate. Until then, the one-factor account ought to be understood as the default position for explaining monothematic delusion formation and retention. We don’t rule out the possibility that, for particular subjects with delusions, there may be a second factor at work causally implicated in their delusory beliefs but, until the case for the inadequacy of the single factor is made, the second factor is redundant and fails to pick out the minimum necessary for a monothematic delusion to be present.
We present an algorithm for concept combination inspired and informed by research in cognitive and experimental psychology. Dealing with concept combination requires, from a symbolic AI perspective, coping with competing needs: the need for compositionality and the need to account for typicality effects. Building on our previous work on weighted logic, the proposed algorithm can be seen as a step towards the management of both these needs. More precisely, following a proposal of Hampton, it combines two weighted Description Logic formulas, each defining a concept, using the following general strategy. First, it selects all the features needed for the combination, based on the logical distinction between necessary and impossible features. Second, it determines the threshold and assigns new weights to the features of the combined concept, trying to preserve the relevance and the necessity of the features. We illustrate how the algorithm works using some paradigmatic examples discussed in the cognitive literature.
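To make the two-step strategy described above concrete, here is a minimal, purely illustrative numerical sketch in Python. It is not the paper's weighted Description Logic machinery: the Concept class, the combine function, the sentinel values for necessary and impossible features, and the averaging and thresholding conventions are all assumptions made for illustration.

```python
# Toy sketch of Hampton-style combination of two weighted concepts.
# All names and numeric conventions are invented for illustration only.
from dataclasses import dataclass

NECESSARY = float("inf")    # a feature every instance must have
IMPOSSIBLE = float("-inf")  # a feature no instance may have

@dataclass
class Concept:
    name: str
    weights: dict[str, float]  # feature -> weight
    threshold: float           # satisfied-feature weights must sum past this

def combine(head: Concept, modifier: Concept, name: str) -> Concept:
    """Toy combination; on conflicting finite weights, features are averaged."""
    # Step 1: select features, dropping any finite feature that the other
    # concept marks as impossible.
    features = {}
    for src, other in ((modifier, head), (head, modifier)):
        for f, w in src.weights.items():
            if other.weights.get(f) == IMPOSSIBLE and w != NECESSARY:
                continue  # blocked by the other concept
            features[f] = w

    # Step 2: re-weight features mentioned by both concepts (simple averaging
    # here) and pick a threshold that preserves the stricter requirement.
    for f in list(features):
        w_h, w_m = head.weights.get(f), modifier.weights.get(f)
        if w_h is not None and w_m is not None and NECESSARY not in (w_h, w_m):
            features[f] = (w_h + w_m) / 2
    return Concept(name, features, max(head.threshold, modifier.threshold))

pet = Concept("Pet", {"domesticated": NECESSARY, "furry": 2.0, "inanimate": IMPOSSIBLE}, 2.0)
fish = Concept("Fish", {"lives_in_water": NECESSARY, "furry": -1.0, "has_gills": 2.0}, 2.0)
print(combine(pet, fish, "PetFish").weights)
```

A real implementation would operate on Description Logic axioms rather than feature dictionaries; the sketch only shows the select-then-reweight shape of the strategy.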
The meta-problem of consciousness is the problem of explaining why we have problem intuitions about consciousness, why we intuitively think that conscious experience cannot be scientifically explained. In his discussion of this problem, David Chalmers briefly considers the possibility of giving a 'genealogical' solution, according to which problem intuitions are 'accidents of cultural history' (2018, p. 33). Chalmers' response to this solution is largely dismissive. In this paper, we defend the viability of a genealogical solution. Our strategy is to focus on a particular problem intuition: the thought that the phenomenal character of colour experience is irreducibly subjective. We use the history of the inverted spectrum thought experiment as a window into how various philosophers have thought about colour experience. Our genealogy reveals that problem intuitions about colour are not timeless, but instead arise in a specific historical context, one that, in large part, explains why we have these intuitions.
People are systematically biased against the possibility that ideas are innate. Berent (2020) traces these attitudes to an ontological dissonance, arising from the collision of two fundamental principles of human cognition: dualism and essentialism. Carruthers (this issue) challenges this hypothesis and attributes our empiricist bias primarily to mindreading intuitions. Here, I counter Carruthers' concerns and show that mindreading cannot be the sole source of the empiricist bias. Specifically, mindreading fails to explain why our empiricist intuitions depend on the perceived immateriality of ideas. The ontological dissonance hypothesis accounts for these facts. Because essentialism requires innate traits to be material, and because, per dualism, ideas are immaterial, people conclude that ideas cannot be innate.
Berent (this issue) critiques one of the three main proposals put forward by Carruthers (this issue), who suggests that cognitive scientists are biased against innateness-claims by the tacit assumptions of the mentalizing faculty. Berent proposes, instead, that the bias results from dissonance produced by a conflict between our innate dualism and our innate essentialism. The present response raises a number of difficulties for her argument.
Working memory is a foundational construct of cognitive psychology, where it is thought to be a capacity that enables us to keep information in mind and to use that information to support goal-directed behavior. Philosophers have recently employed working memory to explain central cognitive processes, from consciousness to reasoning. In this paper, I show that working memory cannot meet even a minimal account of natural kindhood, as the functions of maintenance and manipulation of information that tie working memory models and theories together do not have a coherent or univocal realizer in the brain. As such, working memory cannot explain central cognition. Rather, I argue that working memory merely redescribes its target phenomenon, and in doing so it obfuscates relevant distinctions amongst the many ways that brains like ours retain and transform information in the service of cognition. While this project ultimately erodes the explanatory role that working memory has played in our understanding of cognition, it simultaneously prompts us to evaluate the function of natural kinds within cognitive science, and signals the need for a productive pessimism to frame our future study of cognitive categories.
Not only does Hoffman claim that we do not see reality as it is, but also that unperceived brains, trees and moons do not exist. His “interface theory of perception” is a peculiar blend of metaphorical ontology (objects are icons, space-time is a desktop) and mathematical modelling (the game-theoretical argument that fitness trumps truth). Conflating abstractions with concrete experience, evolution is used to refute everything (including evolution itself). Hoffman’s sweeping iconoclasm then lands where it took off: addressing the problem of consciousness. After arguing against reality, he will tell us what it is.
I present two caveats to the meta-problem challenge to theories of consciousness. Chalmers suggests that a theory of consciousness that solves the hard problem should also inform us about the meta-problem, and vice versa. The first caveat is the view that mechanism M, the mechanism through which content becomes conscious, may be neutral with respect to the content it renders conscious. This means that there can be no systematic connection between M and conscious content. The second caveat concerns how we should treat the problem intuitions fueling the meta-problem. I argue that we should award them no special status with respect to their explanatory power in relation to the hard problem.
Chalmers' (2018) meta-problem of consciousness emphasizes unexpected common ground between otherwise incompatible positions. We argue that the materialist should welcome discussion of the meta-problem. We suggest that the core of the meta-problem is the seeming arbitrariness of subjective experience. This has an unexpected resolution when one moves to an interventionist account of scientific explanation: the same interventions that resolve the hard problem should also resolve the meta-problem.
Embracing an inter-disciplinary approach grounded on Gärdenfors’ theory of conceptual spaces, we introduce a formal framework to analyse and compare selected theories about technical artefacts present in the literature. Our focus is on design-oriented approaches where both designing and manufacturing activities play a crucial role. Intentional theories, like Kroes’ dual nature thesis, are able to solve disparate problems concerning artefacts but they face both the philosophical challenge of clarifying the ontological nature of intentional properties, and the empirical challenge of testing the attribution of such intentional properties to artefacts. To avoid these issues, we propose an approach that, by identifying different modalities to characterise artefact types, does not commit to intentional qualities and is able to empirically ground compliance tests.
Chalmers (2018) considers a wide range of possible responses to the meta-problem of consciousness. Among them is the ignorance hypothesis, the view that there only appears to be a hard problem because of our inadequate conception of the physical. Although Chalmers quickly dismisses this view, I argue that it has much greater promise than he recognizes. The plausibility of the ignorance hypothesis depends on how exactly one frames the 'problem intuitions' that a solution to the meta-problem must explain. I argue that problem intuitions are hybrid intuitions that encompass one's intuitive take on the phenomenal and one's intuitive take on the physical. The ignorance hypothesis undermines the second half of these hybrid intuitions. I show how the ignorance hypothesis is preferable to the alternatives and attempt to explain why there is such widespread resistance to this promising position.
Solving the meta-problem of consciousness requires, among other things, explaining why we are so reluctant to endorse various forms of illusionism about the phenomenal. I will try to tackle this task in two steps. The first consists in clarifying how the concept of consciousness precludes the possibility of any distinction between 'appearance' and 'reality'. The second consists in spelling out our reasons for recognizing the existence of something that satisfies that concept.
The paper is dedicated to particular cases of interaction and mutual impact between philosophy and cognitive science. Thus, philosophical preconditions in the middle of the 20th century shaped the newly born cognitive science as based mainly on conceptual and propositional representations and syntactical inference. Further developments towards neural networks and statistical representations did not change the prejudice much: many still believe that network models must be complemented with some extra tools that would account for properly human cognitive traits. I address some real implemented connectionist models that show how the ‘new associationism’ of the neural network approach may not only surpass Humean limitations but also realistically explain abstraction, inference and prediction. I then dwell on Predictive Processing theories in a little more detail to demonstrate that sophisticated statistical tools applied to a biologically realist ontology may not only provide solutions to scientific problems or integrate different cognitive paradigms but also offer some philosophical insights. To conclude, I touch on a certain parallelism between Predictive Processing and philosophical inferentialism as presented by Robert Brandom.
Discussions of the alleged methodological specificity of social knowledge are fueled in no small part by the latter's lagging position relative to the technological advancements of the natural and information sciences, which rest on exact methods and formal or quantitative languages. It is more or less obvious that the applicability of exact scientific methods to social disciplines depends heavily on the chosen conception of social reality, i.e., on social ontology. In the article, the author critically approaches the ontological views of Tony Lawson and proposes a computational view of social ontology that is supposed to eliminate some internal contradictions of Lawson's realist conception.
The meta-problem of consciousness is the problem of explaining why we have the intuition that there is a hard problem of consciousness. David Chalmers briefly notes that my phenomenal powers view may be able to answer this challenge in a way that avoids problems (having to do with avoiding coincidence) facing other realist views. In this response, I will briefly outline the phenomenal powers view and my main arguments for it and—drawing in part on a similar view developed by Harold Langsam—discuss more precisely how its answer to the challenge would go.
The meta-problem of consciousness is to explain why we think that there is a hard problem of consciousness. On Chalmers' view of the meta-problem, our judgments about the hard problem of consciousness arise non-inferentially as a result of introspection. I raise two problems for such a non-inferentialist view of the meta-problem. It does not seem to match the psychological facts about how we come to the realization of the hard problem, and it is unclear how the view can bridge the gap between the content of introspection and the content involved in formulations of the hard problem. The inferentialist view of the meta-problem, on which the hard problem results from inference, explains both the psychology and the introduction of the relevant content. We should therefore prefer an inferentialist view of the meta-problem.
Reflecting on questions about herself, the character Motoko in the film Ghost in the Shell wonders about her continuity over time and her condition as a human and as a person. Similarly, it is possible to entertain scenarios in which the addition of elements external to the body produces a similar tension for human persons. One of these scenarios is that of natural cyborgs, as understood by Andy Clark. On the notion of natural cyborgs, elements external to the body are coupled, not as artificial implants but within a systemic whole, for the realization of cognitive processes and mental states; this makes it possible to question the maintenance of personal identity and of status as a person as dependence on these external elements increases. This is especially true of Alzheimer’s patients. In this work, some notions are briefly clarified, such as the extended mind, natural cyborgs and personal identity, which support the further discussion of whether Alzheimer’s patients, understood as natural cyborgs, can maintain their personal identity and their status as persons throughout the progressive process of cognitive deterioration.
Chalmers (2018) maintains that even if we understood every physical process in the brain we could still wonder why these processes give rise to conscious experience. The meta-problem is the challenge of explaining why we think this 'hard problem' exists. This response to the target paper endorses illusionist accounts of three 'problem intuitions' about consciousness: duality, presentation, and revelation. Subject–object duality is explained in terms of a clash between two compelling but contradictory convictions about consciousness. Phenomenal presence is understood in relation to the configurational features of sensory experiences. And intuitions of revelation are explained as due to an unfounded belief in introspective ontological access. These illusionist analyses are used to bolster the case for physicalist realism rather than to support 'strong' illusionism. In addition to addressing the meta-problem they suggest a promising approach to the hard problem as well.
The notion of a physiological individual has been developed and applied in the philosophy of biology to understand symbiosis, an understanding of which is key to theorising about the major evolutionary transition from multi-organismality to multi-cellularity. The paper begins by asking what such symbiotic individuals can help to reveal about a possible transition in the evolution of cognition. Such a transition marks the movement from cooperating individual biological cognizers to a functionally integrated cognizing unit. Somewhere along the way, did such cognizing units simultaneously have cognizers as parts? Expanding upon the multiscale integration view of the Free Energy Principle, this paper develops an account of reciprocal integration, demonstrating how some coupled biological cognizing systems, when certain constraints are met, can result in a cognizing unit that is in ways greater than the sum of its cognizing parts. The symbiosis between V. Fischeri bacteria and the bobtail squid is used to illustrate this account. A novel manner of conceptualizing biological cognizers as gradients is then suggested. Lastly, it is argued that the reason why the notion of ontologically nested cognizers may be unintuitive stems from the fact that our folk-psychological notion of what a cognizer is has been deeply influenced by our folk-biological manner of understanding biological individuals as units of reproduction.
Although there have been efforts to integrate Semantic Web technologies and AI research on artificial agents, the two remain relatively isolated from each other. Herein, we introduce a new ontology framework designed to support the knowledge representation of artificial agents’ actions within the context of the actions of other autonomous agents, inspired by standard cognitive architectures. The framework consists of four parts: 1) an event ontology for information pertaining to actions and events; 2) an epistemic ontology containing facts about knowledge, beliefs, perceptions and communication; 3) an ontology concerning future intentions, desires, and aversions; and, finally, 4) a deontic ontology for modeling the obligations and prohibitions which limit agents’ actions. The architecture of the ontology framework is inspired by the deontic cognitive event calculus as well as epistemic and deontic logic. We also describe a case study in which the proposed DCEO ontology supports autonomous vehicle navigation.
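As a rough illustration of how the four modules just listed might be organized for a single agent, here is a hypothetical Python sketch. It is not the DCEO ontology itself: every class, field, and the permitted check below is invented purely to make the four-part structure concrete.

```python
# Hypothetical four-module knowledge base for one agent; names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class EventFact:          # 1) event ontology: actions and events
    agent: str
    action: str
    time: float

@dataclass
class EpistemicFact:      # 2) epistemic ontology: knowledge, belief, perception, communication
    agent: str
    attitude: str         # e.g. "believes", "perceives"
    content: str

@dataclass
class MotivationalFact:   # 3) future intentions, desires, and aversions
    agent: str
    stance: str           # e.g. "intends", "desires", "averse_to"
    goal: str

@dataclass
class DeonticFact:        # 4) deontic ontology: obligations and prohibitions
    agent: str
    modality: str         # "obliged" or "forbidden"
    action: str

@dataclass
class AgentKnowledgeBase:
    events: list[EventFact] = field(default_factory=list)
    epistemic: list[EpistemicFact] = field(default_factory=list)
    motivational: list[MotivationalFact] = field(default_factory=list)
    deontic: list[DeonticFact] = field(default_factory=list)

    def permitted(self, agent: str, action: str) -> bool:
        """An action is permitted unless some deontic fact forbids it."""
        return not any(d.agent == agent and d.action == action and d.modality == "forbidden"
                       for d in self.deontic)

kb = AgentKnowledgeBase()
kb.deontic.append(DeonticFact("vehicle_1", "forbidden", "cross_on_red"))
print(kb.permitted("vehicle_1", "cross_on_red"))  # False
```

In the framework itself these modules would be expressed as ontologies (e.g. in a description logic) rather than plain records; the sketch only shows how the event, epistemic, motivational, and deontic layers divide the agent's knowledge.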
This study aims to show the socio-cognitive engineering of the pickpocket craft from the point of view of cognitive ecology. Being a pickpocket has a wider, existential status; studying it goes beyond the field of the cognitive sciences. My ambitions are more modest: I try to show that the question of what it is like to be someone like a pickpocket is also a question about the cognitive structure of his or her activity space. In this light, I analyze some aspects of the reality presented in the movie Pickpocket by Robert Bresson. From the ecological point of view, scenes from the old movie present pickpocketing techniques in the context of the opportunities and constraints of a given environment. I claim that studies like this require integrating certain conceptual tools, such as the distributed cognition approach, ecological psychology, and cognitive studies of design.
I argue that there is a version of (quasi-Armstrongian) weak illusionism that intelligibly relates phenomenal concepts and introspective opacity, accounts for the (hard) problem intuitions Chalmers highlights (modal, epistemic, explanatory, and metaphysical), and undermines the most important arguments Chalmers deploys against type-B and type-C materialisms. If this is successful, we can satisfactorily account for the meta-problem of consciousness, mollify our hard problem intuitions, and remain genuine realists about phenomenal experience.
Addiction to observation... There is a circular-linear relationship between observer and observation, proving that 'pi' is the only observer (a circle is the background state for everything), making observation possible.
Many aspects of how humans form and combine concepts are notoriously difficult to capture formally. In this paper, we focus on the representation of three such aspects, namely overextension, underextension, and dominance. Inspired in part by the work of Hampton, we consider concepts as given through a prototype view and through the interdependencies between the attributes that define a concept. To approach this formally, we employ a recently introduced family of operators that enrich Description Logic languages. These operators aim to characterise complex concepts by collecting those instances that apply, in a finely controlled way, to ‘enough’ of the concept’s defining attributes. Here, the meaning of ‘enough’ is technically realised by accumulating the weights of satisfied attributes and comparing the total with a given threshold that needs to be met.
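To make the ‘enough attributes’ idea concrete, here is a minimal numerical sketch in Python of a weighted-threshold membership test. It is only an analogue of the operators the abstract describes, not their Description Logic formalisation; the attribute names, weights, and threshold values are assumptions chosen for illustration.

```python
# Toy weighted-threshold membership test; all data below is illustrative only.

def member(instance_attrs: set[str], weights: dict[str, float], threshold: float) -> bool:
    """An instance counts as a member if the weights of the defining
    attributes it satisfies accumulate to at least the threshold."""
    return sum(w for attr, w in weights.items() if attr in instance_attrs) >= threshold

# A toy "Bird" prototype: no single attribute is strictly necessary.
bird_weights = {"has_feathers": 3.0, "flies": 2.0, "lays_eggs": 1.0, "sings": 1.0}
threshold = 4.0

penguin = {"has_feathers", "lays_eggs", "swims"}
bat = {"flies", "nurses_young"}

print(member(penguin, bird_weights, threshold))  # True: enough weight without flying
print(member(bat, bird_weights, threshold))      # False: too few bird attributes satisfied
```

Varying the threshold in such a test is one simple way to see how overextension (counting in borderline items) and underextension (excluding genuine members) can arise from the same weighting scheme.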
The debate between causalists and simulationists in the philosophy of memory is sometimes presented in such a way that it seems that only simulationism is compatible with contemporary psychology of memory. However, both theories are compatible with the facts uncovered by science. But if the debate is not about fit with the facts, what is it about? We propose that this debate is a case of metalinguistic negotiation. Causalists and simulationists accept the same set of facts, but disagree about how we should define "memory" and "remembering".