
Shannon + Friston = Content: Intentionality in predictive signaling systems

Synthese

Abstract

What is the content of a mental state? This question poses the problem of intentionality: to explain how mental states can be about other things, where being about them is understood as representing them. A framework that integrates predictive coding and signaling systems theories of cognitive processing offers a new perspective on intentionality. On this view, at least some mental states are evaluations, which differ in function, operation, and normativity from representations. A complete naturalistic theory of intentionality must account for both types of intentional state.

Fig. 1


Notes

  1. The view of consciousness or phenomenal experience that explains it in intentional terms is also called representationalism, thus presupposing the identification of intentionality with representation. This form of representationalism makes a brief appearance in fn. 19.

  2. These terms can vary significantly. Anti-representationalism predates the predictive processing framework (e.g. Stich 1983; Dennett 1987) or is independent of it (e.g., Ramsey 2007). Some anti-representationalists provide accounts of intentionality in terms of embodiment, in different ways (e.g. Hutto and Myin 2017; Chemero 2009; Bruineberg and Rietveld 2014 integrate enactivism with Friston’s work, focusing on the environmental structures to which the organism responds). Since I am not arguing for or against any form of anti-representationalism or representationalism, I set aside these nuances.

  3. Sayre (1983, pp. 78–9) argues that Dretske rejected every important aspect of Shannon's theory except for the use of a quantitative measure in an account of content. Dretske (1983, pp. 82–3) admits as much, but insists his interest was in "the ideas clothed in mathematical dress". Unfortunately, Dretske sanitized Shannon’s ideas to fit philosophical presuppositions about intentionality, rather than adjusting the latter to Shannon’s unadulterated ideas.

  4. Scarantino’s (2015) Probabilistic Difference Maker Theory shares this feature, but with an important difference discussed in the text; also, on his view content is fixed by a 3-place relation where all but one of the world states in the set play a role as background data. Other probabilistic accounts identify the content of a representation with one state of affairs in the set, differing in how that condition is identified (e.g. Shea 2007; Eliasmith 2005; Stegmann 2015 for a critical review).

  5. Worked out, the content of signal A is represented by the vector of the logs of these ratios:

    \(V(A) = \left\langle \log\left(\frac{P(S1\mid A)}{P(S1)}\right), \log\left(\frac{P(S2\mid A)}{P(S2)}\right) \right\rangle = \left\langle \log\frac{(.9)}{(.5)}, \log\frac{(.1)}{(.5)} \right\rangle = \left\langle \log(1.8), \log(.2) \right\rangle\)

    and likewise, mutatis mutandis, for the content of signal B. If, with Skyrms (and following Shannon), we choose log base 2 and round to 2 decimal places, V(A) = <.85, − 2.32>. What makes a vector semantic is its intended interpretation as a model of an actual signaling system type. The example presupposes but omits reference to a physical system that observes world-states and sends signals, even though world-states and signals are equally events as far as the mathematical formalism is concerned. Each vector slot represents a distinct possible change for that system (Skyrms 2010a, p. 35 fn. 2). World states are "whatever the sender can discriminate", and the evolution of categories is “driven by pragmatics—available acts and payoffs” (Skyrms 2010a, pp. 107–109, 139; Harms 2004). Like other naturalists, Skyrms elaborates his view with nonconceptual signals—which lack the syntax of natural language—but also indicates how to extend the theory to complex signals.
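
    The arithmetic in this note can be checked in a few lines of Python. This is only an illustrative sketch of the computation, not part of Skyrms's formalism; the variable names are invented here.

```python
import math

# Priors over the two world states, and posteriors given signal A, as in the example.
prior = {"S1": 0.5, "S2": 0.5}
posterior_given_a = {"S1": 0.9, "S2": 0.1}

# Content vector for signal A: the log (base 2) of the ratio posterior/prior
# for each world state, rounded to 2 decimal places as in the note.
v_a = [round(math.log2(posterior_given_a[s] / prior[s]), 2) for s in ("S1", "S2")]
print(v_a)  # [0.85, -2.32]
```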

  6. Ambiguity can arise because “signal” and “message” are often used as synonyms, and “sender” and “receiver” are used to pick out distinct individuals or distinct subprocesses within one individual (and can go in both directions: Lean 2014). Where Shannon has two “signals”—the Message, with the Information Source and the Transmitter as the relata, and the Signal, with the Transmitter and the Receiver as the relata—some philosophical presentations simplify his schema to show only one (Scarantino 2015, p. 424, fn. 6; Cao 2012, p. 50; Lombardi 2005, p. 24). Although Shannon’s distinction does not always matter in a given discussion, this omission may suggest that the encoding step involves nothing of philosophical interest. Others use the full version (Godfrey-Smith 2013, p. 43; Lombardi et al. 2016, p. 1985; and, in essence, Martinez 2018, pre-print).

  7. Shannon’s entropy formula captures the average uncertainty of a signal in the light of the probability distribution defined over the members of the set from which the signal might be chosen. Slightly more technically, a signal (e.g. “Q”) is a value of a random variable X selected from a set of possible values (e.g. “A”, “B”, …, “Z”); the entropy of X is defined over all the possible values that X can take weighted by each value’s frequency. This average uncertainty (entropy) is the probability-weighted average of the negative log probabilities of the signals in the set of possible signals. It is greater if there are more possible signals to select from and if their individual probabilities of selection are closer to equiprobable. His mutual information formula, defined for joint probability distributions, expresses the reduction in average uncertainty (entropy) of X given another variable Y whose value is known—for example, after “Q” is chosen, the entropy of X (ranging over the English alphabet) is greatly reduced because “U” is now statistically highly likely to be chosen. The mutual information is zero if the variables are independent, but naturalization projects rely on probabilistically related sets, in particular world states and signals. In addition, Shannon’s goal of reliable communication is achieved in his theory by compression (to eliminate redundancy) and selective insertion of redundancy (to manage noise). I focus on encoding for compression. (I thank an anonymous reviewer for drawing my attention to this distinction.) A predictive coding system also uses precision weighting to manage noise (see fn. 11).
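
    The two formulas can be stated compactly in code. The following minimal Python sketch (with distributions invented purely for illustration) exhibits the two properties noted above: entropy is maximal at equiprobability, and mutual information is zero for independent variables.

```python
import math

def entropy(probs):
    # Shannon entropy in bits: the probability-weighted average of -log2(p).
    return -sum(p * math.log2(p) for p in probs if p > 0)

def mutual_information(joint, px, py):
    # Reduction in the entropy of X given knowledge of Y, computed
    # from the joint distribution and the two marginals.
    return sum(
        joint[i][j] * math.log2(joint[i][j] / (px[i] * py[j]))
        for i in range(len(px)) for j in range(len(py))
        if joint[i][j] > 0
    )

# Entropy is larger when the possible signals are closer to equiprobable.
print(entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits
print(entropy([0.7, 0.1, 0.1, 0.1]))      # about 1.36 bits

# Independent variables carry zero mutual information.
independent_joint = [[0.25, 0.25], [0.25, 0.25]]
print(mutual_information(independent_joint, [0.5, 0.5], [0.5, 0.5]))  # 0.0
```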

  8. Soni and Goodman (2017) recount how prior efforts to eliminate noise focused on trying to improve the transmission channels, such as by making undersea cables as strong and insulated as possible. Obviously, physical media matter for reliability, whether this is undersea cable quality and insulation or neural integrity and myelin sheaths. The point is that reliability is redefined in terms of the probabilities of the signals in a set such that, given a physical channel (with a certain transmission capacity), the same message can be encoded in ways that make transmission through that channel more or less reliable.

  9. For example, he mentions (1948, p. 5) semantic compression in opposite extreme cases: when an English sentence is transformed into Basic English, which contains about 850 words, and James Joyce’s Finnegans Wake, where neologisms replace long phrases.

  10. I suppress many important details to focus on the theory’s basic functional distinctions among the generative model (GM), its predictions, and the prediction error (PE) signal, and assume its basic principle that the organism aims to minimize long-term average prediction error. For example, Friston and Frith (2015) distinguish between PE signals used to update the GM and those used to update action; these correspond to sensory and proprioceptive predictions, respectively. I discuss precision weighting of the PE signal in Sect. 4.

  11. As Stephen Mann (in personal communication) and two anonymous reviewers note, this nudging will be modulated by precision weighting of the PE signal to account for noise or other disturbances. Precision weighting—conceptually, how reliable the error signal is or how much confidence should be placed in it (Friston and Frith 2015)—helps explain how the GM responds to the PE signal, but does not affect the latter’s status qua evaluation. For example, in poor visual conditions the PE signal may be weighted as unreliable; since it is more likely to misevaluate in such conditions, the GM in effect treats the evaluation with a grain of salt. A generative model might also systematically assign more weight to optimistic misevaluations (those that downplay or understate the actual error size), manifesting a kind of Dunning–Kruger effect; see Prosser et al. (2018) on precision weighting and psychopathy. I discuss misevaluation below.

  12. Intentional inexistence is Brentano’s (1874/2014) label for the fact that we can think about things that don’t exist, such as Santa Claus or unicorns. In response, Meinong (1904) argued for ontological commitment to objects that do not exist—a view that has long been anathema to naturalists.

  13. Linguistic descriptions of nonconceptual and subpersonal contents are usually charitably understood as approximations or glosses of the information that a given theory says such states contain (as Martinez and Klein 2016, p. 284 pointedly note). I assume a similar attitude towards the various content descriptions used by me or others throughout this paper.

  14. The Xerox principle (Dretske 1983, p. 57) is: “If C carries the information that B, and B’s occurrence carries the information that A, then C carries the information that A. You don’t lose information about the original (A) by perfectly reproduced copies (B of A and C of B). Without the transitivity this principle describes, the flow of information would be impossible.” Predictive processing offers a different account of this flow.

  15. A problem also occurs when there is no prediction error signal but there should have been one: the system fails to get the evaluation it should have gotten, and it proceeds as if it needs no adjustment. This may be treated as a degenerate case of misevaluation, but is better treated as a different kind of problem—for example, the sender may be damaged.

  16. A reviewer suggests that optimism or pessimism depend on the content of the prediction as well as the size of the error: for example, if I predict seeing a lion but in fact encounter an impala, “this may produce a large prediction error, albeit the error intuitively counts as optimistic. A low prediction error relative to the same hypothesis is pessimistic”. These uses of the concepts of optimism and pessimism appear to qualify the GM (e.g., its expectations). For PE signals, actual error size and misevaluation are distinct: for example, a very large error term may be exactly right. Suppose N is the actual large discrepancy between the prediction (Lion) and the actual input (Impala). If my PE signal encodes N, it is a Goldilocks evaluation. If it encodes a smaller discrepancy than it should (e.g., as if what was in front of me is a juvenile lion), it is an optimistic misevaluation: it tells me there’s a discrepancy, but it downplays the difference. If my PE signal encodes a larger error than it should (e.g., as if a rhinoceros was in front of me) it is a pessimistic misevaluation: in this case, it exaggerates the discrepancy.

  17. This issue of degrees may not be a problem for accounts of the GM’s representational vehicles in terms of structural similarity (e.g., Gladziejewski and Milkowski 2017). Maplike structures are normatively assessed in terms of accuracy, which comes in degrees. The contents of such representations would still be the worldly states they represent, even if they are not assessed for truth. Also, Kiefer and Hohwy (2018, p. 2404) suggest that the Kullback–Leibler (KL) divergence—a mathematical measure common to Shannon communication and predictive coding—can be used as a measure of misrepresentation. The KL divergence is a method of averaging over all the log ratios in probability vectors that lets one mathematically compare one probability distribution to a reference probability distribution (see also Skyrms 2010a, p. 36). Optimally, one minimizes the KL divergence from the reference distribution. They suggest taking the KL divergence between the GM's posterior distribution and the causal structure of the world as an internal proxy for an objective notion of mismatch between the GM and the world. The KL divergence may also work as an average measure of misevaluation of PE signals, since in predictive signaling systems aiming for KL optimality is minimizing prediction error over the long term. This could provide a measure of whether a system’s (or the Transmitter’s) PE signals are on average Goldilocks signals (or not); thus, a system that systematically generates optimistic misevaluations (long-term, on average) might itself be judged optimistic.
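
    For concreteness, the KL divergence mentioned here can be sketched in a few lines of Python. The distributions are invented for illustration; the sketch shows only the two formal properties the note relies on: the divergence of a distribution from itself is zero, and it grows as the compared distribution departs from the reference, which is what lets it serve as a graded measure of mismatch.

```python
import math

def kl_divergence(p, q):
    # D_KL(P || Q) in bits: the P-weighted average of the log ratios log2(p/q).
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

reference = [0.5, 0.5]

# A distribution diverges from itself not at all (perfect match) ...
print(kl_divergence(reference, reference))      # 0.0

# ... and increasingly as it departs from the reference distribution.
print(kl_divergence([0.9, 0.1], reference))     # about 0.53
```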

  18. Millikan (1995, p. 190) moots an adverbial account of the attitudes, rather than the standard relational account, in which representational content depends essentially on the functional role the representation plays in the system. However, an adverbial structure for propositional attitudes would not turn them into evaluations.

  19. This evaluationist view should not be confused with Bain’s (2017) evaluationism regarding the content of phenomenal experience, in which pain experience represents that a body part is damaged and that this condition is bad for you. This is a conservative extension of representationalist explanations of experience (e.g., Cutter and Tye 2011; Aydede 2019). Bain’s evaluation component is in effect a judgment that p; in the predictive coding framework, it is an element of the GM generated in response to tissue damage and is distinct from the PE error signal (see Wiech 2016 for a predictive processing account of pain). Lewis/Skyrms signaling theory has been invoked in relation to pain experience by Martinez and Klein (2016), but they do not invoke Skyrms’ theory of content.

  20. The ontology of propositions is disputed, although identifying them with sets of possible worlds dominates (Skyrms adopts this view, albeit with his modifications).

  21. For example, in his influential proposal, Field (1972, 1978) analyzed "believing that p" in terms of a naturalistically acceptable relation (dubbed believes*) to a sentence S, and S means that p. Truth is explained in terms of reference (or denotation) to objects and properties, and reference in terms of a causal theory of reference. Jacob (2019) provides an excellent overview of the orthodoxy regarding intentionality; see also Pitt (2020).

  22. In Friston and Frith’s (2015) elaboration of communication between two agents using the active inference framework, both agents come to synchronize their generative models and expectations through sensory exchange (two songbirds, in their example). When this integration is achieved, in some sense their agency is not distinct. In the text, this degree of integration between Newman and Revere does not occur, which is true of many stable communication systems. It is an open question the degree to which social insect or bacteria colonies or other multi-agent systems or superorganisms are communicatively integrated.

References

  • Aitchison, L., & Lengyel, M. (2017). With or without you: Predictive coding and Bayesian inference in the brain. Current Opinion in Neurobiology, 46, 219–227.

  • Anscombe, G. E. M. (1963). Intention (2nd ed.). Oxford: Basil Blackwell.

  • Austin, J. L. (1962). How to do things with words. Oxford: Oxford University Press.

  • Aydede, M. (2019). Pain. In E. Zalta (Ed.), The Stanford encyclopedia of philosophy (Spring 2019 ed.).

  • Bain, D. (2017). Evaluativist accounts of pain’s unpleasantness. In J. Corns (Ed.), The Routledge handbook of philosophy of pain. Abingdon: Routledge.

  • Barrett, L., & Bar, M. (2009). See it with feeling: Affective predictions during object perception. Philosophical Transactions of the Royal Society B, 364(1521), 1325–1334.

  • Birch, J. (2014). Propositional content in signaling systems. Philosophical Studies, 171, 493–512.

  • Brentano, F. (1874/2014). Psychology from an empirical standpoint. Abingdon: Routledge.

  • Bruineberg, J., & Rietveld, E. (2014). Self-organization, free energy minimization, and optimal grip on a field of affordances. Frontiers in Human Neuroscience, 8(599), 1–14.

  • Cao, R. (2012). A teleosemantic approach to information in the brain. Biology and Philosophy, 27, 49–71.

  • Chemero, A. (2009). Radical embodied cognitive science. Cambridge: MIT Press.

  • Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36, 181–253.

  • Clark, A. (2016). Surfing uncertainty: Prediction, action and the embodied mind. Oxford: Oxford University Press.

  • Colombo, M., & Series, P. (2012). Bayes in the brain: On Bayesian modelling in neuroscience. The British Journal for the Philosophy of Science, 63, 697–723.

  • Cutter, B., & Tye, M. (2011). Tracking representationalism and the painfulness of pain. Philosophical Issues, 21(1), 90–109.

  • Dennett, D. (1987). The intentional stance. Cambridge: MIT Press.

  • Dretske, F. (1981). Knowledge and the flow of information. Cambridge: MIT Press.

  • Dretske, F. (1983). Précis of Knowledge and the flow of information. Behavioral and Brain Sciences, 6, 55–90.

  • Eliasmith, C. (2005). Neurosemantics and categories. In H. Cohen & C. Lefebvre (Eds.), Handbook of categorization in cognitive science (pp. 1035–1054). Amsterdam: Elsevier.

  • Field, H. (1972). Tarski’s theory of truth. The Journal of Philosophy, 69, 347–375.

  • Field, H. (1978). Mental representation. Erkenntnis, 13, 9–61.

  • Fodor, J. (1987). Psychosemantics. Cambridge: MIT Press.

  • Frege, G. (1892/1948). Sense and reference. The Philosophical Review, 57, 209–230.

  • Friston, K. (2005). A theory of cortical responses. Philosophical Transactions of the Royal Society B: Biological Sciences, 360(1456), 815–836.

  • Friston, K. (2010). The free-energy principle: A unified brain theory? Nature Reviews Neuroscience, 11(2), 127–138.

  • Friston, K., & Frith, C. (2015). Active inference, communication and hermeneutics. Cortex, 68, 129–143.

  • Friston, K., & Stephan, K. (2007). Free energy and the brain. Synthese, 159, 417–458.

  • Friston, K., Thornton, C., & Clark, A. (2012). Free-energy minimization and the dark-room problem. Frontiers in Psychology. https://doi.org/10.3389/fpsyg.2012.00130.

  • Gallagher, S. (2018). Decentering the brain: Embodied cognition and the critique of neurocentrism and narrow-minded philosophy of mind. Constructivist Foundations, 14(1), 8–21.

  • Gershman, S., & Daw, N. (2012). Perception, action, and utility: The tangled skein. In M. Rabinovich, K. Friston, & P. Varona (Eds.), Principles of brain dynamics (pp. 293–312). Cambridge: MIT Press.

  • Gladziejewski, P. (2016). Predictive coding and representationalism. Synthese, 193, 559–582.

  • Gladziejewski, P., & Milkowski, M. (2017). Structural representations: Causally relevant and different from detectors. Biology and Philosophy, 32, 337–355.

  • Godfrey-Smith, P. (2012). Review of Signals: Evolution, learning, and information, by Brian Skyrms. Mind, 120(480), 1288–1297.

  • Godfrey-Smith, P. (2013). Signals, icons, and beliefs. In D. Ryder, J. Kingsbury, & K. Williford (Eds.), Millikan and her critics (pp. 41–58). Malden: Wiley-Blackwell.

  • Godfrey-Smith, P. (2014). Sender–receiver systems within and between organisms. Philosophy of Science, 81, 866–878.

  • Grice, H. P. (1957). Meaning. The Philosophical Review, 66(3), 377–388.

  • Grice, H. P. (1975). Logic and conversation. In P. Cole & J. L. Morgan (Eds.), Syntax and semantics (Vol. 3, pp. 41–58). New York: Academic Press.

  • Harms, W. (2004). Primitive content, translation, and the evolution of meaning in animal communication. In D. K. Oller & U. Griebel (Eds.), Evolution of communication systems: A comparative approach (pp. 31–48). Cambridge: MIT Press.

  • Hohwy, J. (2014). The predictive mind. Oxford: Oxford University Press.

  • Hutto, D. (2018). Getting into predictive processing’s great guessing game: Bootstrap heaven or hell? Synthese, 195, 2445–2458.

  • Hutto, D., & Myin, E. (2017). Evolving enactivism: Basic minds meet content. Cambridge: MIT Press.

  • Isaac, A. (2019). The semantics latent in Shannon information. The British Journal for the Philosophy of Science, 70(1), 103–125.

  • Jacob, P. (2019). Intentionality. In E. Zalta (Ed.), The Stanford encyclopedia of philosophy (Winter 2019 ed.).

  • Kiefer, A., & Hohwy, J. (2018). Content and misrepresentation in hierarchical generative models. Synthese, 195, 2387–2415.

  • Kiefer, A., & Hohwy, J. (2019). Representation in the prediction error minimization framework. In S. Robins, J. Symons, & P. Calvo (Eds.), Routledge companion to the philosophy of psychology (2nd ed., pp. 384–409). London: Routledge.

  • Lean, O. (2014). Getting the most out of Shannon information. Biology and Philosophy, 29, 395–413.

  • Lewis, D. (1969). Convention. Cambridge: Harvard University Press.

  • Lombardi, O. (2005). Dretske, Shannon’s theory, and the interpretation of information. Synthese, 144(1), 23–39.

  • Lombardi, O., Holik, F., & Vanni, L. (2016). What is Shannon information? Synthese, 193, 1983–2012.

  • MacKay, D. (1969). Information, mechanism, and meaning. Cambridge: MIT Press.

  • Martinez, M. (2018). Representations are rate-distortion sweet spots. Proceedings of the Philosophy of Science Association (PSA2018). Pre-print.

  • Martinez, M., & Klein, C. (2016). Pain signals are predominantly imperative. Biology and Philosophy, 31, 283–298.

  • McGrath, M., & Frank, D. (2018). Propositions. In E. Zalta (Ed.), The Stanford encyclopedia of philosophy (Spring 2018 ed.).

  • Meinong, A. (1904). Über Gegenstandstheorie (trans: The theory of objects). In R. Chisholm (Ed.), Realism and the background of phenomenology (1960). Glencoe: The Free Press.

  • Millikan, R. (1984). Language, thought, and other biological categories. Cambridge: MIT Press.

  • Millikan, R. (1989). Biosemantics. Journal of Philosophy, 86, 281–297.

  • Millikan, R. (1995). Pushmi-pullyu representations. Philosophical Perspectives, 9, 185–200.

  • Orlandi, N. (2014). The innocent eye: Why vision is not a cognitive process. Oxford: Oxford University Press.

  • Orlandi, N. (2018). Predictive perceptual systems. Synthese, 195, 2367–2386.

  • Piccinini, G., & Scarantino, A. (2011). Information processing, computation, and cognition. Journal of Biological Physics, 37, 1–38.

  • Pitt, D. (2020). Mental representation. In E. Zalta (Ed.), The Stanford encyclopedia of philosophy (Spring 2020 ed.).

  • Prosser, A., Friston, K., Bakker, N., & Parr, T. (2018). A Bayesian model of psychopathy: A model of lacks remorse and self-aggrandizing. Computational Psychiatry, 2, 92–140.

  • Ramsey, W. (2007). Representation reconsidered. Cambridge: Cambridge University Press.

  • Rao, R., & Ballard, D. (1999). Predictive coding in the visual cortex. Nature Neuroscience, 2, 79–87.

  • Rescorla, M. (2017). Review of Andy Clark, Surfing uncertainty: Prediction, action, and the embodied mind. Notre Dame Philosophical Reviews. https://ndpr.nd.edu/news/surfing-uncertainty-prediction-action-and-the-embodied-mind/.

  • Sayre, K. (1983). Some untoward consequences of Dretske’s “causal theory” of information. Behavioral and Brain Sciences, 6, 78–79.

  • Scarantino, A. (2015). Information as a probabilistic difference maker. Australasian Journal of Philosophy, 93(3), 419–443.

  • Scarantino, A., & Piccinini, G. (2010). Information without truth. Metaphilosophy, 41(3), 313–330.

  • Schiffer, S. (1981). Truth and the theory of content. In H. Parret & J. Bouveresse (Eds.), Meaning and understanding (pp. 204–224). Berlin: de Gruyter.

  • Sengupta, B., Stemmler, M. B., & Friston, K. J. (2013). Information and efficiency in a nervous system: A synthesis. PLoS Computational Biology, 9(7), e1003157.

  • Shannon, C. (1948). A mathematical theory of communication. The Bell System Technical Journal, 27, 379–423.

  • Shannon, C., & Weaver, W. (1949). The mathematical theory of communication. Urbana: University of Illinois Press.

  • Shea, N. (2007). Consumers need information: Supplementing teleosemantics with an input condition. Philosophy and Phenomenological Research, 75(2), 404–435.

  • Shea, N. (2012). Reward prediction errors are meta-representational. Noûs, 48(2), 314–341.

  • Shea, N. (2014). Neural signaling of probabilistic vectors. Philosophy of Science, 81, 902–913.

  • Shea, N., Godfrey-Smith, P., & Cao, R. (2017). Content in simple signalling systems. British Journal for the Philosophy of Science. https://doi.org/10.1093/bjps/axw036.

  • Skyrms, B. (2010a). Signals: Evolution, learning, and information. Oxford: Oxford University Press.

  • Skyrms, B. (2010b). The flow of information in signaling games. Philosophical Studies, 147, 155–165.

  • Soni, J., & Goodman, R. (2017). A mind at play: How Claude Shannon invented the information age. New York: Simon and Schuster.

  • Spratling, M. (2016). Predictive coding as a model of cognition. Cognitive Processing, 17(3), 279–305.

  • Spratling, M. (2017). A review of predictive coding algorithms. Brain and Cognition, 112, 92–97.

  • Sprevak, M. (2019). Two kinds of information processing in cognition. Review of Philosophy and Psychology. https://doi.org/10.1007/s13164-019-00438-9.

  • Stegmann, U. (2015). Prospects for probabilistic theories of natural information. Erkenntnis, 80, 869–893.

  • Stich, S. (1983). From folk psychology to cognitive science: The case against belief. Cambridge: MIT Press.

  • Usher, M. (2001). A statistical-referential theory of content: Using information theory to account for misrepresentation. Mind and Language, 16(3), 311–334.

  • Weaver, W. (1949). Recent contributions to the mathematical theory of communication. In C. Shannon & W. Weaver, The mathematical theory of communication (pp. 95–117). Urbana: University of Illinois Press.

  • Wiech, K. (2016). Deconstructing the sensation of pain: The influence of cognitive processes on pain perception. Science, 354(6312), 584–587.

  • Williams, D. (2017). Predictive processing and the representation wars. Minds and Machines. https://doi.org/10.1007/s11023-017-9441-6.

  • Yablo, S. (2014). Aboutness. Princeton: Princeton University Press.

Acknowledgements

I am very grateful to the following for comments and questions that were invaluable for helping me develop this paper: Alistair Isaac, Mark Sprevak, Robert Rupert, and Toby Mordkoff (early stage); Beate Krickel, Stephen Mann, and the entire lively audience at the conference “Mental Representations in a Mechanical World: The state of the debate in philosophy of mind and philosophy of science” (Nov. 28–29, 2019), at Ruhr-Universität Bochum, and to Albert Newen for his support for my research visit at Bochum (middle stage); and three anonymous referees for this journal (final stage).

Author information

Correspondence to Carrie Figdor.


Cite this article

Figdor, C. Shannon + Friston = Content: Intentionality in predictive signaling systems. Synthese 199, 2793–2816 (2021). https://doi.org/10.1007/s11229-020-02912-9