
Natural probabilistic information

Published in Synthese.

Abstract

Natural information is the information carried by natural signs; smoke, for example, is thought to carry natural information about fire. A number of influential philosophers have argued that natural information can also be utilized in a theory of mental content. The most widely discussed account of natural information (due to Dretske, in Knowledge and the flow of information, 1981/1999) holds that it results from an extremely strong relation between sign and signified (i.e. a conditional probability of 1). Critics have responded that it is doubtful that there are many strong relations of this sort in the natural world, owing to variability between signs and the things they signify. In light of this observation, a promising suggestion is that much of the interesting natural information carried by natural signs is really information with a probabilistic content. However, Dretske’s theory cannot account for this information because it would require implausible second order objective probabilities. Given the most plausible understanding of the probabilities involved here, I argue that it is only sequences of traditional natural signs (not individual signs) that carry this probabilistic information. Several implications of this idea will be explored.


Notes

  1. A similar notion can be found in Grice (1957), though he called it ‘natural meaning’. Churchland and Churchland (1983) may have been the first to use the expression ‘natural information’.

  2. I say ‘partly’ because none of the above-mentioned philosophers hold that mental content is strictly equivalent to informational content, primarily because mental representations are thought to be capable of misrepresenting while natural information cannot be false. I discuss this further in the next section of the paper.

  3. By a naturalized epistemology, I simply mean an epistemology that is sensitive to how organisms actually develop and acquire knowledge and other knowledge-like states. This needn’t involve all of Quine’s commitments for a naturalized epistemology.

  4. It should be noted that Dretske is here discussing what he calls the ‘nuclear’ sense of information that is operative in a variety of different contexts.

  5. I borrow the term ‘veridicality thesis’ from Scarantino and Piccinini (2010) though they formulate their own veridicality thesis somewhat differently. Floridi (2011) also develops a veridicality thesis that is closer to my own.

  6. My formulation of the veridicality thesis would even be accepted by Scarantino and Piccinini (2010) (see p. 318), who are otherwise critical of the idea that information requires truth.

  7. Again, Dretske is here discussing what he calls the nuclear sense of information.

  8. I am thankful to the reviewer for prompting me to address this point.

  9. For further arguments in favor of the veridicality thesis see Floridi (2011).

  10. An anonymous reviewer suggested that the questions that I am pursuing here are similar to what is called the ‘symbol grounding problem’ in artificial intelligence (see Floridi 2011, chaps. 6 and 7). The latter roughly refers to the challenge of showing how artificial agents could have meaningful symbols where the meaning is developed autonomously and not simply imputed to them by other intentional beings such as their human designers. It is possible that there is some overlap between this research program and my present concerns if the symbols in artificial agents constitute natural signs. However, most of the natural signs that I am concerned with exist independently of the presence or actions of any agents. As mentioned, these signs are thought to possess a meaning of sorts even if there are no such agents. The symbol grounding problem only focuses on the meaning of symbols within agents and for that reason the aims of the two projects diverge. ‘Meaning’ is also ambiguous and it isn’t clear to me that those who attempt to solve the symbol grounding problem are interested in the same sort of meaning as is possessed by natural signs. For instance, they might hope to account for the ability of the symbols to misrepresent. For these reasons, I think it best to treat the two projects as distinct.

  11. The preceding paragraphs are a much-compressed summary of Dretske’s arguments in chapter 2 of Knowledge and the Flow of Information (see Floridi 2013 and Piccinini and Scarantino 2011 for similar arguments).

  12. At one point, Dretske (1988, p. 58) maintains that natural information is not subjective though it is relative to background knowledge. Even so, the latter is still problematic, given his use of natural information in his theory of mental content.

  13. Cohen and Meskin (2006, p. 339) also worry that Dretske’s insistence on a probability of one is ‘too demanding’. Godfrey-Smith (1992, p. 288) says that, ‘The 1981 [Dretske] notion of information seems so demanding that one wonders whether messages with interesting contents are ever really transmitted.’ Also see Suppes (1983) for a similar criticism.

  14. Loewer (1983) makes a similar point.

  15. I will later note an exception to this: signals carry natural information about their own presence, and that information does not have this probabilistic form.

  16. Skyrms (2010) appears to make a similar suggestion that the informational content of signals sent in particular types of signaling games is probabilistic. Piccinini and Scarantino (2011) also emphasize that most of the natural information that signals carry in real world environments is to the effect that o is probably G. I borrow the expression ‘probabilistic natural information’ from Scarantino and Piccinini (2010).

  17. I postpone a discussion of whether probabilistic information, in this sense, is intentionality independent until the next section.

  18. Dretske expresses some skepticism about probabilistic information (see Dretske 1981/1999, p. 69) so he might not have seen this as a legitimate option, in any event.

  19. The problem here is more general and arises whenever the reference classes are identical for both the first and second order probabilities with a finite frequency interpretation.
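
The degeneracy described in this note can be made concrete with a toy calculation (a sketch; the reference class and its frequencies are invented for illustration):

```python
# Toy finite reference class of smoke events; each entry records whether fire was present.
smoke_events = [True, True, True, False, True, False, True, True, False, True]

# First-order probability under a finite frequency interpretation:
# the relative frequency of fire among smoke events.
p_fire_given_smoke = sum(smoke_events) / len(smoke_events)  # 7/10 = 0.7

# A "second-order" probability computed over the SAME reference class can only ask:
# in what fraction of that class does the first-order frequency take this value?
# Since the frequency is one fixed number, the answer degenerates to 1 or 0.
p_second_order = 1.0 if p_fire_given_smoke == 0.7 else 0.0
print(p_fire_given_smoke, p_second_order)
```

The point of the sketch is that when the first- and second-order probabilities share a single finite reference class, the second-order value is trivial, which is the generalization of the problem noted above.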

  20. Gillies also discusses single case propensity theories. The points I make below apply to both long-run and single case propensity theories so for the sake of brevity I only discuss the former here.

  21. If the series of repetitions is long but finite then the relative frequencies are approximately equal to the probabilities. If the series of repetitions were infinite then the limiting relative frequencies would be equal to the probabilities.
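
The approximation claimed in this note can be illustrated with a small simulation (a sketch; the underlying probability of 0.7 and the Bernoulli model of the sign-signified relation are invented for illustration):

```python
import random

random.seed(0)

TRUE_P = 0.7  # hypothetical objective probability that the sign is accompanied by the signified

def relative_frequency(n_trials):
    """Relative frequency of 'signified given sign' over a finite series of repetitions."""
    hits = sum(1 for _ in range(n_trials) if random.random() < TRUE_P)
    return hits / n_trials

# A long but finite series yields a frequency approximately equal to the probability;
# only an infinite series would make the limiting relative frequency exactly equal to it.
short_run = relative_frequency(10)
long_run = relative_frequency(100_000)
print(abs(short_run - TRUE_P), abs(long_run - TRUE_P))
```

For the long run the deviation from 0.7 is small, while a short run can stray considerably, which is why only approximate equality can be claimed for finite series.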

  22. Paul Humphreys originally pointed out the inability of propensity interpretations to handle inverse probabilities. Loewer (1983) and Cohen and Meskin (2006) discuss how this creates difficulties for Dretske’s theory if he were to interpret conditional probabilities as propensities.

  23. Strictly speaking, this interpretation would maintain that smoke along with a set of repeatable conditions have a causal propensity of a certain strength to produce fire but that obviously gets the causal order backwards.

  24. Gillies (2000) ultimately attempts to explain inverse probabilities under the propensity interpretation by severing the connection between propensity and causality. This makes it much less clear what the propensities at issue are supposed to be.

  25. Of course, it is also possible that the proper interpretation of the probabilities involved here has not yet been articulated. I limit my discussion to the available major contenders.

  26. This is not to suggest that every organism that confronts this sequence will recognize the approximate probability values. Instead, I am simply maintaining that the information is available in the signal and an organism could learn about it from the signal.

  27. Admittedly, talk of ‘approximate probabilities’ is a bit vague but I don’t see how this vagueness can be avoided.

  28. I am thankful to an anonymous reviewer for bringing this objection to my attention.

  29. I want to thank an anonymous reviewer for reminding me of this point.

  30. Some representative papers include Knill and Pouget (2004), Jazayeri and Movshon (2006) and Ma et al. (2006).

References

  • Atkinson, D., & Peijnenburg, J. (2013). A consistent set of infinite-order probabilities. International Journal of Approximate Reasoning, 54(9), 1351–1360.

  • Churchland, P. M., & Churchland, P. S. (1983). Content: Semantic and information-theoretic. The Behavioral and Brain Sciences, 6, 67–68.

  • Cohen, J., & Meskin, A. (2006). An objective counterfactual theory of information. Australasian Journal of Philosophy, 84(3), 333–352.

  • Dretske, F. (1981/1999). Knowledge and the flow of information. Stanford, CA: CSLI Publications.

  • Dretske, F. (1983a). Precis of Knowledge and the flow of information. The Behavioral and Brain Sciences, 6, 55–63.

  • Dretske, F. (1983b). Why information? The Behavioral and Brain Sciences, 6, 82–89.

  • Dretske, F. (1986). Misrepresentation. In R. Bogdan (Ed.), Belief: Form, content and function (pp. 17–36). Oxford: Clarendon Press.

  • Dretske, F. (1988). Explaining behavior: Reasons in a world of causes. Cambridge, MA: MIT Press.

  • Dretske, F. (1990). Reply to reviewers. Philosophy and Phenomenological Research, 50(4), 819–839.

  • Eliasmith, C. (2005). A new perspective on representational problems. Journal of Cognitive Science, 6, 97–123.

  • Floridi, L. (2011). The philosophy of information. Oxford: Oxford University Press.

  • Floridi, L. (2013). Semantic conceptions of information. In The Stanford encyclopedia of philosophy. http://plato.stanford.edu/entries/information-semantic/.

  • Fodor, J. (1990a). A theory of content and other essays. Cambridge, MA: MIT Press.

  • Fodor, J. (1990b). Information and representation. In P. Hanson (Ed.), Information, language and cognition. Vancouver: University of British Columbia Press.

  • Gillies, D. (2000). Varieties of propensity. The British Journal for the Philosophy of Science, 51, 807–835.

  • Godfrey-Smith, P. (1992). Indication and adaptation. Synthese, 92(2), 283–312.

  • Grice, P. (1957). Meaning. The Philosophical Review, 66(3), 377–388.

  • Hájek, A. (2007). The reference class problem is your problem too. Synthese, 156, 563–585.

  • Hájek, A. (2009). Fifteen arguments against hypothetical frequentism. Erkenntnis, 70, 211–235.

  • Hájek, A. (2011). Interpretations of probability. In The Stanford encyclopedia of philosophy. http://plato.stanford.edu/entries/probability-interpret/.

  • Jazayeri, M., & Movshon, J. A. (2006). Optimal representations of sensory information by neural populations. Nature Neuroscience, 9(5), 690–696.

  • Knill, D., & Pouget, A. (2004). The Bayesian brain: The role of uncertainty in neural coding and computation. Trends in Neurosciences, 27(12), 712–719.

  • Kyburg, H. (1988). Higher order probabilities and intervals. International Journal of Approximate Reasoning, 2, 195–209.

  • Loewer, B. (1983). Information and belief. The Behavioral and Brain Sciences, 6, 75–76.

  • Ma, W. J., Beck, J., Latham, P., & Pouget, A. (2006). Bayesian inference with probabilistic population codes. Nature Neuroscience, 9(11), 1432–1438.

  • Millikan, R. (2001). What has natural information to do with intentional representation? In D. Walsh (Ed.), Naturalism, evolution and mind (pp. 105–125). Cambridge: Cambridge University Press.

  • Millikan, R. (2007). An input condition for teleosemantics? Reply to Shea (and Godfrey-Smith). Philosophy and Phenomenological Research, 75(2), 436–455.

  • Neander, K. (2013). Toward an informational teleosemantics. In J. Kingsbury, D. Ryder, & K. Williford (Eds.), Millikan and her critics. London: Blackwell.

  • Piccinini, G., & Scarantino, A. (2011). Information processing, computation and cognition. Journal of Biological Physics, 37, 1–38.

  • Rieke, F., Warland, D., de Ruyter van Steveninck, R., & Bialek, W. (1997). Spikes: Exploring the neural code. Cambridge, MA: MIT Press.

  • Scarantino, A., & Piccinini, G. (2010). Information without truth. Metaphilosophy, 41(3), 313–330.

  • Shadlen, M. N., & Newsome, W. T. (1998). The variable discharge of cortical neurons: Implications for connectivity, computation, and information coding. The Journal of Neuroscience, 18(10), 3870–3896.

  • Shannon, C. (1948). A mathematical theory of communication. Bell System Technical Journal, 27, 379–423, 623–656.

  • Skyrms, B. (2010). Signals: Evolution, learning and information. New York: Oxford University Press.

  • Suppes, P. (1983). Probability and information. The Behavioral and Brain Sciences, 6, 81.

  • Usher, M. (2001). A statistical referential theory of content: Using information theory to account for misrepresentation. Mind and Language, 16(3), 311–334.

Acknowledgments

I have benefitted greatly from discussions about information over many years with Karen Neander and Fred Dretske. Fred’s influence on the philosophy of information cannot be overstated and his groundbreaking work in this field remains the starting point for all philosophical discussions. I would also like to thank several anonymous reviewers and an audience at Virginia Tech for their thought provoking comments and discussion.

Corresponding author

Correspondence to Daniel M. Kraemer.

Cite this article

Kraemer, D.M. Natural probabilistic information. Synthese 192, 2901–2919 (2015). https://doi.org/10.1007/s11229-015-0692-6

