Content and cluster analysis: Assessing representational similarity in neural systems

Philosophical Psychology 13 (1):47-76 (2000)

Abstract

If connectionism is to be an adequate theory of mind, we must have a theory of representation for neural networks that allows for individual differences in weighting and architecture while preserving sameness, or at least similarity, of content. In this paper we propose a procedure for measuring sameness of content of neural representations. We argue that the correct way to compare neural representations is through analysis of the distances between neural activations, and we present a method for doing so. We then use the technique to demonstrate empirically that different artificial neural networks trained by backpropagation on the same categorization task, even with different representational encodings of the input patterns and different numbers of hidden units, reach states in which representations at the hidden units are similar. We discuss how this work provides a rebuttal to Fodor and Lepore's critique of Paul Churchland's state space semantics.
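
The comparison procedure described in the abstract, measuring similarity of content by the pattern of distances among hidden-unit activations rather than by the activations themselves, can be sketched in a few lines of code. The Python snippet below is not the authors' implementation; it is a minimal, hypothetical illustration using NumPy, in which each network's matrix of pairwise Euclidean distances over the same set of input patterns is computed and the two matrices are then correlated.

import numpy as np

def pairwise_distances(activations):
    # activations: (n_inputs, n_hidden) array, one row of hidden-unit
    # activations per input pattern presented to the network.
    diffs = activations[:, None, :] - activations[None, :, :]
    return np.sqrt((diffs ** 2).sum(axis=-1))

def distance_matrix_similarity(acts_a, acts_b):
    # Correlate the two networks' distance matrices over the same inputs.
    # Only the number of input patterns must match; the hidden layers
    # may differ in size.
    d_a = pairwise_distances(acts_a)
    d_b = pairwise_distances(acts_b)
    iu = np.triu_indices_from(d_a, k=1)  # unique pairs only
    return np.corrcoef(d_a[iu], d_b[iu])[0, 1]

# Hypothetical example: two trained networks probed with the same
# 50 input patterns, one with 8 hidden units and one with 12.
rng = np.random.default_rng(0)
net_a_hidden = rng.normal(size=(50, 8))
net_b_hidden = rng.normal(size=(50, 12))
print(distance_matrix_similarity(net_a_hidden, net_b_hidden))

Because only relative distances enter the comparison, networks with different architectures and input encodings can be compared so long as they are probed with the same input patterns, which is the point at issue in the paper's reply to Fodor and Lepore.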

Links

PhilArchive

Citations of this work

Connectionism. James Garson & Cameron Buckner - 2019 - Stanford Encyclopedia of Philosophy.
Distributed traces and the causal theory of constructive memory. John Sutton & Gerard O'Brien - 2023 - In Current Controversies in the Philosophy of Memory. Routledge. pp. 82-104. Edited by Andre Sant'Anna, Christopher McCarroll & Kourken Michaelian.
Content and its vehicles in connectionist systems. Nicholas Shea - 2007 - Mind and Language 22 (3):246–269.
