Functionalism, integrity, and digital consciousness

Abstract

The prospect of consciousness in artificial systems is closely tied to the viability of functionalism about consciousness. Even if consciousness arises from the abstract functional relationships between the parts of a system, it does not follow that any digital system that implements the right functional organization would be conscious. Functionalism requires constraints on what it takes to properly implement an organization. Existing proposals for constraints on implementation relate to the integrity of the parts and states of the realizers of roles in a functional organization. This paper presents and motivates three novel integrity constraints on proper implementation not satisfied by current neural network models. It is proposed that for a system to be conscious, there must be a straightforward relationship between the material entities that compose the system and the realizers of functional roles, that the realizers of the functional roles must play their roles due to internal causal powers, and that they must continue to exist over time.


Notes

  1. For a survey of varieties of functionalism, see Maley and Piccinini (2013).

  2. Nothing important here depends on a precise interpretation of ‘turns’. My central claims are consistent with both role and realizer functionalism and even with versions of dualism in which mental properties merely coincide with functional organizations (e.g. Chalmers, 1997).

  3. One might here distinguish a form of functionalism in which what matters is the computations a system performs from forms of functionalism in which what matters is which algorithms are implemented. I take it that there may be ways of implementing algorithms without performing computations; shaking a bag of potato chips may implement a parallel sorting algorithm that sorts chips by size from the top to the bottom of the bag without computing anything. (A toy sketch of this idea follows these notes.) Most computationalists seem more interested in algorithmic implementation than in computation per se. What this difference amounts to depends on our concept of computation: for instance, computations might be taken to involve operations on vehicles (Piccinini, 2015) while algorithms need not. The features that allow a machine to count as computing a certain algorithm in the sense that is relevant for understanding the practices of computer scientists may not be the same as the features that allow a system to count as implementing the functional role of consciousness. It isn't completely obvious that human brains perform computations (though see Piccinini, 2020), even if they clearly implement algorithms.

  4. Proponents of the Integrated Information Theory have endorsed a sort of canonical carving approach (Oizumi et al., 2014). IIT treats information (in the sense of how the state of the system at one time constrains past and future states) as central to consciousness; the canonical carving maximizes the amount of information in the system.

  5. Chalmers (1996, p. 328) anticipates a system like this and suggests that it would be impossibly complex. He is wary, however, of dismissing the problem on the grounds that it is not physically viable: the mirror grid's conceptual possibility shows that some liberal implementations don't live up to the spirit of functionalism.

  6. Chalmers uses this to explain why the argument doesn't support behaviorism: there may be no gradual transitions to behaviorally identical but organizationally alien minds. However, this constraint also limits its application to artificial minds. The argument can't show that artificial structures that employ different fundamental subdivisions would be conscious. Contemporary computers have a completely different architecture, one that does not consist of individual circuits performing specific calculations. You can't have a system that arbitrarily mingles GPU-optimized tensor arithmetic with neuronal signaling. It might be possible to replace individual neurons with separate computers, but this would be a different architecture from replacing all neurons with a single computer. The work done by individual neurons could be handed off to a single computer, but the way that computer would juggle the separate pieces of work differs from the way computers currently handle whole networks. If the system has to be wholly reconfigured as neurons are substituted, the piecemeal logic of the argument collapses. There is no way to get from a brain to a processor through gradual replacement.

  7. The exact details depend on the choice of processor architecture, other hardware, operating system, programming language, and compiler or interpreter. The claims here are based on a standard computer with an x86 processor (Ledin & Farley, 2022) running Python through CPython (Dos Reis, 2021; Shaw, 2021), though they are generic enough to hold for standard processor architectures and programming languages. (A small illustration of these layers follows these notes.)

  8. Consider a related puzzle: when encrypted, data loses any practically detectable internal structure. Homomorphic encryption schemes allow functions to be computed on encrypted data without decryption (Acar et al., 2018). Suppose we encrypted data representing a single human brain and introduced a function that updated that data in response to stimuli in the same fashion as a normal brain. Would the encrypted system still be conscious? Traditional expositions of functionalism are compatible with both answers. The material complexity constraint suggests that it would not, even if we accepted that a system that runs the corresponding functions on unencrypted data would be. (A toy illustration of homomorphic computation follows these notes.)
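
Illustration for note 3 (an editorial sketch, not code from the paper): purely local settling dynamics in a shaken bag can realize odd-even transposition sort, a standard parallel sorting algorithm, without anything we would ordinarily call computing. Indices run from the top of the bag to the bottom; the chip sizes are arbitrary illustrative values.

```python
import random

# One "shake": every adjacent pair at alternating offsets swaps whenever a
# smaller chip sits above a larger one -- local dynamics, no symbols involved.
def shake(chips, offset):
    for i in range(offset, len(chips) - 1, 2):
        if chips[i] < chips[i + 1]:
            chips[i], chips[i + 1] = chips[i + 1], chips[i]

chips = [random.uniform(1.0, 5.0) for _ in range(10)]  # sizes, top first
for step in range(len(chips)):       # n alternating phases always suffice
    shake(chips, step % 2)

# The bag has realized odd-even transposition sort: largest chips on top.
assert chips == sorted(chips, reverse=True)
```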
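
Illustration for note 7 (a minimal sketch; exact output varies across CPython versions): CPython compiles Python source to bytecode, and its interpreter loop then executes each bytecode operation as many x86 machine instructions.

```python
import dis

# A single conceptual step in Python...
def add(a, b):
    return a + b

# ...decomposes into several bytecode operations, each of which the CPython
# interpreter executes as many machine instructions on the processor.
dis.dis(add)
# On CPython 3.11 this prints operations such as:
#   LOAD_FAST a, LOAD_FAST b, BINARY_OP 0 (+), RETURN_VALUE
```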
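
Illustration for note 8 (a toy Paillier-style scheme with insecurely small parameters; an editorial sketch rather than one of the schemes surveyed by Acar et al., 2018): a sum of plaintexts is computed while the data remains encrypted throughout.

```python
import math, random

# Toy additively homomorphic (Paillier-style) encryption. These primes are
# far too small for real security; the point is only the scheme's structure.
p, q = 1117, 1103
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                      # valid because g = n + 1

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:            # r must be coprime to n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Multiplying ciphertexts adds the underlying plaintexts: a function computed
# on encrypted data, without decryption at any intermediate step.
c1, c2 = encrypt(20), encrypt(22)
assert decrypt((c1 * c2) % n2) == 42
```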

References

  • Acar, A., Aksu, H., Uluagac, A. S., & Conti, M. (2018). A survey on homomorphic encryption schemes: Theory and implementation. ACM Computing Surveys, 51(4), 1–35.

  • Baars, B. J. (1993). A cognitive theory of consciousness. Cambridge University Press.

  • Bubeck, S., Chandrasekaran, V., Eldan, R., Gehrke, J., Horvitz, E., Kamar, E., Lee, P., Lee, Y. T., Li, Y., Lundberg, S., Nori, H., Palangi, H., Ribeiro, M. T., & Zhang, Y. (2023). Sparks of artificial general intelligence: Early experiments with GPT-4. Preprint retrieved from http://arxiv.org/abs/2303.12712

  • Butlin, P., Long, R., Elmoznino, E., Bengio, Y., Birch, J., Constant, A., Deane, G., Fleming, S.M., Frith, C., Ji, X. & Kanai, R. (2023). Consciousness in artificial intelligence: Insights from the science of consciousness. Preprint retrieved from http://arxiv.org/abs/2308.08708

  • Cao, R. (2022). Multiple realizability and the spirit of functionalism. Synthese, 200(6), 1–31.

  • Chalmers, D. J. (1996). Does a rock implement every finite-state automaton? Synthese, 108(3), 309–333.

  • Chalmers, D. J. (1997). The conscious mind. Oxford University Press.

  • Chalmers, D. J. (2023). Could a large language model be conscious? Preprint retrieved from http://arxiv.org/abs/2303.07103

  • Chrisley, R. L. (1994). Why everything doesn’t realize every computation. Minds and Machines, 4(4), 403–420.

  • Davies, M., Srinivasa, N., Lin, T., Chinya, G., Cao, Y., Choday, S. H., Dimou, G., Joshi, P., Imam, N., Jain, S., Liao, Y., Lin, C. K., Lines, A., Liu, R., Mathaikutty, D., McCoy, S., Paul, A., & Tse, J. (2018). Loihi: A neuromorphic manycore processor with on-chip learning. IEEE Micro, 38(1), 82–99.

  • Deco, G., Vidaurre, D., & Kringelbach, M. L. (2021). Revisiting the global workspace orchestrating the hierarchical organization of the human brain. Nature Human Behaviour, 5(4), 497–511.

  • Dos Reis, A. J. (2021). C and C++ under the hood. Reis (self-published).

  • Fodor, J. A. (1968). Psychological explanation: An introduction to the philosophy of psychology. Random House.

  • Godfrey-Smith, P. (2009). Triviality arguments against functionalism. Philosophical Studies, 145(2), 273–295.

  • Godfrey-Smith, P. (2016). Mind, matter, and metabolism. The Journal of Philosophy, 113(10), 481–506.

  • Ielmini, D., Wang, Z., & Liu, Y. (2021). Brain-inspired computing via memory device physics. APL Materials. https://doi.org/10.1063/5.0047641

  • Koch, C., & Tononi, G. (2008). Can machines be conscious? IEEE Spectrum, 45(6), 55–59.

  • Koch, C., & Tononi, G. (2017). Can we copy the brain? Can we quantify machine consciousness? IEEE Spectrum, 54(6), 64–69.

  • Ledin, J., & Farley, D. (2022). Modern computer architecture and organization. Packt.

  • Lewis, D. (1972). Psychophysical and theoretical identifications. Australasian Journal of Philosophy, 50(3), 249–258.

  • Lycan, W. (1981). Form, function, and feel. The Journal of Philosophy, 78(1), 24–50.

  • Maley, C. J., & Piccinini, G. (2013). Get the latest upgrade: Functionalism 6.3.1. Philosophia Scientiae, 17(2), 135–149.

  • Maley, C. J., & Piccinini, G. (2017). Of teleological functions for psychology and neuroscience. Explanation and integration in mind and brain science (pp. 236–256). Oxford University Press.

  • Merolla, P. A., Arthur, J. V., Alvarez-Icaza, R., Cassidy, A. S., Sawada, J., Akopyan, F., Jackson, B. L., Imam, N., Guo, C., Nakamura, Y., Brezzo, B., Vo, I., Esser, S. K., Appuswamy, R., Taba, B., Amir, A., Flickner, M. D., Risk, W. P., Manohar, R., & Modha, D. S. (2014). A million spiking-neuron integrated circuit with a scalable communication network and interface. Science, 345(6197), 668–673.

  • Oizumi, M., Albantakis, L., & Tononi, G. (2014). From the phenomenology to the mechanisms of consciousness: Integrated information theory 3.0. PLoS Computational Biology, 10(5), 1–25.

  • Piccinini, G. (2010). The mind as neural software? Understanding functionalism, computationalism, and computational functionalism. Philosophy and Phenomenological Research, 81(2), 269–311.

  • Piccinini, G. (2015). Physical computation: A mechanistic account. Oxford University Press.

  • Piccinini, G. (2020). Neurocognitive mechanisms: Explaining biological cognition. Oxford University Press.

  • Putnam, H. (1967). Psychological predicates. In W. H. Capitan & D. D. Merrill (Eds.), Art, mind, and religion (pp. 37–48). University of Pittsburgh Press.

  • Putnam, H. (1988). Representation and reality. MIT Press.

  • Schwitzgebel, E., & Garza, M. (2020). Designing AI with rights, consciousness, self-respect, and freedom. In S. M. Liao (Ed.), Ethics of artificial intelligence (pp. 459–479). Oxford University Press.

  • Shagrir, O. (2020). In defense of the semantic view of computation. Synthese, 197(9), 4083–4108.

  • Shaw, A. (2021). CPython internals. Real Python.

  • Shulman, C., & Bostrom, N. (2021). Sharing the world with digital minds. Rethinking Moral Status. https://doi.org/10.1093/oso/9780192894076.003.0018

  • Sprevak, M. (2018). Triviality arguments about computational implementation. In M. Sprevak & M. Colombo (Eds.), The Routledge handbook of the computational mind (pp. 175–191). Routledge.

  • Zhao, W. X., Zhou, K., Li, J., Tang, T., Wang, X., Hou, Y., Min, Y., Zhang, B., Zhang, J., Dong, Z. & Du, Y. (2023). A survey of large language models. Preprint retrieved from http://arxiv.org/abs/2303.18223

Acknowledgements

This paper benefited from discussions and comments by Brad Saad, Rob Long, Nick Bostrom, Steve Paterson, Patrick Butlin, George Deane, Carl Shulman, Susan Schneider, Samuel Duncan, David Sackris, Michael Gifford, James Lee, Andreas Mogensen, Heather Browning, Adam Bales, and Marcus Pivato.

Funding

This work was supported by a grant from EA Funds' Long-Term Future Fund.

Author information

Corresponding author

Correspondence to Derek Shiller.

Ethics declarations

Competing interests

The author has no competing interests to declare.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Shiller, D. Functionalism, integrity, and digital consciousness. Synthese 203, 47 (2024). https://doi.org/10.1007/s11229-023-04473-z

