Comments on “The Replication of the Hard Problem of Consciousness in AI and Bio-AI”

Published in Minds and Machines.

Abstract

In their joint paper “The Replication of the Hard Problem of Consciousness in AI and Bio-AI” (Boltuc and Boltuc 2008), Nicholas and Piotr Boltuc suggest that machines could be equipped with phenomenal consciousness, that is, subjective consciousness that satisfies Chalmers’s hard problem (we abbreviate the hard problem of consciousness as “H-consciousness”). The claim is that if we knew the inner workings of phenomenal consciousness and could understand its precise operation, we could instantiate such consciousness in a machine. This claim, called the extra-strong AI thesis, is important because, if true, it would demystify the privileged-access problem of first-person consciousness and cast it as an empirical problem of science rather than a fundamental question of philosophy. A core assumption of the extra-strong AI thesis is that no logical argument precludes the implementation of H-consciousness in an organic or inorganic machine, provided we understand its algorithm. Another way of framing this conclusion is that there is nothing special about H-consciousness as compared to any other process: just as we do not preclude a machine from implementing photosynthesis, we do not preclude a machine from implementing H-consciousness. While one may be more difficult in practice, both are problems of science and engineering, no longer philosophical questions. I propose that Boltuc’s conclusion, while plausible and convincing, comes at a very high price: the argument given for the conclusion does not exclude any conceivable process from machine implementation. In short, if we make some assumptions about the equivalence of a rough notion of algorithm and then tie this to human understanding, all logical preconditions vanish and the argument grants that any process can be implemented in a machine.
The purpose of this paper is to comment on the argument for this conclusion and to offer additional properties of H-consciousness that can make the conclusion falsifiable through scientific investigation rather than relying on the limits of human understanding.


Notes

  1. The term machine will be used exclusively here to mean any non-human organic or inorganic robot or entity. The term computer will not be used, as the author feels it should be reserved only for strictly Turing-equivalent systems.

  2. The kill command or system call is used to send signals to a process in a UNIX-based system.

  3. We may assume the most complex and powerful physics and chemistry equations conceivable.

  4. Substance here means some organic, inorganic or hybrid machine.

  5. We may even assume that H-consciousness grows on itself; in other words, it may be a learning algorithm that takes time to evolve to full H-consciousness, but even in this case it demands some starting-point representation.

  6. Non-Turing computable.

  7. Presumably not the Turing test, but some scientific test to determine whether the algorithm encapsulated in E, C has been satisfied.

  8. Turing-computable function.

  9. This is a general term from computer science referring to any process in an operating system that is in a running state, meaning it is executing in memory.

  10. We are ignoring special relativity here and assuming absolute time for the sake of simplicity. We also ignore partial states of H-consciousness and count only full consciousness at time t, not semi-conscious states; we are interested principally in when H-consciousness comes into full existence.

  11. This idea also matches well with hypercomputing, which does not characterize algorithms as necessarily halting or non-halting. Also, many processes in modern operating systems are not designed to halt, but are designed to be killed (or to die when the machine loses power).
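The process-signalling machinery invoked in notes 2, 9, and 11 can be sketched briefly. The following is a minimal Python illustration (an assumption of this commentary's editing, not part of the original paper), valid on a UNIX-like system where the external `sleep` utility is available: a child process that would otherwise run indefinitely is terminated by an explicit signal rather than by halting on its own.

```python
import os
import signal
import subprocess

# Spawn a long-running child process: a stand-in for a process that is
# "designed to be killed" rather than to halt on its own (note 11).
child = subprocess.Popen(["sleep", "60"])

# The kill(2) system call sends a signal to the process identified
# by its PID (note 2); SIGTERM requests termination.
os.kill(child.pid, signal.SIGTERM)

# Reap the child; a negative return code means "terminated by that signal".
child.wait()
print(child.returncode)  # -15 on Linux, i.e. -signal.SIGTERM
```

Python's `os.kill` is a thin wrapper over the POSIX `kill` system call, so the sketch mirrors what the shell's `kill` command does.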

References

  • Boltuc, N., & Boltuc, P. (2008). Replication of the hard problem of consciousness in AI and Bio-AI: An early conceptual framework.

  • Chalmers, D. J. (1990). Consciousness and cognition. Unpublished manuscript.

  • Chalmers, D. J. (1995). The puzzle of consciousness. Scientific American, 92–100.

  • Chalmers, D. J. (1997). Moving forward on the problem of consciousness. Journal of Consciousness Studies, 4(1), 3–46.


  • Dodig-Crnkovic, G. (2008). Semantics of information as interactive computation. Presented at the Fifth International Workshop on Philosophy and Informatics, Kaiserslautern, Germany, April 2008.

  • Franklin, S., Baars, B. J., & Ramamurthy, U. (2008). A phenomenally conscious robot? APA Newsletter on Philosophy and Computers.

  • Tanenbaum, A. S. (1994). Distributed operating systems. Englewood Cliffs, NJ: Prentice-Hall.



Author information

Correspondence to Blake H. Dournaee.



Cite this article

Dournaee, B.H. Comments on “The Replication of the Hard Problem of Consciousness in AI and Bio-AI”. Minds & Machines 20, 303–309 (2010). https://doi.org/10.1007/s11023-010-9188-9

