The human mind is constituted by inner, subjective, private, first-person conscious experiences that cannot be measured with physical devices or observed from an external, objective, public, third-person perspective. The qualitative, phenomenal nature of conscious experiences also cannot be communicated to others in the form of a message composed of classical bits of information. Because in a classical world everything physical is observable and communicable, it is a daunting task to explain how an empirically unobservable, incommunicable consciousness could have any physical substrates, such as neurons composed of biochemical molecules, water, and electrolytes. The challenges encountered by classical physics are exemplified by a number of thought experiments, including the inverted qualia argument, the private language argument, the beetle-in-the-box argument, and the knowledge argument. These thought experiments, however, do not imply that our consciousness is nonphysical or that our introspective conscious testimonies are untrustworthy. The principles of classical physics have been superseded by modern quantum physics, which contains two fundamentally different kinds of physical objects: unobservable quantum state vectors, which define what physically exists, and quantum operators (observables), which define what can physically be observed. Identifying consciousness with the unobservable quantum information contained in quantum physical brain states allows the application of quantum information theorems to resolve possible paradoxes created by the inner privacy of conscious experiences, and explains how the observable brain is constructed from accessible bits of classical information that are bounded by Holevo's theorem and extracted from the physically existing quantum brain upon measurement with physical devices.
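The Holevo bound invoked above can be illustrated with a small numerical sketch (not taken from the paper; the ensemble below is a standard textbook example): for any qubit ensemble, the classical information extractable by measurement is bounded by the Holevo quantity χ = S(ρ) − Σᵢ pᵢ S(ρᵢ), which never exceeds one bit per qubit.

```python
import numpy as np

def von_neumann_entropy(rho):
    """Entropy in bits: S(rho) = -sum_k lambda_k * log2(lambda_k)."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]  # drop zero eigenvalues (0*log 0 = 0)
    return float(-np.sum(evals * np.log2(evals)))

# Illustrative ensemble: two non-orthogonal qubit states, equal probabilities.
ket0 = np.array([1.0, 0.0])                  # |0>
ketp = np.array([1.0, 1.0]) / np.sqrt(2)     # |+>
states = [np.outer(ket0, ket0), np.outer(ketp, ketp)]
probs = [0.5, 0.5]

rho_avg = sum(p * r for p, r in zip(probs, states))
chi = von_neumann_entropy(rho_avg) - sum(
    p * von_neumann_entropy(r) for p, r in zip(probs, states))

# The pure-state entropies vanish, so chi = S(rho_avg), strictly below 1 bit.
print(f"Holevo bound chi = {chi:.4f} bits")
```

Any measurement on this ensemble can therefore extract strictly less than one classical bit per transmitted qubit, even though a qubit state vector has continuously many parameters.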
This book addresses the fascinating cross-disciplinary field of quantum information theory applied to the study of brain function. It offers a self-study guide for probing the problems of consciousness, including a concise but rigorous introduction to classical and quantum information theory, theoretical neuroscience, and philosophy of mind. It addresses long-standing problems related to consciousness within the framework of modern theoretical physics in a comprehensible manner that elucidates the nature of the mind-body relationship. The reader also gains an overview of methods for constructing and testing quantum informational theories of consciousness.
Twenty-five years ago, Sir John Carew Eccles, together with Friedrich Beck, proposed a quantum mechanical model of neurotransmitter release at synapses in the human cerebral cortex. The model endorsed a causal influence of human consciousness upon the functioning of synapses in the brain through quantum tunneling of unidentified quasiparticles that trigger the exocytosis of synaptic vesicles, thereby initiating the transmission of information from the presynaptic toward the postsynaptic neuron. Here, we provide a molecular upgrade of the Beck and Eccles model by identifying the quantum quasiparticles as Davydov solitons that twist the protein α-helices and trigger exocytosis of synaptic vesicles through helical zipping of the SNARE protein complex. We also calculate the observable probabilities for exocytosis based on the mass of this quasiparticle and the characteristics of the potential energy barrier through which tunneling is necessary. We further review the current experimental evidence in support of this bio-molecular model.
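The kind of tunneling-probability estimate described above can be sketched with the standard WKB approximation for a rectangular barrier, T ≈ exp(−2a·√(2m(V−E))/ħ). The numerical values below (quasiparticle mass, barrier height, barrier width) are purely illustrative placeholders and are not the figures computed in the paper:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
EV = 1.602176634e-19    # electron-volt, J

def tunneling_probability(mass_kg, barrier_eV, energy_eV, width_m):
    """WKB estimate for a rectangular barrier: T ~ exp(-2 * kappa * a),
    with kappa = sqrt(2*m*(V - E)) / hbar."""
    kappa = math.sqrt(2.0 * mass_kg * (barrier_eV - energy_eV) * EV) / HBAR
    return math.exp(-2.0 * kappa * width_m)

# Hypothetical numbers: a quasiparticle of a few electron masses tunneling
# through a sub-nanometer barrier (illustration only).
m_e = 9.1093837015e-31  # electron mass, kg
T = tunneling_probability(5 * m_e, barrier_eV=0.2, energy_eV=0.1, width_m=3e-10)
print(f"T ≈ {T:.3e}")
```

The exponential sensitivity of T to the quasiparticle mass and barrier width is what makes the identification of the tunneling quasiparticle physically consequential.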
The essential biological processes that sustain life are catalyzed by protein nano-engines, which maintain living systems in far-from-equilibrium ordered states. To investigate energetic processes in proteins, we have analyzed the system of generalized Davydov equations that govern the quantum dynamics of multiple amide I exciton quanta propagating along the hydrogen-bonded peptide groups in α-helices. Computational simulations have confirmed the generation of moving Davydov solitons by applied pulses of amide I energy for protein α-helices of varying length. The stability and mobility of these solitons depended on the uniformity of dipole-dipole coupling between amide I oscillators and on the isotropy of the exciton-phonon interaction. Davydov solitons were also able to tunnel quantum mechanically through massive barriers, or to interfere quantum mechanically at collision sites. The results presented here support a nontrivial role of quantum effects in biological systems that lies beyond the mechanistic support of covalent bonds as binding agents of macromolecular structures. Quantum tunneling and interference of Davydov solitons provide catalytically active macromolecular protein complexes with a physical mechanism for highly efficient transport, delivery, and utilization of free energy. The evolutionary mandate of biological order supports the existence of such genuine quantum phenomena, which may indeed demarcate the quantum boundaries of life.
The principles of classical physics, including deterministic dynamics and observability of physical states, are incompatible with the existence of unobservable conscious minds that possess free will. Attempts to directly accommodate consciousness in a classical world lead to philosophical paradoxes such as causally ineffective consciousness and the possibility of alternate worlds in which functional brain isomorphs behave identically but lack conscious experiences. Here, we show that because Chalmers' principle of organizational invariance is based on a deficient nineteenth-century classical physics, it is inherently flawed and implies an evolutionarily inexplicable, epiphenomenal consciousness. Consequently, if consciousness is a fundamental ingredient of physical reality, no psychophysical laws such as Chalmers' principle of organizational invariance are needed to establish correspondence between conscious experiences and brain function. Quantum mechanics is the most successful, and the only, modern physical theory capable of naturally accommodating consciousness without violation of physical laws.
Our conscious minds exist in the Universe; therefore they should be identified with physical states that are subject to physical laws. In classical theories of mind, mental states are identified with brain states that satisfy the deterministic laws of classical mechanics. This approach, however, leads to insurmountable paradoxes such as epiphenomenal minds and illusory free will. Alternatively, one may identify mental states with quantum states realized within the brain and try to resolve the above paradoxes using the standard Hilbert space formalism of quantum mechanics. In this essay, we first show that identification of mind states with quantum states within the brain is biologically feasible, and then, elaborating on the mathematical proofs of two quantum mechanical no-go theorems, we explain why quantum theory might have profound implications for the scientific understanding of one's mental states, self-identity, beliefs, and free will.
Biological order provided by α-helical secondary protein structures is an important resource exploitable by living organisms for increasing the efficiency of energy transport. In particular, self-trapping of amide I energy quanta by the induced phonon deformation of the hydrogen-bonded lattice of peptide groups is capable of generating either pinned or moving solitary waves following the Davydov quasiparticle/soliton model. The effect of applied in-phase Gaussian pulses of amide I energy, however, was found to depend strongly on the site of application. Moving solitons were launched only when the amide I energy was applied at one of the α-helix ends, whereas pinned solitons were produced in the α-helix interior. In this paper, we describe a general mechanism that launches moving solitons in the interior of the α-helix through phase-modulated Gaussian pulses of amide I energy. We also compare the soliton velocity predicted from the effective soliton mass with the soliton velocity observed in computer simulations for different parameter values of the isotropy of the exciton-phonon interaction. The presented results demonstrate the capacity for explicit control of soliton velocity in protein α-helices, and further support the plausibility of gradual optimization of quantum dynamics for achieving specialized protein functions through natural selection.
Feynman's sum-over-histories formulation of quantum mechanics has been considered a useful calculational tool in which virtual Feynman histories entering into a coherent quantum superposition cannot be individually measured. Here we show that sequential weak values, inferred by consecutive weak measurements of projectors, allow direct experimental probing of individual virtual Feynman histories, thereby revealing the exact nature of quantum interference of coherently superposed histories. Because the total sum of sequential weak values of multitime projection operators for a complete set of orthogonal quantum histories is unity, complete sets of weak values can be interpreted in agreement with the standard quantum mechanical picture. We also elucidate the relationship between sequential weak values of quantum histories with different coarse-graining in time, and establish the incompatibility of weak values for nonorthogonal quantum histories in history Hilbert space. Bridging theory and experiment, the presented results may enhance our understanding of both weak values and quantum histories.
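The unity-sum property of sequential weak values stated above can be checked numerically for the simplest case, a qubit with two measurement times. The Hadamard evolution and pre-/post-selected states below are an arbitrary illustration, not the configuration analyzed in the paper:

```python
import numpy as np

# Qubit histories: a projector at t1, then a projector at t2, with
# Hadamard evolution between the pre- and post-selected states.
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)  # Hadamard gate
P0 = np.diag([1.0, 0.0])   # projector onto |0>
P1 = np.diag([0.0, 1.0])   # projector onto |1>

psi = np.array([1.0, 0.0])                # pre-selected state |0>
phi = np.array([1.0, 1.0]) / np.sqrt(2)   # post-selected state |+>

def sequential_weak_value(Pa, Pb):
    """W(a,b) = <phi| Pb H Pa H |psi> / <phi| H H |psi>."""
    num = phi.conj() @ Pb @ H @ Pa @ H @ psi
    den = phi.conj() @ H @ H @ psi
    return num / den

# Summing over the complete set of orthogonal two-time histories gives 1,
# because sum_a Pa = sum_b Pb = identity.
total = sum(sequential_weak_value(Pa, Pb)
            for Pa in (P0, P1) for Pb in (P0, P1))
print(f"sum over complete set of histories = {total.real:.6f}")  # = 1
```

The individual weak values can be negative or complex, yet the completeness of the projector sets forces them to sum to exactly one, mirroring the normalization of Feynman's path sum.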
For any class of operators which transform unary total functions on the set of natural numbers into functions of the same kind, we define what it means for a real function to be uniformly computable or conditionally computable with respect to this class. These two computability notions are natural generalizations of notions introduced in a previous paper co-authored by Andreas Weiermann and in another previous paper by the same authors, respectively. Under certain weak assumptions about the class in question, we show that conditional computability is preserved by substitution, that all conditionally computable real functions are locally uniformly computable, and that those with compact domains are uniformly computable. The introduced notions bear some similarity to the uniform computability and its non-uniform extension considered by Katrin Tent and Martin Ziegler; however, there are also essential differences between conditional computability and the non-uniform computability in question.
In this paper we try to draw a clear distinction between quantum mysticism and quantum mind theory. Quackery always accompanies science, especially in controversial areas still under development, and since quantum mind theory is a scientific newcomer, it must clearly demarcate itself from the great mass of pseudo-science currently patronized by the term "quantum mind". Quantum theory has attracted a great deal of attention and opened new avenues for building a physical theory of mind, because its principles and experimental foundations are as strange as the phenomenon of consciousness itself. Yet the unwarranted recourse to paranormal phenomena in support of quantum mind theory, together with extremely poor biological modeling of brain physiology, has led to great scepticism about the viability of the approach. As an example, we give the Hameroff-Penrose Orch OR model, with a list of twenty-four problems left unrepaired for a whole decade after the birth of the model in 1996. In the exposition we have tried not only to present a critique of the spotted flaws, but also to provide novel possibilities for creating a neuroscientific quantum model of mind that incorporates all the available data, from the basic disciplines (biochemistry, cell physiology, etc.) up to clinical observations (neurology, neurosurgery, molecular psychiatry, etc.). Thus, in concise fashion, we outline what can be done scientifically to improve quantum mind theory and to start a research programme (in Lakatos's sense) that is independent of the particular flaws in some of the existing quantum mind models.
At the beginning of the 20th century, the groundbreaking work of Ramón y Cajal firmly established the neuron doctrine, according to which neurons are the basic structural and functional units of the nervous system. Von Waldeyer coined the term "neuron" in 1891, but the huge leap forward in neuroscience was due to Cajal's meticulous microscopic observations of brain sections stained with an improved version of Golgi's la reazione nera (black reaction). This improvement of Golgi's technique made it possible to visualize the arborizations of single neurons that were "colored brownish black even to their finest branchlets, standing out with unsurpassable clarity upon a transparent yellow background. All was sharp as a sketch with Chinese ink". The high quality of both the visualization of individual nerve cells and the work performed on the anatomy of the central nervous system led Ramón y Cajal to the conclusion that axons output nervous impulses to the dendrites or the soma of other target neurons.
In neurophysiology it is widely assumed that our mind operates on a millisecond timescale. This view might be wrong, because if consciousness is a quantum coherent phenomenon at the level of protein assemblies, then its dynamical timescale could be on the order of picoseconds.
One of the trademarks of Nicolai Hartmann's ontology is his theory of levels of reality. Hartmann drew from many sources to develop his version of the theory. His essay "Die Anfänge des Schichtungsgedankens in der alten Philosophie" testifies to the fact that he drew from Plato, Aristotle, and Plotinus. But this text was written relatively late in Hartmann's career, which suggests that his interest in the theories of levels of the ancients may have been retrospective. In "Nicolai Hartmann und seine Zeitgenossen," Martin Morgenstern puts the emphasis on contemporaries of Hartmann: Émile Boutroux, Max Scheler, Heinrich Rickert, Karl Jaspers, and Arnold Gehlen. But there is another plausible source for Hartmann's conception of levels that has so far remained overlooked in the literature. Hartmann studied with and was influenced by Nikolai Lossky. Lossky has a theory of levels that he adopted from Vladimir Solovyov. Solovyov presents his theory of levels, among other places, in Oпpaвдaнie дoбpa (The Justification of the Good), where he says that the five principal stages of the cosmogonic process of ascension toward universal perfection, which are given in experience, are the mineral or inorganic realm, the vegetal realm, the animal realm, the realm of natural humanity, and the realm of spiritual or divine humanity. This theory appears to bear significant similarities to the theory of levels of reality that Hartmann would develop a few decades later. Solovyov was widely read in Russia, and it would be unlikely that Hartmann was not at least minimally acquainted with his work. Chances are that Hartmann came into contact with it in some detail. An intellectual lineage could thus likely be traced from Hartmann back to Solovyov. In this paper, I document and discuss this possible lineage.
The aim of this book is to present the fundamental theoretical results concerning inference rules in deductive formal systems. Primary attention is focused on: admissible (permissible) inference rules; the derivability of admissible inference rules; the structural completeness of logics; and the bases for admissible and valid inference rules. There is particular emphasis on propositional non-standard logics (primarily superintuitionistic and modal logics), but general logical consequence relations and classical first-order theories are also considered. The book is basically self-contained, and particular care has been taken to present the material in a manner convenient for the reader. Proofs of results, many of which are not readily available elsewhere, are also included. The book is written at a level appropriate for first-year graduate students in mathematics or computer science. Although some knowledge of elementary logic and universal algebra is necessary, the first chapter includes all the results from universal algebra and logic that the reader needs. For graduate students in mathematics and computer science the book is an excellent textbook.
People are remarkably smart: they use language, possess complex motor skills, make nontrivial inferences, develop and use scientific theories, make laws, and adapt to complex dynamic environments. Much of this knowledge requires concepts, and this study focuses on how people acquire concepts. It is argued that conceptual development progresses from simple perceptual grouping to highly abstract scientific concepts. This proposal of conceptual development has four parts. First, it is argued that categories in the world have different structures. Second, there might be different learning systems that evolved to learn categories of differing structures. Third, these systems exhibit different maturational courses, which affects how categories of different structures are learned in the course of development. And finally, an interaction of these components may result in the developmental transition from perceptual groupings to more abstract concepts. This study reviews a large body of empirical evidence supporting this proposal.
Recently, psychologists have explored moral concepts including obligation, blame, and ability. While little empirical work has studied the relationships among these concepts, philosophers have widely assumed such a relationship in the principle that "ought" implies "can," which states that if someone ought to do something, then they must be able to do it. The cognitive underpinnings of these concepts are tested in the three experiments reported here. In Experiment 1, most participants judge that an agent ought to keep a promise that he is unable to keep, but only when he is to blame for the inability. Experiment 2 shows that such "ought" judgments correlate with judgments of blame, rather than with judgments of the agent's ability. Experiment 3 replicates these findings for moral "ought" judgments and finds that they do not hold for nonmoral "ought" judgments, such as what someone ought to do to fulfill their desires. Together, these results show that folk moral judgments do not conform to a widely assumed philosophical principle that "ought" implies "can." Instead, judgments of blame play a modulatory role in some judgments of obligation.
In their paper published in 2017 in Philosophical Psychology, Ronja Rutschmann and Alex Wiegmann introduce a novel kind of lie: the indifferent lie. According to them, these lies are not intended to deceive simply because the liars do not care whether their audience is going to believe them or not. It seems as if indifferent lies avoid the objections raised against other kinds of lies supposedly not intended to deceive. I argue that this is not correct. Indifferent lies, too, are either intended to deceive or are not lies at all, since they do not involve genuine assertions.
This article defends the view that liars need not intend to deceive. I present common objections to this view in detail and then propose a case of a liar who can lie but who cannot deceive in any relevant sense. I then modify this case to obtain a situation in which this person lies intending to tell his hearer the truth, and he does this by way of getting the hearer to recognize his intention to tell the truth by lying. This case, and further cases developed from it, demonstrate that lying without the intention to deceive is possible.
An algorithm is presented that recognizes the admissibility of inference rules in generalized form (rules of inference with parameters or metavariables) in the intuitionistic calculus H and, in particular, also in the usual form without parameters. This algorithm is obtained by means of special intuitionistic Kripke models, which are constructed for a given inference rule. Thus, in particular, a direct solution of Friedman's problem by intuitionistic techniques is found. As a corollary, an algorithm for recognizing the solvability of logical equations in H and for constructing some solutions of solvable equations is obtained. A semantic criterion for admissibility in H is also constructed.
We examine some of Connes' criticisms of Robinson's infinitesimals, starting in 1995. Connes sought to exploit the Solovay model S as ammunition against non-standard analysis, but the model tends to boomerang, undercutting Connes' own earlier work in functional analysis. Connes described the hyperreals as both a "virtual theory" and a "chimera," yet acknowledged that his argument relies on the transfer principle. We analyze Connes' "dart-throwing" thought experiment, but reach an opposite conclusion. In S, all definable sets of reals are Lebesgue measurable, suggesting that Connes views a theory as being "virtual" if it is not definable in a suitable model of ZFC. If so, Connes' claim that a theory of the hyperreals is "virtual" is refuted by the existence of a definable model of the hyperreal field due to Kanovei and Shelah. Free ultrafilters are not definable, yet Connes exploited such ultrafilters both in his own earlier work on the classification of factors in the 1970s and 1980s and in Noncommutative Geometry, raising the question of whether the latter may not be vulnerable to Connes' criticism of virtuality. We analyze the philosophical underpinnings of Connes' argument based on Gödel's incompleteness theorem, and detect an apparent circularity in Connes' logic. We document the reliance on non-constructive foundational material, and specifically on the Dixmier trace −∫ (featured on the front cover of Connes' magnum opus) and the Hahn-Banach theorem, in Connes' own framework. We also note an inaccuracy in Machover's critique of infinitesimal-based pedagogy.
We discuss the multiple-pass interferometer setup proposed by Unruh and clarify some of the fundamental issues linked with complementarity. We explicitly state all mathematical instructions for manipulating the quantum amplitudes and assessing the probability distribution functions. In this respect, we show that purely mathematical limitations (the requirement for logical consistency) prevent one from arguing that there is a one-to-one correspondence between paths 1 and 2 and the exit gates 10 and 9 (the "which way" interpretation) while at the same time insisting on a pure state density matrix, i.e., existent nonmeasured interference in the second building block of Unruh's interferometer. Furthermore, one cannot even argue that Unruh's setup is described by a mixed density matrix that keeps the one-to-one correspondence between the paths 1 and 2 and the exit gates. This last claim is mathematically consistent, but it is experimentally disprovable, because one may potentially distinguish a mixed quantum state from a pure quantum state: one simply lets the two beams captured at the exit gates cross each other. If interference can be observed, the two exit gates were coherent and provide beams in a pure state (superposition), while if interference cannot be observed, the state of the exit gates was a mixed one. Since the captured beams at the exit gates in Unruh's experiment could interfere, the whole setup is characterized by a pure state density matrix and does not preserve the one-to-one correspondence between the entry points and exit gates, even in the case where the destructive interference in arm 5 of the interferometer is not measured. Therefore, the correct (experimentally plausible and mathematically consistent) exposition of complementarity, introduced by Georgiev in 2004, is that Unruh's setup is characterized by a pure state density matrix and does not keep the one-to-one correspondence suggested by Unruh.
An appendix shows the equivalence between Unruh's setup and Afshar's setup, and a correct analysis of Afshar's experiment is also provided.
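The pure-versus-mixed distinction that drives the argument above can be sketched numerically: crossing the two exit beams corresponds to projecting the two-path density matrix onto a phase-dependent superposition, and the resulting fringe visibility is nonzero only for the pure state. The minimal two-dimensional model below is an illustration, not a full treatment of Unruh's interferometer:

```python
import numpy as np

# Two beams recombined with a relative phase: the detection probability is
# p(phase) = <e(phase)| rho |e(phase)> with |e> = (|1> + e^{i*phase}|2>)/sqrt(2).
def detection_probability(rho, phase):
    e = np.array([1.0, np.exp(1j * phase)]) / np.sqrt(2)
    return float(np.real(e.conj() @ rho @ e))

pure = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)   # (|1>+|2>)/sqrt(2)
mixed = 0.5 * np.eye(2, dtype=complex)                   # 50/50 classical mixture

phases = np.linspace(0.0, 2.0 * np.pi, 200)

def visibility(rho):
    """Fringe visibility (p_max - p_min) / (p_max + p_min) over all phases."""
    p = [detection_probability(rho, ph) for ph in phases]
    return (max(p) - min(p)) / (max(p) + min(p))

print(f"visibility(pure)  = {visibility(pure):.3f}")   # ~1 -> fringes observed
print(f"visibility(mixed) = {visibility(mixed):.3f}")  # 0  -> no fringes
```

The visibility is governed entirely by the off-diagonal (coherence) elements of the density matrix, which is why observing interference between the crossed exit beams certifies a pure superposition state.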
Our access to computer-generated worlds changes the way we feel, how we think, and how we solve problems. In this review, we explore the utility of different types of virtual reality, immersive or non-immersive, for providing controllable, safe environments that enable individual training, neurorehabilitation, or even replacement of lost functions. The neurobiological effects of virtual reality on neuronal plasticity have been shown to result in increased cortical gray matter volumes, higher concentration of electroencephalographic beta-waves, and enhanced cognitive performance. Clinical application of virtual reality is aided by innovative brain-computer interfaces, which allow direct tapping into the electric activity generated by different brain cortical areas for precise voluntary control of connected robotic devices. Virtual reality is also valuable to healthy individuals as a narrative medium for redesigning their individual stories in an integrative process of self-improvement and personal development. Future upgrades of virtual reality-based technologies promise to help humans transcend the limitations of their biological bodies and augment their capacity to mold physical reality to better meet the needs of a globalized world.
Sorensen says that my assertion that p is a knowledge-lie if it is meant to undermine your justification for believing truly that ∼p, not to make you believe that p, and that, therefore, knowledge-lies are not intended to deceive. It has been objected that they are meant to deceive because they are intended to make you more confident in a falsehood. In this paper, I propose a novel account according to which an assertion that p is a knowledge-lie if it is intended not to provide evidence that p but to make you stop trusting all testimonies concerning whether p, which is how knowledge-lies undermine your testimonial knowledge. Because they are not intended to provide evidence that bears on the truth of p, they are not intended to make you more confident in a falsehood; therefore, knowledge-lies are not intended to deceive. This makes them a problem for the traditional account, which takes the intention to deceive as necessary for lying, and an interesting example of Kant's idea that allowing lies whenever one feels like it would bring it about that statements in general are not believed.
In his 2018 AJP paper, Shlomo Cohen hints that deception could be a distinct subset of manipulation. We pursue this thought further, but by arguing that Cohen's accounts of deception and manipulation are incorrect. Deception under uncertainty need not involve adding false premises to the victim's reasoning, but it must involve manipulating her response; there are also cases of manipulation that do not interfere with the victim's reasoning but rather utilize it. Therefore, deception under uncertainty must be constituted by covert manipulation.
Felix Klein and Abraham Fraenkel each formulated a criterion for a theory of infinitesimals to be successful, in terms of the feasibility of implementation of the Mean Value Theorem. We explore the evolution of the idea over the past century, and the role of Abraham Robinson's framework therein.
Research in the synergetic philosophy of history leads to a fundamentally new approach to the study of personality and to a rational understanding of the meaning of life. The heuristic role of the synergetic philosophy of history is argued for in the structuring of a new philosophy of the human being, in the context of the self-organization of man and mankind. It is this aspect that is specific to the synergetic philosophy of man.
The electric activities of cortical pyramidal neurons are supported by structurally stable, morphologically complex axo-dendritic trees. Anatomical differences between axons and dendrites with regard to their length or caliber reflect the underlying functional specializations for input or output of neural information, respectively. For a proper assessment of the computational capacity of pyramidal neurons, we have analyzed an extensive dataset of three-dimensional digital reconstructions from the NeuroMorpho.Org database and quantified basic dendritic and axonal morphometric measures in different regions and layers of the mouse, rat, or human cerebral cortex. Physical estimates of the total number and type of ions involved in neuronal electric spiking, based on the obtained morphometric data and combined with the energetics of neurotransmitter release and signaling fueled by glucose consumed by the active brain, support highly efficient cerebral computation performed at the thermodynamically allowed Landauer limit for implementation of irreversible logical operations. Individual proton tunneling events in the voltage-sensing S4 protein α-helices of Na+, K+, or Ca2+ ion channels are ideally suited to serve as single Landauer elementary logical operations, which are then amplified by selective ionic currents traversing the open channel pores. This miniaturization of computational gating allows the execution of over 1.2 zetta logical operations per second in the human cerebral cortex without the released heat combusting the brain.
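The Landauer-limit arithmetic underlying the zetta-scale estimate above can be reproduced in a few lines. The ~20 W power budget used below is a common round figure for human brain metabolism and is an assumption of this sketch, not a number taken from the paper:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
T_BODY = 310.0       # approximate brain temperature, K

# Landauer's principle: erasing one bit dissipates at least k_B * T * ln(2).
landauer_J = K_B * T_BODY * math.log(2)
print(f"Landauer limit at 310 K: {landauer_J:.3e} J per operation")

# Hypothetical budget: ~20 W of brain power, spent entirely at the limit,
# would permit this many irreversible logical operations per second.
ops_per_s = 20.0 / landauer_J
print(f"upper bound: {ops_per_s:.2e} ops/s (zetta scale, ~1e21)")
```

The bound comes out at a few times 10^21 operations per second, the same order of magnitude as the 1.2 zetta operations per second reported in the abstract.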
In the basic modal language and in the basic modal language with the added universal modality, first-order definability of all formulas over the class of all frames is shown. It is also shown that the problems of modal definability of first-order sentences over the class of all frames in these two languages are both PSPACE-complete.
There are many blank areas in our understanding of brain dynamics, and especially of how it gives rise to consciousness. Quantum mechanics is believed to be capable of explaining the enigma of conscious experience, yet until now there has been no sufficiently good model that both accounts for the data of clinical neurology and has some explanatory power. In this paper, a novel model is presented in defence of macroscopic quantum events within and between neural cells. The beta-neurexin-neuroligin-1 link is claimed to be not just the core of the central neural synapse; instead, it is a device mediating entanglement between the cytoskeletons of cortical neurons. Thus a macroscopic quantum state can extend throughout large brain cortical areas, and the subsequent collapse of the wavefunction could simultaneously affect subneuronal events in millions of neurons. The beta-neurexin-neuroligin-1 complex also controls the process of exocytosis and provides an interesting and simple mechanism for retrograde signalling during learning-dependent changes in synaptic connectivity.