Can't We Just Say That Consciousness Depends On The Higher-Level Organization Of The System?

Functionalism, Broadly Construed

Functionalism, roughly, is the idea that consciousness is to be identified not with a particular physical implementation (like squishy gray brains or the particular neurons that the brains are made of), but rather with the functional organization of a system. The human brain, then, is seen by a functionalist as a particular physical implementation of a certain functional layout, but not necessarily the only possible implementation. The same functional organization could, presumably, be implemented by a computer (for example), which would then be conscious. It is not the actual physical substrate that matters to a functionalist, but the abstract schematic, or "block diagram" that it implements. The doctrine of functionalism may fairly be said to be the underlying assumption of the entire field of cognitive science.

Functionalism seems like a reasonable way to approach the question of consciousness, especially when contrasted with so-called identity theories. Those are theories which say that the conscious mind just is the neurology that implements it, the gray squishy stuff. Identity theories exclude the possibility that non-brain-based things could be minds, like computers or aliens. Functionalism is predicated on the notion of multiple realizability. This is the idea that there might be a variety of different realizations, or implementations, of a particular property, like consciousness. Another way of saying this is that there might be many micro states of affairs that all constitute the same macro state of affairs, and it is this macro state of affairs that defines the thing we are interested in. Put still another way, functionalism says that a system is whatever it is in virtue of its high-level organization, not its implementation details.

Black Boxes

In order to even have a block diagram of a given system, you have to draw blocks. It is tempting to be somewhat cavalier about how those blocks are drawn when reverse-engineering an already-existing system, imposing an abstract organization on an incumbent implementation. Functionalism tends to assume that Nature drew the lines: that there is an objective line between the system itself and the environment with which it interacts (or the data it processes), and that there is a proper level of granularity to use when characterizing the system. Depending on the granularity at which you characterize a system, and on the principles by which you carry out your abstraction of it, its functional characterization changes. It is easy to gloss over the arbitrariness of the way these lines are drawn.

The functionalist examines a system, chooses an appropriate level of granularity, and starts drawing boxes. Within those boxes, the functionalist does not go, as long as the boxes themselves operate functionally in the way that they are supposed to. It is central to the idea of functionalism that how the functionality exhibited by the boxes is implemented simply does not matter at all to the functional characterization of the system overall. For this reason, the boxes are sometimes called "black boxes" - they are opaque.

It is worth noting that, as Russell pointed out, physicalism itself can be seen as a kind of functionalism. At the lowest level, every single thing that physics talks about (electrons, quarks, etc.) is defined in terms of its behavior with regard to other things in physics. If it swims like an electron and quacks like an electron, it's an electron. It simply makes no sense in physics to say that something might behave exactly like an electron, but not actually be one. Because physics as a field of inquiry has no place for the idea of qualitative essences, the smallest elements of physics are characterized purely in functional terms, as black boxes in a block diagram. What a photon is, is defined exclusively in terms of what it does, and what it does is (circularly) defined exclusively in terms of the other things in physics (electrons, quarks, etc., various forces, a few constants). Physics is a closed, circularly defined system, whose most basic units are defined functionally. Physics as a science does not care - and in fact can not care - about the intrinsic nature of matter, whatever it is that actually implements the functional characteristics exhibited by the lowest-level elements.

It could be argued that consciousness is an ad hoc concept, one of those may-be-seen-as kind of things. However I choose to draw my lines, whatever grain I use, however I gerrymander my abstract characterization of a system, if I can manage to characterize it as adhering to a certain functional layout in a way that does not actually contradict its physical implementation, it is conscious by definition. Consciousness in a given system just is my ability to characterize it in that certain way. To take this approach, however, is to define away the problem of consciousness.

This may well be the crucial point of the debate. I believe that consciousness is not, can not possibly be, an ad hoc concept in the way it would have to be for functionalism to be true. I am conscious, and no reformulation of the terms in which someone analyzes the system that is me will make me not conscious. That I am conscious is an absolutely true fact of nature. Similarly, (assuming that rocks are in fact not conscious) it is an absolute fact of nature that rocks are not conscious, no matter how one may analyze them. Simply deciding that "conscious" is synonymous with "being able to be characterized as having a functional organization that conforms to the following specifications…" does not address why we might regard conscious systems as particularly special or worthy of consideration.

Is The Design Inherent In The Implementation?

A good functionalist believes that in principle, a mind could be implemented on a properly programmed computer. Put another way, functionalists believe that the human brain is such a computer. But when we speak of the abstract functional organization of a computer system (as computer systems are currently understood), we are applying an arbitrary and explanatorily unnecessary metaphysical gloss to what is really a phonograph needle-like point of execution amidst a lot of inert data.

When a computer runs, during each timeslice its CPU (central processing unit) is executing an individual machine code instruction. No matter what algorithm it is executing, no matter what data structures it has in memory, at any given instant the computer is executing one very simple instruction, simpler even than a single line from a program in a high-level language like C or Python. In assembly language, the closest human-friendly relative of machine code, the instructions look like this: LDA, STA, JMP, etc. and they generally move a number, or a very small number of numbers, from one place to another inside the computer. Of the algorithm and data structures, no matter how fantastically complex or sublimely well constructed, the computer "knows" nothing, from the time it begins executing the program to the end. As far as the execution engine itself is concerned, everything but the current machine instruction and the current memory location or register being accessed might as well not exist - they may be considered to be external to the system at that instant.
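To make the ant's-eye view concrete, here is a minimal sketch (in Python, using a made-up LDA/ADD/STA/JMP-style instruction set, not any real machine code) of a fetch-execute loop. At every step the engine touches exactly one instruction and at most one memory cell; the fact that the program as a whole "sums three numbers" exists nowhere in the loop itself.

```python
# A toy fetch-execute loop (invented instruction set, not a real CPU's):
# at every step the engine sees exactly one instruction and at most one
# memory cell, and nothing of the "algorithm" those instructions implement.

def run(program, memory):
    acc = 0          # accumulator register
    pc = 0           # program counter
    while pc < len(program):
        op, arg = program[pc]
        if op == "LDA":      # load a memory cell into the accumulator
            acc = memory[arg]
        elif op == "ADD":    # add a memory cell to the accumulator
            acc += memory[arg]
        elif op == "STA":    # store the accumulator into a memory cell
            memory[arg] = acc
        elif op == "JMP":    # jump to another instruction
            pc = arg
            continue
        pc += 1
    return memory

# Three numbers get summed, but the engine never "knows" it is summing.
mem = {0: 2, 1: 3, 2: 4, 3: 0}
print(run([("LDA", 0), ("ADD", 1), ("ADD", 2), ("STA", 3)], mem)[3])  # 9
```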

But could we not say that the execution engine, the CPU, is not the system we are concerned about, but the larger system taken as a whole? Couldn't we draw a big circle around the whole computer, CPU, memory, algorithm, data structures and all? We could, I suppose, choose to look at a computer that way. Or we could choose to look at it my way, as a relatively simple, mindless execution engine amidst a sea of dead data, like an ant crawling over a huge gravel driveway. If I understand the functioning of the ant perfectly, and I have memorized the gravel or have easy access to the gravel, then I have 100% predictive power over the ant-and-driveway system. Any hard-nosed reductive materialist would have to concede that my understanding of that system, then, is complete and perfect. I am free to reject any "higher-level" interpretation of the system as an arbitrary metaphysical overlay on my complete and perfect understanding, even if it is compatible with my physical understanding. It is therefore highly suspect when broad laws and definitions about facts of Nature are constructed that depend solely on such high-level descriptions and metaphysical overlays.

The higher-level view of a system can not give you anything real that was not already there at the low level. The system exists at the low level. The high-level view of a system is just a way of thinking about it, and possibly a very useful way of thinking about it for certain purposes, but the system will do whatever it is that the system does whether you think about it that way or not. The high-level view of the system is, strictly speaking, explanatorily useless (although it may well be much, much easier for us, given our limited capacities, to talk about the system in high-level terms rather than in terms of its trillions of constituent atoms, for example).

Imagine that you are presented with a computer that appears to be intelligent - a true artificial intelligence (AI). Let us also say that, like Superman, you can use X-ray vision to see right into this computer and track every last diode as it runs. You see each machine language operation as it gets loaded into the CPU, you see the contents of every register and every memory location, you understand how the machine acts upon executing each instruction, and you are smart enough to keep track of all of this in your mind. You can walk the machine through its inputs in your mind, based solely on this transistor-level pile of knowledge of its interacting parts, and thus derive its output given any input, no matter how long the computation.

You do not, however, know the high-level design of the software itself. After quite some time watching the machine operate, you could possibly reverse-engineer the architecture of the software. The block diagram of the software architecture that you would thereby derive is what a functionalist would say determines the consciousness of the computer, but it is something you created, a story you told yourself about the endless series of machine code operations in order to organize those operations in your mind. This story may be "correct" in the sense that it is perfectly compatible with the actual physical system, and it may in fact be the same block diagram that the computer's designers had in their minds when they built it.

This only means, however, that the designers got you to draw a picture in your mind that matched the one in theirs. If I have a picture in my mind, and I create an artifact (for example, if I write a letter), and upon examining the artifact, you draw the same (or a similar) picture in your mind, we usually say that I have communicated with you using the artifact (i.e. the letter) as a medium. So if the designers of the AI had a particular block diagram in their minds when they built the AI, and upon exhaustive examination of the AI, you eventually derived the same block diagram, all that has happened is that the machine's designers have successfully (if inefficiently) communicated with you over the medium of the physical system they created.

The main point is that before you reverse-engineered the high-level design of the system, you already had what we must concede is a complete and perfect understanding of the system in that you understood in complete detail all of its micro-functionings, and you could predict, given the current state of the system, its future state at any time. In short, there was nothing actually there in terms of the system's objective, measurable behavior that you did not know about the system. But you just saw a huge collection of parts interacting according to their causal relations. There was no block diagram.

[Image: classic Rube Goldberg mechanism cartoon]

A computer is a Rube Goldberg device, a complicated system of physical causes and effects. Parrot eats cracker, as cup spills seeds into pail, lever swings, igniting lighter, etc. In a Rube Goldberg device, where is the information? Is the cup of seeds a symbol, or is the sickle? Where is the "internal representation" or "model of self" upon which the machine operates? These are things we, as conscious observers (or designers), project into the machine: we design it with intuitions about information, symbols, and internal representation in our minds, and we build it in such a way as to emulate these things functionally.

The computer itself never "gets" the internal model, the information, the symbols. It is confined to an unimaginably limited ant's-eye view of what it is doing (LDA, STA, etc.). It never sees the big picture, little picture, or anything we would regard as a picture at all. By making the system more complex, we just put more links in the chain, make a larger Rube Goldberg machine. Any time we humans say that the computer understands anything at a higher level than the most micro of all possible levels, we are speaking metaphorically, anthropomorphizing the computer[1].

A Hypothesis About Hypotheticals: Do Counterfactuals Count?

The functional block diagram itself does not, properly speaking, exist at any particular moment in a system to which it is attributed. Another way of putting this is to point out that the functional block diagram description of any system (or subsystem) is determined by an ethereal cloud of hypotheticals. You can not talk about any system's abstract functional organization without talking about what the system's components are poised to do, about their dispositions, tendencies, abilities or proclivities in certain hypothetical situations, about their purported latent potentials. What makes a given block in a functionalist's block diagram the block that it is, is not anything unique that it does at any single given moment with the inputs provided to it at that moment, but what it might do, over a range of inputs. The blocks must be defined and characterized in terms of hypotheticals.

It is all well and good to say, for example, that the Peripheral Awareness Manager takes input from the Central Executive and scans it according to certain matching criteria, and if appropriate, triggers an interrupt condition back to the Central Executive, but what does this mean? Isn't it basically saying that if the Peripheral Awareness Manager gets input X1 then it will trigger an interrupt, but if it gets input X2 then it won't? These are hypothetical situations. What makes the Peripheral Awareness Manager the Peripheral Awareness Manager is the fact that over time it will behave the way it should in all such hypothetical situations, not the way it actually behaves at any one particular moment.
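As a toy illustration of this point (using the hypothetical module names from the example above), the entire identity of such a block can be written down as nothing more than a rule over possible inputs; at any given moment only one of those possibilities is ever exercised:

```python
# A sketch of the point, using the text's hypothetical module names. What
# makes this the "Peripheral Awareness Manager" is nothing it does at any
# one instant, but the full table of if/then dispositions it would honor.

MATCH_CRITERIA = {"loud_noise", "flash_of_red", "own_name_spoken"}

def peripheral_awareness_manager(input_from_central_executive):
    """Return True (trigger an interrupt back to the Central Executive)
    only if the input matches one of the criteria; otherwise stay quiet."""
    return input_from_central_executive in MATCH_CRITERIA

# At any single moment, only one branch of the disposition is exercised:
peripheral_awareness_manager("own_name_spoken")   # True  -> interrupt
peripheral_awareness_manager("background_hum")    # False -> no interrupt
```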

Any integration of the components in a system that is characterized functionally is imaginary and speculative. If component A pulls this string, and it tugs on component B, then B will react by doing something else… Is there any way of talking about the relation between A and B that does not use the word "if"?

What If We Prune The Untaken Paths?

Imagine that we have a real artificial intelligence (AI), running in front of us. It is implemented on a computer, running an algorithm, operating on data, either in memory or from some kind of input/output devices. This AI is supposedly conscious because it is a faithful implementation of a certain functional layout, specified in terms of black boxes, interacting in certain ways. By the hypothesis of functionalism, it does not matter that the whole thing might really be just a single CPU running a program. That is, it may not physically be separate literal hardware boxes, but rather logically distinct programmatic modules, or subprocesses running concurrently on a single CPU. Let's let this AI run for five minutes, during which it interacts with its environment in some way (maybe it eats an ice cream cone). It claims to like the ice cream, but maybe the walnuts were a little soggy.

Now let us reset the AI and, having perfectly recorded the signals in its nervous system from before, we rerun the scenario, effectively feeding it canned data about the whole eating-ice-cream experience. It, being an algorithm, reacts in exactly the same way it did before (the walnuts are still soggy). We could do this 100 times, and on each run, the AI would be just as conscious as it was the first time.

But now, as engineers, we watch the low-level workings of the AI as it goes through this scenario, and we trace the execution paths of the various black boxes, subroutines, and software libraries. We notice that during the entire five minutes, some libraries were never invoked at all, so we remove them. Same with certain utility subroutines. For other functional parts of the system, we see that they were originally designed to do a whole lot of stuff given a broad range of possible inputs, with different potential behaviors based on those inputs depending on a similarly broad range of possible internal states. But for the five minute ice-cream test, they are only ever in a few states, and/or are only called upon to do a few things, and given only a few of their possible input values. In these cases we carefully remove the capability of even handling those inputs that are never presented, or the internal state transitions that are never performed. We sever the connections between modules that never talk to each other in our five minute test.

We may even be clever enough to intervene in the workings of our system during the test itself, erasing parts of the whole algorithm once they have been executed for the last time. We might also disconnect whole chunks of memory during the test, reconnecting them at just the moment the system needs them.

So now we have stripped down our original AI so that it would quickly fail any other "test" of its intelligence or consciousness. It is hardwired to handle only this ice-cream situation. We have effectively lobotomized it, dumbing it down to the point where it could only function for this particular data set. We have removed the generality of our AI, cutting out so much of its capability, rendering it so special-purpose, that no one, upon examining it, could infer the functional organization of the original full-blown AI in all its glorious complexity. It can no longer pretend to be a faithful implementation of the functionalist's AI, defined in terms of the black boxes that broadly do what they are supposed to. Our original black boxes just aren't there anymore. Not only is the data canned, but the system that operates on that data is also canned. At this point, we are on a slippery slope towards something like Ned Block's table-driven Turing Test beater.
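Here is a toy sketch of the pruning move, with invented names and a trivially simple stand-in for the "AI": record the one actual run of a general module, then replace the module with a lookup table from which every untaken path has been cut away, in the spirit of Block's table-driven machine.

```python
# A toy sketch of the pruning thought experiment (hypothetical names; the
# "AI" is just a stand-in function). We record one run of the general
# module, then replace it with a lookup table that can only reproduce that
# one run -- every untaken path has been cut away.

def general_module(state, inp):
    # Imagine arbitrarily rich behavior over a huge space of (state, input) pairs.
    return (state + hash(inp)) % 1000, f"response to {inp} in state {state}"

# 1. Record the one actual "five-minute" trace.
recorded_inputs = ["lick", "bite", "soggy walnut", "lick"]
trace = {}
state = 0
for inp in recorded_inputs:
    trace[(state, inp)] = general_module(state, inp)
    state, _ = trace[(state, inp)]

# 2. The "pruned" system: a table that fails on anything outside the one run.
def canned_module(state, inp):
    return trace[(state, inp)]   # KeyError for any input never presented

# 3. On the replay, the two systems are indistinguishable, step for step.
s1 = s2 = 0
for inp in recorded_inputs:
    out1 = general_module(s1, inp); s1 = out1[0]
    out2 = canned_module(s2, inp);  s2 = out2[0]
    assert out1 == out2
```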

But if we've done it right, the new, dumb system is doing exactly the same thing the original AI did, and it is doing it in exactly the same way. That is, not only is our dumb system behaving the same way to all outward appearances (walnuts still too soggy!), but all the internal flows of information and control are functioning as before, and even at the lowest levels, each individual machine instruction executed by the CPU is exactly the same, at every tick of the system clock, for the full five minutes.

We have two systems, then, side by side: our original robust AI, implementing the full high-level schematic with all the black boxes, and next to it, the dumb system that can only do one thing. But in this one scenario, both perform in exactly the same way, at the micro level as well as the macro. All the causal interactions among the relevant parts are behaving identically. The defining aspect of the black-box schematic is constituted by a whole cloud of potential behaviors, potential inputs, potential internal state transitions, most of which we clipped off for our special-purpose system. What is it about unrealized potentials that gives them the causal or constitutive power they must have for functionalism to be plausible?

The defining characteristics of the functionalist's black boxes disappear without a lot of behavioral dispositions over a range of possible input values, smeared out over time. But there is nothing in the system itself that knows about these hypotheticals, calculates them ahead of time, or stands back and sees the complexity of the potential state transitions or input/output pairings.

At any given instant the system is in a particular state X, and if it gets input Y it does whatever it must do when it gets input Y in state X. But it can not "know" about all the other states it could have been in when it got input Y, nor can it "know" about all the other inputs it could have gotten in state X, any more than it could know that if it were rewritten, it would be a chess program instead of an AI.

We, as designers of the system, can envision the combinatorially explosive range of inputs the system would have to deal with, the spreading tree of possibilities. But the world of algorithms is a deterministic one, and there are no potentials, no possibilities. There is only what actually happens, and what does not happen doesn't exist and has no effect on the system. We anthropomorphize, and project our sense of decision-making, or will, onto our machines. In real life, there are no potential paths or states available to the machine. None that matter, anyway.

The black boxes that are definitional of a system to a functionalist are integrated, but only through individual specific interactions at particular moments. Whatever integration a system exhibits is purely functional, spread out over time, and takes the form of a whole bunch of "if…then" clauses. "If I am in state X and I get input Y, then do Z and transition to state F; else if…" I'm not saying that this type of "integration" is imaginary, just that it does not quite do justice to our intuitions about what "integrated" means. If you ask a module a particular question when it is in a particular state, it will give you the correct answer according to its functional specification. You can do a lot of complex work with such a scheme, but adherence to a whole mess of "if…then" clauses never amounts to anything beyond adherence to any one of them at any moment.
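Spelled out as code, that "integration" is literally just a transition table, and at any moment only one row of it is ever consulted (states, inputs, and actions invented purely for illustration):

```python
# The whole "integration" of a module, written out as what it literally is:
# a table of if/then clauses. At any moment the system consults exactly one
# entry; the rest sit there as unexercised hypotheticals.

TRANSITIONS = {
    # (current_state, input): (action, next_state)
    ("X", "Y"): ("do Z", "F"),
    ("X", "W"): ("do Q", "X"),
    ("F", "Y"): ("do R", "X"),
}

def step(state, inp):
    action, next_state = TRANSITIONS[(state, inp)]
    return action, next_state

state = "X"
action, state = step(state, "Y")   # only the ("X", "Y") row is ever touched here
print(action, state)               # do Z F
```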

If a highly "integrated" system is running, and some of its submodules are not being accessed in a given moment, the system as a whole, its level of "integration", and our opinion about the system's consciousness, could not legitimately change if those submodules were missing entirely or disabled. We ought to be very careful about attributing explanatory power to something based on what it is poised to do according to our analysis. Poisedness is just a way of sneaking teleology in the back door, of imbuing a physical system with a ghostly, latent purpose. Poisedness is in the eye of the beholder. A dispositional state is an empty abstraction. A rock perched high up on a hill has a dispositional state: if nudged a certain way, it will roll down. A block of stone has a dispositional state: if chipped a certain way with a chisel, it will become Michelangelo's David. That, as the saying goes, plus fifty cents, will buy you a cup of coffee.

We have an intuition of holism. Any attempt to articulate that in terms of causal integration, smeared out over time, defined in terms of unrealized hypotheticals, fails. At any given instant, like the CPU, the system is just doing one tiny, stupid crumb of what we, as intelligent observers, see that it might do when thought of as one continuous process over time. To say that a system is conscious or not because of an airy-fairy cloud of unrealized hypothetical potentials sounds pretty spooky to me. In contrast, I am conscious right now, and my immediate and certain phenomenal experience is not contingent on any hypothetical speculations. My consciousness is not hypothetical - it is immediate. The term "if" does not figure into my evaluation of whether I am conscious or not.

Integrated Information Theory

IIT has gotten a lot of buzz recently. Proponents of IIT insist that it is not a functionalist theory, but I see it as the paradigmatic example of one. IIT claims to be able to quantify the degree of integration of a system in a variable called phi (Φ). IIT makes a great deal of reentrancy and feedback loops. All of this integration and reentrancy is functionally defined, however. The integration in integrated information theory is causal integration, smeared out over time, and attributes causal or constitutive properties to unrealized potential events and states.

An algorithmically implemented submodule is a deterministic, causal device. It does not know or care about self-reference. If it pushes a ping pong ball into its output tube, and the ball disappears, it's gone. If, a moment later, a ping pong ball emerges from its input tube, it doesn't make a bit of difference to the submodule whether that is the same ping pong ball or a different one sent from a distant submodule.

When we see a recursive computer routine, the Bertrand Russell in us kicks in, and we go: self-reference! Whoa… but the routine simply transferred control to another routine. The fact that the next routine is itself is not interesting and makes no functional difference. We have an intuition that self-reference is weird and special, but it is a mistake to suppose that a machine "acting on itself" must therefore be weird and special. We need to dig and figure out what self-reference means to us, and why it is weird and special in our case.
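A minimal example of the point: to the execution engine, a recursive call is just a call. A pair of mutually calling routines does exactly the same work, and nothing in the running of it cares that the first version happened to call "itself":

```python
# A recursive routine, seen from the machine's side: the "self-reference"
# is just another call frame pushed on the stack, no different functionally
# from transferring control to any other routine with the same body.

def countdown(n):
    if n == 0:
        return "done"
    return countdown(n - 1)   # control passes to a routine that happens to be itself

# An equivalent pair of mutually calling routines does the same work;
# the execution never notices that the first version called "itself".
def countdown_a(n):
    return "done" if n == 0 else countdown_b(n - 1)

def countdown_b(n):
    return "done" if n == 0 else countdown_a(n - 1)

print(countdown(5), countdown_a(5))   # done done
```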

Besides assuming that there is something special or magic about feedback as opposed to feed-forward signals in themselves, IIT relies upon potential actions and connections, by blunt assertion: if a module is missing or disabled, the phi of the overall system is decreased, but if the module is merely not doing anything at the moment, it still contributes to phi in some ghostly unspecified way.

Worse, IIT bluntly asserts an identity between full-blown qualitative consciousness and phi (i.e. causal integration). It is a brute identity theory, albeit a functionalist one. IIT is the worst of both worlds. It fails to explain consciousness in a convincing way while cleaving to a materialistic world view, but also takes consciousness seriously in the way the materialists say we shouldn't. It's like panpsychism, but less plausible.

Life Is Real. Isn't It Defined "Merely" Functionally?

Couldn't this argument be used to declare the concept of life out of bounds as well? After all, life is a quality that is characterized exclusively by an elaborate functional description, one that involves reproduction, incorporating external stuff into oneself, localized thwarting of entropy, etc. Life is not characterized by any particular physical implementation: if we were visited by aliens tomorrow who were silicon-based instead of carbon-based, we would nevertheless not hesitate to call them alive (assuming they were capable of functions analogous to reproduction, metabolism, consumption, etc.).

But according to the above argument, I am alive right now, even though our definitions of what it means to be alive all involve functional descriptions of the processes that sustain life, and these functional descriptions, in turn, are built on an ethereal cloud of hypotheticals. There is nothing in a living system that knows about these hypotheticals, or calculates them, so how can we say that right here and now, one system is alive and another dead, when they are both doing the same thing right here and now, but one conforms to the functional definition of a living thing, and one does not? Therefore, there must be some magical quality of life that can not be captured by any functional description. Yet we know this is not true of life, so why should we think it is true of consciousness?

Like so many other arguments, it comes down to intuitions about the kind of thing consciousness is. Life is, at heart, an ad hoc concept. The distinction between living and non-living things, while extremely important to us, and seemingly unambiguous, is not really a natural distinction. The universe doesn't know life from non-life. As far as the universe is concerned, it's all just atoms and molecules doing what they do.

People observe regularities and make distinctions based on what is important to them at the levels at which they commonly operate. We see a lot of things happening around us, and take a purple crayon and draw a line around a certain set of systems we observe and say, "within this circle is life. Outside of it is non-life." Life just is conformance to a class of functional descriptions. It is a quick way of saying, "yeah, all the systems that seem more or less to conform to this functional description." It is a rough and ready concept, not an absolute one. Nature has not seen fit to present us with many ambiguous borderline cases, but one can, with a little imagination, come up with conceivable ones. It is useful for us to classify the things in the world into groups along these lines, so we invent this abstraction, "life", whose definition gets more elaborate and more explicitly functional as the centuries progress. We observe behaviors over time, and make distinctions based on our observations and expectations of this behavior. So life, while perfectly real as far as our need to classify things is concerned, has no absolute reality in nature, the way mass and charge do.

This is not to denigrate the concept of life or to say that the concept is meaningless, or that any life science is on inherently shaky foundations. The study of life and living systems, besides being fascinating, is a perfectly fine, upstanding hard science, with precise ways of dealing with its subject. I am just saying that "life" is a convenient abstraction that we create, based on distinctions that, while obvious to any five-year-old, are not built into the fabric of the universe. Crucially, as we examine life in our world, every single thing we have ever observed about life is comfortably accommodated by this functional understanding of the concept, even if, strictly speaking, it is a little ad hoc.

To be a functionalist is to believe that consciousness is also such a concept, that it is just a handy distinction with no absolute basis in reality. I maintain, however, that our experience of consciousness (which is to say, simply our experience) has an immediacy that belies that. We did not create the notion of consciousness to broadly categorize certain systems as being distinct from other systems based on observed functional behavior over time. Consciousness just is, right now.

What If We Gerrymander The Low-Level Components?

What's more, we can squeeze all kinds of functional descriptions out of different physical systems. Gregg Rosenberg has pointed out that the worldwide system of ocean currents, viewed at the molecular level, is hugely complex, considerably more so than Einstein's brain viewed at the neuronal level. I do not think I am going out on a limb by saying that the worldwide system of ocean currents is not conscious.

What if, however, we analyzed the world's oceans in such a way that we broke them down into one inch cubes, and considered each such cube a logic component, perhaps a logic gate. Each such cube (except those at the very bottom or surface of the ocean) abuts six neighbors face-to-face, and touches 20 others tangentially at the corners and edges. Now choose some physical aspect of each of these cubes of water that is likely to influence neighboring cubes, say micro-changes in temperature, or direction of water flow, or rate of change of either of them, and let this metric be considered the "signal" (0 or 1, or whatever the logic component deals with).
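Purely as a toy illustration of this reading-off of "signals" (the grid size, the numbers, and the thresholding rule here are all invented), one could do something like the following and then ask whether some cube happens, at this instant and under this reading, to "compute" a gate over its neighbors:

```python
# A toy illustration of the construction described above: carve a body of
# water into cells, pick some physical metric, threshold it into a "signal",
# and you have the raw material for declaring the cells to be logic
# components. Everything here is invented for the sake of the example.

import random

random.seed(1953)

# A 4 x 4 x 4 grid of "one-inch cubes", each with a temperature reading.
temps = {(x, y, z): 10 + random.random()
         for x in range(4) for y in range(4) for z in range(4)}

def signal(cell, threshold=10.5):
    """Read a cube as a bit: 1 if warmer than the threshold, else 0."""
    return 1 if temps[cell] > threshold else 0

# Does this cube happen, right now, to behave like an AND of two of its
# face-neighbors? Change the metric, the grain, or the threshold, and the
# "circuit" you read off is a different one entirely.
a, b, out = (0, 0, 0), (1, 0, 0), (0, 1, 0)
acts_like_and_gate = (signal(out) == (signal(a) and signal(b)))
print(acts_like_and_gate)
```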

Now suppose that for three and a half seconds in 1953, just by chance, all of the ocean's currents analyzed in just this way actually implemented exactly the functional organization that a functionalist would say is the defining characteristic of a mind. Were the oceans conscious for those three and a half seconds? What if we had used cubic centimeters instead of cubic inches? Or instead of temperature, or direction of water flow, we used some other metric as the signal, like average magnetic polarity throughout each of the cubes? If we change the units in which we are interested in these ways, our analysis of the logical machine thereby implemented changes, as does the block diagram. Would the oceans not have been conscious because of these sorts of changes of perspective on our part?

What if we gerrymander our logic components, so that instead of fixed cubes, each logic component is implemented by whatever amorphous, constantly changing shape of seawater is necessary to shoehorn the oceans into our functional description so that we can say that the oceans are right now implementing our conscious functional machine? This is a bit outrageous, as we are clearly having our chunking of logic components do all the heavy lifting. Nevertheless, as long as it is conceivable that we could do this, even though it would be very difficult to actually specify the constantly changing logic components, we would have to concede that the oceans are conscious right now. Is it not clear that there is an uncomfortable arbitrariness here, that a functionalist could look at any given system in certain terms and declare it to be conscious, but look at it in some other terms and declare it not conscious?

Our deciding that a system is conscious should not depend on our method of analysis in this way. I just am conscious, full stop. My consciousness is not a product of some purported functional layout of my brain, when looked at in certain terms, at some level of granularity. It does not cease to be because my brain is looked at in some other terms at some other level of granularity. That I am conscious right now is not open to debate, it is not subject to anyone's perspective when analyzing the physical makeup of my brain. Consciousness really does exist in the Hard Problem sense, in all its spooky, mysterious, ineffable glory. But it does not exist by virtue of a purported high-level functional organization of the conscious system. The high-level functional organization of a system simply does not have the magical power to cause something like consciousness to spring into existence, beyond any power already there in the low-level picture of the same system. As soon as we start talking about things that are "realized" or "implemented" by something else, we have entered the realm of the may-be-seen-as, and we have left the realm of the just-is, which is the realm to which consciousness belongs.

[1] Don't anthropomorphize computers. They don't like it.