
2011-06-30
Is intelligence a prerequisite for consciousness, or vice versa?

Of course those aren't the only two options. The others are: 1. consciousness and intelligence can each exist completely independently of the other, and 2. they are the same thing, or just two parts of the same process, so there's no separating them.

 

Unfortunately, we don't have agreed-upon and tested definitions of either intelligence or consciousness, and can't even definitively state whether any arbitrarily chosen subject falls into either category. We only have one set of subjects (humans) that we can say with surety are both conscious and intelligent, and that limits our ability to answer this question. But given these limitations, can we imagine a person who's intelligent, but not conscious? What about conscious, but not intelligent?

 

At least in humans, I would expect that the two processes are so intertwined, having evolved together using the same biological resources in the brain, that we couldn't have someone who's conscious, but not intelligent or vice versa. However, is it possible to design a machine which would be intelligent, but not conscious?


2011-07-04
Is intelligence a prerequisite for consciousness, or vice versa?
Reply to Tristan Cunha
If you're an undergraduate at Tufts University, you really should be trying to get hold of Dan Dennett to discuss this. I'm not sure how available he is there, but he's really the person you want to be talking to, as he's a student of Ryle's, whom you should absolutely read (The Concept of Mind and maybe the article "Knowing How and Knowing That" as well) if this question interests you.
As you point out, the definitions of the terms have some play in them, and this will make answering the question difficult. However, I believe the answer Ryle would give is, "Yes, intelligence precedes consciousness." Ryle was strongly opposed to certain ways of understanding consciousness/the mind, though, and so if the term consciousness were being used according to one of those ways of understanding the mind, then I imagine the answer would be, "There is only intelligence."

2011-07-04
Is intelligence a prerequisite for consciousness, or vice versa?
Reply to Tristan Cunha
Intelligence, viewed from a common-sense approach, is a quality denoting an ability to respond more or less correctly and quickly to circumstances. If consciousness is not present, how can the mechanism, be it human or machine, understand what the situation or circumstance is demanding?

2011-07-04
Is intelligence a prerequisite for consciousness, or vice versa?
Reply to Tristan Cunha
I think that a week or two working as an orderly on a neurosurgical ward would provide interesting answers to your thread question. I strongly recommend it to anyone considering being a philosopher of mind. A little basic fieldwork always pays off. We are far from having only 'one set of subjects'.
But you seem to be more interested in a different question - something like 'can a machine carry out tasks based on logical rules (which seems to be so) without this being associated with an instance of a sense of knowing that the solutions to the problems we see "the machine" as the solver of are solutions to such problems'. The trouble with this is that the machine that is my laptop on which I type this post has no definable boundaries. Its activities are linked in with the entire world wide web and beyond. The same applies to human beings. We have got used to the idea that an instance of a sense of knowing belongs to a problem solving 'system' (a person) but neurological disease raises all sorts of serious paradoxes for this approach. 

I think we should consider the possibility that an instance of a sense of knowing is associated with a locus or juncture where that which is known is the input. If so, the juncture in a computer should be a gate through which a data string is passed. It is hard to see how you get a sense of knowing you are solving what we call a problem on the basis of getting either a 0 or a 1 as input. The brain differs in two major respects. Firstly, the data being sent around is sent in thousands of copies to thousands of places at once. Secondly, the integrators (neurons) receive inputs easily complex enough to encode something describable in a thousand words (I doubt we ever have a sense of knowing that cannot be). So there should be lots of neurons receiving an input encoding such a sense, just as there are lots encoding the presence of a red blob. We accept that neurons get inputs that encode a red blob, but strangely we do not tend to think of neurons getting inputs that encode a sense of knowing. There seems to be no logical reason for this. People often ask 'which one is me?' and find it very hard to see the unjustified premise implicit in this question.



2011-07-04
Is intelligence a prerequisite for consciousness, or vice versa?
Reply to Tristan Cunha
I can imagine a being which experiences nothing but pain but which has no idea what it is experiencing. Since it experiences pain, it is conscious; since it has no idea what it is experiencing, it isn't intelligent. Contrary to the suggestion that we only have sufficient reason to believe that other humans are conscious, I think we have good reason to believe that many animals of all sorts feel pain and pleasure, and hence are conscious. Why? Well, they react like we do when we experience pain or pleasure, and we have no more reason to believe that they are non-conscious "animal-zombies" than we do to believe that other humans are non-conscious "human zombies". It seems like there are certain primitive animals which can experience something like pain, but which we would be hard-pressed to call intelligent, like goldfish. So, this seems like plenty of reason to believe that intelligence isn't a prerequisite to consciousness.

Whether consciousness is a prerequisite to intelligence is a more interesting question, and it seems to me to depend on how we cash out "intelligence". "Intelligent" is one of those one-place predicates which has its origin in a two-place "scalar" relation -- the "more intelligent than" or "less intelligent than" relation. It is a lot like "tall" and "fat" in this regard -- a person is tall iff they are taller than x, a person is fat iff they are fatter than x -- where x stands for whichever arbitrary "borderline" of fatness or tallness is operative in the context of our discussion. You can generate sorites paradoxes with "intelligent" in the same way you can with "tall" or "fat" or "loud" or "bald". So, whether or not a thing has intelligence is going to be relative to (a) some way of ranking things as more or less intelligent and (b) some arbitrary line on the scale above which things are intelligent and below which they are not.
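Put schematically (my formalization, not the poster's), the analysis amounts to:

\[
\text{Intelligent}(x) \iff x \succ_R \theta
\]

where \(\succ_R\) is the contextually operative "more intelligent than" ordering and \(\theta\) is the arbitrary borderline; the sorites arises because no particular choice of \(\theta\) is better motivated than its near neighbours.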

Notice how different this is from the predicate "is conscious" -- a thing either is conscious or it isn't, and while a thing can be conscious of more or less than some other thing, there's no natural sense in which to talk about a thing being more conscious or less conscious. (Maybe early in the morning I am 'less conscious' than after I have had coffee, but I think this means only that I am not conscious of as much, e.g., I am more likely to stub my toe.) You can't generate sorites paradoxes with "is conscious".

Three ways of ranking things on the scale as "more intelligent" or "less intelligent" come to mind, though I'm sure there are other ways to do it:

1. The first way is to consider a thing more intelligent than another if it possesses more of the virtues involved in processing information (speed, memory, low rate of corruption of information, etc.), where 'information' is a technical term that does not imply a subject's understanding of the information, or a representational quality to the information, but only a preservation of certain patterns which match up with real-world probabilities. In this sense, some computers are more intelligent than other computers, some computers are more intelligent than Jeopardy contestants, certain complex biological processes might be deemed "intelligent", and so on. In this sense, consciousness is not a prerequisite to intelligence.

2. The second way to consider a thing more intelligent than another is if it possesses more of the virtues involved in making use of mental representations of the world, such as those obtained through perception. For example, an animal which runs through the maze the same way each time without learning is not so intelligent as one which forms a mental image of the maze from its past failures in order to figure out a way out. In this sense, consciousness is a prerequisite to intelligence iff consciousness is a prerequisite to having mental representations of the world. This is a debated question.

3. The third way to consider a thing more intelligent than another is if it possesses more of the virtues involved in understanding and using concepts. I think this is the sense we typically have in mind when we say that certain people are more intelligent than others because of IQ scores and the like. Here it seems like everything in the extension of "intelligent" (even if one draws the line very low) is going to be in the extension of "conscious", maybe for some sort of biological reasons, as you suggest. Whether that means consciousness is necessarily a prerequisite to intelligence (as opposed to this being a contingent matter of how things worked out in our world) will depend on whether or not consciousness is necessarily a prerequisite to using concepts. Here I suspect that while a being might *have* many concepts while not conscious, a being can only *use* concepts if it uses a process which is either conscious or potentially accessible to consciousness. If my hunch is right, then consciousness would be a prerequisite to 'intelligence' in this sense.

2011-07-11
Is intelligence a prerequisite for consciousness, or vice versa?
Reply to Tristan Cunha
One interesting testbed for this issue is AI: many people would say that we have some intelligence in current artificial systems, but little or no consciousness. I suppose this is similar to the suggestion of working in a neurosurgical ward (I have done that) or looking at other animals. ---
Advertisement: I organise a conference on philosophy of AI where this might be a suitable topic: http://www.pt-ai.org

2011-09-22
Is intelligence a prerequisite for consciousness, or vice versa?
Reply to Tristan Cunha
After giving the issue more thought, and specifically thinking about progress in AI, I think the key to understanding intelligence may be to think of it as a tool for understanding the world. The most basic way of understanding the world is to find patterns in it, because if a pattern exists it is either 1. caused by something in or about the world, or 2. a random fluke. If we can pick out the true patterns from the flukes, we can make predictions about the world, the things in it, and the fundamentals that govern it. If we want a basic framework for describing intelligence as the ability to process patterns like this, the capabilities required for something to be intelligent could be thought of as:

1. Find patterns in observations
2. Store the patterns for later use
3. Recognize when part of a stored pattern is observed again
4. Predict the rest of the pattern based on the observed part

Can we build up a full model of intelligence from this framework? I believe so, and I think it's possible to have a theory of intelligence like this without requiring consciousness. First, let's clarify some details for the different steps.

First step: finding patterns in observations would seem to involve matching up two or more portions of our observations, either from different areas or different times, to see if they match. If parts of our observations repeat or match, then there may be a pattern there.

Second step: obviously, once a potential pattern is found it has to be stored for use somehow, and the ability to store patterns is also needed for the first step (as part of comparing potential pattern segments across time or space).

Third step: once a pattern is stored, we can then try to match new observations against it to see if part of the current observations matches part of one of our stored patterns. This won't always be a simple process; it will have to include the ability to recognize two (or more) patterns in a single observation. For example, one stored pattern could be how a ball thrown up in the air reacts to a strong wind, and another could be how a ball thrown to us follows a parabola; if we then see a ball thrown to us in a strong wind, we can recognize both patterns. We should also include the ability to recognize nested patterns. For example, mice like cheese and will try to eat it, and if anything touches the cheese on a set mousetrap, it will snap shut. Our ability to understand the world is based on the ability to constantly recognize many concurrent patterns of varying complexity.

Fourth step: once we have recognized that part of our observations matches part of a stored pattern, we can make predictions about what things are like outside of our current observations (either not observable, or before we started observing, or at a point in the future). In the previous examples, we can predict where to stand to catch the ball, or who threw the ball, or that the mouse will get caught in the trap, etc. The more patterns we can find, store, and recognize, the more predictions we can make about the world: what underlying forces motivate things to act as they do, what set the events we're observing in motion, or what the result of our current observations will be.
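To make the four steps concrete, here is a minimal sketch of my own (not part of the original post), using fixed-length windows over a symbol stream as the "patterns". Every name here (PatternLearner, window, etc.) is a hypothetical illustration; real pattern finding would of course be far richer than literal substring matching.

```python
from collections import defaultdict, Counter

class PatternLearner:
    """Toy illustration of the four steps: find, store, recognize, predict.

    A 'pattern' is just a fixed-length window of a symbol stream;
    a 'prediction' is the most common symbol seen after that window.
    """

    def __init__(self, window=3):
        self.window = window
        # Step 2 (store): map each observed window to counts of what followed it.
        self.patterns = defaultdict(Counter)

    def observe(self, stream):
        # Step 1 (find): slide over the stream looking for repeating windows.
        for i in range(len(stream) - self.window):
            context = tuple(stream[i:i + self.window])
            nxt = stream[i + self.window]
            self.patterns[context][nxt] += 1  # store the pattern's continuation

    def predict(self, recent):
        # Step 3 (recognize): does the tail of current observations match a stored pattern?
        context = tuple(recent[-self.window:])
        if context not in self.patterns:
            return None  # nothing recognized, so no prediction is possible
        # Step 4 (predict): fill in the rest of the pattern from experience.
        return self.patterns[context].most_common(1)[0][0]

learner = PatternLearner(window=2)
learner.observe("abcabcabcabx")
print(learner.predict("ab"))  # 'c' most often followed 'ab', so it predicts 'c'
```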

Thinking about intelligence using this framework leads to some interesting conclusions:

1. If something is missing one of these capabilities, it could appear to be intelligent, but ultimately we would realize it isn't. For example, a single-celled organism can have some complicated reactions to its environment, but if all of it is pre-programmed, the result of evolution, then it can't react to new environments or to new changes in its environment, and it's not displaying intelligence. The same is true of programmed robots: if they're relying on our intelligence, then we're doing some (or all) of the steps for them.
2. If something has a limit on one of these capabilities, we may consider it intelligent, but only in a limited way. Consider the hypothetical example of a robot that learns to retrieve balls: its ability to "find" patterns is limited to only balls, but if it can correctly learn to retrieve new and different kinds of balls (beach balls, tennis balls, etc.), then we may consider it to have a limited kind of intelligence, because it's carrying out all of the steps, even if the "find" step is extremely limited.
3. How intelligent something is depends not only on its ability to perform each of these steps, but also on how much time it's had to observe the world and what kinds of observations it has access to. A super-intelligent machine that's only observed strings of random numbers wouldn't be able to find any true patterns; it wouldn't be able to exercise its intelligent ability. And a gifted child may not be able to make as many or as accurate predictions as someone who learns (finds and stores patterns) more slowly, but has the benefit of much more experience.
4. The speed with which something can perform each step is important, since more patterns can be found and more predictions made in the same amount of time. This is especially so for the predict step: if you can't make the prediction of where the ball is going to land in time to move there, then it's not very useful.
5. This definition of intelligence doesn't include the idea of consciousness, even though our experience is mostly with ourselves: intelligent animals that are conscious. If we define intelligence this way, it would seem that consciousness is not a prerequisite for intelligence; we could imagine a machine that carries out these steps which we would consider intelligent but wouldn't consider conscious.
6. However, we can also propose interesting ideas about the purpose consciousness serves for an intelligent being. For example, we can think of consciousness as the process of constantly attempting to match many (or all) of our stored patterns, in parallel, against our current observations. The patterns that match, or have the best potential for being true of our observations, would create our conscious experience of the world. For example, our visual experience of the world could be created from our stored patterns of visual information: how different kinds of 3D objects change as viewed from different positions and times, how different surfaces react to light, etc. The patterns that we can pick out in our observations can be used to create a prediction of a 3D world. (A toy sketch of this parallel-matching idea follows after this list. I've got lots more thoughts about how consciousness would work using this model of intelligence, but I'm going to try to stay on topic.)
7. From my (limited) understanding of AI, the goal is usually to program a machine with sets of rules that will let it solve the problems we want it to solve. The better we can make the rules, the better it performs. But if we look at it using this framework, we could instead think about creating a machine that doesn't have any built-in rules. If it has the ability to complete these steps (find, store, recognize and predict) in the simplest and most general terms, then the only thing it would need to be intelligent (at least in some limited sense) would be experience. We should expect it to take a lot of experience, though. A human brain has many times more processing power than any machine we've built, and it still takes somewhere between 2 and 20 years for it to be able to learn intelligent tasks. If we assume we're working with machines that aren't as powerful, and have hardware that's probably optimized more for carrying out calculations than for pattern recognition, then it could take years of observations for a machine to reach even very basic levels of intelligence.
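As a toy illustration of point 6 above (again my own sketch, not the poster's): score every stored pattern against the current observation in one pass, and keep the best-matching ones as the system's working model of the moment. The scoring function and threshold are crude stand-ins for whatever real matching would involve.

```python
def match_score(pattern, observation):
    # Crude stand-in for real pattern matching: the fraction of the
    # pattern's features that appear in the current observation.
    overlap = len(set(pattern) & set(observation))
    return overlap / len(set(pattern))

def current_model(stored_patterns, observation, threshold=0.5):
    # "Consciousness" on this toy picture: every stored pattern is
    # checked against the observation in parallel, and the ones that
    # fit best make up the experienced model of the world right now.
    scored = [(match_score(p, observation), p) for p in stored_patterns]
    return [p for score, p in sorted(scored, reverse=True) if score >= threshold]

stored = [("ball", "arc", "catch"), ("wind", "drift"), ("mouse", "cheese", "trap")]
print(current_model(stored, observation=("ball", "wind", "arc")))
# -> the ball and wind patterns are "experienced"; the mousetrap pattern is not
```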

2011-09-22
Is intelligence a prerequisite for consciousness, or vice versa?
Reply to Tristan Cunha

The first distinction to make is between consciousness and self-consciousness, since often people will use those terms interchangeably. I think it is likely that there are some animals that are in some sense conscious but not self-conscious. That is, they may be conscious of aspects of their environment (call this, perhaps, 'perceptual consciousness') but not necessarily self-conscious (in the sense of being aware of their own existence as psychological subjects).

 

Now, if the question becomes 'Is intelligence a prerequisite for self-consciousness, or vice versa?', then my answer is that they are inextricably linked. More precisely, I suggest that concept possession (the essence of intelligence, if you will) is constitutive of self-consciousness. For a defence of this position look here: http://www.sciencedirect.com/science/article/pii/S1053810011000456


2011-09-26
Is intelligence a prerequisite for consciousness, or vice versa?
Reply to Tristan Cunha
Hi Tristan, 

I'm glad you have brought up this issue. I've been delaying a paper (on the meaning of 'mental') precisely because I have been unable to make up my mind on whether or not intelligence and consciousness are mutually dependent. I agree with Tim that The Concept of Mind is a must-read, though I'm not sure Ryle would say that "intelligence precedes consciousness". At any rate, and more to the point at issue (Is intelligence a prerequisite for consciousness, or vice versa?), he did take it that neither entails nor presupposes the other. On his view, the distinction between intelligent and non-intelligent behaviour cuts across the distinction between what happens (or is done) inside and outside consciousness. As he puts it: “The distinction between talking sense and babbling, or between thinking what one is saying and merely saying, cuts across the distinction between talking aloud and talking to oneself. What makes a verbal operation an exercise of intellect is independent of what makes it public or private. Arithmetic done with pencil and paper may be more intelligent than mental arithmetic, and the public tumblings of the clown may be more intelligent than the tumblings which he merely ‘sees’ in his mind’s eye or ‘feels’ in his mind’s legs”.
In a similar vein, Feigl (in another classic work in the philosophy of mind, The Mental and the Physical) contended that the concept of the sapient and the concept of the sentient are not coextensive. I guess Jeff points to the same by noting that there are certain primitive animals which are able to experience something like pain “but which we would be hard-pressed to call intelligent”. To quote Ryle again: “Deafness is not a species of stupidity, nor is a squint any sort of turpitude; the retriever’s keenness of scent does not prove him intelligent; and we do not try to train or shame children out of colour-blindness or think of them as mentally defective (...) Having a sensation is not an exercise of a quality of intellect or character. Hence we are not too proud to concede sensations to reptiles”. 

On the other hand, however, the idea that intelligence and consciousness necessitate each other has a long history in philosophy: “There can be no intellection without previous sense-perception; and there is no sense-impression which does not start some ripple of intelligence” (Thorburn 1917). This way of seeing things may explain why W. James talked rather indistinctly of intelligence, consciousness and even conscious intelligence. Perhaps he thought the sort of intelligence we humans have is necessarily conscious. As the Aristotelian tradition has it, Nihil est in intellectu quod non prius fuerit in sensu (nothing is in the intellect that was not first in the senses). Conversely, from a Kantian (or perhaps semi-Kantian) point of view, consciousness might be thought of as entailing intelligence, at least if you think that there can be no experience without concepts.

Unfortunately, we cannot settle the controversy by simply appealing to our intuitions, for there seems to be a conflict of intuitions among philosophers. Some think the retriever's keenness of scent does not prove him intelligent; others take reptiles and even more primitive forms of life to be both intelligent and conscious, and to be the former in virtue of their being the latter, or vice versa. (I believe something along these lines can be found in the work of philosophers like Bergson, am I right? Not sure about James.) Thus, for example, Vincent's point that “many people would say that we have some intelligence in current artificial systems, but little or no consciousness” is not really that helpful, because in fact many people would also intuitively deny that we have any intelligence in current artificial systems. If we want to solve the issue by appealing to intuitions, we need to find uncontroversial intuitions. Otherwise we have not gone any further.

So, Tristan, I wouldn't be as optimistic about getting anywhere without first offering a definition of intelligence. I do not think we need a definition of consciousness, since anyone who has conscious experiences knows what that is. Actually, I think we cannot produce such a definition, for very much the same reason one cannot fully describe what it is like to see a colour or to hear a sound. But the concept of intelligence is quite different. You can analyse it, and you must if you want to establish whether or not intelligence and consciousness depend on each other. (For some relatively contemporary philosophical work on the concept of intelligence, I can suggest Altman's (1997) book The Concept of Intelligence and Sterrett's (2002) paper “Too Many Instincts: Contrasting Philosophical Views on Intelligence in Humans and Non-Humans”, as well as, of course, Ryle's The Concept of Mind.) To me, Ryle's proposal is the best one. Unfortunately, some exegesis is required to capture it. His basic thought is that being intelligent, or having intelligence, consists in having the capacity to exhibit both voluntary and goal-directed behaviour -- or, as he sometimes puts it, the capacity to "try to get things right".

Hope this was helpful and motivates more thoughts on the issue.

Alfredo. 


2011-10-06
Is intelligence a prerequisite for consciousness, or vice versa?
Reply to Alfredo Gaete

I agree that our intuition isn't the most useful guide in this situation, since every human who is intelligent also has consciousness, so we can't directly study one without the other. We should assume that both intelligence and consciousness evolved as tools which are useful, but not sufficient, for survival. I would guess that it's most likely possible to have intelligence without consciousness, even if there aren't any examples of this "in nature", because I think we can imagine a machine (or "agent") that is just intelligent. It wouldn't be independent in any way; it would just take in sense data and pump out predictions for any types of patterns it has access to. But I don't know what it would mean to have an agent that is conscious but isn't intelligent. Ultimately it would seem that all examples of consciousness we've seen could be thought of as ways to make sense data useful, and in particular to make intelligent predictions about the state of the world based on sensory information. If this is the case, we wouldn't expect to see any animals which are conscious but not intelligent, but it wouldn't immediately rule out the possibility of consciousness without intelligence.

A counter example might be a simple animal of some sort that appears to feel pain, but doesn't appear to be intelligent at all. But we have no way of distinguishing between "pain" and "avoidance behavior" or similarly between "pleasure" and "seeking behavior." Even a plant will turn to face the sun, and I don't think that that's a sign of feeling pleasure.

Additionally, any agent that's intelligent is going to need some way to make the predictions it has available useful, some way to judge what to do based on the new predictions it has. One way to do this is by being conscious, but I think it's possible instead to have at least one sense be a "privileged" sense, such that predictions are always checked to see how they correlate with this one sense. For example, a simple animal could have several senses to detect its relationship with the world, plus a privileged sense which detects the presence of sugar molecules. If this animal has a nervous system capable of intelligence but not consciousness (and obviously some way to interact with the world too), then we can imagine a system (sketched in code after the list below) whereby:
1. Sense data is processed to look for patterns (intelligence)
2. Output is processed again, but compared with the privileged sense (detecting sugar)
3. Output is based on predictions which correlate with finding sugar.
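A minimal sketch of that three-stage loop (my illustration; every name here, such as sugar_correlation, is hypothetical): an intelligence module produces candidate predictions, and the only "evaluation" is correlation with the one privileged input.

```python
def find_predictions(sense_data, stored_patterns):
    # Stage 1: intelligence proper; return the stored patterns that
    # partially match the current sense data, as candidate predictions.
    return [p for p in stored_patterns if set(sense_data) & set(p["cue"])]

def act(sense_data, stored_patterns):
    # Stage 2: process the output again, but only against the privileged
    # sense: which predictions have historically correlated with sugar?
    predictions = find_predictions(sense_data, stored_patterns)
    sugar_linked = [p for p in predictions if p["sugar_correlation"] > 0.5]
    # Stage 3: behaviour is driven by whichever sugar-linked prediction
    # is strongest; no feeling or awareness is required anywhere.
    if not sugar_linked:
        return "wander"
    best = max(sugar_linked, key=lambda p: p["sugar_correlation"])
    return best["action"]

patterns = [
    {"cue": {"warmth", "light"}, "action": "move_toward", "sugar_correlation": 0.8},
    {"cue": {"vibration"}, "action": "retreat", "sugar_correlation": 0.1},
]
print(act({"light"}, patterns))  # -> "move_toward"
```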

I can imagine that such an animal wouldn't have any feelings, or be conscious. Of course without a better understanding of the process that creates consciousness it's not possible to say if this is consciousness or not, or even what it would feel like to be that animal. But at the very least it seems logically consistent to imagine intelligence without consciousness, while I'm not sure what an example of consciousness without intelligence would be.

Also, regarding Ryle's description of intelligence as the capacity to "try to get things right": it would seem that if we define intelligence as making predictions (or more exactly, finding, storing, recognizing and predicting patterns), intelligence is a prerequisite for getting things right but is not sufficient in itself. To be useful, an intelligent agent also needs some kind of utility function to tell it what to do with the predictions.