In Philosophy Without Intuitions, Herman Cappelen focuses on the metaphilosophical thesis he calls Centrality: Contemporary analytic philosophers rely on intuitions as evidence for philosophical theories. Using linguistic and textual analysis, he argues that Centrality is false. He also suggests that because most philosophers accept Centrality, they have mistaken beliefs about their own methods.
B Chandrasekaran writes: It appears that there are three realms: the realm of matter, the realm of representations, and the realm of qualia/intentions/consciousness, not just two: matter and consciousness. I like this distinction, although I think there might more naturally be four realms to distinguish.
This is just a beginning categorization. I claim no 'objective correctness' for it. And of course the categories can be fluid, and the same joke can be a member of more than one category (and perhaps it will be funnier if it is). But thinking about the jokes which I can recall from the Humour Weekend, most seem to fall squarely into one or another category, indicating that perhaps this is a useful way of dividing jokes. It seems to me that the "causes of humour" in all four classes are different, coming from different parts of the brain.
It is fairly well-known that certain hard computational problems (that is, 'difficult' problems for a digital processor to solve) can in fact be solved much more easily with an analog machine. This raises questions about the true nature of the distinction between analog and digital computation (if such a distinction exists). I try to analyze the source of the observed difference in terms of (1) expanding parallelism and (2) more generally, infinite-state Turing machines. The issue of discreteness vs continuity will also be touched upon, although it is not so important for analyzing these particular problems.
It's very interesting to see neurophysiological evidence brought to bear on the puzzling question of conscious experience. Many have observed that information-processing models of cognition seem to leave consciousness untouched; it is natural to hope that turning to neurophysiology might lead us to the Holy Grail. Still, I think there are reasons to be skeptical. There are good reasons to suppose that neurophysiological investigation contributes to cognitive explanation at best in virtue of constraining the information-processing structure of cognition. Of course this is a very large and significant role for it to play, but it may be over-optimistic to suppose that it can play some further explanatory role, taking us where information-processing theories cannot. If so, then neurophysiological accounts will be no more and no less successful at dealing with consciousness than information-processing accounts are.
We could have been characters in a huge computer simulation. It is a familiar idea that the whole world might be simulated on a computer, and things would seem exactly the same to us (and indeed, who is to say that we are not?).
Thanks to all the people who responded to my enquiry about the status of the Continuum Hypothesis. This is a really fascinating subject, which I could waste far too much time on. The following is a summary of some aspects of the feeling I got for the problems. This will be old hat to set theorists, and no doubt there are a couple of embarrassing misunderstandings, but it might be of some interest to non-professionals.
What follows are compressed versions of three lectures on the subject of "Mind and Modality", given at Princeton University the week of October 12-16, 1998. The first two form a series; the third stands alone to some extent. All are philosophically technical, and probably of interest mainly to philosophers. I hope that they make sense, at least to those familiar with my book _The Conscious Mind_. Lecture 1 recapitulates some of the material in the book in a somewhat different form, and adds some further material on conditionals and on Kripke. Note that one section has a more or less definitive formalization of the anti-materialist argument from the book (lots of people have asked for this). Lecture 2 pushes deeper into the heart of modality, further investigating the conceivability/possibility relationship and the epistemology of modality (with some material on the scrutability of truth in general), and arguing for a sort of modal rationalism. Lecture 3 gives an analysis of the content of beliefs about experiences, and applies this to a number of epistemological issues, including incorrigibility and the dialectic on "The Myth of the Given".
In article <firstname.lastname@example.org> email@example.com writes: Reminds me of a friend of mine who claims that the number 17 is "the most random" number. His proof ran as follows: pick a number. It's not really as good a random number as 17, is it? (Invariable Answer: "Umm, well, no...") This reminds me of a little experiment I did a couple of years ago. I stood on a busy street corner in Oxford, and asked passers-by to "name a random number between zero and infinity." I was wondering what this "random" distribution would look like.
Intro to what "first person" and "third person" mean. (outline the problems of the first person) (convenience of third person vs absoluteness of first person) (explain terminology) Dominance of third person, reasons. (embarrassment with first person) (division of reactions) (natural selection - those who can make the most noise) (analogy with behaviourism) Reductionism, hard line and soft line. Appropriation of first person terms by reductionists.
(1a) If Prince Albert Victor killed those people, he is Jack the Ripper (and Jack the Ripper killed those people). (1b) If Prince Albert Victor had killed those people, Jack the Ripper wouldn't have (and Prince Albert wouldn't have been Jack the Ripper).
A wealthy eccentric places two envelopes in front of you. She tells you that both envelopes contain money, and that one contains twice as much as the other, but she does not tell you which is which. You are allowed to choose one envelope, and to keep all the money you find inside.
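The classic paradoxical reasoning about this setup (not stated above, but a well-known fact about the puzzle) is that switching envelopes seems to pay: if yours holds x, the other supposedly holds 2x or x/2 with equal probability, for an expected value of 1.25x. A short simulation shows that switching and sticking in fact pay the same on average; this is only an illustrative sketch, and the function name, amounts, and trial count are my own assumptions, not part of the puzzle.

```python
import random

def average_payoff(swap: bool, trials: int = 200_000, base: float = 100.0) -> float:
    """Average winnings over many plays of the two-envelope game.

    One envelope holds `base`, the other 2 * `base` (the amounts are an
    arbitrary assumption for illustration). You pick one at random, then
    either keep it (swap=False) or switch to the other (swap=True).
    """
    total = 0.0
    for _ in range(trials):
        envelopes = [base, 2 * base]
        random.shuffle(envelopes)
        chosen, other = envelopes[0], envelopes[1]
        total += other if swap else chosen
    return total / trials
```

Both strategies converge on the same average, 1.5 times the smaller amount, which is one way of seeing that the "expected 1.25x gain from switching" argument must go wrong somewhere.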
The project that Dan Lloyd has undertaken is admirable and audacious. He has tried to boil down the substrate of information-processing that underlies conscious experience to some very simple elements, in order to gain a better understanding of the phenomenon. Some people will suspect that by considering a model as simple as a connectionist network, Dan has thrown away everything that is interesting about consciousness. Perhaps there is something to that complaint, but I will take a different tack. It seems to me that if we apply his own reasoning, we can see that Dan has not taken things far _enough_. When we have boiled things down to a system as simple as a connectionist network, it seems faint-hearted to stop there, and perhaps a little arbitrary as well. So I will take things further, and ask what seems to be the really interesting question in the vicinity: what is it like to be a thermostat?
What are the philosophical views of contemporary professional philosophers? Are more philosophers theists or atheists? Physicalists or non-physicalists? Deontologists, consequentialists, or virtue ethicists? We surveyed many professional philosophers in order to help determine the answers to these and other questions. This article documents the results.
Graeme Forbes (2011) raises some problems for two-dimensional semantic theories. The problems concern nested environments: linguistic environments where sentences are nested under both modal and epistemic operators. Closely related problems involving nested environments have been raised by Scott Soames (2005) and Josh Dever (2007). Soames (forthcoming) goes so far as to say that nested environments pose the “chief technical problem” for strong two-dimensionalism. We might call the problem of handling nested environments within two-dimensional semantics the nesting problem. We first lay out the basic principles of two-dimensional semantics and a simple treatment of necessity and apriority operators, and spell out how Forbes' puzzle arises within this framework. We then show how a generalized version of the puzzle arises independently of two-dimensional semantics. We go on to spell out a two-dimensional treatment of attitude verbs, along with a treatment of the apriority operator that fits it, and show how these handle Forbes' puzzles.
I would like to thank the authors of the 26 contributions to this symposium on my article “The Singularity: A Philosophical Analysis”. I learned a great deal from reading their commentaries. Some of the commentaries engaged my article in detail, while others developed ideas about the singularity in other directions. In this reply I will concentrate mainly on those in the first group, with occasional comments on those in the second. A singularity (or an intelligence explosion) is a rapid increase in intelligence to superintelligence (intelligence of far greater than human levels), as each generation of intelligent systems creates more intelligent systems in turn. The target article argues that we should take the possibility of a singularity seriously, and argues that there will be superintelligent systems within centuries unless certain specific defeating conditions obtain.
It is widely believed that for all p, or at least for all entertainable p, it is knowable a priori that (p iff actually p). It is even more widely believed that for all such p, it is knowable that (p iff actually p). There is a simple argument against these claims from four antecedently plausible premises.
Computation is central to the foundations of modern cognitive science, but its role is controversial. Questions about computation abound: What is it for a physical system to implement a computation? Is computation sufficient for thought? What is the role of computation in a theory of cognition? What is the relation between different sorts of computational theory, such as connectionism and symbolic computation? In this paper I develop a systematic framework that addresses all of these questions. Justifying the role of computation requires analysis of implementation, the nexus between abstract computations and concrete physical systems. I give such an analysis, based on the idea that a system implements a computation if the causal structure of the system mirrors the formal structure of the computation. This account can be used to justify the central commitments of artificial intelligence and computational cognitive science: the thesis of computational sufficiency, which holds that the right kind of computational structure suffices for the possession of a mind, and the thesis of computational explanation, which holds that computation provides a general framework for the explanation of cognitive processes. The theses are consequences of the facts that (a) computation can specify general patterns of causal organization, and (b) mentality is an organizational invariant, rooted in such patterns. Along the way I answer various challenges to the computationalist position, such as those put forward by Searle. I close by advocating a kind of minimal computationalism, compatible with a very wide variety of empirical approaches to the mind. This allows computation to serve as a true foundation for cognitive science.
The objects of credence are the entities to which credences are assigned for the purposes of a successful theory of credence. I use cases akin to Frege's puzzle to argue against referentialism about credence: the view that objects of credence are determined by the objects and properties at which one's credence is directed. I go on to develop a non-referential account of the objects of credence in terms of sets of epistemically possible scenarios.
When I say ‘Hesperus is Phosphorus’, I seem to express a proposition. And when I say ‘Joan believes that Hesperus is Phosphorus’, I seem to ascribe to Joan an attitude to the same proposition. But what are propositions? And what is involved in ascribing propositional attitudes?
W.V. Quine’s article “Two Dogmas of Empiricism” is one of the most influential works in 20th-century philosophy. The article is cast most explicitly as an argument against logical empiricists such as Carnap, arguing against the analytic/synthetic distinction that they appeal to along with their verificationism. But the article has been read much more broadly as an attack on the notion...
There are many ways the world might be, for all I know. For all I know, it might be that there is life on Jupiter, and it might be that there is not. It might be that Australia will win the next Ashes series, and it might be that they will not. It might be that my great-grandfather was my great-grandmother's second cousin, and it might be that he was not. It might be that copper is a compound, and it might be that it is not.
The philosophical interest of verbal disputes is twofold. First, they play a key role in philosophical method. Many philosophical disagreements are at least partly verbal, and almost every philosophical dispute has been diagnosed as verbal at some point. Here we can see the diagnosis of verbal disputes as a tool for philosophical progress. Second, they are interesting as a subject matter for first-order philosophy. Reflection on the existence and nature of verbal disputes can reveal something about the nature of concepts, language, and meaning. In this article I first characterize verbal disputes, spell out a method for isolating and resolving them, and draw out conclusions for philosophical methodology. I then use the framework to draw out consequences in first-order philosophy. In particular, I argue that the analysis of verbal disputes can be used to support the existence of a distinctive sort of primitive concept and that it can be used to reconstruct a version of an analytic/synthetic distinction, where both are characterized in dialectical terms alone.
What happens when machines become more intelligent than humans? One view is that this event will be followed by an explosion to ever-greater levels of intelligence, as each generation of machines creates more intelligent machines in turn. This intelligence explosion is now often known as the “singularity”. The basic argument here was set out by the statistician I.J. Good in his 1965 article “Speculations Concerning the First Ultraintelligent Machine”: Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion”, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make. The key idea is that a machine that is more intelligent than humans will be better than humans at designing machines. So it will be capable of designing a machine more intelligent than the most intelligent machine that humans can design. So if it is itself designed by humans, it will be capable of designing a machine more intelligent than itself. By similar reasoning, this next machine will also be capable of designing a machine more intelligent than itself. If every machine in turn does what it is capable of, we should expect a sequence of ever more intelligent machines. This intelligence explosion is sometimes combined with another idea, which we might call the “speed explosion”. The argument for a speed explosion starts from the familiar observation that computer processing speed doubles at regular intervals. Suppose that speed doubles every two years and will do so indefinitely. Now suppose that we have human-level artificial intelligence designing new processors.
Then faster processing will lead to faster designers and an ever-faster design cycle, leading to a limit point soon afterwards. The argument for a speed explosion was set out by the artificial intelligence researcher Ray Solomonoff in his 1985 article “The Time Scale of Artificial Intelligence”. Eliezer Yudkowsky gives a succinct version of the argument in his 1996 article “Staring at the Singularity”: “Computing speed doubles every two subjective years of work...”
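The arithmetic behind the "limit point" can be made concrete. If the first doubling takes two years of objective time, but the designers thereafter run at the speed of the machines they design, each subsequent doubling takes half the objective time of the one before, and the total converges: 2 + 1 + 1/2 + ... = 4 years. The following is a minimal sketch of that series; the two-year figure comes from the quoted argument, while the function name and the doubling cutoff are my own illustrative assumptions.

```python
def time_to_limit(first_interval_years: float = 2.0, doublings: int = 50) -> float:
    """Objective time consumed by successive speed doublings, assuming each
    doubling takes half the objective time of the previous one (since the
    designers themselves now run twice as fast). The running total is a
    geometric series that approaches 2 * first_interval_years.
    """
    total = 0.0
    interval = first_interval_years
    for _ in range(doublings):
        total += interval
        interval /= 2.0
    return total
```

After fifty doublings the total is already indistinguishable from four years in floating point: the series never passes its limit point, which is the sense in which the speed explosion reaches a "singularity" in finite objective time.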
The basic question of ontology is “What exists?”. The basic question of metaontology is: are there objective answers to the basic question of ontology? Here ontological realists say yes, and ontological anti-realists say no. (Compare: The basic question of ethics is “What is right?”. The basic question of metaethics is: are there objective answers to the basic question of ethics? Here moral realists say yes, and moral anti-realists say no.) For example, the ontologist may ask: Do numbers exist? The Platonist says yes, and the nominalist says no. The metaontologist may ask: is there an objective fact of the matter about whether numbers exist? The ontological realist says yes, and the ontological anti-realist says no. Likewise, the ontologist may ask: Given two distinct entities, when does a mereological sum of those entities exist? The universalist says always, while the nihilist says never. The metaontologist may ask: is there an objective fact of the matter about whether the mereological sum of two distinct entities exists? The ontological realist says yes, and the ontological anti-realist says no. Ontological realism is often traced to Quine (1948), who held that we can determine what exists by seeing which entities are endorsed by our best scientific theory of the world. In recent years, the practice of ontology has often presupposed an ever-stronger ontological realism, and strong versions of ontological realism have received explicit statements by Fine (this volume), Sider (2001; this volume), van Inwagen (1998; this volume), and others.
A number of popular arguments for dualism start from a premise about an epistemic gap between physical truths and truths about consciousness, and infer an ontological gap between physical processes and consciousness. Arguments of this sort include the conceivability argument, the knowledge argument, the explanatory-gap argument, and the property dualism argument. Such arguments are often resisted on the grounds that epistemic premises do not entail ontological conclusions. My view is that one can legitimately infer ontological conclusions from epistemic premises, if one is very careful about how one reasons. To do so, the best way is to reason first from epistemic premises to modal conclusions (about necessity and possibility), and from there to ontological conclusions. Here, the crucial issue is the link between the epistemic and modal domains. How can one reason from theses about what is knowable or conceivable to theses about what is necessary or possible? To bridge the epistemic and modal domains, the framework of two-dimensional semantics can play a central role. I have used this framework in earlier work (Chalmers 1996) to mount an argument against materialism. Here, I want to revisit the argument, laying it out in a more explicit and careful form, and responding to a number of objections. In what follows I will concentrate mostly on the conceivability argument. I think that very similar considerations apply to the other arguments mentioned above, however. In the final section of the paper, I show how this analysis might yield a unified treatment of a number of anti-materialist arguments.
What is consciousness? How does the subjective character of consciousness fit into an objective world? How can there be a science of consciousness? In this sequel to his groundbreaking and controversial The Conscious Mind, David Chalmers develops a unified framework that addresses these questions and many others. Starting with a statement of the "hard problem" of consciousness, Chalmers builds a positive framework for the science of consciousness and a nonreductive vision of the metaphysics of consciousness. He replies to many critics of The Conscious Mind, and then develops a positive theory in new directions. The book includes original accounts of how we think and know about consciousness, of the unity of consciousness, and of how consciousness relates to the external world. Along the way, Chalmers develops many provocative ideas: the "consciousness meter", the Garden of Eden as a model of perceptual experience, and The Matrix as a guide to the deepest philosophical problems about consciousness and the external world. This book will be required reading for anyone interested in the problems of mind, brain, consciousness, and reality.
A month ago, I bought an iPhone. The iPhone has already taken over some of the central functions of my brain. It has replaced part of my memory, storing phone numbers and addresses that I once would have taxed my brain with. It harbors my desires: I call up a memo with the names of my favorite dishes when I need to order at a local restaurant. I use it to calculate, when I need to figure out bills and tips. It is a tremendous resource in an argument, with Google ever present to help settle disputes. I make plans with it, using its calendar to help determine what I can and can’t do in the coming months. I even daydream on the iPhone, idly calling up words and images when my concentration slips. Friends joke that I should get the iPhone implanted into my brain. But if Andy Clark is right, all this would do is speed up the processing, and free up my hands. The iPhone is part of my mind already. Clark is a connoisseur of the myriad ways in which the mind relies on the world to get its work done. The first part of this marvelous book explores some of these ways: the extension of our bodies, the extension of our senses, and crucially, the use of language as a tool to extend our thought. The second part of the book defends the thesis that in at least some of these cases, the world is not serving as a mere instrument for the mind. Rather, the relevant parts of the world have become parts of my mind. My iPhone is not my tool, or at least it is not wholly my tool. Parts of it have become parts of me. This is the thesis of the extended mind: when parts of the environment are coupled to the brain in the right way, they become parts of the mind. The thesis has a long history: I am told that there are hints of it in Dewey, Heidegger, and Wittgenstein. But no-one has done as much to give life to the idea as Andy Clark.
In a series of important books and articles—Being There, Natural-Born Cyborgs, “Magic words: How language augments human computation”, and many others—he has explored the many ways in which the boundaries between mind and world are far more flexible than one might have thought...
Growing up, I was a mathematics and science geek. I read everything I could in these areas. Every now and then, something would point in a philosophical direction. Perhaps my most important influence was reading Hofstadter’s Gödel, Escher, Bach as a teenager. I read it initially for the mathematical parts, but it planted a seed for thinking about the mind. Later, Hofstadter and Dennett’s The Mind’s I got me thinking more about the mind–body problem in particular.
Confronted with the apparent explanatory gap between physical processes and consciousness, there are many possible reactions. Some deny that any explanatory gap exists at all. Some hold that there is an explanatory gap for now, but that it will eventually be closed. Some hold that the explanatory gap corresponds to an ontological gap in nature.
Frank Ramsey (1931) wrote: If two people are arguing 'if p will q?' and both are in doubt as to p, they are adding p hypothetically to their stock of knowledge and arguing on that basis about q. We can say that they are fixing their degrees of belief in q given p. Let us take the first sentence the way it is often taken, as proposing the following test for the acceptability of an indicative conditional: ‘If p then q’ is acceptable to a subject S iff, were S to accept p and consider q, S would accept q. Now consider an indicative conditional of the form (1) If p, then I believe p. Suppose that you accept p and consider ‘I believe p’. To accept p while rejecting ‘I believe p’ is tantamount to accepting the Moore-paradoxical sentence ‘p and I do not believe p’, and so is irrational. To accept p while suspending judgment about ‘I believe p’ is irrational for similar reasons. So rationality requires that if you accept p and consider ‘I believe p’, you accept ‘I believe p’.
In the Garden of Eden, we had unmediated contact with the world. We were directly acquainted with objects in the world and with their properties. Objects were simply presented to us without causal mediation, and properties were revealed to us in their true intrinsic glory.
At the April 2006 meeting of the Central Division of the American Philosophical Association, in an author-meets-critics session on Scott Soames' book _Reference and Description: The Case Against Two-Dimensionalism_, I presented a comment on Soames' book, "Scott Soames' Two-Dimensionalism". The other critic was Robert Stalnaker. Soames presented his response to critics. Below is a reply to Soames' response to me, for those who were at the session and interested others. Note that this response was mostly written before the session, except for one or two paragraphs where the discussion in the session is mentioned.
The term ‘emergence’ often causes confusion in science and philosophy, as it is used to express at least two quite different concepts. We can label these concepts _strong emergence_ and _weak emergence_. Both of these concepts are important, but it is vital to keep them separate.
Scott Soames’ Reference and Description contains arguments against a number of different versions of two-dimensional semantics. After early chapters on descriptivism and on Kripke’s anti-descriptivist arguments, a chapter each is devoted to the roots of two-dimensionalism in “slips, errors, or misleading suggestions” by Kripke and Kaplan, and to the two-dimensional approaches developed by Stalnaker (1978) and by Davies and Humberstone (1981). The bulk of the book (about 200 pages) is devoted to “ambitious two-dimensionalism”, attributed to Frank Jackson, David Lewis, and me. After a quick overview of two-dimensional approaches, I will focus on Soames’ discussion of ambitious two-dimensionalism. I will then turn to a system advocated by Soames that is itself strikingly reminiscent of a two-dimensional approach. Two-dimensional semantic theories are varieties of possible-worlds semantics on which linguistic items can be evaluated relative to possibilities in two different ways, yielding two sorts of intensional semantic values, which can be seen as two “dimensions” of meaning. The second dimension is the familiar sort of Kripkean evaluation in metaphysically possible worlds, so that necessarily coextensive terms (such as ‘Hesperus’ and ‘Phosphorus’ or ‘water’ and ‘H2O’) always have the same semantic value. The first dimension behaves differently, so that there are typically at least some cases where necessarily coextensive terms have different semantic values on the first dimension. For this reason, the two-dimensional framework is sometimes seen as a way of granting many of the insights of a Kripkean approach to meaning (on the second dimension), while retaining elements of a Fregean approach to meaning (on the first dimension).
Why is two-dimensional semantics important? One can think of it as the most recent act in a drama involving three of the central concepts of philosophy: meaning, reason, and modality. First, Kant linked reason and modality, by suggesting that what is necessary is knowable a priori, and vice versa. Second, Frege linked reason and meaning, by proposing an aspect of meaning (sense) that is constitutively tied to cognitive significance. Third, Carnap linked meaning and modality, by proposing an aspect of meaning (intension) that is constitutively tied to possibility and necessity.
Two-dimensional approaches to semantics, broadly understood, recognize two "dimensions" of the meaning or content of linguistic items. On these approaches, expressions and their utterances are associated with two different sorts of semantic values, which play different explanatory roles. Typically, one semantic value is associated with reference and ordinary truth-conditions, while the other is associated with the way that reference and truth-conditions depend on the external world. The second sort of semantic value is often held to play a distinctive role in analyzing matters of cognitive significance and/or context-dependence.
The Matrix presents a version of an old philosophical fable: the brain in a vat. A disembodied brain is floating in a vat, inside a scientist’s laboratory. The scientist has arranged that the brain will be stimulated with the same sort of inputs that a normal embodied brain receives. To do this, the brain is connected to a giant computer simulation of a world. The simulation determines which inputs the brain receives. When the brain produces outputs, these are fed back into the simulation. The internal state of the brain is just like that of a normal brain, despite the fact that it lacks a body. From the brain’s point of view, things seem very much as they seem to you and me.
In recent years there has been an explosion of scientific work on consciousness in cognitive neuroscience, psychology, and other fields. It has become possible to think that we are moving toward a genuine scientific understanding of conscious experience. But what is the science of consciousness all about, and what form should such a science take? This chapter gives an overview of the agenda.
John Perry's book Knowledge, Possibility, and Consciousness is a lucid and engaging defense of a physicalist view of consciousness against various anti-physicalist arguments. In what follows, I will address Perry's responses to the three main anti-physicalist arguments he discusses: the zombie argument (focusing on imagination), the knowledge argument (focusing on indexicals), and the modal argument (focusing on intensions).
[[This paper is largely based on material in other papers. The first three sections and the appendix are drawn with minor modifications from Chalmers 2002c (which explores issues about phenomenal concepts and beliefs in much more depth, mostly independently of questions about materialism). The main ideas of the last three sections are drawn from Chalmers 1996, 1999, and 2002a, although with considerable revision and elaboration.]]
Consciousness and intentionality are perhaps the two central phenomena in the philosophy of mind. Human beings are conscious beings: there is something it is like to be us. Human beings are intentional beings: we represent what is going on in the world. Correspondingly, our specific mental states, such as perceptions and thoughts, very often have a phenomenal character: there is something it is like to be in them. And these mental states very often have intentional content: they serve to represent the world. On the face of it, consciousness and intentionality are intimately connected. Our most important conscious mental states are intentional states: conscious experiences often inform us about the state of the world. And our most important intentional mental states are conscious states: there is often something it is like to represent the external world. It is natural to think that a satisfactory account of consciousness must respect its intentional structure, and that a satisfactory account of intentionality must respect its phenomenological character. With this in mind, it is surprising that in the last few decades, the philosophical study of consciousness and intentionality has often proceeded in two independent streams. This was not always the case. In the work of philosophers from Descartes and Locke to Brentano and Husserl, consciousness and intentionality were typically analyzed in a single package. But in the second half of the twentieth century, the dominant tendency was to concentrate on one topic or the other, and to offer quite separate analyses of the two. On this approach, the connections between consciousness and intentionality receded into the background. In the last few years, this has begun to change. The interface between consciousness and intentionality has received increasing attention on a number of fronts.
This attention has focused on such topics as the representational content of perceptual experience, the higher-order representation of conscious states, and the phenomenology of thinking. Two distinct philosophical groups have begun to emerge. One group focuses on ways in which consciousness might be grounded in intentionality. The other group focuses on ways in which intentionality might be grounded in consciousness.
At any given time, a subject has a multiplicity of conscious experiences. A subject might simultaneously have visual experiences of a red book and a green tree, auditory experiences of birds singing, bodily sensations of a faint hunger and a sharp pain in the shoulder, the emotional experience of a certain melancholy, while having a stream of conscious thoughts about the nature of reality. These experiences are distinct from each other: a subject could experience the red book without the singing birds, and could experience the singing birds without the red book. But at the same time, the experiences seem to be tied together in a deep way. They seem to be unified, by being aspects of a single encompassing state of consciousness.