This paper offers a novel way of reconstructing conceptual change in empirical theories. Changes occur in terms of the structure of the dimensions—that is to say, the conceptual spaces—underlying the conceptual framework within which a given theory is formulated. Five types of changes are identified: (1) addition or deletion of special laws, (2) change in scale or metric, (3) change in the importance of dimensions, (4) change in the separability of dimensions, and (5) addition or deletion of dimensions. Given this classification, the conceptual development of empirical theories becomes more gradual and rationalizable. Only the most extreme type—replacement of dimensions—comes close to a revolution. The five types are exemplified and applied in a case study on the development within physics from the original Newtonian mechanics to special relativity theory.
This paper concerns voting with logical consequences, which means that anybody voting for an alternative x should vote for the logical consequences of x as well. Similarly, the social choice set is also supposed to be closed under logical consequences. The central result of the paper is that, given a set of fairly natural conditions, the only social choice functions that satisfy social logical closure are oligarchic (where a subset of the voters are decisive for the social choice). The set of conditions needed for the proof includes a version of Independence of Irrelevant Alternatives that also plays a central role in Arrow's impossibility theorem. (Published Online July 11 2006)
Our ability to think is one of our most puzzling characteristics. What would it be like to be unable to think? What would it be like to lack self-awareness? The complexity of this activity is striking. Thinking involves the interaction of a range of mental processes - attention, emotion, memory, planning, self-consciousness, free will, and language. So how did these processes arise? What evolutionary advantages were bestowed upon those with an ability to deceive, to plan, to empathize, or to understand the intentions of others?

In this compelling work, Peter Gärdenfors embarks on an evolutionary detective story to try to solve one of the big mysteries surrounding human existence: how the modern human way of thinking came into existence. He starts by taking in turn the more basic cognitive processes, such as attention and memory, then builds upon these to explore more complex behaviours, such as self-consciousness, mindreading, and imitation. Having done this, he examines the consequences of "putting thought into the world", using external media like cave paintings, drawings and writing.

Immensely readable and humorous, the book will be valuable for students in psychology and biology, whilst remaining accessible to readers of popular science.
We focus on two problems with the evolutionary scenario proposed: (1) It bypasses the question of the origins of the communicative and semiotic features that make language distinct from, say, pleasant but meaningless sounds. (2) It does little to explain the absence of language in, for example, chimpanzees: most of the selection pressures invoked apply just as strongly to chimps. We suggest how these problems could be remedied.
We find that the nature and origin of the proposed “dialogical cognitive representations” in the target article are not sufficiently clear. Our proposal is that (triadic) bodily mimesis and in particular mimetic schemas – prelinguistic representational, intersubjective structures, emerging through imitation but subsequently interiorized – can provide the necessary link between private sensory-motor experience and public language. In particular, we argue that shared intentionality requires triadic mimesis.
The dominant models of information processes have been based on symbolic representations of information and knowledge. During the last decades, a variety of non-symbolic models have been proposed as superior. The prime examples of models within the non-symbolic approach are neural networks. However, to a large extent they lack a higher-level theory of representation. In this paper, conceptual spaces are suggested as an appropriate framework for non-symbolic models. Conceptual spaces consist of a number of 'quality dimensions' that often are derived from perceptual mechanisms. It will be outlined how conceptual spaces can represent various kinds of information and how they can be used to describe concept learning. The connections to prototype theory will also be presented.
I focus on the distinction between sensation and perception. Perceptions contain additional information that is useful for interpreting sensations. Following Grush, I propose that emulators can be seen as containing (or creating) hidden variables that generate perceptions from sensations. Such hidden variables could be used to explain further cognitive phenomena, for example, causal reasoning.
We trace the difference between the ways in which apes and humans co–operate to differences in communicative abilities, claiming that the pressure for future–directed co–operation was a major force behind the evolution of language. Competitive co–operation concerns goals that are present in the environment and have stable values. It relies on either signalling or joint attention. Future–directed co–operation concerns new goals that lack fixed values. It requires symbolic communication and context–independent representations of means and goals. We analyse these ways of co–operating in game–theoretic terms and submit that the co–operative strategy of games that involve shared representations of future goals may provide new equilibrium solutions.
In contrast to symbolic or associationist representations, I advocate a third form of representing information that employs geometrical structures. I argue that this form is appropriate for modelling concept learning. By using the geometrical structures of what I call conceptual spaces, I define properties and concepts. A learning model that shows how properties and concepts can be learned in a simple but naturalistic way is then presented. I also discuss the advantages of the geometric approach over the symbolic and associationist traditions.
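As an illustration of this kind of geometric learning model (a minimal sketch of my own, not the paper's actual construction; the quality dimensions, data, and function names are invented), concepts can be represented by prototype points in a space of quality dimensions, and a new instance categorized by its nearest prototype, which partitions the space into convex regions:

```python
import math

# A toy two-dimensional conceptual space (hypothetical quality dimensions
# "hue" and "size", both scaled to [0, 1]). Each concept is summarized by a
# prototype; nearest-prototype categorization induces a Voronoi tessellation
# of the space into convex regions.

def learn_prototypes(examples):
    """Learn one prototype per concept as the mean of its observed instances."""
    prototypes = {}
    for label, points in examples.items():
        n = len(points)
        prototypes[label] = tuple(sum(p[i] for p in points) / n for i in range(2))
    return prototypes

def categorize(point, prototypes):
    """Assign a point to the concept whose prototype is nearest (Euclidean metric)."""
    return min(prototypes, key=lambda label: math.dist(point, prototypes[label]))

examples = {
    "ripe":   [(0.8, 0.7), (0.9, 0.6), (0.85, 0.8)],
    "unripe": [(0.2, 0.3), (0.1, 0.4), (0.15, 0.2)],
}
protos = learn_prototypes(examples)
print(categorize((0.7, 0.7), protos))  # falls in the "ripe" region
```

The convexity of the resulting regions is what makes this a model of "natural" properties in the conceptual-spaces framework.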
Bloom argues that concepts depend on psychological essentialism. He rejects the proposal that concepts are based on perceptual similarity spaces because it cannot account for how we handle new properties and does not fit with our intuitions about essences. I argue that by using a broader notion of similarity space, it is possible to explain these features of concepts.
To evaluate the success of simple heuristics we need to know more about how a relevant heuristic is chosen and how we learn which cues are relevant. These meta-abilities are at the core of ecological rationality, rather than the individual heuristics.
Corresponding to Glenberg's distinction between the automatic and effortful modes of memory, I propose a distinction between cued and detached mental representations. A cued representation stands for something that is present in the external situation of the representing organism, while a detached representation stands for objects or events that are not present in the current situation. This distinction is important for understanding the role of memory in different cognitive functions like planning and pretense.
The so-called Ramsey test is a semantic recipe for determining whether a conditional proposition is acceptable in a given state of belief. Informally, it can be formulated as follows: (RT) Accept a proposition of the form "if A, then C" in a state of belief K, if and only if the minimal change of K needed to accept A also requires accepting C. In Gärdenfors (1986) it was shown that the Ramsey test is, in the context of some other weak conditions, incompatible on pain of triviality with the following principle, which was there called the preservation criterion: (P) If a proposition B is accepted in a given state of belief K and the proposition A is consistent with the beliefs in K, then B is still accepted in the minimal change of K needed to accept A. (RT) provides a necessary and sufficient criterion for when a 'positive' conditional should be included in a belief state, but it does not say anything about when the negation of a conditional sentence should be accepted. A very natural candidate for this purpose is the following negative Ramsey test: (NRT) Accept the negation of a proposition of the form "if A, then C" in a consistent state of belief K, if and only if the minimal change of K needed to accept A does not require accepting C. This note shows that (NRT) leads to triviality results even in the absence of additional conditions like (P).
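In the standard belief-revision notation (a common formalization, assumed here rather than quoted from the paper), writing $K^*_A$ for the minimal change of $K$ needed to accept $A$ and $A > C$ for the conditional "if A, then C", the two tests read:

```latex
\text{(RT)}\quad  (A > C) \in K \;\Longleftrightarrow\; C \in K^*_A
\text{(NRT)}\quad \neg(A > C) \in K \;\Longleftrightarrow\; C \notin K^*_A
```

On this rendering, (P) becomes: if $B \in K$ and $\neg A \notin K$, then $B \in K^*_A$.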
A general criterion for the theory of belief revision is that when we revise a state of belief by a sentence A, as much of the old information as possible should be retained in the revised state of belief. The motivating idea in this paper is that if a belief B is irrelevant to A, then B should still be believed in the revised state. The problem is that the traditional definition of statistical relevance suffers from some serious shortcomings and cannot be used as a tool for defining belief revision processes. In particular, the traditional definition violates the requirement that if A is irrelevant to C and B is irrelevant to C, then A&B is irrelevant to C. In order to circumvent these drawbacks, I develop an amended notion of relevance which has the desired properties. I then outline how the new definition can be used to simplify the construction of a belief revision method.
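The failure of conjunction closure for traditional statistical relevance can be checked in a two-coin example (my illustration, not one taken from the paper): let A and B be independent fair coin tosses and C their exclusive-or. Each of A and B is then statistically irrelevant to C on the traditional definition (conditioning leaves C's probability unchanged), yet A&B settles C completely:

```python
from itertools import product

# Uniform distribution over two fair, independent coins (a, b), p = 1/4 each.
worlds = list(product([0, 1], repeat=2))

def prob(event):
    """Unconditional probability of an event (a predicate on worlds)."""
    return sum(1 for w in worlds if event(w)) / len(worlds)

def cond(event, given):
    """Conditional probability P(event | given)."""
    return sum(1 for w in worlds if event(w) and given(w)) / sum(1 for w in worlds if given(w))

A = lambda w: w[0] == 1          # first coin is heads
B = lambda w: w[1] == 1          # second coin is heads
C = lambda w: w[0] != w[1]       # exactly one head (A XOR B)
AB = lambda w: A(w) and B(w)     # the conjunction A & B

print(prob(C), cond(C, A), cond(C, B))  # 0.5 0.5 0.5 -> A, B each irrelevant to C
print(cond(C, AB))                      # 0.0 -> but A&B is maximally relevant to C
```

This is exactly the sort of case that motivates amending the notion of relevance before using it to constrain belief revision.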
The analyses of explanation and causal beliefs are heavily dependent on using probability functions as models of epistemic states. There are, however, several aspects of beliefs that are not captured by such a representation and which affect the outcome of the analyses. One dimension that has been neglected in this article is the temporal aspect of the beliefs. The description of a single event naturally involves the time it occurred. Some analyses of causation postulate that the cause must not occur later than the effect. If we want this kind of causality it is easy to add the appropriate clause to (CAUS). An alternative is not to rule out backwards causation or causal loops a priori, but to expect that (CAUS), via the properties of the contraction P_C^-, will result in the desired temporal relation between C and E. One way of ensuring this is to postulate that when the probability function P is contracted to P_C^-, the probabilities of all events that occurred before C remain the same in P_C^- as in P. This means that all beliefs about the history of events up to C are left unaltered in the construction of the hypothetical state of belief P_C^-. In conclusion, I hope to have shown that, in spite of these limitations, (EXP) and (CAUS) provide viable analyses of explanation and causality between single events for the case when epistemic states can be described by probability functions. I have also shown that the two analyses can be used to explicate the close connections between the two notions. These analyses reduce the problems of explanation and causality, hopefully in a non-circular way, to the problem of identifying contractions of states of belief.
A computational theory of induction must be able to identify the projectible predicates, that is, to distinguish the predicates that can be used in inductive inferences from those that cannot. The problems of projectibility are introduced by reviewing some of the stumbling blocks for the theory of induction that was developed by the logical empiricists. My diagnosis of these problems is that the traditional theory of induction, which started from a given (observational) language in relation to which all inductive rules are formulated, does not go deep enough in representing the kind of information used in inductive inferences. As an interlude, I argue that the problem of induction, like so many other problems within AI, is a problem of knowledge representation. To the extent that AI systems are based on linguistic representations of knowledge, these systems will face basically the same problems over induction as did the logical empiricists. In a more constructive mode, I then outline a non-linguistic knowledge representation based on conceptual spaces. The fundamental units of these spaces are "quality dimensions". In relation to such a representation it is possible to define "natural" properties which can be used for inductive projections. I argue that this approach evades most of the traditional problems.
The purpose of this note is to formulate some weaker versions of the so-called Ramsey test that do not entail the following unacceptable consequence: if A and C are already accepted in K, then "if A, then C" is also accepted in K. It is then shown that these weaker versions still lead to the same triviality result when combined with a preservation criterion.
Using probability functions defined over a simple language as models of states of belief, my goal in this article has been to analyse contractions and revisions of beliefs. My first strategy was to formulate postulates for these processes. Close parallels between the postulates for contractions and the postulates for revisions have been established: the results in Section 5 show that contractions and revisions are interchangeable. As a second strategy, some suggestions for more or less explicit constructive definitions of the revision process (and indirectly also of the contraction process) were then presented. However, the results in Section 6 are less conclusive than those of the earlier sections. This problem area still awaits further development.
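The interchangeability of contraction and revision is standardly expressed by the Levi and Harper identities (a standard formulation from the belief-revision literature; that this article uses exactly this notation is an assumption on my part). Writing $K^*_A$ for revision by $A$, $K^-_A$ for contraction, and $K + A$ for expansion:

```latex
K^*_A = (K^-_{\neg A}) + A   % Levi identity: revise by first contracting by \neg A, then expanding by A
K^-_A = K \cap K^*_{\neg A}  % Harper identity: contract by intersecting K with the revision by \neg A
```

Each operation is thus definable from the other, which is what makes parallel postulate sets for the two processes possible.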
This paper extends earlier work by its authors on formal aspects of the processes of contracting a theory to eliminate a proposition and revising a theory to introduce a proposition. In the course of the earlier work, Gardenfors developed general postulates of a more or less equational nature for such processes, whilst Alchourron and Makinson studied the particular case of contraction functions that are maximal, in the sense of yielding a maximal subset of the theory (or alternatively, of one of its axiomatic bases) that fails to imply the proposition being eliminated. In the present paper, the authors study a broader class, including contraction functions that may be less than maximal. Specifically, they investigate "partial meet contraction functions", which are defined to yield the intersection of some nonempty family of maximal subsets of the theory that fail to imply the proposition being eliminated. Basic properties of these functions are established: it is shown in particular that they satisfy the Gardenfors postulates, and moreover that they are sufficiently general to provide a representation theorem for those postulates. Some special classes of partial meet contraction functions, notably those that are "relational" and "transitively relational", are studied in detail, and their connections with certain "supplementary postulates" of Gardenfors are investigated, with a further representation theorem established.
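On a finite propositional base the partial meet construction can be sketched directly (my illustrative code, applied to an axiomatic base rather than a deductively closed theory; the function names are my own): enumerate the maximal subsets of the base that fail to imply the target sentence, let a selection function pick a nonempty family of them, and take the intersection.

```python
from itertools import combinations, product

ATOMS = ("p", "q")

def valuations():
    """All truth-value assignments to the atoms."""
    return [dict(zip(ATOMS, bits)) for bits in product([False, True], repeat=len(ATOMS))]

def entails(base, goal):
    """Does the set of sentences `base` logically imply the sentence `goal`?"""
    return all(goal(v) for v in valuations() if all(s(v) for s in base))

def remainders(base, goal):
    """The remainder set: maximal subsets of `base` that fail to imply `goal`."""
    base, result = list(base), []
    for size in range(len(base), -1, -1):
        for subset in combinations(base, size):
            if not entails(subset, goal) and not any(set(subset) <= r for r in result):
                result.append(set(subset))
    return result

def partial_meet_contraction(base, goal, select=None):
    """Intersect the remainders picked by the selection function.
    If `goal` cannot be given up (empty remainder set), return the base unchanged."""
    rs = remainders(base, goal)
    if not rs:
        return set(base)
    chosen = select(rs) if select else rs  # default selection: full meet contraction
    return set.intersection(*chosen)

# Sentences represented as truth-functions over a valuation.
P = lambda v: v["p"]
Q = lambda v: v["q"]
P_IMPLIES_Q = lambda v: (not v["p"]) or v["q"]

base = [P, Q, P_IMPLIES_Q]
print(len(remainders(base, Q)))                    # 2: the remainders are {P} and {P -> Q}
print(partial_meet_contraction(base, Q) == set())  # True: full meet is empty here
print(partial_meet_contraction(base, Q, select=lambda rs: rs[:1]) == {P})  # True
```

The `select` argument plays the role of the selection function; restricting it in various ways (e.g. making it relational) is what yields the special classes studied in the paper.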
It is argued that it is not sufficient to consider only the sentences included in the explanans and explanandum when determining whether they constitute an explanation, but that these sentences must always be evaluated relative to a knowledge situation. The central criterion for an explanation is that the explanans in a non-trivial way increases the belief value of the explanandum, where the belief value of a sentence is determined from the given knowledge situation. The outlined theory of explanations is applied to some well-known examples and is also compared to other theories of explanation.