From the perspective of biological cybernetics, “real world” robots have no fundamental advantage over computer simulations when used as models of biological behavior. They can even weaken biological relevance. From an engineering point of view, however, robots can benefit from solutions found in biological systems. We emphasize the importance of this distinction and give examples of artificial systems based on insect biology.
This article shows that a slight variation of the argument in Milne 1996 yields the log-likelihood ratio l rather than the log-ratio measure r as “the one true measure of confirmation.”
This paper starts from the analysis of Hempel’s conditions of adequacy for any relation of confirmation (Hempel, 1945) presented in Huber (submitted). There I argue, contra Carnap (1962, Section 87), that Hempel felt the need for two concepts of confirmation: one aiming at plausible theories and another aiming at informative theories. However, he also realized that these two concepts conflict, and he gave up the concept of confirmation aiming at informative theories. The main part of the paper works out the claim that one can have Hempel’s cake and eat it too, in the sense that there is a logic of theory assessment that takes into account both of the conflicting aspects of plausibility and informativeness. According to the semantics of this logic, α is an acceptable theory for evidence β if and only if α is both sufficiently plausible given β and sufficiently informative about β. This is spelt out in terms of ranking functions (Spohn, 1988) and shown to represent the syntactically specified notion of an assessment relation. The paper then compares these acceptability relations to explanatory and confirmatory consequence relations (Flach, 2000) as well as to nonmonotonic consequence relations (Kraus et al., 1990). It concludes by relating the plausibility-informativeness approach to Carnap’s positive relevance account, thereby shedding new light on Carnap’s analysis as well as solving another problem of confirmation theory.
Medicine in a Neurocentric World: About the Explanatory Power of Neuroscientific Models in Medical Research and Practice. Editorial Notes, Medicine Studies 1(4): 307–313. DOI 10.1007/s12376-009-0036-2. Authors: Lara Huber and Lara K. Kutschenko, Institute for History, Philosophy and Ethics of Medicine, University Medical Center of the Johannes Gutenberg University Mainz, Am Pulverturm 13, 55131 Mainz, Germany.
Recent findings indicate that the constituent digits of multi-digit numbers are processed in a decomposed fashion (into units, tens, and so on) rather than integrated into one entity. This is suggested by interfering effects of unit-digit processing on two-digit number comparison. In the present study, we extended the computational model for two-digit number magnitude comparison of Moeller, Huber, Nuerk, and Willmes (2011a) to the case of three-digit number comparison (e.g., 371_826). In a second step, we evaluated how hundred-decade and hundred-unit compatibility effects were moderated by varying the percentage of within-hundred (e.g., 539_582) and within-hundred-and-decade filler items (e.g., 483_489). From the results we predict that numerical distance as well as compatibility effects should indeed be modulated by the relevance of tens and units in three-digit number magnitude comparison: while in particular the hundred distance effect should decrease, we predict hundred-decade and hundred-unit compatibility effects to increase with the relevance of tens and units.
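To make the stimulus taxonomy concrete, here is a minimal Python sketch that classifies a three-digit comparison pair as a filler item or as hundred-decade/hundred-unit (in)compatible. The classification logic and all names are illustrative reconstructions from the abstract, not code from the Moeller et al. model.

```python
# Hedged sketch: classifying three-digit comparison stimuli by compatibility,
# in the spirit of the study described above. Names are illustrative.

def digits(n):
    """Split a three-digit number into (hundreds, tens, units)."""
    return n // 100, (n // 10) % 10, n % 10

def classify_pair(a, b):
    """Classify a comparison pair such as 371 vs 826."""
    ha, ta, ua = digits(a)
    hb, tb, ub = digits(b)
    if ha == hb and ta == tb:
        return "within-hundred-and-decade filler"   # e.g., 483 vs 489
    if ha == hb:
        return "within-hundred filler"              # e.g., 539 vs 582
    # Between-hundred pairs: a digit pair is compatible when its comparison
    # points in the same direction as the hundreds comparison.
    # (Tied digits are lumped with "incompatible" here for simplicity.)
    hundred_larger = ha > hb
    decade = ("hundred-decade compatible"
              if ta != tb and (ta > tb) == hundred_larger
              else "hundred-decade incompatible")
    unit = ("hundred-unit compatible"
            if ua != ub and (ua > ub) == hundred_larger
            else "hundred-unit incompatible")
    return decade + ", " + unit

print(classify_pair(371, 826))  # hundred-decade incompatible, hundred-unit compatible
print(classify_pair(539, 582))  # within-hundred filler
print(classify_pair(483, 489))  # within-hundred-and-decade filler
```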
Degrees of belief are familiar to all of us. Our confidence in the truth of some propositions is higher than our confidence in the truth of other propositions. We are pretty confident that our computers will boot when we push their power button, but we are much more confident that the sun will rise tomorrow. Degrees of belief formally represent the strength with which we believe the truth of various propositions. The higher an agent’s degree of belief for a particular proposition, the higher her confidence in the truth of that proposition. For instance, Sophia’s degree of belief that it will be sunny in Vienna tomorrow might be .52, whereas her degree of belief that the train will leave on time might be .23. The precise meaning of these statements depends, of course, on the underlying theory of degrees of belief. These theories offer a formal tool to measure degrees of belief, to investigate the relations between various degrees of belief in different propositions, and to normatively evaluate degrees of belief.
Ranking functions were introduced under the name of ordinal conditional functions in Spohn (1988; 1990). They are representations of epistemic states and their dynamics. The most comprehensive and up-to-date presentation is Spohn (manuscript).
We argue that a semantics for counterfactual conditionals in terms of comparative overall similarity faces a formal limitation due to Arrow’s impossibility theorem from social choice theory. According to Lewis’s account, the truth-conditions for counterfactual conditionals are given in terms of the comparative overall similarity between possible worlds, which is in turn determined by various aspects of similarity between possible worlds. We argue that a function from aspects of similarity to overall similarity should satisfy certain plausible constraints while Arrow’s impossibility theorem rules out that such a function satisfies all the constraints simultaneously. We argue that a way out of this impasse is to represent aspectual similarity in terms of ranking functions instead of representing it in a purely ordinal fashion. Further, we argue against the claim that the determination of overall similarity by aspects of similarity faces a difficulty in addition to the Arrovian limitation, namely the incommensurability of different aspects of similarity. The phenomena that have been cited as evidence for such incommensurability are best explained by ordinary vagueness.
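As a toy illustration of the proposed escape route, the sketch below represents each aspect of similarity by a ranking function (numerical grades of dissimilarity, 0 being maximal similarity) and aggregates them cardinally by summation, the kind of operation unavailable to purely ordinal aggregation under Arrow-style constraints. The aspects, worlds, and ranks are invented for illustration.

```python
# Minimal sketch, assuming the paper's proposal: represent each aspect of
# similarity by a ranking function (grades of dissimilarity, with 0 meaning
# maximally similar) and aggregate cardinally, e.g. by summation. Summing
# numerical ranks is one cardinal way around Arrow's theorem; purely ordinal
# aggregation would fall under it. All names and values are illustrative.

worlds = ["w1", "w2", "w3"]

# Each aspect assigns a rank (degree of dissimilarity from the actual world).
aspect_ranks = {
    "laws_of_nature": {"w1": 0, "w2": 2, "w3": 1},
    "particular_facts": {"w1": 3, "w2": 0, "w3": 1},
}

def overall_dissimilarity(world):
    """Aggregate aspectual ranks into overall dissimilarity by summation."""
    return sum(ranks[world] for ranks in aspect_ranks.values())

# Order worlds by overall similarity (lower total rank = more similar).
for w in sorted(worlds, key=overall_dissimilarity):
    print(w, overall_dissimilarity(w))
```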
This paper presents a new analysis of C.G. Hempel’s conditions of adequacy for any relation of confirmation [Hempel C. G. (1945). Aspects of scientific explanation and other essays in the philosophy of science. New York: The Free Press, pp. 3–51.], differing from the one Carnap gave in §87 of his [1962. Logical foundations of probability (2nd ed.). Chicago: University of Chicago Press.]. Hempel, it is argued, felt the need for two concepts of confirmation: one aiming at true hypotheses and another aiming at informative hypotheses. However, he also realized that these two concepts are conflicting, and he gave up the concept of confirmation aiming at informative hypotheses. I then show that one can have Hempel’s cake and eat it too. There is a logic that takes into account both of these two conflicting aspects. According to this logic, a sentence H is an acceptable hypothesis for evidence E if and only if H is both sufficiently plausible given E and sufficiently informative about E. Finally, the logic sheds new light on Carnap’s analysis.
Purpose: Whereas ethical considerations on imaging techniques and interpretations of neuroimaging results flourish, there is not much work on their preconditions. In this paper, therefore, we discuss epistemological considerations on neuroimaging and their implications for neuroethics. Results: Neuroimaging uses indirect methods to generate data about surrogate parameters for mental processes, and there are many determinants influencing the results, including current hypotheses and the state of knowledge. This leads to an interdependence between hypotheses and data. Additionally, different levels of description are involved, especially when experiments are designed to answer questions pertaining to broad concepts like the self, empathy or moral intentions. Interdisciplinary theoretical frameworks are needed to integrate findings from the life sciences and the humanities and to translate between them. While these epistemological issues are not specific for neuroimaging, there are some reasons why they are of special importance in this context: Due to their inferential proximity, 'neuro-images' seem to be self-evident, suggesting directness of observation and objectivity. This has to be critically discussed to prevent overinterpretation. Additionally, there is a high level of attention to neuroimaging, leading to a high frequency of presentation of neuroimaging data and making the critical examination of their epistemological properties even more pressing. Conclusions: Epistemological considerations are an important prerequisite for neuroethics. The presentation and communication of the results of neuroimaging studies, the potential generation of new phenomena and new 'dysfunctions' through neuroimaging, and the influence on central concepts at the foundations of ethics will be important future topics for this discipline.
Epistemology is the study of knowledge and justified belief. Belief is thus central to epistemology. It comes in a qualitative form, as when Sophia believes that Vienna is the capital of Austria, and a quantitative form, as when Sophia's degree of belief that Vienna is the capital of Austria is at least twice her degree of belief that tomorrow it will be sunny in Vienna. Formal epistemology, as opposed to mainstream epistemology (Hendricks 2006), is epistemology done in a formal way, that is, by employing tools from logic and mathematics. The goal of this entry is to give the reader an overview of the formal tools available to epistemologists for the representation of belief. A particular focus will be the relation between formal representations of qualitative belief and formal representations of quantitative degrees of belief.
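One standard bridge between the two representations, and a natural example for this entry's focus, is the Lockean thesis: believe a proposition just in case your degree of belief in it exceeds some threshold. The sketch below is a textbook illustration, not necessarily the bridge principle the entry itself endorses.

```python
# Illustrative sketch of the Lockean thesis, one standard bridge between
# qualitative belief and quantitative degrees of belief. The propositions,
# numbers, and threshold are toy values.

degrees_of_belief = {
    "Vienna is the capital of Austria": 0.99,
    "It will be sunny in Vienna tomorrow": 0.52,
}

THRESHOLD = 0.9  # the Lockean threshold; any value in (0.5, 1] is typical

def believes(proposition):
    """Qualitative belief iff the degree of belief exceeds the threshold."""
    return degrees_of_belief[proposition] > THRESHOLD

for p in degrees_of_belief:
    print(p, "->", "believed" if believes(p) else "not believed")
```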
Bayesianism is the position that scientific reasoning is probabilistic and that probabilities are adequately interpreted as an agent's actual subjective degrees of belief, measured by her betting behaviour. Confirmation is one important aspect of scientific reasoning. The thesis of this paper is the following: if scientific reasoning is at all probabilistic, the subjective interpretation has to be given up in order to get confirmation (and thus scientific reasoning in general) right. Outline: The Bayesian approach to scientific reasoning; Bayesian confirmation theory; The example; The less reliable the source of information, the higher the degree of Bayesian confirmation; Measure sensitivity; A more general version of the problem of old evidence; Conditioning on the entailment relation; The counterfactual strategy; Generalizing the counterfactual strategy; The desired result, and a necessary and sufficient condition for it; Actual degrees of belief; The common knock-down feature, or ‘anything goes’; The problem of prior probabilities.
The Spohnian paradigm of ranking functions is in many respects like an order-of-magnitude reverse of subjective probability theory. Unlike probabilities, however, ranking functions are only indirectly defined on a field of propositions A over W, via a pointwise ranking function on the underlying set of possibilities W. This research note shows under which conditions ranking functions on a field of propositions A over W and rankings on a language L are induced by pointwise ranking functions on W and on the set of models for L, ModL, respectively.
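A minimal sketch of the construction at issue, for a finite set of possibilities: a pointwise ranking function on W induces a rank for each proposition (subset of W) by minimization. The worlds and ranks are illustrative; the note's question is under which conditions this induction works in general.

```python
# Hedged sketch of the basic construction the note studies: a pointwise
# ranking function kappa on a finite set of possibilities W induces a
# ranking function on propositions (subsets of W) by minimization.

W = {"w1", "w2", "w3", "w4"}

# Pointwise ranking function: at least one world must receive rank 0.
kappa = {"w1": 0, "w2": 1, "w3": 1, "w4": 3}

def rank(proposition):
    """Rank of a proposition A subset of W: the minimum rank of its worlds.
    The empty proposition receives rank infinity by convention."""
    if not proposition:
        return float("inf")
    return min(kappa[w] for w in proposition)

A = {"w2", "w4"}
B = {"w1", "w3"}
print(rank(A))      # 1
print(rank(B))      # 0
print(rank(A | B))  # 0 == min(rank(A), rank(B)), the characteristic ranking law
```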
The paper provides an argument for the thesis that an agent’s degrees of disbelief should obey the ranking calculus. This Consistency Argument is based on the Consistency Theorem. The latter says that an agent’s belief set is and will always be consistent and deductively closed iff her degrees of entrenchment satisfy the ranking axioms and are updated according to the rank-theoretic update rules.
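A finite-model sketch of the idea behind the Consistency Theorem, assuming the usual ranking-theoretic notion of belief (believe A iff the rank of not-A is positive): the believed propositions then have a non-empty intersection, so the belief set is consistent. All concrete values are illustrative.

```python
# Finite-model sketch: if kappa satisfies the ranking axioms, the belief set
# {A : kappa(not-A) > 0} is consistent. Propositions are subsets of W.

from itertools import chain, combinations

W = frozenset({"w1", "w2", "w3"})
kappa = {"w1": 0, "w2": 2, "w3": 1}  # some world has rank 0 (ranking axiom)

def rank(A):
    return min((kappa[w] for w in A), default=float("inf"))

def powerset(s):
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

# Believe A iff its complement has positive rank (disbelief in not-A).
belief_set = [A for A in powerset(W) if rank(W - A) > 0]

# Consistency: the intersection of all believed propositions is non-empty,
# because every believed proposition contains all the rank-0 worlds.
core = W
for A in belief_set:
    core &= A
print("consistent:", bool(core))  # True: the rank-0 worlds survive
```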
Taking the visual appeal of the ‘bell curve’ as an example, this paper discusses to what extent the availability of quantitative approaches (here: statistics) that comes along with representational standards directly affects qualitative concepts of scientific reasoning (here: normality). Within this paper I focus on the relationship between normality, as defined by the scientific enterprise, and the normativity that results from the very processes of standardisation themselves. Two hypotheses guide this analysis: (1) normality, as it is defined by the natural and the life sciences, must be regarded as an ontological fiction, albeit an epistemologically important one; and (2) standardised, canonical visualisations (such as the ‘bell curve’) influence scientific thinking and reasoning to a significant degree. I restrict my analysis to the epistemological function of scientific representations of data: this means identifying key strategies of producing graphs and images in scientific practice. As a starting point, it is crucial to evaluate to what degree graphs and images can be seen as guiding scientific reasoning itself, for instance by attributing to them a certain epistemological function within a given field of research.
Philosophically, one of the most important questions in the enterprise termed confirmation theory is this: Why should one stick to well confirmed theories rather than to any other theories? This paper discusses the answers to this question one gets from absolute and incremental Bayesian confirmation theory. According to absolute confirmation, one should accept “absolutely well confirmed” theories, because absolute confirmation takes one to true theories. An examination of two popular measures of incremental confirmation suggests the view that one should stick to incrementally well confirmed theories, because incremental confirmation takes one to (the most) informative (among all) true theories. However, incremental confirmation does not further this goal in general. I close by presenting a necessary and sufficient condition for revealing the confirmational structure in almost every world when presented separating data.
This paper discusses an almost sixty-year-old problem in the philosophy of science -- that of a logic of confirmation. We present a new analysis of Carl G. Hempel's conditions of adequacy (Hempel 1945), differing from the one Carnap gave in §87 of his Logical Foundations of Probability (1962). Hempel, it is argued, felt the need for two concepts of confirmation: one aiming at true theories and another aiming at informative theories. However, he also realized that these two concepts are conflicting, and he gave up the concept of confirmation aiming at informative theories. We then show that one can have Hempel's cake and eat it, too: There is a (rank-theoretic and genuinely nonmonotonic) logic of confirmation -- or rather, theory assessment -- that takes into account both of these two conflicting aspects. According to this logic, a statement H is an acceptable theory for the data E if and only if H is both sufficiently plausible given E and sufficiently informative about E. Finally, the logic sheds new light on Carnap's analysis (and solves another problem of confirmation theory).
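The two-threshold structure of the acceptance condition can be illustrated with conditional ranks. To be clear, the particular rank-based indicators and thresholds below are assumptions made for illustration; they are not claimed to be the paper's actual definitions.

```python
# Loudly hedged sketch of the two-threshold acceptance condition
# ("sufficiently plausible given E and sufficiently informative about E"),
# implemented with conditional ranks. The indicators and thresholds are
# my illustration, not the paper's definitions.

W = {"w1", "w2", "w3", "w4"}
kappa = {"w1": 0, "w2": 2, "w3": 3, "w4": 0}   # pointwise ranking function

def rank(A):
    return min((kappa[w] for w in A), default=float("inf"))

def conditional_rank(A, B):
    """kappa(A | B) = kappa(A & B) - kappa(B), the standard definition."""
    return rank(A & B) - rank(B)

def acceptable(H, E, s=1, t=1):
    """Accept H given E iff H is sufficiently plausible given E (high rank
    of not-H given E) and sufficiently informative about E (high rank of H
    given not-E) -- an assumed, illustrative reading of the two requirements."""
    plausible = conditional_rank(W - H, E) >= s
    informative = conditional_rank(H, W - E) >= t
    return plausible and informative

H = {"w1", "w2"}   # a hypothesis (set of worlds)
E = {"w1", "w3"}   # the evidence
print(acceptable(H, E))  # True under these toy ranks
```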
The problem addressed in this paper is “the main epistemic problem concerning science”, viz. “the explication of how we compare and evaluate theories [...] in the light of the available evidence” (van Fraassen, BC, 1983, Theory comparison and relevant evidence. In J. Earman (Ed.), Testing scientific theories (pp. 27–42). Minneapolis: University of Minnesota Press). Sections 1–3 contain the general plausibility-informativeness theory of theory assessment. In a nutshell, the message is (1) that there are two values a theory should exhibit: truth and informativeness, measured respectively by a truth indicator and a strength indicator; (2) that these two values are conflicting in the sense that the former is a decreasing and the latter an increasing function of the logical strength of the theory to be assessed; and (3) that in assessing a given theory by the available data one should weigh between these two conflicting aspects in such a way that any surplus in informativeness succeeds if the shortfall in plausibility is small enough. Particular accounts of this general theory arise by inserting particular strength indicators and truth indicators. In Section 4 the theory is spelt out for the Bayesian paradigm of subjective probabilities. It is then compared to incremental Bayesian confirmation theory. Section 4 closes by asking whether it is likely to be lovely. Section 5 discusses a few problems of confirmation theory in the light of the present approach. In particular, it is briefly indicated how the present account gives rise to a new analysis of Hempel’s conditions of adequacy for any relation of confirmation (Hempel, CG, 1945, Studies in the logic of confirmation. Mind, 54, 1–26, 97–121), differing from the one Carnap gave in §87 of his Logical foundations of probability (1962, Chicago: University of Chicago Press). Section 6 addresses the question of justification any theory of theory assessment has to face: why should one stick to theories given high assessment values rather than to any other theories? The answer given by the Bayesian version of the account presented in Section 4 is that one should accept theories given high assessment values because, in the medium run, theory assessment almost surely takes one to the most informative among all true theories when presented separating data. The concluding Section 7 continues the comparison between the present account and incremental Bayesian confirmation theory.
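A minimal numerical sketch of point (3), assuming a simple weighted-average assessment and two illustrative indicators (plausibility Pr(H|E) as the truth indicator, Pr(not-H) as a crude strength indicator). The paper's Bayesian version may use different indicators; the point here is only how a surplus in informativeness can outweigh a small shortfall in plausibility.

```python
# Hedged sketch of the general recipe: pick a truth indicator and a strength
# indicator and weigh them. The particular indicators below are one simple
# choice made for illustration, not necessarily the paper's own.

def assessment(pr_h_given_e, pr_not_h, weight=0.5):
    """Weighted average of a truth indicator and a strength indicator."""
    return weight * pr_h_given_e + (1 - weight) * pr_not_h

# A logically weaker theory: very plausible, hardly informative.
weak = assessment(pr_h_given_e=0.95, pr_not_h=0.10)
# A logically stronger theory: less plausible, much more informative.
strong = assessment(pr_h_given_e=0.70, pr_not_h=0.80)

print(weak, strong)  # the stronger theory can win despite lower plausibility
```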
Traditionally, discussion about neuroimaging focuses on methodological improvement and neurobiological findings. In current psychiatric neuroimaging, the research focus broadens and includes concepts such as the self, personality, well-being, and psychiatric disease. This calls for the inclusion of disciplines like psychology and philosophy in a dialogue with neuroscience. Furthermore, it raises the question of how theories from these areas relate to neuroimaging findings: are results generated by objective data independent of theories? Is there an epistemological priority for the theories used for generating hypotheses and for interpreting the results? Or do theoretical concepts and neuroimaging data influence each other? In this paper, we will discuss these positions concerning the priority of concepts and data in neuroimaging and provide arguments for an interdependence of concepts and data. An awareness of these considerations may help professionals from the life sciences and humanities as well as laypersons to avoid misunderstandings and oversimplifications.
Logic is the study of the quality of arguments. An argument consists of a set of premises and a conclusion. The quality of an argument depends on at least two factors: the truth of the premises, and the strength with which the premises confirm the conclusion. The truth of the premises is a contingent factor that depends on the state of the world. The strength with which the premises confirm the conclusion is supposed to be independent of the state of the world. Logic is only concerned with this second, logical factor of the quality of arguments.
The problem addressed in this paper is “the main epistemic problem concerning science”, viz. “the explication of how we compare and evaluate theories [...] in the light of the available evidence” (van Fraassen 1983, 27).
This paper tries to explain why the Lenski (1970) theory of stratification based on ecology and subsistence technology had relatively little effect on theories of sex inequality. In cultural anthropology, generalization was held to be impossible. Feminist explanation in sociology was social-psychological. Moreover, by the 1980s, the bias against biology in feminist theory came to include all of science. Exceptions to these trends include the work of Blumberg, Chafetz, Collins, Coltrane, and Turner. Whether feminist sociologists will follow their lead remains to be seen.
Given that visualisations via medical imaging have increased tremendously over the last decades, the overall presence of colour-coded brain slices generated on the basis of functional imaging, i.e. neuroimaging techniques, has led to the assumption of so-called kinds of brains or cognitive profiles that might be especially related to non-healthy humans affected by neurological, neuropsychological or psychiatric syndromes or disorders. In clinical contexts especially, one must consider that visualisations through medical imaging are suggestive in a twofold way: imaging data not only visually render pathological entities, but also tend to represent objective and concrete evidence for the psychophysical states in question. This article aims to identify, from an epistemological point of view, key issues in visually rendering psychiatric disorders via functional imaging approaches within the neurosciences.
Crupi et al. () propose a generalization of Bayesian confirmation theory that they claim to adequately deal with confirmation by uncertain evidence. Consider a series of points of time t_0, . . . , t_i, . . . , t_n such that the agent’s subjective probability for an atomic proposition E changes from Pr_0(E) at t_0 to . . . to Pr_i(E) at t_i to . . . to Pr_n(E) at t_n. It is understood that the agent’s subjective probabilities change for E and no logically stronger proposition, and that the agent updates her subjective probabilities by Jeffrey conditionalization. For this specific scenario the authors propose to take the difference between Pr_0(H) and Pr_i(H) as the degree to which E confirms H for the agent at time t_i (relative to time t_0), C_0,i(H, E). This proposal is claimed to be adequate, because...
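The scenario is easy to make runnable. The sketch below updates Pr(H) by Jeffrey conditionalization after an exogenous shift in Pr(E) and computes the proposed difference measure C_0,i(H, E); the numerical inputs are illustrative.

```python
# Runnable sketch of the scenario described above: the agent's probability
# for an atomic E shifts exogenously, she updates by Jeffrey
# conditionalization, and the proposal measures confirmation as
# C_0,i(H, E) = Pr_i(H) - Pr_0(H). Toy numbers throughout.

def jeffrey_update(pr_h_given_e, pr_h_given_not_e, new_pr_e):
    """Jeffrey conditionalization:
    Pr_i(H) = Pr(H|E) * Pr_i(E) + Pr(H|not-E) * Pr_i(not-E).
    The conditional probabilities on E and not-E stay fixed (rigidity)."""
    return pr_h_given_e * new_pr_e + pr_h_given_not_e * (1 - new_pr_e)

pr_h_given_e, pr_h_given_not_e = 0.8, 0.2

pr0_e = 0.3   # Pr_0(E) at t_0
pr0_h = jeffrey_update(pr_h_given_e, pr_h_given_not_e, pr0_e)  # Pr_0(H) = 0.38

pri_e = 0.9   # Pr_i(E) at t_i, after the shift
pri_h = jeffrey_update(pr_h_given_e, pr_h_given_not_e, pri_e)  # Pr_i(H) = 0.74

confirmation = pri_h - pr0_h   # C_0,i(H, E) on the proposal discussed
print(round(pr0_h, 3), round(pri_h, 3), round(confirmation, 3))
```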
The automatic tendency to anthropomorphize our interaction partners and to make use of experience acquired in earlier interaction scenarios suggests that social interaction with humanoid robots is more pleasant and intuitive than that with industrial robots. An objective method for evaluating the quality of human-robot interaction is based on the phenomenon of motor interference (MI): face-to-face observation of a different (incongruent) movement of another individual leads to a higher variance in one’s own movement trajectory. In social interaction, MI is a consequence of the tendency to imitate the movement of other individuals and goes along with mutual rapport, a sense of togetherness, and sympathy. Although MI occurs while observing a human agent, it disappears in the case of an industrial robot moving with piecewise constant velocity. Using a robot with human-like appearance, a recent study revealed that its movements led to MI only if they were based on a human prerecording (biological velocity profile), not on a constant (artificial) velocity profile. However, it remained unclear which aspect of the prerecorded human movement triggered MI: the biological velocity profile or the variability in the movement trajectory. To investigate this issue, we applied a quasi-biological minimum-jerk velocity profile (excluding variability in the movement trajectory as an influencing factor of MI) to the motion of a humanoid robot, which was observed by subjects performing congruent or incongruent arm movements. The increase in variability in subjects’ movements occurred both for the observation of a human agent and for the robot performing incongruent movements, suggesting that an artificial human-like movement velocity profile is sufficient to facilitate the perception of humanoid robots as interaction partners.
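For readers unfamiliar with the stimulus manipulation, here is a sketch contrasting the standard minimum-jerk velocity profile (bell-shaped, quasi-biological) with the piecewise constant velocity typical of industrial robots. The minimum-jerk formula is the standard one from the motor-control literature; the duration and amplitude values are illustrative, not those of the study.

```python
# Sketch of the quasi-biological minimum-jerk profile versus constant
# velocity. The standard minimum-jerk position profile for a point-to-point
# movement is x(t) = x0 + (xf - x0) * (10*tau**3 - 15*tau**4 + 6*tau**5),
# with tau = t / T; its velocity is bell-shaped. Parameters are illustrative.

def minimum_jerk_velocity(t, duration, amplitude):
    """Velocity of a minimum-jerk point-to-point movement at time t."""
    tau = t / duration
    return amplitude / duration * (30 * tau**2 - 60 * tau**3 + 30 * tau**4)

def constant_velocity(t, duration, amplitude):
    """Industrial-robot-style constant velocity over the same movement.
    (t is unused; the profile is flat by construction.)"""
    return amplitude / duration

duration, amplitude = 1.0, 0.4   # 1 s movement over 0.4 m (illustrative)
for step in range(11):
    t = step * duration / 10
    mj = minimum_jerk_velocity(t, duration, amplitude)
    cv = constant_velocity(t, duration, amplitude)
    print(f"t={t:.1f}s  min-jerk: {mj:.3f} m/s  constant: {cv:.3f} m/s")
```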
There is an increasing interest in how managers describe and respond to what they regard as moral versus nonmoral problems in organizations. In this study, forty managers described a moral problem and a nonmoral problem that they had encountered in their organization, each of which had been resolved. Analyses indicated that: (1) the two types of problems could be significantly differentiated using four of Jones' (1991) components of moral intensity; (2) the labels managers used to describe problems varied systematically between the two types of problems and according to the problem's moral intensity; and (3) problem management processes varied according to the problem's type and moral intensity.
Marx, like many of his contemporaries, uncritically assumed that humanity develops from primitive beginnings to ever more perfect stages. In his theory of human development he measured progress by two main standards: the decrease of all forms of dependence, and the increase of universality in man's relations to nature and to his fellow man. In our century, not only have new structures of power and dependence emerged, but successive movements have also been generated to restore the more ordered and limited relationships of the past. If belief in progress is nowadays no longer self-evident, such a state of affairs can help us reflect on the conditions necessary to realize the values which determined Marx's categorical imperative, or his insistence that we overthrow all relations by which man is made a debased and enslaved being. One of these conditions is the voluntary limitation of our needs: the need to use material goods without regard for others, the need to build up or maintain security even at the cost of violence, and the need to restrict the circle of those with whom we identify to our own particular culture, race, class or ways of thinking and acting.
From the standpoint of decision research, investigating global heuristics like LEX is not fruitful, because we already know that people use partial heuristics instead. It is necessary (1) to identify partial heuristics in different tasks, and (2) to investigate rules governing their application and especially their combination. Furthermore, research is necessary into the adequate level of resolution of the elements in the toolbox.
The threat of bioterrorism, the emergence of the SARS epidemic, and a recent focus on professionalism among physicians present a timely opportunity for a review of, and renewed commitment to, physician obligations to care for patients during epidemics. The professional obligation to care for contagious patients is part of a larger "duty to treat," which historically became accepted when 1) a risk of nosocomial infection was perceived, 2) an organized professional body existed to promote the duty, and 3) the public came to rely on the duty. Physicians' responses to epidemics from the Hippocratic era to the present suggest an evolving acceptance of the professional duty to treat contagious patients, reaching a long-held peak between 1847 and the 1950s. There has been some professional retrenchment against this duty to treat in the last 40 years but, we argue, conditions favoring acceptance of the duty are met today. A renewed embrace of physicians' duty to treat patients during epidemics, despite conditions of personal risk, might strengthen medicine's relationship with society, improve society's capacity to prepare for threats such as bioterrorism and new epidemics, and contribute to the development of a more robust and meaningful medical professionalism.
In his (1996), Peter Milne shows that r(H, E, B) = log[Pr(H | E ∩ B) / Pr(H | B)] is the one true measure of confirmation in the sense that r is the only function satisfying the following five constraints on measures of confirmation C.
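For concreteness, the sketch below computes both measures from a toy probability assignment (with the background B left implicit): Milne's log-ratio measure r and the log-likelihood ratio l that the article argues for. The numbers are illustrative.

```python
# Sketch of the two measures at issue, computed from a toy probability
# assignment over H/not-H and E/not-E (background B left implicit).
# r = log[Pr(H|E) / Pr(H)] is the log-ratio measure from Milne (1996);
# l = log[Pr(E|H) / Pr(E|not-H)] is the log-likelihood ratio.

import math

# Toy joint probabilities (they sum to 1).
pr_h_and_e, pr_h_and_not_e = 0.30, 0.10
pr_not_h_and_e, pr_not_h_and_not_e = 0.20, 0.40

pr_h = pr_h_and_e + pr_h_and_not_e          # Pr(H)  = 0.4
pr_e = pr_h_and_e + pr_not_h_and_e          # Pr(E)  = 0.5

r = math.log((pr_h_and_e / pr_e) / pr_h)    # log[Pr(H|E) / Pr(H)]
l = math.log((pr_h_and_e / pr_h) /          # log[Pr(E|H) / Pr(E|not-H)]
             (pr_not_h_and_e / (1 - pr_h)))

print(round(r, 3), round(l, 3))             # 0.405 0.811
```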
Byrne & Russon use novelty as the primary requirement for providing evidence of true imitation in animals. There are three reasons to object to this. First, experiential learning cannot always be completely excluded as an alternative explanation of the observed behavior. Second, the imitator's manipulations performed during ontogeny cannot be known in full detail. Finally, there is at present only a weak understanding of how novel forms emerge. Data from our own recent experiments will be used to emphasize the need for a tighter methodology in imitation experiments.
To what extent do keas, Nestor notabilis, learn from each other? We tested eighteen captive keas, New Zealand parrots, in a tool use task involving visual feature discrimination and social learning. The keas were presented with two adjacent tubes, each containing a physically distinct baited platform. One platform could be collapsed by insertion of a block into the tube to release the bait; the other platform could not be collapsed. In contrast to birds that acted on their own (“individual learners”), birds that could observe a demonstrator bird operated the collapsible platform first. However, they soon changed their behaviour to inserting blocks indiscriminately in either tube. When we reversed the collapsibility of the platforms, only adult observers, but neither their demonstrators that had learnt individually nor the juveniles, immediately altered their former preference. Observers, however, did not simply reverse their initial preference but rather moved to and then stayed at chance performance as to where to insert a block first. In conclusion, the keas' overt exploration soon overrode the effect of social learning. We argue that such behaviour might help keas to find more efficient extractive foraging techniques in their variable, low-risk native environment. Keywords: tool use behaviour; social learning; reversal learning; physical cognition.
Bayesian epistemology postulates a probabilistic analysis of many sorts of ordinary and scientific reasoning. Huber () has provided a novel criticism of Bayesianism, whose core argument involves a challenging issue: confirmation by uncertain evidence. In this paper, we argue that under a properly defined Bayesian account of confirmation by uncertain evidence, Huber's criticism fails. By contrast, our discussion will highlight what we take as some new and appealing features of Bayesian confirmation theory. Outline: Introduction; Uncertain Evidence and Bayesian Confirmation; Bayesian Confirmation by Uncertain Evidence: Test Cases and Basic Principles.
The social philosophy of meaning and emotions represented in the work of Susanne Langer was recognized by Talcott Parsons, but has yet to be incorporated into mainstream sociological theorizations. Langer's work is as potentially important to contemporary microsociology, and the sociology of emotions, as the work of Peirce, Mead, or Schutz. The impediment to appreciating her work resides in contemporary confusions regarding the nature of logic. Sociologists often subscribe to Wittgenstein's denial of the validity of formal logic in constructing theories of human behavior. Langer has been misunderstood because her theorizations address more than discursive logics and meanings. The thrust of Langer's work is that logic and meaning exist on a nondiscursive level of emotions. Though her work is more than 50 years old, we are now in a position to appreciate it because we are now exploring and conceptualizing the notion of social inferencing as existing beyond formal logic.
Consciousness is a central theme of Susanne Langer's three-volume work Mind: An essay on human feeling. Langer proposes an evolutionary history of consciousness in order to establish a biological vocabulary for discussing the subject. This vocabulary is based on the qualities of organic processes rather than generic material objects. Her historical scenario and new terminology suggest that Langer views the “cash value” of consciousness in terms of symbolic thinking and aesthetics. This paper provides an overview of Langer's proposed evolutionary scenario of consciousness, along with an examination of her process-oriented philosophy of mind. It is suggested that Langer's basic ideas are importantly similar to those present in dynamical systems theory. As research on consciousness in dynamical systems theory is still young, researchers in this field may find in Langer's work ideas for future exploration, particularly in its connection with aesthetics.