A growing body of research suggests that follower perceptions of ethical leadership are associated with beneficial follower outcomes. However, some empirical researchers have found contradictory results. In this study, we use social learning and social exchange theories to test the relationship between ethical leadership and follower work outcomes. Our results suggest that ethical leadership is positively related to numerous follower outcomes such as perceptions of leader interactional fairness and follower ethical behavior. Furthermore, we explore how ethical leadership relates to and is different from other leadership styles such as transformational and transactional leadership. Results suggest that ethical leadership is positively associated with transformational leadership and the contingent reward dimension of transactional leadership. With respect to the moderators, our results show mixed evidence for publication bias. Finally, geographical locations of study samples moderated some of the relationships between ethical leadership and follower outcomes, and employee samples from public sector organizations showed stronger mean corrected correlations for ethical leadership–follower outcome relationships.
Using a single spin-1 object as an example, we discuss a recent approach to quantum entanglement [A.A. Klyachko and A.S. Shumovsky, J. Phys.: Conf. Series 36, 87 (2006), E-print quant-ph/0512213]. The key idea of the approach consists in the presetting of basic observables in the very definition of a quantum system. Specification of basic observables defines the dynamic symmetry of the system. Entangled states of the system are then interpreted as states with a maximal amount of uncertainty of all basic observables. The approach gives a purely physical picture of entanglement. In particular, it separates the principal physical properties of entanglement from inessential ones. Within the model example under consideration, we show the relativity of entanglement with respect to dynamic symmetry and argue for the existence of single-particle entanglement. A number of physical examples are considered.
1. There is an antinomy in Hare's thought between Ought-Implies-Can and No-Indicatives-from-Imperatives. It cannot be resolved by drawing a distinction between implication and entailment. 2. Luther resolved this antinomy in the 16th century, but to understand his solution, we need to understand his problem. He thought the necessity of Divine foreknowledge removed contingency from human acts, thus making it impossible for sinners to do otherwise than sin. 3. Erasmus objected (on behalf of Free Will) that this violates Ought-Implies-Can, which he supported with Hare-style ordinary language arguments. 4. Luther a) pointed out the antinomy and b) resolved it by undermining the prescriptivist arguments for Ought-Implies-Can. 5. We can reinforce Luther's argument with an example due to David Lewis. 6. Whatever its merits as a moral principle, Ought-Implies-Can is not a logical truth and should not be included in deontic logics. Most deontic logics, and maybe the discipline itself, should therefore be abandoned. 7. Could it be that Ought-Conversationally-Implies-Can? Yes, in some contexts. But a) even if these contexts are central to the evolution of Ought, the implication is not built into the semantics of the word; b) nor is the parallel implication built into the semantics of orders; and c) in some cases Ought conversationally implies Can only because Ought-Implies-Can is a background moral belief. d) Points a) and b) suggest a criticism of prescriptivism: that Oughts do not entail imperatives but that the relation is one of conversational implicature. 8. If Ought-Implies-Can is treated as a moral principle, Erasmus' argument for Free Will can be revived (given his Christian assumptions). But it does not 'prove' Pelagianism as Luther supposed. A semi-Pelagian alternative is available.
The field of rhetoric provides unique frameworks and tools for understanding the role of language in moral reasoning and corruption. Drawing on a discursive understanding of the self, we focus on how the rhetoric of conversations constructs and shapes our moral reasoning and moral behavior. Using rhetorical appeals and a moral development framework, we construct three propositions that use variation in the rhetoric of conversations to identify and predict corruption. We discuss some of the implications of our model.
To commence any answer to the question “Can democracy promote the general welfare?” requires attention to the meaning of “general welfare.” If this term is drained of all significance by being defined as “whatever the political decision process determines it to be,” then there is no content to the question. The meaning of the term can be restored only by classifying possible outcomes of democratic political processes into two sets – those that are general in application over all citizens and those that are discriminatory.
Wittgenstein's Tractatus contains a wide range of profound insights into the nature of logic and language – insights which will survive the particular theories of the Tractatus and seem to me to mark definitive and unassailable landmarks in our understanding of some of the deepest questions of philosophy. And yet alongside these insights there is a theory of the nature of the relation between language and reality which appears both to be impossible to work out in detail in a way which is completely satisfactory, and to be bizarre and incredible. I am referring to the so-called logical atomism of the Tractatus. The main outlines of this theory at least are clear and familiar: there are elementary propositions which gain their sense from being models of possible states of affairs; such propositions are configurations of names of simple objects, signifying that those simples are analogously configured; every proposition has its sense through being analysable as a truth-functional compound of elementary propositions, thus deriving its sense from the sense of the elementary propositions when this view is taken in conjunction with the idea that the sense of a proposition is completely specified by specifying its truth-conditions. In this way the Tractatus incorporates in its working out a philosophical system analogous to the classical philosophical systems of Leibniz or Spinoza, which are regarded by many people, in a sense rightly, as the prehistoric monsters of philosophy: not to be studied as living organisms, but as curiosities of human thought. And we may here agree that in the end we must simply reject a philosophy which incorporates such features as its postulation of simple eternal objects, or of a possibility of an analysis of a proposition which was presented as a pre-condition for the propositions that we ordinarily utter to make sense, and yet of whose specific form we are unaware, and so on.
In biobanks, a broader model of consent is often used and justified by a range of different strategies that make reference to the potential benefits brought by the research it will facilitate combined with the low level of risk involved (provided adequate measures are in place to protect privacy and confidentiality) or a questioning of the centrality of the notion of informed consent. Against this, it has been suggested that the lack of specific information about particular uses of the samples means that such consent cannot be fully autonomous and so is unethical. My answer to the title question is a definite ‘yes’. Broad consent can be informed consent and is justified by appeal to the principle of respect for autonomy. Indeed, I will suggest that the distinction between the various kinds of consent is not a distinction between kinds of consent but between the kinds of choice a person makes. When an individual makes a choice (of any kind) it is important that they do so according to the standards of informed consent and consistent with the choice that they are making.
In recent years, a particular doctrine about forms of life has come to be associated with Wittgenstein's name by followers and critics of his philosophy alike. It is not a doctrine which Wittgenstein espoused or even, given his understanding of philosophy, one which he could have accepted; nor is it worthy of acceptance on its own merits. I shall here outline the standard interpretation of Wittgenstein's remarks on forms of life, consider the textual basis for such a reading of Wittgenstein, present a more consistent reading of the texts, place the problem of forms of life within a wider philosophical context, and show the ways in which it is indeed possible to say that a form of life is wrong. In the process, I shall note some important similarities between Wittgenstein's actual position, Quine's analysis of scientific knowledge, and Hans-Georg Gadamer's claims about the fusion of horizons.
This article will rework the classical question ‘Can a machine think?’ into a more specific problem: ‘Can a machine think anything new?’ It will consider traditional computational tasks such as prediction and decision-making, so as to investigate whether the instrumentality of these operations can be understood in terms of the creation of novel thought. By addressing philosophical and technoscientific attempts to mechanise thought on the one hand, and the philosophical and cultural critique of these attempts on the other, I will argue that computation’s epistemic productions should be assessed vis-à-vis the logico-mathematical specificity of formal axiomatic systems. Such an assessment requires us to conceive automated modes of thought in such a way as to supersede the hope that machines might replicate human cognitive faculties, and to thereby acknowledge a form of onto-epistemological autonomy in automated ‘thinking’ processes. This involves moving beyond the view that machines might merely simulate humans. Machine thought should be seen as dramatically alien to human thought, and to the dimension of lived experience upon which the latter is predicated. Having stepped outside the simulative paradigm, the question ‘Can a machine think anything new?’ can then be reformulated. One should ask whether novel behaviour in computing might come not from the breaking of mechanical rules, but from following them: from doing what computers do already, and not what we might think they should be doing if we wanted them to imitate us.
It is bad news to find out that one's cognitive or perceptual faculties are defective. Furthermore, it's not always transparent how one ought to revise one's beliefs in light of such news. Two sorts of news should be distinguished. On the one hand, there is news that a faculty is unreliable: that it doesn't track the truth particularly well. On the other hand, there is news that a faculty is anti-reliable: that it tends to go positively wrong. These two sorts of news call for extremely different responses. We provide accounts of these responses, and prove bounds on the degree to which one can reasonably count oneself as mistaken about a given subject matter.
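The distinction between unreliability and anti-reliability can be made vivid with a toy simulation (my own illustrative construction, not the authors' formal model): a merely unreliable faculty carries no usable signal, while an anti-reliable one becomes informative once its verdicts are reversed.

```python
import random

# Toy model: a faculty reports the truth of a proposition with some accuracy.
# An unreliable faculty (accuracy ~0.5) tracks the truth no better than chance;
# an anti-reliable faculty (accuracy well below 0.5) tends to go positively wrong.

random.seed(0)
truths = [random.choice([True, False]) for _ in range(10_000)]

def report(truth, accuracy):
    """A faculty that reports the truth with the given probability."""
    return truth if random.random() < accuracy else not truth

unreliable = [report(t, 0.5) for t in truths]  # coin-flip faculty
anti = [report(t, 0.1) for t in truths]        # usually gets it wrong

def hit_rate(reports, truths):
    return sum(r == t for r, t in zip(reports, truths)) / len(truths)

print(hit_rate(unreliable, truths))             # ≈ 0.5: no signal either way
print(hit_rate([not r for r in anti], truths))  # ≈ 0.9: reversing the verdicts recovers accuracy
```

The asymmetry is the point: nothing can be salvaged from the unreliable faculty, but the anti-reliable one, suitably reinterpreted, is a good guide to the truth.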
Open peer commentary on the article “Design Research as a Variety of Second-Order Cybernetic Practice” by Ben Sweeting. Upshot: Based on Sweeting’s central question of what design can bring to cybernetics, this commentary extends and adds further depth to the target article. Aspects discussed include the nature of practice in relation to design, the introduction of designerly ways of acting and thinking through acting to cybernetics, and the re-introduction of material experimentation typical of early cybernetics.
Classical (Bayesian) probability (CP) theory has led to an influential research tradition for modeling cognitive processes. Cognitive scientists have been trained to work with CP principles for so long that it is hard even to imagine alternative ways to formalize probabilities. However, in physics, quantum probability (QP) theory has been the dominant probabilistic approach for nearly 100 years. Could QP theory provide us with any advantages in cognitive modeling as well? Note first that both CP and QP theory share the fundamental assumption that it is possible to model cognition on the basis of formal, probabilistic principles. But why consider a QP approach? The answers are that (1) there are many well-established empirical findings (e.g., from the influential Tversky, Kahneman research tradition) that are hard to reconcile with CP principles; and (2) these same findings have natural and straightforward explanations with quantum principles. In QP theory, probabilistic assessment is often strongly context- and order-dependent, individual states can be superposition states (that are impossible to associate with specific values), and composite systems can be entangled (they cannot be decomposed into their subsystems). All these characteristics appear perplexing from a classical perspective. However, our thesis is that they provide a more accurate and powerful account of certain cognitive processes. We first introduce QP theory and illustrate its application with psychological examples. We then review empirical findings that motivate the use of quantum theory in cognitive theory, but also discuss ways in which QP and CP theories converge. Finally, we consider the implications of a QP theory approach to cognition for human rationality.
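The order-dependence of probabilistic assessment in QP theory can be illustrated with a small numerical sketch (my own toy example, not drawn from the target article): model two yes/no questions as projections onto non-orthogonal rays in a two-dimensional state space, and the probability of a "yes, then yes" answer sequence depends on which question is asked first.

```python
import math

# Two questions A and B are modelled as projections onto unit rays in a
# 2D real state space. Sequential "yes" probabilities are computed by
# projecting the state onto A's ray, then B's (or vice versa), and taking
# the squared norm of the result. When the rays are non-orthogonal and
# distinct, the projectors do not commute and the order matters.

def project(state, axis):
    """Project a 2D state vector onto the ray spanned by a unit axis vector."""
    c = state[0] * axis[0] + state[1] * axis[1]
    return (c * axis[0], c * axis[1])

def norm_sq(v):
    return v[0] ** 2 + v[1] ** 2

state = (1.0, 0.0)                                  # initial state
a = (math.cos(math.pi / 8), math.sin(math.pi / 8))  # "yes" ray for question A
b = (math.cos(math.pi / 3), math.sin(math.pi / 3))  # "yes" ray for question B

p_ab = norm_sq(project(project(state, a), b))  # P(A-yes, then B-yes)
p_ba = norm_sq(project(project(state, b), a))  # P(B-yes, then A-yes)

print(p_ab, p_ba)  # the two orders give different probabilities
```

In classical probability P(A and B) cannot depend on question order, so this tiny model already captures a qualitative departure from CP of the kind the authors appeal to.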
Open peer commentary on the article “Sensorimotor Direct Realism: How We Enact Our World” by Michael Beaton. Upshot: The target article convincingly argues in favor of the idea that the sensorimotor account of perception provides a positive scientific context for direct realism. In some cases, however, perception and experience do not seem to fit easily with sensorimotor direct realism. This raises a question of scope that requires further elaboration.
I defend the following version of the ought-implies-can principle: (OIC) by virtue of conceptual necessity, an agent at a given time has an (objective, pro tanto) obligation to do only what the agent at that time has the ability and opportunity to do. In short, obligations correspond to ability plus opportunity. My argument has three premises: (1) obligations correspond to reasons for action; (2) reasons for action correspond to potential actions; (3) potential actions correspond to ability plus opportunity. In the bulk of the paper I address six objections to OIC: three objections based on putative counterexamples, and three objections based on arguments to the effect that OIC conflicts with the is/ought thesis, the possibility of hard determinism, and the denial of the Principle of Alternate Possibilities.
This paper explores the role and resolution of disagreements between physicians and their diagnostic AI-based decision support systems. With an ever-growing number of applications for these independently operating diagnostic tools, it becomes less and less clear what a physician ought to do in case their diagnosis is in faultless conflict with the results of the DSS. The consequences of such uncertainty can ultimately lead to effects detrimental to the intended purpose of such machines, e.g. by shifting the burden of proof towards a physician. Thus, we require normative clarity for integrating these machines without affecting established, trusted, and relied upon workflows. In reconstructing different causes of conflicts between physicians and their AI-based tools—inspired by the approach of “meaningful human control” over autonomous systems and the challenges to resolve them—we will delineate normative conditions for “meaningful disagreements”. These incorporate the potential of DSS to take on more tasks and outline how the moral responsibility of a physician can be preserved in an increasingly automated clinical work environment.
Scott Williams’s Latin Social model of the Trinity holds that the trinitarian persons have between them a single set of divine mental powers and a single set of divine mental acts. He claims, nevertheless, that on his view the persons are able to use indexical pronouns such as “I.” This claim is examined and is found to be mistaken.
This white paper aims to identify an open problem in 'Quantum Physics and the Nature of Reality', namely whether quantum theory and special relativity are formally compatible; to indicate what the underlying issues are; and to put forward ideas about how the problem might be addressed.
To later generations, much of the moral philosophy of the twentieth century will look like a struggle to escape from utilitarianism. We seem to succeed in disproving one utilitarian doctrine, only to find ourselves caught in the grip of another. I believe that this is because a basic feature of the consequentialist outlook still pervades and distorts our thinking: the view that the business of morality is to bring something about. Too often, the rest of us have pitched our protests as if we were merely objecting to the utilitarian account of what the moral agent ought to bring about or how he ought to do it. Deontological considerations have been characterized as “side constraints,” as if they were essentially restrictions on ways to realize ends. More importantly, moral philosophers have persistently assumed that the primal scene of morality is a scene in which someone does something to or for someone else. This is the same mistake that children make about another primal scene. The primal scene of morality, I will argue, is not one in which I do something to you or you do something to me, but one in which we do something together. The subject matter of morality is not what we should bring about, but how we should relate to one another. If Rawls alone has succeeded in escaping utilitarianism, it is because he alone has fully grasped this point. His primal scene, the original position, is one in which a group of people must make a decision together. Their task is to find the reasons they can share.
Open peer commentary on the article “Cybernetic Foundations for Psychology” by Bernard Scott. Upshot: Scott’s proposal is well-founded and opens interesting possibilities. We select some critical aspects of his argumentation and discuss them in the context of the constructivist perspective. We highlight as Scott’s “blind spot” his statement, presented without further argument, of the need for a conceptual and theoretical unification of psychology from the perspective of second-order cybernetics. We find this especially worrisome as it is based on only one version of cybernetics.
Reference is a central topic in philosophy of language, and has been the main focus of discussion about how language relates to the world. R. M. Sainsbury sets out a new approach to the concept, which promises to bring to an end some long-standing debates in semantic theory. There is a single category of referring expressions, all of which deserve essentially the same kind of semantic treatment. Included in this category are both singular and plural referring expressions, complex and non-complex referring expressions, and empty and non-empty referring expressions. Referring expressions are to be described semantically by a reference condition, rather than by being associated with a referent. In arguing for these theses, Sainsbury's book promises to end the fruitless oscillation between Millian and descriptivist views. Millian views insist that every name has a referent, and find it hard to give a good account of names which appear not to have referents, or at least are not known to do so, like ones introduced through error, ones where it is disputed whether they have a bearer, and ones used in fiction. Descriptivist theories require that each name be associated with some body of information. These theories fly in the face of the fact that names are useful precisely because there is often no overlap of information among speakers and hearers. The alternative position for which the book argues is firmly non-descriptivist, though it also does not require a referent.
A much broader view can be taken of which expressions are referring expressions: not just names and pronouns used demonstratively, but also some complex expressions and some anaphoric uses of pronouns. Sainsbury's approach brings reference into line with truth: no one would think that a semantic theory should associate a sentence with a truth value, but it is commonly held that a semantic theory should associate a sentence with a truth condition, a condition which an arbitrary state of the world would have to satisfy in order to make the sentence true. The right analogy is that a semantic theory should associate a referring expression with a reference condition, a condition which an arbitrary object would have to satisfy in order to be the expression's referent. Lucid and accessible, and written with a minimum of technicality, Sainsbury's book also includes a useful historical survey. It will be of interest to those working in logic, mind, and metaphysics as well as essential reading for philosophers of language.
When healthcare professionals feel constrained from acting in a patient’s best interests, moral distress ensues. The resulting negative sequelae of burnout, poor retention rates, and ultimately poor patient care are well recognized across healthcare providers. Yet an appreciation of how particular disciplines, including physicians, come to be “constrained” in their actions is still lacking. This paper will examine how the application of shared decision-making may contribute to the experience of moral distress for physicians and why such distress may go under-recognized. Appreciation of these dynamics may assist in cross-discipline sensitivity, enabling more constructive dialogue and collaboration.
Despite establishing the gendered construction of infertility, most research on the subject has not examined how individuals with such reproductive difficulty negotiate their own sense of gender. I explore this gap through 58 interviews with women who are medically infertile and involuntarily childless. In studying how women achieve their gender, I reveal the importance of the body to such construction. For the participants, there is not just a motherhood mandate in the United States, but a fertility mandate—women are not just supposed to mother, they are supposed to procreate. Given this understanding, participants maintain their gender by denying their infertile status. They do so through reliance on essentialist notions, using their bodies as a means of constructing a gendered sense of self. Using the tenets of transgender theory, this study not only informs our understanding of infertility, but also our broader understanding of the relationship between gender, identity, and the body, exposing how individuals negotiate their gender through physical as well as institutional and social constraints.
People have concerns, and ethicists often respond to them with philosophical arguments. But can conceptual constructions properly address fears and anxieties? It is argued in this paper that while it is possible to voice, clarify, create and—to a certain extent—tackle concerns by arguments, more concrete practices, choices, and actions are normally needed to produce proper responses to people’s worries. While logical inconsistencies and empirical errors can legitimately be exposed by arguments, the situation is considerably less clear when it comes to moral, cultural, and emotional norms, values, and expectations.
Listening to someone from some distance in a crowded room you may experience the following phenomenon: when looking at them speak, you may both hear and see where the source of the sounds is; but when your eyes are turned elsewhere, you may no longer be able to detect exactly where the voice must be coming from. With your eyes again fixed on the speaker and on the movement of her lips, a clear sense of the source of the sound will return. This ‘ventriloquist’ effect reflects the ways in which visual cognition can dominate auditory perception. And this phenomenological observation is one that you can verify or disconfirm in your own case just by the slightest reflection on what it is like for you to listen to someone with or without visual contact with them.
If time travel is possible, presumably so is my shooting my younger self; then apparently I can kill him – I can commit retrosuicide. But if I were to kill him I would not exist to shoot him, so how can I kill him? The standard solution to this paradox understands ability as compossibility with the relevant facts and points to an equivocation about which facts are relevant: my killing YS is compossible with his proximity but not with his survival, so I can kill him if facts like his survival are irrelevant but I cannot if they are relevant. I identify a lacuna in this solution, namely its reliance without argument on the hidden assumption that my killing YS is possible: if it is impossible, it is not compossible with anything. I argue that this lacuna is important, and I sketch a different solution to the paradox.
Determines the implications of Christian religious conviction for moral conduct through extensive philosophical inquiry into an incident involving an ethical ...
R. M. Adams’s essay, “Must God Create the Best?” can be interpreted as offering a theodicy for God’s creating morally less perfect beings than he could have created. By creating these morally less perfect beings, God is bestowing grace upon them, which is an unmerited or undeserved benefit. He does so, however, in advance of the free moral misdeeds that render them undeserving. This requires that God have middle knowledge, pace Adams’s version of the Free Will Theodicy, of what would result from his actualization of possible free persons. It is argued that God’s possession of such middle knowledge negates the freedom of created beings, since God completely determines every action of every created person. And since they are not free, they cannot qualify as morally unmeritorious or undeserving. And, with that, Adams’s theodicy of grace-in-advance collapses.
We regularly wield powers that, upon close scrutiny, appear remarkably magical. By sheer exercise of will, we bring into existence things that have never existed before. With but a nod, we effect the disappearance of things that have long served as barriers to the actions of others. And, by mere resolve, we generate things that pose significant obstacles to others' exercise of liberty. What is the nature of these things that we create and destroy by our mere decision to do so? The answer: the rights and obligations of others. And by what seemingly magical means do we alter these rights and obligations? By making promises and issuing or revoking consent. When we make promises, we generate obligations for ourselves, and when we give consent, we create rights for others. Since the rights and obligations that are affected by means of promising and consenting largely define the boundaries of permissible action, our exercise of these seemingly magical powers can significantly affect the lives and liberties of others.
A self-fulfilling prophecy (SFP) in neuroprognostication occurs when a patient in coma is predicted to have a poor outcome, and life-sustaining treatment is withdrawn on the basis of that prediction, thus directly bringing about a poor outcome (viz. death) for that patient. In contrast to the predominant emphasis in the bioethics literature, we look beyond the moral issues raised by the possibility that an erroneous prediction might lead to the death of a patient who otherwise would have lived. Instead, we focus on the problematic epistemic consequences of neuroprognostic SFPs in settings where research and practice intersect. When this sort of SFP occurs, the problem is that physicians and researchers are never in a position to notice whether their original prognosis was correct or incorrect, since the patient dies anyway. Thus, SFPs keep us from discerning false positives from true positives, inhibiting proper assessment of novel prognostic tests. This epistemic problem of SFPs thus impedes learning, but ethical obligations of patient care make it difficult to avoid SFPs. We then show how the impediment to catching false positive indicators of poor outcome distorts research on novel techniques for neuroprognostication, allowing biases to persist in prognostic tests. We finally highlight a particular risk that a precautionary bias towards early withdrawal of life-sustaining treatment may be amplified. We conclude with guidelines about how researchers can mitigate the epistemic problems of SFPs, to achieve more responsible innovation of neuroprognostication for patients in coma.
McCarthy, Homan, and Rozier’s presentation of theological anthropology and its contribution to secular bioethics suffers from three primary limitations. First, the article re...
Anti-behaviorist arguments against the validity of the Turing Test as a sufficient condition for attributing intelligence are based on a memorizing machine, which has recorded within it responses to every possible Turing Test interaction of up to a fixed length. The mere possibility of such a machine is claimed to be enough to invalidate the Turing Test. I consider the nomological possibility of memorizing machines, and how long a Turing Test they can pass. I replicate my previous analysis of this critical Turing Test length based on the age of the universe, show how considerations of communication time shorten that estimate and allow eliminating the sole remaining contingent assumption, and argue that the bound is so short that it is incompatible with the very notion of the Turing Test. I conclude that the memorizing machine objection to the Turing Test as a sufficient condition for attributing intelligence is invalid.
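The flavor of the argument can be conveyed with a back-of-envelope calculation (my own illustrative figures, not the paper's actual analysis, which rests on the age of the universe and communication time): the number of possible interactions grows exponentially with their length, so even a machine storing one response per atom in the observable universe covers only very short conversations.

```python
# Rough, assumed figures: an alphabet of ~27 characters (letters plus space)
# and the common estimate of ~10**80 atoms in the observable universe.
ATOMS_IN_OBSERVABLE_UNIVERSE = 10**80

def interactions(alphabet_size: int, length: int) -> int:
    """Number of distinct character sequences of the given length."""
    return alphabet_size ** length

# Longest interaction length a machine could cover even if it stored one
# response per atom in the observable universe.
n = 0
while interactions(27, n + 1) <= ATOMS_IN_OBSERVABLE_UNIVERSE:
    n += 1

print(n)  # 55 characters: far shorter than any meaningful Turing Test
```

Under these assumptions the storage bound caps out at interactions of 55 characters; the paper's own bound is derived differently, but the exponential blow-up that drives it is the same.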
We consider the problem of extending a (complete) order over a set to its power set. The extension axioms we consider generate orderings over sets according to their expected utilities induced by some assignment of utilities over alternatives and probability distributions over sets. The model we propose gives a general and unified exposition of expected utility consistent extensions while allowing us to emphasize various subtleties, the effects of which seem to be underestimated – particularly in the literature on strategy-proof social choice correspondences.
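The basic mechanism can be sketched in a few lines (a toy construction of mine, not the authors' formal model): fix utilities consistent with the underlying order over alternatives, fix a probability distribution over each set (here uniform), and rank sets by expected utility.

```python
# Alternatives ordered a > b > c; any utilities consistent with that order
# will do, and different choices can rank the same pair of sets differently.

def expected_utility(subset, utility):
    """Expected utility of a nonempty set under the uniform distribution over it."""
    return sum(utility[x] for x in subset) / len(subset)

utility = {"a": 3, "b": 2, "c": 0}  # consistent with a > b > c

A, B = {"a", "c"}, {"b"}
print(expected_utility(A, utility))  # 1.5
print(expected_utility(B, utility))  # 2.0
# {b} ranks above {a, c} even though a is the single best alternative:
# the induced set ordering depends on the chosen utilities and probabilities.
```

This sensitivity of the set ranking to the utility and probability assignments is one of the subtleties the abstract alludes to.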
Insights from contemporary psychology can illuminate the common psychological processes that facilitate unethical decision making. I will illustrate several of these processes and describe steps that may be taken to reduce or eliminate the undesirable consequences of these processes. A generic problem with these processes is that they are totally invisible to decision makers – i.e., decision makers are convinced that their decisions are ethically and managerially sound.
In this paper, I discuss the relationship between bodily experiences in dreams and the sleeping, physical body. I question the popular view that dreaming is a naturally and frequently occurring real-world example of cranial envatment. This view states that dreams are functionally disembodied states: in a majority of dreams, phenomenal experience, including the phenomenology of embodied selfhood, unfolds completely independently of external and peripheral stimuli and outward movement. I advance an alternative and more empirically plausible view of dreams as weakly phenomenally-functionally embodied states. The view predicts that bodily experiences in dreams can be placed on a continuum with bodily illusions in wakefulness. It also acknowledges that there is a high degree of variation across dreams and different sleep stages in the degree of causal coupling between dream imagery, sensory input, and outward motor activity. Furthermore, I use the example of movement sensations in dreams and their relation to outward muscular activity to develop a predictive processing account. I propose that movement sensations in dreams are associated with a basic and developmentally early kind of bodily self-sampling. This account, which affords a central role to active inference, can then be broadened to explain other aspects of self- and world-simulation in dreams. Dreams are world-simulations centered on the self, and important aspects of both self- and world-simulation in dreams are closely linked to bodily self-sampling, including muscular activity, illusory own-body perception, and vestibular orienting in sleep. This is consistent with cognitive accounts of dream generation, in which long-term beliefs and expectations, as well as waking concerns and memories play an important role. What I add to this picture is an emphasis on the real-body basis of dream imagery. This offers a novel perspective on the formation of dream imagery and suggests new lines of research. (shrink)