Stakeholder theories propose that managers are responsible not only for maximizing shareholder value, but also for taking into account the well-being of other parties affected by corporate decisions. While the language of stakeholder theory has been taken up in industries like mining, controversy remains. Disagreements arise not only about the apportionment of costs and benefits among stakeholders, but about who counts as a stakeholder and about how "costs" and "benefits" are to be conceived. This paper investigates these questions empirically by examining how managers in one mining company talk about corporate responsibilities and by analysing the explicit and implicit value systems and moral logics which inform this talk. The investigation found that while some claims by stakeholder groups were readily accommodated by managers, others were not. Analysis of the value frameworks employed by the managers confirms the views of leading stakeholder theorists that stakeholder theory is grounded in the realities of management practice and behaviour.
Alan Turing’s pioneering work on computability, and his ideas on morphological computing, support Andrew Hodges’ view of Turing as a natural philosopher. Turing’s natural philosophy differs importantly from Galileo’s view that the book of nature is written in the language of mathematics (The Assayer, 1623). Computing is more than a language used to describe nature, as computation produces real-time physical behaviors. This article presents the framework of Natural info-computationalism as a contemporary natural philosophy that builds on the legacy of Turing’s computationalism. The use of info-computational conceptualizations, models and tools makes possible, for the first time in history, the modeling of complex self-organizing adaptive systems, including basic characteristics and functions of living systems, intelligence, and cognition.
Alan Turing is known both for his mathematical creativity and genius and his role in wartime cryptography, and for his homosexuality, for which he was persecuted. Yet there is little work that brings these two parts of his life together. This paper deconstructs and moves beyond the extant stereotypes around perceived associations between gay men and creativity, to consider how Turing’s lived experience as a queer mathematician provides a rich seam of insight into the ways in which his life, relationships, and working environment shaped his work.
In this interview, the prestigious anthropologist, historian and TV announcer Alan Macfarlane comments on some of the issues that have been addressed in his writings. His main theoretical concern has been to study the peculiar conditions that gave rise to the mode..
This small book packs a considerable theoretical and practical punch. Alan Ware challenges much received wisdom about the dynamics of two-party politics. In the process, he adds considerably to contemporary discussion of the intersection of structure and agency in the development and adaptation of political systems. Ware picks out two-party systems for concentrated attention because of their relative tractability: in his words, these systems are ideal for analysing the capacity of parties to pursue their interests in the face both of other actors within the political system and of elements within the party itself.
A major voice in late twentieth-century philosophy, Alan Donagan is distinguished for his theories on the history of philosophy and the nature of morality. The Philosophical Papers of Alan Donagan, volumes 1 and 2, collect 28 of Donagan's most important and best-known essays on historical understanding and ethics from 1957 to 1991. Volume 2 addresses issues in the philosophy of action and moral theory. With papers on Kant, von Wright, Sellars, and Chisholm, this volume also covers a range of questions in applied ethics--from the morality of Truman's decision to drop atomic bombs on Hiroshima and Nagasaki to ethical questions in medicine and law.
D. Alan Shewmon has advanced a well-documented challenge to the widely accepted total brain death criterion for death of the human being. We show that Shewmon's argument against this criterion is unsound, though he does refute the standard argument for that criterion. We advance a distinct argument for the total brain death criterion and answer likely objections. Since human beings are rational animals – sentient organisms of a specific type – the loss of the radical capacity for sentience involves a substantial change, the passing away of the human organism. In human beings total brain death involves the complete loss of the radical capacity for sentience, and so in human beings total brain death is death.
This paper concerns Alan Turing’s ideas about machines, mathematical methods of proof, and intelligence. By the late 1930s, Kurt Gödel and other logicians, including Turing himself, had shown that no finite set of rules could be used to generate all true mathematical statements. Yet according to Turing, there was no upper bound to the number of mathematical truths provable by intelligent human beings, for they could invent new rules and methods of proof. So, the output of a human mathematician, for Turing, was not a computable sequence (i.e., one that could be generated by a Turing machine). Since computers only contained a finite number of instructions (or programs), one might argue, they could not reproduce human intelligence. Turing called this the “mathematical objection” to his view that machines can think. Logico-mathematical reasons, stemming from his own work, helped to convince Turing that it should be possible to reproduce human intelligence, and eventually compete with it, by developing the appropriate kind of digital computer. He felt it should be possible to program a computer so that it could learn or discover new rules, overcoming the limitations imposed by the incompleteness and undecidability results in the same way that human mathematicians presumably do.
Alan Gewirth's Reason and Morality, in which he set forth the Principle of Generic Consistency, is a major work of modern ethical theory that, though much debated and highly respected, has yet to gain full acceptance. Deryck Beyleveld contends that this resistance stems from misunderstanding of the method and logical operations of Gewirth's central argument. In this book Beyleveld seeks to remedy this deficiency. His rigorous reconstruction of Gewirth's argument gives its various parts their most compelling formulation and clarifies its essential logical structure. Beyleveld then classifies all the criticisms that Gewirth's argument has received and measures them against his reconstruction of the argument. The overall result is an immensely rich picture of the argument, in which all of its complex issues and key moves are clearly displayed and its validity can finally be discerned. The comprehensiveness of Beyleveld's treatment provides ready access to the entire debate surrounding the foundational argument of Reason and Morality. It will be required reading for all who are interested in Gewirth's theory and deontological ethics and will be of central importance to moral and legal theorists.
An explanation is given of why it is in the nature of inquiry into whether or not p that its aim is fully achieved only if one comes to know that p or to know that not-p and, further, comes to know how one knows, either way. In the absence of the latter one is in no position to take the inquiry to be successfully completed or to vouch for the truth of the matter in hand. An upshot is that although knowledge matters because truth matters this should not be understood to mean that knowledge matters because true belief matters.
Master's dissertation: BARROS, Brasil Fernandes de. Religião e Espiritismo: o conceito de religião da Doutrina Espírita segundo a concepção de Alan Kardec [Religion and Spiritism: the concept of religion in the Spiritist Doctrine according to the conception of Alan Kardec]. 2018. Dissertation – Graduate Program in Religious Studies, Pontifícia Universidade Católica de Minas Gerais, Belo Horizonte, MG.
In a recent article in this journal, Alan Thomas presents a novel defence of what I call ‘Rawlsian Institutionalism about Justice’ against G. A. Cohen’s well-known critique. In this response I aim to defend Cohen’s rejection of Institutionalism against Thomas’s arguments. In part this defence requires clarifying precisely what is at issue between Institutionalists and their opponents. My primary focus, however, is on Thomas’s critical discussion of Cohen’s endorsement of an ethical prerogative, as well as his appeal to the institutional framework of a ‘property-owning democracy’ in his elaboration of the precise institutional requirements of Rawlsian Institutionalist justice, and his related claim that Cohen’s rejection of Institutionalism involves an objectionable ‘double counting’ of the demands of justice. I argue that once we are clear about both the kind of justification that can be given for a prerogative within a plausible ethical theory, and about the key points of departure between Institutionalist views and their rivals, Cohen’s rejection of Institutionalism appears well-motivated, and Thomas’s claim that his view is guilty of double counting the demands of justice can be seen to be mistaken.
In his article, ‘Gratuitous evil and divine providence’, Alan Rhoda claims to have produced an uncontroversial theological premise for the evidential argument from evil. I argue that his premise is by no means uncontroversial among theists, and I doubt that any premise can be found that is both uncontroversial and useful for the argument from evil.
Alan Carter's recent review in Mind of my Ethics of the Global Environment combines praise of biocentric consequentialism with criticisms that it could advocate both minimal satisfaction of human needs and the extinction of ‘inessential species’ for the sake of generating extra people; Carter also maintains that as a monistic theory it is predictably inadequate to cover the full range of ethical issues, since only a pluralistic theory has this capacity. In this reply, I explain how the counter-intuitive implications of biocentric consequentialism suggested by Carter are not implications, and argue that since pluralistic theories either generate contradictions or collapse into monistic theories, the superiority of pluralistic theories is far from predictable. Thus Carter's criticisms fail to undermine biocentric consequentialism as a normative theory applicable to the generality of ethical issues.
Alan Weir’s new book is, like Darwin’s Origin of Species, ‘one long argument’. The author has devised a new kind of have-it-both-ways philosophy of mathematics, supposed to allow him to say out of one side of his mouth that the integer 1,000,000 exists and even that the cardinal ℵω exists, while saying out of the other side of his mouth that no numbers exist at all, and the whole book is devoted to an exposition and defense of this new view. The view is presented in the book in a way that can make it difficult for the reader to trace the main line of argument: with a great deal of apparatus, and with a great many digressions into subordinate issues. In what follows I will try to stick to what I take to be the essentials, even at the risk of oversimplifying some central but complicated issues, and at the cost of neglecting some interesting but peripheral ones. In chapter 1, the author introduces a distinction between what he calls ‘two aspects of meaning’ and dubs informational content and metaphysical content. Informational content is the aspect of meaning of primary interest to linguists, and the one of which speakers themselves are generally aware, at least upon reflection. Metaphysical content is supposed to be another aspect of meaning primarily of interest to philosophers. The basic idea is that if there are standards of correctness for assertions of a certain kind, then such an assertion may be called ‘true’ when those standards are met, even though the kind of correctness involved is not correctness in representing how the world is. What the world must be like in order for the utterance to be true is the metaphysical content of the assertion, but it need not be part of its ….
The December 2008 White Paper (WP) on “Brain Death” published by the President’s Council on Bioethics (PCBE) reaffirmed its support for the traditional neurological criteria for human death. It spends considerable time explaining and critiquing what it takes to be the most challenging recent argument opposing the neurological criteria, formulated by D. Alan Shewmon, a leading critic of the “whole brain death” standard. The purpose of this essay is to evaluate and critique the PCBE’s argument. The essay begins with a brief background on the history of the neurological criteria in the United States and on the preparation of the 2008 WP. After introducing the WP’s contents, the essay sets forth Shewmon’s challenge to the traditional neurological criteria and the PCBE’s reply to Shewmon. The essay concludes by critiquing the WP’s novel justification for reaffirming the traditional conclusion, a justification the essay finds wanting.
In his short life, Alan Turing (1912-1954) made foundational contributions to philosophy, mathematics, biology, artificial intelligence, and computer science. He, as much as anyone, invented and showed how to program the digital electronic computer. From September 1939, his work on computation was war-driven and brutally practical. He developed high speed computing devices needed to decipher German Enigma Machine messages to and from U-boats, countering the most serious threat by far to Britain..
Economic approaches to both social evaluation and decision-making are typically Paretian or utilitarian in nature and so display commitments to both welfarism and consequentialism. The contrast between the economic approach and any rights-based social philosophy has spawned a large literature that may be divided into two branches. The first is concerned with the compatibility of rights and utilitarianism seen as independent moral forces. This branch of the literature may be characterized as an example of the broader debate between the teleological and deontological approaches. The second is concerned with the possibility that substantial rights may be grounded in utilitarianism with the moral force of rights being derived from more basic commitments to welfarism and consequentialism. This branch of the literature may be characterized as an exploration of the flexibility of the teleological approach, and, in particular, its ability to give rise to views more normally associated with the deontological approach. This essay is concerned with the second branch of the literature.
It has been just over 100 years since the birth of Alan Turing and more than 65 years since he published in Mind his seminal paper, Computing Machinery and Intelligence. In the Mind paper, Turing asked a number of questions, including whether computers could ever be said to have the power of “thinking”. Turing also set up a number of criteria—including his imitation game—under which a human could judge whether a computer could be said to be “intelligent”. Turing’s paper, as well as his important mathematical and computational insights of the 1930s and 1940s, led to his popular acclaim as the “Father of Artificial Intelligence”. In the years since his paper was published, however, no computational system has fully satisfied Turing’s challenge. In this paper we focus on a different question, ignored in, but inspired by Turing’s work: How might the Artificial Intelligence practitioner implement “intelligence” on a computational device? Over the past 60 years, although the AI community has not produced a general-purpose computational intelligence, it has constructed a large number of important artifacts, as well as taken several philosophical stances able to shed light on the nature and implementation of intelligence. This paper contends that the construction of any human artifact includes an implicit epistemic stance. In AI this stance is found in commitments to particular knowledge representations and search strategies that lead to a product’s successes as well as its limitations. Finally, we suggest that computational and human intelligence are two different natural kinds, in the philosophical sense, and elaborate on this point in the conclusion.
As a preliminary to the justification of equal opportunity, we require a few words on the concept. An opportunity is a chance to attain some goal or obtain some benefit. More precisely, it is the lack of some obstacle or obstacles to the attainment of some goal or benefit. Opportunities are equal in some specified or understood sense when persons face roughly the same obstacles or obstacles of roughly the same difficulty of some specified or understood sort. In different contexts we might have different sorts of benefits or obstacles in mind. But in the current social context, and in the context of this discussion, we refer to educational and occupational opportunities, chances to attain the benefits of higher education and of socially and economically desirable positions, benefits assumed to be desired by many or most individuals, other things being equal. And we generally divide obstacles into two broad classes: those imposed by the social system or by other persons in the society, for example, the hardships of life in the lower economic classes or barriers from prejudices based on race, sex, or ethnic background; and those imposed by natural disabilities, for example, low intelligence or lack of talents. The initial question is whether a moral society is obligated to create equality in opportunities in the senses just defined. I shall assume here initially that there is some such obligation on the part of society or the state, although I shall specify its nature and limits more precisely below. With the exception of certain libertarians, almost everyone, liberal and conservative alike, agrees in this assumption.
An ambitious ethical theory ---Alan Gewirth's "Principle of Generic Consistency"--- is encoded and analysed in Isabelle/HOL. Gewirth's theory has stirred much attention in philosophy and ethics and has been proposed as a potential means to bound the impact of artificial general intelligence.
As is well known, Alan Turing drew a line, embodied in the "Turing test," between intellectual and physical abilities, and hence between cognitive and natural sciences. Less familiarly, he proposed that one way to produce a "passer" would be to educate a "child machine," equating the experimenter's improvements in the initial structure of the child machine with genetic mutations, while supposing that the experimenter might achieve improvements more expeditiously than natural selection. On the other hand, in his foundational "On the chemical basis of morphogenesis," Turing insisted that biological explanation clearly confine itself to purely physical and chemical means, eschewing vitalist and teleological talk entirely and hewing to D'Arcy Thompson's line that "evolutionary 'explanations,'" are historical and narrative in character, employing the same intentional and teleological vocabulary we use in doing human history, and hence, while perhaps on occasion of heuristic value, are not part of biology as a natural science. To apply Turing's program to recent issues, the attempt to give foundations to the social and cognitive sciences in the "real science" of evolutionary biology (as opposed to Turing's biology) is neither to give foundations, nor to achieve the unification of the social/cognitive sciences and the natural sciences.
Alan Shewmon's article, 'The brain and somatic integration: Insights into the standard biological rationale for equating brain death with death' (2001), strikes at the heart of the standard justification for whole brain death criteria. The standard justification, which I call the standard paradigm, holds that the permanent loss of the functions of the entire brain marks the end of the integrative unity of the body. In my response to Shewmon's article, I first offer a brief summary of the standard paradigm and cite recent work by advocates of whole brain criteria who tenaciously cling to the standard paradigm despite increasing evidence showing that it has significant weaknesses. Second, I address Shewmon's case against the standard paradigm, arguing that he is successful in showing that whole brain dead patients have integrated organic unity. Finally, I discuss some minor problems with Shewmon's article, along with suggestions for further elaboration.
Between inventing the concept of a universal computer in 1936 and breaking the German Enigma code during World War II, Alan Turing, the British founder of computer science and artificial intelligence, came to Princeton University to study mathematical logic. Some of the greatest logicians in the world--including Alonzo Church, Kurt Gödel, John von Neumann, and Stephen Kleene--were at Princeton in the 1930s, and they were working on ideas that would lay the groundwork for what would become known as computer science. Though less well known than his other work, Turing's 1938 Princeton PhD thesis, "Systems of Logic Based on Ordinals," which includes his notion of an oracle machine, has had a lasting influence on computer science and mathematics. This book presents a facsimile of the original typescript of the thesis along with essays by Andrew Appel and Solomon Feferman that explain its still-unfolding significance. A work of philosophy as well as mathematics, Turing's thesis envisions a practical goal--a logical system to formalize mathematical proofs so they can be checked mechanically. If every step of a theorem could be verified mechanically, the burden on intuition would be limited to the axioms. Turing's point, as Appel writes, is that "mathematical reasoning can be done, and should be done, in mechanizable formal logic." Turing's vision of "constructive systems of logic for practical use" has become reality: in the twenty-first century, automated "formal methods" are now routine. Presented here in its original form, this fascinating thesis is one of the key documents in the history of mathematics and computer science.
In this book Alan Haworth tends to sneer at libertarians. However, there are, I believe, a few sound criticisms. I have always held similar opinions of Murray Rothbard's and Friedrich Hayek's definitions of liberty and coercion, Robert Nozick's account of natural rights, and Hayek's spontaneous-order arguments. I urge believers of these positions to read Haworth. But I don't personally know many libertarians who believe them (or who regard Hayek as a libertarian).
Alan White’s review in The Owl, 22, 1: 91–96, of my book, Hegel, Nietzsche, and the Criticism of Metaphysics, offers a generous appraisal of what he considers to be the book’s merits and faults. White is clearly not satisfied that the book has successfully accomplished what it set out to achieve. However, after having been told by one reviewer that what “plainly” lay closest to my heart was a full-blooded defense of Hegel, and after having been scolded by another reviewer for not having “engaged” with Nietzsche in the manner of Heidegger and for daring to suggest that Nietzsche might have been misguided in his thinking, it is pleasing to read a review - there have fortunately been one or two others as well - which not only considers worthwhile the project of the book itself, but also explains in some detail what the book is actually about. For that I am very grateful.
Reviewed by Constant J. Mews, Monash University. Eileen C. Sweeney. Logic, Theology, and Poetry in Boethius, Abelard, and Alan of Lille: Words in the Absence of Things. The New Middle Ages. London: Palgrave Macmillan, 2006. Pp. xii + 248. Cloth, $65.00. Journal of the History of Philosophy 45.2: 327-328. Boethius, Abelard, and Alan of Lille all crystallized their thoughts in poetry as much as in prose. In seeking to condense their achievement into a slim monograph, Sweeney synthesizes much thought into relatively few pages. Her ambition pays off. Her opening chapter on Boethius..
In this paper we present the syntax and semantics of a temporal action language named Alan, which was designed to model interactive multimedia presentations where the Markov property does not always hold. In general, Alan allows the specification of systems where the future state of the world depends not only on the current state, but also on the past states of the world. To the best of our knowledge, Alan is the first action language which incorporates causality with temporal formulas. In the process of defining the effect of actions we define the closure with respect to a path rather than to a state, and show that the non-Markovian model is an extension of the traditional Markovian model. Finally, we establish a relationship between theories of Alan and logic programs.
The origin of my article lies in the appearance of Copeland and Proudfoot's feature article in Scientific American, April 1999. This preposterous paper, as described on another page, suggested that Turing was the prophet of 'hypercomputation'. In their references, the authors listed Copeland's entry on 'The Church-Turing thesis' in the Stanford Encyclopedia. In the summer of 1999, I circulated an open letter criticising the Scientific American article. I included criticism of this Encyclopedia entry. This was forwarded to Prof. Ed Zalta, editor of the Encyclopedia, and after some discussion he invited me to submit an entry on 'Alan Turing'.
Alan Gewirth's Reason and Morality directed philosophical attention to the possibility of presenting a rational and rigorous demonstration of fundamental moral principles. Now, these previously unpublished essays from some of the most distinguished philosophers of our generation subject Gewirth's program to thorough evaluation and assessment. In a tour de force of philosophical analysis, Professor Gewirth provides detailed replies to all of his critics--a major, genuinely clarifying essay of intrinsic philosophical interest.
Explaining his now famous parody in Social Text's "Science Wars" issue, Alan Sokal writes in Dissent: But why did I do it? I confess that I'm an unabashed Old Leftist who never quite understood how deconstruction was supposed to help the working class. And I'm a stodgy old scientist who believes, naively, that there exists an external world, that there exist objective truths about that world, and that my job is to discover some of them. There is much to note in this "confession." Why choose a hoax on Social Text to make these points? Did Sokal believe its editors were unabashed deconstructionists who doubted the existence of an external world or that they were anti-science? If so, he has either misread the burden of its seventeen-year history or was capricious in his choice. If not, then he has perpetuated the saddest hoax of all: on himself. For the fact is that Social Text, of which I am a founder and in whose editorial collective I served until this year, has never been in the deconstructionist camp; nor do its editors or the preponderance of its contributors doubt the existence of a material world. What is at issue is whether our knowledge of it can possibly be free of social and cultural presuppositions.
Alan Musgrave has been one of the most important philosophers of science in the last quarter of the 20th century. He has exemplified an exceptional combination of clearheaded and profound philosophical thinking. Two seem to be the pillars of his thought: an uncompromising commitment to scientific realism and an equally uncompromising commitment to deductivism. The essays reprinted in this volume (which span a period of 25 years, from 1974 to 1999) testify to these two commitments. (There are two omissions from this collection: “Realism, Truth and Objectivity” in Realism and Anti-realism in the Philosophy of Science (1996, Kluwer) and “How to Do without Inductive Logic” (Science & Education, vol. 8, 1999). I will make some references to these papers in what follows.) In the present review, instead of giving an orderly summary of the 16 papers of Essays, I discuss Musgrave’s two major commitments and raise some worries about their combination.
John Stuart Mill is—surprisingly—a difficult writer. He writes clearly, non-technically, and in a very plain prose which Bertrand Russell once described as a model for philosophers. It is never hard to see what the general drift of the argument is, and never hard to see which side he is on. He is, none the less, a difficult writer because his clarity hides complicated arguments and assumptions which often take a good deal of unpicking. And when we have done that unpicking, the task of analysing the merits and deficiencies of the arguments is still only half completed. This is true of all his work and particularly true of Liberty. It is an essay whose clarity and energy have made it the most popular of all Mill's work. Yet it conceals philosophical, sociological and historical assumptions of a very debatable kind. In his introduction, Mill says the object of this essay is to defend one very simple principle, as entitled to govern absolutely the dealings of society with the individual in the way of compulsion and control, whether the means used be legal penalties, or the moral coercion of public opinion.
The article will attempt a reading of Alan Spence’s play No Nothing. Special attention will be given to the issue of literal and metaphorical space, a peculiar, liminal setting of the play, and the ways it determines the flyting between the two characters, two iconic Glaswegians: Edwin Morgan and Jimmy Reid. It seems that in this theatrical space history, politics and poetry inter-are. We may notice how two completely different masters of speech exchange their views on life, how they reflect upon the meaning of their achievements, and how they find a space of convergence in their affirmation of life. As their flyting is “about life, the Universe and everything—from Glasgow to Infinity and beyond,” the article will also address the space of dialogue between Spence’s and Morgan’s poetry. The metaphor of Indra’s net will serve as a useful tool in exploring spatial dimensions of the play and the issue of interconnectedness.