The arts are an integral part of our culture, and they invite us to investigate, express ideas, and create aesthetically pleasing works. Of interest to educators is clear scholarship that links the arts to cognitive and intellectual development. The processes of creating art and viewing and interpreting art promote cognitive and skill development.1 Elliot Eisner, who has written extensively on this topic, argues that "Artistic activity is a form of inquiry that depends on qualitative forms of intelligence."2 Eisner suggests that children can use art to question and reflect on sensory information from their daily lives, and from this reflection develop insight, awareness, and critical thinking skills.3 Expanding on ...
Marxist roots of science studies. Essay review by Nils Roll-Hansen (Institute of Philosophy, University of Oslo, PB 1024 Blindern, 0315 Oslo, Norway). Metascience, pp. 1-9. DOI: 10.1007/s11016-012-9647-4. Online ISSN 1467-9981; Print ISSN 0815-0796.
In our lives, we aim to achieve welfare for ourselves, that is, to live good lives. But we also have another, more impartial perspective, where we aim to balance our concern for our own welfare against a concern for the welfare of others. This is a perspective of justice. Nils Holtug examines these two perspectives and the relations between them.

The first part of the book is concerned with prudence; more precisely, with what the necessary and sufficient conditions are for having a self-interest in a particular benefit. It includes discussions of the extent to which self-interest depends on preferences, personal identity, and what matters in survival. It also considers the issue of whether it can benefit (or harm) a person to come into existence and what the implications are for our theory of self-interest. A 'prudential view' is defended, according to which a person has a present self-interest in a future benefit if and only if she stands in a relation of continuous physical realization of (appropriate) psychology to the beneficiary, where the strength of the self-interest depends both on the size of the benefit and on the strength of this relation.

The second part of the book concerns distributive justice, and so how to distribute welfare or self-interest fulfilment over individuals. It includes discussions of welfarism, egalitarianism and prioritarianism, population ethics, the importance of personal identity and what matters for distributive justice, and the importance of all these issues for various topics in applied ethics, including the badness of death. Here, a version of prioritarianism is defended, according to which, roughly, the moral value of a benefit to an individual at a time depends on both the size of the benefit and on the individual's self-interest, at that time, in the other benefits that accrue to her at this and other times.
Designed to be used on its own or with its companion text, Ultimate Questions: Thinking About Philosophy 3e, this collection of readings covers the major topic areas in philosophy: Knowledge; Free Will; Personal Identity; Mind/Body; God; Ethics; and Political Philosophy. While focusing primarily on contemporary philosophy, it also includes many of the classic works essential to an introductory course.
According to the Harm Principle, roughly, the state may coerce a person only if it can thereby prevent harm to others. Clearly, this principle depends crucially on what we understand by harm. Thus, if any sort of negative effect on a person may count as a harm, the Harm Principle will fail to sufficiently protect individual liberty. Therefore, a more subtle concept of harm is needed. I consider various possible conceptions and argue that none gives rise to a plausible version of the Harm Principle. Whether we focus on welfare, quantities of welfare or qualities of welfare, we do not arrive at a plausible version of this principle. Instead, the concept of harm may be moralized. I consider various ways this may be done as well as possible rationales for the resulting versions of the Harm Principle. Again, no plausible version of the principle turns up. I also consider the prospect of including the Harm Principle in a decision-procedure rather than in a criterion of rightness. Finally, in light of my negative appraisal, I briefly discuss why this principle has seemed so appealing to liberals.
In the natural sciences, higher order structures often occur. There seems to be a need for good methods of describing what we mean by higher order structures in various contexts. This is what hyperstructures are intended to do. We motivate and introduce this new concept. Next we illustrate how it can be applied in various types of genomic analysis, in particular to the correlations between single nucleotide polymorphisms and diseases. The suggested structure is quite general and may be applied to a variety of situations. Finally we discuss how data sets (e.g. genomic) may lead to topological spaces, giving new invariants and leading to the prediction of hyperstructures.
This paper is a comment on the impressive work by A. C. Ehresmann and J.-P. Vanbremeersch on the theory of Memory Evolutive Systems (MES). MES are truly higher order systems. Hyperstructures represent a new concept which I introduced in order to capture the essence of what a higher order structure is, encompassing hierarchies and emergence. Hyperstructures are motivated by cobordism theory in topology and higher category theory. The morphism concept is replaced by the concept of a bond. In the paper I briefly introduce hyperstructures, motivated geometrically, and suggest further developments of the MESs along these lines, which could open up new areas of application.
Emergence is a universal phenomenon that can be defined mathematically in a very general way. This is useful for the study of scientifically legitimate explanations of complex systems, here defined as hyperstructures. A requirement is that the observation mechanisms are considered within the general framework. Two notions of emergence are defined, and specific examples of these are discussed.
The focus of this paper is the meaning-theoretical arguments against classical logic that Dummett bases on considerations about the meanings of negation. Using Dummettian principles, I shall outline three such arguments, of increasing strength, and show that they are unsuccessful by giving responses to each argument on behalf of the classical logician. What is crucial is that in responding to these arguments a classicist need not challenge any of the basic assumptions of Dummett's outlook on the theory of meaning. In particular, I shall grant Dummett his general bias towards verificationism or justificationism, encapsulated in the slogan 'meaning is use'. The second general assumption I see no need to question is Dummett's particular breed of molecularism. Some of Dummett's assumptions will have to be given up if classical logic is to be vindicated in his meaning-theoretical framework. A major result of this paper is that the meaning of negation cannot be defined by rules of inference in the Dummettian framework.
Contributing Authors: Lilli Alanen & Frans Svensson, David Alm, Gustaf Arrhenius, Gunnar Björnsson, Luc Bovens, Richard Bradley, Geoffrey Brennan & Nicholas Southwood, John Broome, Linus Broström & Mats Johansson, Johan Brännmark, Krister Bykvist, John Cantwell, Erik Carlson, David Copp, Roger Crisp, Sven Danielsson, Dan Egonsson, Fred Feldman, Roger Fjellström, Marc Fleurbaey, Margaret Gilbert, Olav Gjelsvik, Kathrin Glüer & Peter Pagin, Ebba Gullberg & Sten Lindström, Peter Gärdenfors, Sven Ove Hansson, Jana Holsanova, Nils Holtug, Victoria Höög, Magnus Jiborn, Karsten Klint Jensen, Sigurður Kristinsson, Isaac Levi, Kasper Lippert-Rasmussen, David Makinson, Anna-Sofia Maurin, Philippe Mongin, Kevin Mulligan, Lennart Nordenfelt, Jonas Olson, Erik J. Olsson, Ingmar Persson, Johannes Persson, Björn Petersson, Philip Pettit, Hans Rott, Toni Rønnow-Rasmussen, Krister Segerberg, John Skorupski, Howard Sobel, Fredrik Stjernberg, Fred Stoutland, Caj Strandberg, Pär Sundström, Folke Tersman, Torbjörn Tännsjö, Peter Vallentyne, Bruno Verbeek, Stella Villarmea, and Michael J. Zimmerman.
In From Chance to Choice, Allen Buchanan, Dan Brock, Norman Daniels and Daniel Wikler propose a new way of defending the moral significance of the distinction between genetic treatments and enhancements. They develop what they call a ‘normal function model’ of equality of opportunity and argue that it offers a ‘limited’ defence of this distinction. In this article, I critically assess their model and the support it (allegedly) provides for the treatment-enhancement distinction. First, I argue that there is a troubling tension in the normal function model. Secondly, I argue that neither of the rationales invoked by Buchanan et al. really serves to justify this model or the results they seek to derive from it with respect to the significance of the distinction between treatments and enhancements.
In this paper I argue that coming into existence can benefit (or harm) a person. My argument incorporates the comparative claim that existence can be better (or worse) for a person than never existing. Since these claims are highly controversial, I consider and reject a number of objections which threaten them. These objections raise various semantic, logical, metaphysical and value-theoretical issues. I then suggest that there is an important sense in which it can harm (or benefit) a person not to come into existence. Again, I consider and reject some objections. Finally, I briefly consider what the conclusions reached in this paper imply for our moral obligations to possible future people.
The hypothesis that human reasoning and decision-making can be roughly modeled by Expected Utility Theory has been at the core of decision science. Accumulating evidence has led researchers to modify the hypothesis. One of the latest additions to the field is Dual Process theory, which attempts to explain variance between participants and tasks when it comes to deviations from Expected Utility Theory. It is argued that Dual Process theories at this point cannot replace previous theories, since they, among other things, lack a firm conceptual framework, and have no means of producing independent evidence for their case.
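As a rough illustration of the kind of deviation at issue, the sketch below shows how Expected Utility Theory evaluates a simple lottery, and how a concave utility function (a standard way of modeling observed risk aversion) changes the verdict. The functions and numbers are invented for illustration and are not taken from the paper.

```python
import math

def expected_utility(outcomes, utility=lambda x: x):
    """Expected utility of a lottery given as (probability, payoff) pairs."""
    return sum(p * utility(x) for p, x in outcomes)

# A lottery: 50% chance of 100, 50% chance of 0.
lottery = [(0.5, 100.0), (0.5, 0.0)]

# With linear utility, the lottery is worth its expected value (50),
# so an agent should be indifferent between it and a sure 50.
eu_linear = expected_utility(lottery)

# With a concave utility (square root), the sure amount is preferred --
# the classic risk-averse pattern participants often exhibit.
eu_concave = expected_utility(lottery, utility=math.sqrt)
eu_sure = math.sqrt(50.0)
```

Patterns like the sure amount beating the lottery under concave utility are the sort of systematic behaviour the modified theories, including Dual Process accounts, try to accommodate.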
Derek Parfit has argued that prioritarianism “naturally” has global scope, i.e. naturally applies to everyone, irrespective of his or her particular national, state or other communal affiliation. In that respect, it differs from e.g. egalitarianism. In this article, I critically assess Parfit's argument. In particular, I argue that it is difficult to draw conclusions about the scope of prioritarianism simply from an inspection of its structure. I also make some suggestions as to what it would take to argue that prioritarianism has either global or merely domestic scope.
Jeff McMahan appeals to what he calls the “Time-relative Interest Account of the Wrongness of Killing” to explain the wrongness of killing individuals who are conscious but not autonomous. On this account, the wrongness of such killing depends on the victim’s interest in his or her future, and this interest, in turn, depends on two things: the goods that would have accrued to the victim in the future; and the strength of the prudential relations obtaining between the victim at the time of the killing and at the times these goods would have accrued to him or her. More precisely, when assessing this interest, future goods should be discounted to reflect reductions in the strength of such relations. Against McMahan’s account I argue that it relies on an implausible “actualist” view of the moral importance of interests according to which satisfactions of future interests only have moral significance if they are satisfactions of actual interests (interests that will in fact exist). More precisely, I aim to show that the Time-relative Interest Account (1) does not have the implications for the morality of killing that McMahan takes it to have, and (2) implies, implausibly, that certain interest satisfactions which seem to be morally significant are morally insignificant because they are not satisfactions of actual interests.
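Read arithmetically, the discounting idea can be sketched as a weighted sum: the victim's time-relative interest in the future is the sum of future goods, each weighted by the strength of the prudential relation between the victim now and at the time the good would have accrued. The weighting scheme and numbers below are invented for illustration, not taken from McMahan.

```python
def time_relative_interest(future_goods):
    """Sum of future goods, each discounted by the strength (0..1) of the
    prudential relation between the victim now and at the time the good
    would have accrued. future_goods: list of (good_size, relation_strength)."""
    return sum(size * strength for size, strength in future_goods)

# A psychologically continuous adult: strong prudential relations to the
# future times at which the goods would accrue.
adult = [(10.0, 1.0), (10.0, 0.9)]
# A conscious but non-autonomous being: much weaker relations to its future.
weakly_related = [(10.0, 0.2), (10.0, 0.1)]

adult_interest = time_relative_interest(adult)
weak_interest = time_relative_interest(weakly_related)
```

On this reading, equal future goods ground a much weaker time-relative interest, and hence a lesser wrong in killing, when the prudential relations are weak; this is the structural feature the article goes on to challenge.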
The fundamental assumption of Dummett’s and Prawitz’ proof-theoretic justification of deduction is that ‘if we have a valid argument for a complex statement, we can construct a valid argument for it which finishes with an application of one of the introduction rules governing its principal operator’. I argue that the assumption is flawed in this general version, but should be restricted, not to apply to arguments in general, but only to proofs. I also argue that Dummett’s and Prawitz’ project of providing a logical basis for metaphysics only relies on the restricted assumption.
It is suggested that the Harm Principle can be viewed as the moral basis on which genetically modified (GM) food is currently regulated. It is then argued (a) that the concept of harm cannot be specified in such a manner as to render the Harm Principle a plausible political principle, so this principle cannot be used to justify existing regulation; and (b) that even if the Harm Principle were a plausible political principle, it could not be used alone in the regulation of GM food, since it does not express a concern for the expected benefits of such food.
I present an argument that negation is a problem for proof-theoretic semantics: its meaning cannot be defined by rules of inference, and this is particularly problematic for Dummett's and Prawitz' Justification of Deduction. I won the Jacobsen Essay Prize of the University of London for this essay a few years ago.
It is widely held, to the point of being the received interpretation, that Frank Ramsey was the first to defend the so-called Redundancy Theory of Truth in his landmark article ‘Facts and Propositions’ (hereafter ‘FP’) of 1927.1 For instance, A.J. Ayer2 cited this article in the context of arguing that saying that p is true is simply a way of asserting p and that truth is not a real quality or relation. Other holders of the received interpretation, such as George Pitcher,3 J.L. Mackie,4 Susan Haack,5 A.C. Grayling,6 Nils-Eric Sahlin,7 Richard Kirkham,8 Donald Davidson9 and Michael Lynch10 credit Ramsey with having originated what they call ‘the Redundancy Theory.’ Even an authoritative source such as The Encyclopedia of Philosophy11 attributes this theory to him. What is more, Grover et al.,12 in defending their Prosentential Theory of Truth, claim that their theory is an improvement and development of the Redundancy Theory, which they too attribute to Ramsey.13
"Let's go inside nature," says my host, ecophilosopher Nils Faarlund, as we walk out of his small wooden cabin and into the Norwegian countryside. Faarlund is fond of such novel turns of phrase. As we enjoy local strawberries, Faarlund muses on how our everyday language both shapes and reflects our perceptions of the world. Recognizing the power of words, he is extremely careful about the language he uses. For instance, he avoids the term "environmental philosopher" because the word "environment" already puts the speaker on the fringes, looking out at her surroundings rather than seeing them as her home, as the eco- in ecophilosopher suggests. By talking about nature in both careful and novel ways, Faarlund ...
I call an experiment “crucial” when it makes possible a decisive choice between conflicting hypotheses. Johannsen's selection for size and weight within pure lines of beans played a central role in the controversy over continuity or discontinuity in hereditary change, often known as the Biometrician-Mendelian controversy. The “crucial” effect of this experiment was not an instantaneous event, but an extended process of repeating similar experiments and discussing possible objections. It took years before Johannsen's claim about the genetic stability of pure lines was accepted as conclusively demonstrated by the community of geneticists. The paper also argues that crucial experiments thus interpreted contradict certain ideas about the underdetermination of theories by facts and the theory-ladenness of facts which have been influential in recent history and sociology of science. The acceptance of stability in the pure lines did not rest on a prior preference for continuity or discontinuity. And this fact permitted a final choice between the two theories. When such a choice is characterized as “decisive” or “final”, this is not meant in an absolute philosophical sense. What we achieve in these cases is highly reliable empirical knowledge. The philosophical possibility of drawing (almost) any conclusion into doubt should be distinguished from reasonable doubt in empirical science.
This paper describes the historical background and early formation of Wilhelm Johannsen's distinction between genotype and phenotype. It is argued that, contrary to a widely accepted interpretation (for instance, W. Provine, 1971. "The Origins of Theoretical Population Genetics." Chicago: The University of Chicago Press; Mayr, 1973; F. B. Churchill, 1974. "Journal of the History of Biology" 7: 5-30; E. Mayr, 1982. "The Growth of Biological Thought." Cambridge: Harvard University Press; J. Sapp, 2003. "Genesis: The Evolution of Biology." New York: Oxford University Press), his concepts referred primarily to properties of individual organisms and not to statistical averages. Johannsen's concept of genotype was derived from the idea of species in the tradition of biological systematics from Linnaeus to de Vries: an individual belonged to a group - species, subspecies, elementary species - by representing a certain underlying type (S. Müller-Wille and V. Orel, 2007. "Annals of Science" 64: 171-215). Johannsen sharpened this idea theoretically in the light of recent biological discoveries, not least those of cytology. He tested and confirmed it experimentally, combining the methods of biometry, as developed by Francis Galton, with the individual selection method and pedigree analysis, as developed for instance by Louis Vilmorin. The term "genotype" was introduced in W. Johannsen's 1909 treatise ("Elemente der Exakten Erblichkeitslehre." Jena: Gustav Fischer), but the idea of a stable underlying biological "type" distinct from observable properties was the core idea of his classical bean selection experiment published 6 years earlier (W. Johannsen, 1903. "Ueber Erblichkeit in Populationen und reinen Linien. Ein Beitrag zur Beleuchtung schwebender Selektionsfragen." Jena: Gustav Fischer, pp. 58-59).
The individual ontological foundation of population analysis was a self-evident presupposition in Johannsen's studies of heredity in populations from their start in the early 1890s till his death in 1927. The claim that there was a "substantial but cautious modification of Johannsen's phenotype-genotype distinction" (Churchill, 1974, p. 24) from a statistical to an individual ontological perspective derives from a misreading of the 1903 and 1909 texts. The immediate purpose of this paper is to correct this reading of the 1903 monograph by showing how its problems and results grow out of Johannsen's earlier work in heredity and plant breeding. Johannsen presented his famous selection experiment as the culmination of a line of criticism of orthodox Darwinism by William Bateson, Hugo de Vries, and others (Johannsen, 1903). They had argued that evolution is based on stepwise rather than continuous change in heredity. Johannsen's paradigmatic experiment showed how stepwise variation in heredity could be operationally distinguished from the observable, continuous morphological variation. To test Galton's law of partial regression, Johannsen deliberately chose pure lines of self-fertilizing plants, a pure line being the descendants in successive generations of one single individual. Such a population could be assumed to be highly homogeneous with respect to hereditary type, and Johannsen found that selection produced no change in this type. Galton, he explained, had experimented with populations composed of a number of stable hereditary types. The partial regression which Galton found was simply an effect of selection between types, increasing the proportion of some types at the expense of others. (shrink)
In this paper, we consider a few actual cases of mnemonic strategies among older subjects (older than 65). The cases are taken from an ethnographic study examining how elderly adults cope with cognitive decline. We believe that these cases illustrate that the process of remembering in many cases involves a complex distributed web of processes drawing on both internal (intracranial) and external sources. Our cases illustrate that the nature of distributed remembering is shaped by and subordinated to the dynamic characteristics of the ongoing activity, and to our minds they suggest that research on memory and distributed cognition should focus on the process of remembering through detailed descriptions and analysis of naturally occurring situations.
We discuss a recent approach to investigating cognitive control, which has the potential to deal with some of the challenges inherent in this endeavour. In a model-based approach, the researcher defines a formal, computational model that performs the task at hand and whose performance matches that of a research participant. The internal variables in such a model might then be taken as proxies for latent variables computed in the brain. We discuss the potential advantages and pitfalls of such an approach for the study of the neural underpinnings of cognitive control, and we make explicit the assumptions underlying the interpretation of data obtained using this approach.
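As a toy version of this model-based strategy, the sketch below has a simple Rescorla-Wagner learner "perform" a learning task; its per-trial prediction errors are the internal variables that, in the approach described, would serve as proxies for latent variables computed in the brain. The model, task, and parameter value are invented for illustration and are not drawn from the paper.

```python
def rescorla_wagner(rewards, alpha=0.3):
    """Track the expected value of a repeatedly rewarded option and return
    the per-trial prediction errors -- the model's internal (latent) variable."""
    v = 0.0
    prediction_errors = []
    for r in rewards:
        pe = r - v            # prediction error: outcome minus expectation
        v += alpha * pe       # learning update
        prediction_errors.append(pe)
    return prediction_errors

# A deterministic task: the option pays off 1.0 on every trial.
pes = rescorla_wagner([1.0] * 20)

# Early in learning the model is surprised; later it is not. In the
# model-based approach, this decaying trace is the quantity one would
# relate to, e.g., measured neural signals.
```

The interpretive assumption the paper examines is exactly the step from "the fitted model computes this trace" to "the brain computes something like this trace".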
Economics and culture are in a complex, developing relation to each other. Yet to introduce 'culture' into economic theory requires, first of all, an appropriate understanding of culture itself. The crucial point of this paper is that culture, in its development and structure, is only understandable if one considers it in connection with the autonomous structural development of the forms with which subjects experience and construct their world. In recognition of the socio-cultural organization of human society, there is no absolute autonomy of individuals in comparison to society and economics, while together with this interdependency the development of rationality exceeds mere instrumentality. Through ontogenesis, every individual is located 'within the boundaries of society'. What are the consequences for economic theory? First of all: economics is a cultural science in a double sense. Its object is the changing world of economic phenomena, which are bound to a very specific cultural context. However, culture is not only relevant for the phenomena of socio-economic life, but also for the phenomena of economic science, i.e. for the development of economic thought.
The paper describes a methodology to be used for the analysis and design of human activity systems. The methodology is based on an analysis of the decision settings, whereas most other decision analysis methodologies analyse the process. The decision concept is analysed and discussed. A distinction between programmed and programmable as well as non-programmed and non-programmable decisions is proposed. A classification of different information types for decision making is presented. A methodology based on a systemic and systematic analysis of the information requirements of an organization is proposed. This methodology also reveals organizational discrepancies and information imbalances. The methodology focuses on the settings of decisions at all levels of an organization. The methodology can be regarded as a dynamic, learning system. The author proposes further research on individuals' decision-making abilities.
At early ages, Buber, Scholem, and Rosenzweig encountered Nietzsche's work. Nietzsche's philosophy was reduced to short catchwords or barely mentioned in their later writings. His views on Jews and Judaism seemed to have mattered little; he first and foremost aided their rebellious breaks with both traditional and enlightened concepts of God. Nietzsche's proclamation of God's death thus served them to articulate their own unease with religious traditions. Yet in many ways the confrontation with Nietzsche was both attenuated and accentuated by the concept of Erlebnis and the elevation of aesthetic categories. Ironically, Nietzsche's challenge to Jewish thought lay less in his alleged anti-religious stance than in the celebration of an unmitigated experience, which was incompatible with any attempt to forge a new critical Jewish philosophy.