This essay draws on two emerging fields—the study of comics or graphic fiction, and disability studies—to demonstrate how graphic fictions articulate the embodied, ethical, and sociopolitical experiences of impairment and disability. Examining David B.’s Epileptic and Paul Karasik and Judy Karasik’s The Ride Together, I argue that these graphic novels unsettle conventional notions of normalcy and disability. In so doing, they also challenge assumptions about the dimensions and possibilities of the comics genre and medium, demonstrating the great potential comics hold for disability studies.
Our understanding of subjunctive conditionals has been greatly enhanced through the use of possible world semantics and, more precisely, by the idea that they involve variably strict quantification over possible worlds. I propose to extend this treatment to ceteris paribus conditionals - that is, conditionals that incorporate a ceteris paribus or 'other things being equal' clause. Although such conditionals are commonly invoked in scientific theorising, they traditionally arouse suspicion and apprehension amongst philosophers. By treating ceteris paribus conditionals as a species of variably strict conditional I hope to shed new light upon their content and their logic.
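For orientation, here is a minimal sketch of the kind of truth condition the abstract invokes, stated under the simplifying Limit Assumption; the second clause, with its restriction to worlds where 'other things are equal', is my illustration of the proposed extension rather than the author's own formulation:

\[
w \models A \mathbin{\Box\!\!\rightarrow} B \iff \text{every closest } A\text{-world to } w \text{ is a } B\text{-world};
\]
\[
w \models A \mathbin{\Box\!\!\rightarrow}_{cp} B \iff \text{every closest } (A \wedge Eq_w)\text{-world to } w \text{ is a } B\text{-world},
\]

where \(Eq_w\) stands for the (context-dependent) body of 'other things' held fixed at \(w\).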
NOTE: This paper is a reworking of some aspects of a previous paper of mine – ‘What else justification could be’ published in Noûs in 2010. I’m currently in the process of writing a book developing and defending some of the ideas from this paper. What follows will, I hope, fall into place as one of the chapters of this book – though it is still very much at the draft stage. Comments are welcome. My concern in this paper is with a certain, pervasive picture of epistemic justification. On this picture, acquiring justification for believing something is essentially a matter of minimising one’s risk of error – so one is justified in believing something just in case it is sufficiently likely, given one’s evidence, to be true. This view is motivated by an admittedly natural thought: If we want to be fallibilists about justification then we shouldn’t demand that something be certain – that we completely eliminate error risk – before we can be justified in believing it. But if justification does not require the complete elimination of error risk, then what could it possibly require if not its minimisation? If justification does not require epistemic certainty then what could it possibly require if not epistemic likelihood? When all is said and done, I’m not sure that I can offer satisfactory answers to these questions – but I will attempt to trace out some possible answers here. The alternative picture that I’ll outline makes use of a notion of normalcy that I take to be irreducible to notions of statistical frequency or predominance.
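A schematic contrast between the two pictures, offered only as a gloss on the abstract (the threshold formulation of the risk-minimisation view is a standard rendering, and the normalcy clause is my paraphrase of the alternative gestured at, not the author's final statement):

\[
\text{Risk-minimisation: } J_E(p) \iff \Pr(p \mid E) \ge t \quad \text{for some threshold } t < 1;
\]
\[
\text{Normalcy: } J_E(p) \iff p \text{ holds in all sufficiently normal worlds in which the evidence } E \text{ holds},
\]

where normalcy is, on the proposal, not reducible to statistical frequency or predominance.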
Many believe that severe intellectual impairment, blindness or dying young amount to serious harm and disadvantage. It is also increasingly denied that it matters, from a moral point of view, whether something is biologically normal to humans. We show that these two claims are in serious tension. It is hard to explain how, if we do not ascribe some deep moral significance to human nature or biological normality, we could distinguish severe intellectual impairment or blindness from the vast list of seemingly innocent ways in which we fail to have as much wellbeing as we could, such as not having super-intelligence, or not living to 130. We consider a range of attempts to draw this intuitive normative distinction without appealing to normality. These, we argue, all fail. But this doesn't mean that we cannot draw this distinction or that we must, implausibly, conclude that biological normality does possess an inherent moral importance. We argue that, despite appearances, it is not biological normality but rather statistical normality that, although lacking any intrinsic moral significance, nevertheless makes an important moral difference in ways that explain and largely justify the intuitive distinction.
Taking the visual appeal of the ‘bell curve’ as an example, this paper discusses to what extent the availability of quantitative approaches (here: statistics) that comes with representational standards directly affects qualitative concepts of scientific reasoning (here: normality). Within this paper I focus on the relationship between normality, as defined by the scientific enterprise, and the normativity that results from the very process of standardisation itself. Two hypotheses guide this analysis: (1) normality, as it is defined by the natural and the life sciences, must be regarded as an ontological fiction, albeit an epistemologically important one, and (2) standardised, canonical visualisations (such as the ‘bell curve’) shape scientific thinking and reasoning to a significant degree. I restrict my analysis to the epistemological function of scientific representations of data: this means identifying key strategies of producing graphs and images in scientific practice. As a starting point, it is crucial to evaluate to what degree graphs and images can be seen as guiding scientific reasoning itself, for instance by attributing to them a certain epistemological function within a given field of research.
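For reference, the 'bell curve' taken as the paper's running example is simply the graph of the Gaussian (normal) density; this is the standard textbook formula, not something drawn from the paper itself:

\[
f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}},
\]

where \(\mu\) is the mean and \(\sigma\) the standard deviation. The curve's symmetry and single peak around \(\mu\) are part of the visual appeal that the paper links to intuitions about what counts as 'normal'.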
Although an invasive medical intervention, Deep Brain Stimulation (DBS) has been regarded as an efficient and safe treatment of Parkinson’s disease for the last 20 years. In terms of clinical ethics, it is worth asking whether the use of DBS may have unanticipated negative effects similar to those associated with other types of psychosurgery. Clinical studies of epileptic patients who have undergone an anterior temporal lobectomy have identified a range of side effects and complications in a number of domains: psychological, behavioural, affective and social. In many cases, patients express difficulty adjusting from being chronically ill to their new status as ‘treated’ or ‘seizure free’. This postoperative response adjustment has been described in the literature on epilepsy as the ‘Burden of Normality’ (BoN) syndrome. Most of the discussion about DBS postoperative changes to self is focused on abnormal side effects caused by the intervention (ie, hypersexuality, hypomania, etc). By contrast, relatively little attention is paid to the idea that successfully ‘treated’ individuals might experience difficulties in adjusting to becoming ‘normal’. The purpose of this paper is (1) to articulate the postoperative DBS psychosocial adjustment process in terms of the BoN syndrome, (2) to address whether the BoN syndrome illustrates that DBS treatment poses a threat to the patient’s identity, and (3) to examine whether the current framework for rehabilitation after DBS procedures should be updated and take into account the BoN syndrome as a postoperative self-change response.
This article proposes a solution to the ‘paradox of normalcy’, a problem raised by the early Frankfurt School in its questioning of basic concepts of psychoanalysis. After reviewing the different definitions of normalcy put forward by Freud, the paradoxical character of the concept of normalcy, as perceived by the various members of the Frankfurt School, will be made explicit. The solution to the paradox will take the form of a practical ‘dis-solution’, and will bring to the fore a fundamental principle of Critical Theory identified as the ‘banning of graven images’, which will be shown to operate even in the contemporary work of Habermas.
Although the use of new health technologies in healthcare and medicine is generally seen as beneficial, there has been little analysis of the impact of such technologies on people's lives and understandings of health and illness. This book explores how new technologies not only provide hope for cure and well-being, but also introduce new ethical dilemmas and raise questions about the "natural" body. Focusing on the ways new health technologies intervene into our lives and affect our ideas about normalcy, the body and identity, New Health Technologies explores: how new health technologies are understood by lay people and patients; how the outcomes of these technologies are communicated in various clinical settings; and how these technologies can alter our notions of health and illness and create "new illness." Written by authors with differing backgrounds in phenomenology, social psychology, social anthropology, communication studies and the nursing sciences, this book is essential reading for students and academics of medical sociology, health and allied studies, and anyone with an interest in new health technologies.
It is widely assumed that the scope of indefinites is island insensitive, i.e., that, generally, an indefinite inside of a syntactic island, such as an adjunct clause, is capable of taking scope outside of that island. This paper challenges this assumption by studying the scope behaviour of the Spanish plural indefinite algunos (roughly, ‘some (pl.)’). It presents an experimental study that shows that the scope of algunos is not free and depends on its syntactic environment, at least in the dialect of Spanish studied here. The paper discusses some of the implications of the study for current theories of indefinite scope: it points out the problems that choice functions and singleton indefinites have with the Spanish data, and it also discusses the implications for Schwarzschild's (2002) solution to the so-called ‘Donald Duck’ problem.
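To make the scope contrast concrete (the example sentence and formulas below are my illustration, not material from the study): for an indefinite inside a conditional antecedent, as in 'if some relative of mine dies, I will inherit a house', the two candidate readings are

\[
\text{narrow (island-internal) scope: } (\exists x\,[\mathrm{relative}(x) \wedge \mathrm{dies}(x)]) \rightarrow \mathrm{inherit};
\]
\[
\text{wide (island-escaping) scope: } \exists x\,[\mathrm{relative}(x) \wedge (\mathrm{dies}(x) \rightarrow \mathrm{inherit})].
\]

The received view is that indefinites freely allow the second, island-escaping reading; the experimental claim here is that algunos does not do so freely.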
Normic laws have the form "if A, then normally B." They are omnipresent in everyday life and in non-physical 'life' sciences such as biology, psychology, the social sciences, and the humanities. They differ significantly from ceteris-paribus laws in physics. While several authors have doubted that normic laws are genuine laws at all, others have argued that normic laws express a certain kind of prototypical normality which is independent of statistical majority. This paper presents a foundation for normic laws which is based on generalized evolution theory and explains their omnipresence, lawlikeness, and reliability. It is argued that the evolutionary origin of normic laws establishes a systematic connection between prototypical and statistical normality.
The biological sciences employ a concept of normality that must be distinguished from statistical or value concepts. The concept of normality is presupposed in the standard explications of biological functions, and it is crucial to the strategy of explanation by approximations in, for example, physiology. Nevertheless, this concept of normality does not seem to be captured in the language of physics. Thus attempts at explaining the methodological relationship between the biological sciences and the physical sciences by concentrating only on the concept of biological function cannot go very far. An analysis of the concept of normality is also necessary.
This article draws on Husserl’s manuscripts from the 1920s and 1930s (especially on the as-yet unpublished D-manuscripts), arguing that each concrete experience is governed by an irreducible tension between two intersecting normative dimensions: primordial and intersubjective. Husserl’s ideas of normality and normativity have gained a lot of attention in recent years, but the normative aspects of primordial constitution have not been properly taken into account. By arguing for the “normative tension” between the primordial and the intersubjective, this article contributes to filling this lacuna. In doing so, it sheds new light on the debate concerning the relationship between genetic and generative phenomenology, challenging interpretations that exclusively render either the genetic-primordial or the generative-intersubjective as the constitutive absolute.
The article asserts that Goffman's concept of normality comes close to the notion of trust as a protective mechanism that prevents chaos and disorder by providing us with feelings of safety, certainty, and familiarity. Arguing that to account for the tendency of social order to be seen as normal we need to conceptualize trust as the routine background of everyday interaction, the article analyzes Goffman's concepts of normal appearances, stigma, and frames as devices for endowing social order with predictability, reliability, and legibility. For Goffman, normality is a collective achievement, which is possible because of the orderliness of interactional activities, which is, in turn, predicated "on a large base of shared cognitive presuppositions, if not normative ones, and self-sustained restraints" (Goffman 1983, American Sociological Review 48:1–53, p. 5).
The paper presents the main ideas of Ultrafilter Logic (UL), as introduced by Veloso and others. A new proposal, Normality Logic (NL), is outlined for expanding the expressive power of UL. The system NL appears to offer a simpler solution to the problem of expressive power than the sorting strategy of Carnielli and Veloso. Interpretations of NL are discussed and an important point of contact with Hansson's notion of non-prioritized belief revision is noted.
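For readers unfamiliar with UL, its core clause is usually presented roughly as follows (a sketch from the standard presentations, not from this paper; notational details vary): the logic adds a generalized quantifier ∇, read 'generally' or 'for almost all', whose semantics is fixed by an ultrafilter over the domain,

\[
(\mathfrak{M}, \mathcal{U}) \models \nabla x\, \varphi(x) \iff \{ a \in M : (\mathfrak{M}, \mathcal{U}) \models \varphi[a] \} \in \mathcal{U},
\]

so that ∇x φ(x) says the set of φ-instances is 'large' in the sense given by the ultrafilter 𝒰.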
In response to our target article, many of the commentators concentrated on our notion of Residual Normality. In our response, we focus on the questions raised by this idea. However, we also examine broader issues concerning the importance of incorporating a realistic theory of the process of development into explanations of developmental deficits.
Thomas & Karmiloff-Smith’s (T&K-S’s) argument that the Residual Normality assumption is not valid for developmental disorders has implications for models of cognition in schizophrenia, a disorder that may involve a neurodevelopmental pathogenesis. A limiting factor for such theories is the lack of understanding about the nature of the cognitive system (modular components versus global processes). Moreover, it is unclear how the proposal that modularization emerges from developmental processes would change that fundamental question.
This paper reflects on the normal personality, its relationship to ethical-moral values, the respects in which the personality of the patient with neurotic disorders departs from normality, and the way in which several criteria of normality amount to refinements of the concept of ethical-moral value. It concludes that the personality of the patient with neurotic disorders departs from the majority of the analysed criteria of personality normality: the criterion of absence of psychopathology, the statistical criterion, the criterion of interpersonal relationships, the developmental criterion, and the functional criterion.
Thomas & Karmiloff-Smith (T&K-S) correctly identify Residual Normality (RN) as a critical assumption of some theorising about mental structure within developmental psychology. However, their simulations provide only weak support for the conditions under which RN may occur because they explore closely related architectures that share a learning algorithm. It is suggested that more work is required to establish the limits of RN.
We agree with the critique of the Residual Normality assumption. Moreover, we challenge monolithic views of functional normality. Throughout life, development and adaptation require variations in cortical functional circuitry within and across individuals. We propose the principle of “coconstructed functionality” which maintains that brain-behavior functional correspondences are dynamically coproduced by neurobiological, experiential, and contextual processes.
Thomas & Karmiloff-Smith (T&K-S) show that the assumption of residual normality (RN) does not hold in connectionist simulations, and argue that RN has been inappropriately applied to childhood disorders. We agree. However, we suggest that the RN hypothesis may never have been fully viable, either empirically or computationally.
From the end of the First World War, a broad discussion took place within the framework of the revived German constitutional teaching on the question of the physical normality of man. The founder of the so-called statistical concept of normality, which preceded the still widespread normal (reference) interval concept, is H. Rautmann, who gave it the character of a tool for discriminating between health and disease. Among some of his successors (Bauer, Borchardt, Günther), however, it was considered more a means of establishing a type, without supposing any precise relation between the frequency of a character in the population and the probability of the occurrence of disease. The concept of a statistical norm as a certain region of the variation range of a character determined by the parameters of Gaussian distribution was criticized both by the supporters of the ideal norm (Hildebrandt) and those who were in favour of a ‘personal’ norm (Grote). The underlying motifs of these three conceptions of normality influenced German constitutional doctrine until after the end of the Second World War, but without a satisfactory solution to the diagnosis of physical normality being found. Since the 1950s, world medicine has moved more and more in the direction of prevention, with the emphasis on a study of individual dispositions to disease and its precursors. In this connection a new view of health has gained importance whereby it is considered a smoothly gradated condition, not sharply distinguished from disease (a ‘continuous’ model of health and disease as opposed to the previous ‘alternative’ model). The purpose of diagnostic characters is no longer merely to place patients in clearly defined categories as healthy or affected by one disease or another, but has taken on the function of indices of the disposition to disease among those who exhibit ‘gross normality’. Discrimination between the alternative and continuous models allows a clarification to be made of the sources of the confusion in which the pre-war concept of statistical normality had found itself. Today many exceptions are known to the rule that the functional optimum lies in the region of the population mean, both for the population as a whole and for individuals; and immense variability has been found in the manner in which individuals in the population attain health. Thus a distinction between health and disease by statistical means alone (such as establishing some sort of species design) is not possible at all. The ideal of a personal norm is pursued today through the concept of a multivariate norm backed up by modern data processing methods, though it was anticipated in principle by Kaup even in the 1920s.
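As a point of reference for the 'normal (reference) interval concept' mentioned above (a standard textbook formulation, not drawn from the article itself): for a character assumed to follow a Gaussian distribution with mean \(\mu\) and standard deviation \(\sigma\), the reference interval is typically taken as the central region

\[
[\mu - 1.96\,\sigma,\; \mu + 1.96\,\sigma],
\]

covering roughly 95% of the population; values falling outside it are flagged as 'abnormal', which is precisely the move whose diagnostic value the pre-war debate called into question.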
This article analyses the different connotations of “normality” and “being natural,” bringing together the theoretical discussion from both human medicine and veterinary medicine. We show how the interpretations of the concepts in the different areas could be mutually fruitful. It appears that the conceptions of “natural” are more elaborate in veterinary medicine, and can be of value to human medicine. In particular they can nuance and correct conceptions of nature in human medicine that may be too idealistic. Correspondingly, the wide-ranging conceptions of “normal” in human medicine may enrich conceptions in veterinary medicine, where the discussions seem to be sparse. We do not argue that conceptions from veterinary medicine should be used in human medicine and vice versa, but only that it could be done and that it may well be fruitful. Moreover, there are overlaps between some notions of normal and natural, and further conceptual analysis on this overlap is needed.
For a classical theory T, ℋ(T) denotes the intuitionistic theory of T-normal (i.e. locally T) Kripke structures. S. Buss has asked for a characterization of the theories in the range of ℋ and raised the particular question of whether HA is an ℋ-theory. We show that Tⁱ ∈ range(ℋ) iff Tⁱ = ℋ(T). As a corollary, no fragment of HA extending iΠ₁ belongs to the range of ℋ. A. Visser has already proved that HA is not in the range of ℋ by different methods. We provide more examples of theories not in the range of ℋ. We show PA-normality of once-branching Kripke models of HA + MP, where it is not known whether the same holds if MP is dropped.
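For orientation, a rough reconstruction of the definitions the abstract presupposes (my gloss on the standard setup, not text from the paper): a Kripke structure for the language of arithmetic is T-normal when the classical structure attached to each node is a model of T, and

\[
\mathcal{H}(T) = \{\varphi : \varphi \text{ is forced at every node of every } T\text{-normal Kripke structure}\};
\]

Buss's question is then which intuitionistic theories arise as ℋ(T) for some classical theory T.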
One of the debated issues regarding Residual Normality (RN) is frequency sensitivity in Williams syndrome (WS). We present some data on frequency sensitivity in Hungarian WS subjects. Based on vocabulary measures, we suggest that instead of the across-the-board frequency insensitivity proposed by some, a higher frequency threshold characterizes these subjects’ performance. Results from a category fluency task show that whereas frequency sensitivity in WS is in line with controls, error patterns imply a qualitatively distinct, looser categorical organization. Regarding the much-debated issue of morphological overgeneralizations, our data suggest that frequency sensitivity cuts across the divisions proposed by dual-process theories. In general, some of the frequency effects are the same as in typically developing populations, but with a delayed pattern. Frequency may be interpreted as supporting RN, but in WS it operates with higher thresholds that might be a general processing feature of WS individuals.
This paper presents an adaptive logic enhancement of conditional logics of normality that allows for defeasible applications of Modus Ponens to conditionals. In addition to the possibilities these logics already offer for reasoning about conditionals, they are in this way enriched with the ability to perform default inferencing. The idea is to apply Modus Ponens defeasibly to a conditional and a fact on the condition that it is ‘safe’ to do so concerning the factual and conditional knowledge at hand. It is for instance not safe if the given information describes exceptional circumstances: although birds usually fly, penguins are exceptional to this rule. The two adaptive standard strategies are shown to correspond to different intuitions, a skeptical and a credulous reasoning type, which manifest themselves in the handling of so-called floating conclusions.
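Schematically, and using the abstract's own bird/penguin example (the rule format below is my gloss, not the paper's formal apparatus): defeasible Modus Ponens licenses

\[
\frac{A \Rightarrow B \qquad A}{B} \quad \text{provided it is `safe', i.e. nothing in the premises marks the case as exceptional,}
\]

so from bird(t) ⇒ flies(t) and bird(tweety) one may defeasibly conclude flies(tweety), but the conclusion is withdrawn once penguin(tweety) is added to the premises.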
This paper addresses a recent suggestion that moral particularists can extend their view to countenance default reasons (at a first stab, reasons that are pro tanto unless undermined) by relying on certain background expectations of normality. I first argue that normality must be understood non-extensionally. Thus if default reasons rest on normality claims, those claims won't bestow upon default reasons any definite degree of extensional generality. Their generality depends rather on the contingent distributional aspects of the world, which no theory of reasons should purport to settle. Appeals to default reasons cannot therefore uniquely support particularism. But this argument also implies that if moral generalism entailed that moral reasons by necessity have invariant valence (in the natural extensional sense), it would be a non-starter. Since generalism is not a non-starter, my argument forces us to rethink the parameters of the generalism-particularism debate. Here I propose to clarify the debate by focusing on its modal rather than extensional aspects. In closing, I outline the sort of generalism that I think is motivated by my discussion, and then articulate some worries this view raises about the theoretical usefulness of the label ‘default reason’.
I contrast two approaches to the interpretation of generics such as ‘ravens are black’: majority-based views, on which they are about what is the case most of the time, and inquiry-based views, on which they are about a feature we focus on in inquiry. I argue that majority-based views face far more systematic counterexamples than has previously been supposed. They cannot account for generics about kinds with multiple characteristic properties, such as ‘elephants live in Africa and Asia’. I then go on to sketch an inquiry-based view.
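A rough statement of the majority-based reading at issue (my formalization, not the author's): 'Ks are F' is true iff most Ks are F,

\[
[\![\text{Ks are F}]\!] = 1 \iff \mathrm{Most}\,x\,[K(x)]\,[F(x)].
\]

On this reading 'elephants live in Africa and Asia' would require that most individual elephants live in both continents, which is false even though the generic sounds true; this is the kind of systematic counterexample pressed against majority-based views.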
I contrast two approaches to the interpretation of generics such as ‘ravens are black’: majority-based views, on which they are about what is the case most of the time, and inquiry-based views, on which they are about a feature we focus on in inquiry – an inductive target. I argue that while majority-based views are preferable based on the most basic data about generics, only inquiry-based views can account for a systematic class of sentences: generics with logically complex predicates, such as ‘cats are black, white, and ginger’. Thus, inquiry-based views should carry the day. I then go on to sketch a theory of inductive targets.
This is a comprehensive resource of original essays by leading thinkers exploring the newly emerging inter-disciplinary field of the philosophy of psychiatry. The contributors aim to define this exciting field and to highlight the philosophical assumptions and issues that underlie psychiatric theory and practice, the category of mental disorder, and rationales for its social, clinical and legal treatment. As a branch of medicine and a healing practice, psychiatry relies on presuppositions that are deeply and unavoidably philosophical. Conceptions of rationality, personhood and autonomy frame our understanding and treatment of mental disorder. Philosophical questions of evidence, reality, truth, science, and values give meaning to each of the social institutions and practices concerned with mental health care. The psyche, the mind and its relation to the body, subjectivity and consciousness, personal identity and character, thought, will, memory, and emotions are equally the stuff of traditional philosophical inquiry and of the psychiatric enterprise. A new research field – the philosophy of psychiatry – began to form during the last two decades of the twentieth century. Prompted by a growing recognition that philosophical ideas underlie many aspects of clinical practice, psychiatric theorizing and research, mental health policy, and the economics and politics of mental health care, academic philosophers, practitioners, and philosophically trained psychiatrists have begun a series of vital, cross-disciplinary exchanges. This volume provides a sampling of the research yield of those exchanges. Leading thinkers in this area, including clinicians, philosophers, psychologists, and interdisciplinary teams, provide original discussions that are not only expository and critical, but also a reflection of their authors' distinctive and often powerful and imaginative viewpoints and theories. All the discussions break new theoretical ground. As befits such an interdisciplinary effort, they are methodologically eclectic, and varied and divergent in their assumptions and conclusions; together, they comprise a significant new exploration, definition, and mapping of the philosophical aspects of psychiatric theory and practice.
The 1990 Americans with Disabilities Act enacted a conceptual shift in the meaning of ‘disability.’ Rather than defining ‘disability’ as a disadvantageous physical or mental deficit of persons, it codifies the understanding of ‘disability’ as a defective state of society which disadvantages these persons. In contrast, the standard medical model incorrectly conceptualizes disabled persons as biologically inferior, and thus confines them to the role of recipients of benevolence or care. Turning to an ethic of caring yields counter-intuitive results that conflict with the conceptual apparatus of the ADA. It is argued that in order to liberate social thought from this medical model and thus move the disabled from being socially marginalized to being socially enabled, one must re-conceptualize current practice by adopting the ADA's conceptual framework. Keywords: caring, disability, equality, ethics, health care policy
It is impossible to discuss these constructs in a single coherent essay. The following three rejoinders address each of these exceedingly complex constructs individually, as each relates to the two-path model of sociopathy and psychopathy.
We recall some notions introduced and developed by António Aniceto Monteiro, and show how these notions have been used and generalised, thus establishing a direct and indirect influence of Monteiro’s work that extends to this day.
Normic Laws and the Significance of Nonmonotonic Reasoning for Philosophy of Science. Normic laws have the form ‘if A then normally B’. They have been discovered in the explanation debate, but were considered as empirically vacuous (§1). I argue that the prototypical (or ideal) normality of normic laws implies statistical normality (§2), whence normic laws have empirical content. In §3–4 I explain why reasoning from normic laws is nonmonotonic, and why the understanding of the individual case is so important here. After sketching some foundations of nonmonotonic reasoning as developed by AI researchers (§5), I argue that normic laws are also the best way to understand ceteris paribus laws (§6). §7 deals with the difference between physical and non-physical disciplines and §9 with the difference between normicity and approximation. In §8 it is shown how nonmonotonic reasoning provides a new understanding of the protection of theories against falsification by auxiliary hypotheses. §10, finally, gives a system- and evolution-theoretical explanation of the deeper reason for the omnipresence of normic laws in practice and science, and for the connection between ideal and statistical normality.
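A compact way to put the §2 claim (my paraphrase; the paper's own explication may differ in detail): if 'if A then normally B' is a correct normic law, then the conditional probability (or frequency) of B given A must be high,

\[
(A \Rightarrow B) \;\text{ correct} \;\Longrightarrow\; P(B \mid A) \ge r \quad \text{for some high threshold } r > 0.5,
\]

which is what gives normic laws empirical content, while reasoning with them remains nonmonotonic: adding information C that marks an exception can defeat the inference from A to B.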