There has been much debate regarding the 'double effect' of sedatives and analgesics administered at the end of life, and the possibility that health professionals using these drugs are performing 'slow euthanasia.' On the one hand, analgesics and sedatives can do much to relieve suffering in the terminally ill. On the other hand, they can hasten death. According to a standard view, the administration of analgesics and sedatives amounts to euthanasia when the drugs are given with an intention to hasten death. In this paper we report a small qualitative study based on interviews with 8 Australian general physicians regarding their understanding of intention in the context of questions about voluntary euthanasia, assisted suicide and particularly the use of analgesic and sedative infusions (including the possibility of voluntary or non-voluntary 'slow euthanasia'). We found a striking ambiguity and uncertainty regarding intentions amongst the doctors interviewed. Some were explicit in describing a 'grey' area between palliation and euthanasia, or a continuum between the two. Not one of the respondents was consistent in distinguishing between a foreseen death and an intended death. A major theme was that 'slow euthanasia' may be more psychologically acceptable to doctors than active voluntary euthanasia by bolus injection, partly because the former would usually only result in a small loss of 'time' for patients already very close to death, but also because of the desirable ambiguities surrounding causation and intention when an infusion of analgesics and sedatives is used. The empirical and philosophical implications of these findings are discussed.
The moral importance of the ‘intention–foresight’ distinction has long been a matter of philosophical controversy, particularly in the context of end-of-life care. Previous empirical research in Australia has suggested that general physicians and surgeons may use analgesic or sedative infusions with ambiguous intentions, their actions sometimes approximating ‘slow euthanasia’. In this paper, we report findings from a qualitative study of 18 Australian palliative care medical specialists, using in-depth interviews to address the use of sedation at the end of life. The majority of subjects were agnostic or atheistic. In contrast to their colleagues in acute medical practice, these Australian palliative care specialists were almost unanimously committed to distinguishing their actions from euthanasia. This commitment appeared to arise principally from the need to maintain a clear professional role, and not obviously from an ideological opposition to euthanasia. While some respondents acknowledged that there are difficult cases that require considered reflection upon one's intention, and where there may be some ‘mental gymnastics,’ the nearly unanimous view was that it is important, even in these difficult cases, to cultivate an intention that focuses exclusively on the relief of symptoms. We present four narratives of ‘terminal’ sedation – cases where sedation was administered in significant doses just before death, and may well have hastened death. Considerable ambiguities of intention were evident in some instances, but the discussion around these clearly exceptional cases illustrates the importance of intention to palliative care specialists in maintaining their professional roles.
In an article somewhat ironically entitled “Disambiguating Clinical Intentions,” Lynn Jansen promotes an idea that should be bewildering to anyone familiar with the literature on the intention/foresight distinction. According to Jansen, “intention” has two commonsense meanings, one of which is equivalent to “foresight.” Consequently, questions about intention are “infected” with ambiguity—people cannot tell what they mean and do not know how to answer them. This hypothesis is unsupported by evidence, but Jansen states it as if it were accepted fact. In this reply, we make explicit the multiple misrepresentations she has employed to make her hypothesis seem plausible. We also point out the ways in which it defies common sense. In particular, Jansen applies her thesis only to recent empirical research on the intentions of doctors, totally ignoring the widespread confusion that her assertion would imply in everyday life, in law, and indeed in religious and philosophical writings concerning the intention/foresight distinction and the Principle of Double Effect.
Are the methods of synthetic biology capable of recreating authentic living members of an extinct species? An analogy with the restoration of destroyed natural landscapes suggests not. The restored version of a natural landscape will typically lack much of the aesthetic value of the original landscape because of the different historical processes that created it—processes that involved human intentions and actions, rather than natural forces acting over millennia. By the same token, it would appear that synthetically recreated versions of extinct natural organisms will also be less aesthetically valuable than the originals; that they will be, in some strong sense, ‘inauthentic’, because of their peculiar history and mode of origin. I call this the ‘genesis argument’ against de-extinction. In this article I critically evaluate the genesis argument. I highlight an important disanalogy between living organisms and natural landscapes: viz., it is of the essence of the former, but not of the latter, to regularly reproduce and die. The process of iterated natural reproduction that sustains the continued existence of a species through time obviously does not undermine the authenticity of the species. I argue that the authenticity of a species will likewise be left intact by the kind of artificial copying of genes and traits that a de-extinction project entails. I conclude on this basis that the genesis argument is unsound.
Cell and tissue-based products, such as autologous adult stem cells, are being prescribed by physicians across the world for diseases and illnesses for which they have neither been approved nor demonstrated to be safe and effective in formal clinical trials. These doctors often form part of informal transnational networks that exploit differences and similarities in the regulatory systems across geographical contexts. In this paper, we examine the regulatory infrastructure of five geographically diverse but socio-economically comparable countries with the aim of identifying similarities and differences in how these products are regulated and governed within clinical contexts. We find that while there are many subtle technical differences in how these regulations are implemented, they are sufficiently similar that it is difficult to explain why these practices appear more prevalent in some countries and not in others. We conclude with suggestions for how international governance frameworks might be improved to discourage the exploitation of vulnerable patient populations while enabling innovation in the clinical application of cellular therapies.
David Lewis describes, then attempts to refute, a simple anti-Humean theory of desire he calls ‘Desire as Belief’. Lewis’ critics generally accept that his argument is sound and focus instead on trying to show that its implications are less severe than appearances suggest. In this paper I argue that Lewis’ argument is unsound. I show that it rests on an essential assumption that can be straightforwardly proven false using ideas and principles to which Lewis is himself committed.
This chapter introduces the two main philosophical questions that are raised by the prospect of extinct species being brought back from the dead—namely, the ‘Authenticity Question’ and the ‘Ethical Question’. It distinguishes different types of de-extinction, and different methods by which de-extinction can be accomplished. Finally, it examines the aims of wildlife conservation with a view to whether they are compatible with de-extinction, or not.
The counterfactual account of physical computation is simple and, for the most part, very attractive. However, it is usually thought to trivialize the notion of physical computation insofar as it implies ‘limited pancomputationalism’, this being the doctrine that every deterministic physical system computes some function. Should we bite the bullet and accept limited pancomputationalism, or reject the counterfactual account as untenable? Jack Copeland would have us do neither of the above. He attempts to thread a path between the two horns of the dilemma by buttressing the counterfactual account with extra conditions intended to block certain classes of deterministic physical systems from qualifying as physical computers. His theory is called the ‘algorithm execution account’. Here we show that the algorithm execution account entails limited pancomputationalism, despite Copeland’s argument to the contrary. We suggest, partly on this basis, that the counterfactual account should be accepted as it stands, pancomputationalist warts and all.
This chapter surveys and critically evaluates all the main arguments both for and against de-extinction. It presents a qualified defence of the claim that conservationists should embrace de-extinction. It ends with a list of do’s and don’ts for conservationist de-extinction projects.
This book is about the philosophy of de-extinction.

CHAPTER 1 introduces the two main philosophical questions that are raised by the prospect of extinct species being brought back from the dead—namely, the ‘Authenticity Question’ and the ‘Ethical Question’. It distinguishes the many different types and methods of de-extinction. Finally, it examines the aims of wildlife conservation with a view to whether they are compatible with de-extinction, or not.

CHAPTER 2 examines three prime candidates for de-extinction—namely, the aurochs, the woolly mammoth, and the passenger pigeon. It is about what these animals were like, why people want to resurrect them, and the methods by which their resurrections could be accomplished.

CHAPTER 3 is about the authenticity of de-extinct animals. Critics of de-extinction have offered many reasons for thinking that the products of de-extinction will be inauthentic. The bulk of the chapter is taken up with surveying their arguments. We attempt to show that none are convincing, and end the chapter by offering and defending two arguments in favour of the view that authentic de-extinctions are possible.

CHAPTER 4 surveys and critically evaluates all the main arguments both for and against de-extinction. It presents a qualified defence of the claim that conservationists should embrace de-extinction. It ends with a list of do’s and don’ts for conservationist de-extinction projects.
Is the resurrection of an extinct species genuinely possible, or not? Will organisms produced by de-extinction technology be authentic new members of the species that died out, or just convincing fakes? We seek to answer these questions in this chapter. Critics of de-extinction have offered many reasons for thinking that the products of de-extinction will be inauthentic. The bulk of the chapter is taken up with surveying their arguments. We attempt to show that none are convincing. We end the chapter by offering and defending two arguments in favour of the view that authentic de-extinctions are possible.
This chapter examines three prime candidates for de-extinction—namely, the aurochs, the woolly mammoth, and the passenger pigeon. It is about what these animals were like, why people want to resurrect them, and the methods by which their resurrections could be accomplished.
This paper concerns the three great modal dichotomies: (i) the necessary/contingent dichotomy; (ii) the a priori/empirical dichotomy; and (iii) the analytic/synthetic dichotomy. These can be combined to produce a tri-dichotomy of eight modal categories. The question as to which of the eight categories house statements and which do not is a pivotal battleground in the history of analytic philosophy, with key protagonists including Descartes, Hume, Kant, Kripke, Putnam and Kaplan. All parties to the debate have accepted that some categories are void. This paper defends the contrary view that all eight categories house statements—a position I dub ‘octopropositionalism’. Examples of statements belonging to all eight categories are given.
In 2014, American doctor Ian Crozier chose to travel to Sierra Leone to help fight the West African Ebola epidemic. He contracted Ebola himself and was evacuated to the US, where he received hospital treatment for 40 days. Crozier knowingly chose to expose himself to a risk of contracting Ebola, and thus appears to be at least somewhat morally responsible for his infection. Did this responsibility weaken his justice-based claim to publicly funded treatment? On one influential view—luck egalitarianism—the answer is ‘yes’. Or so Albertsen and Thaysen suggest in this issue.1 According to luck egalitarianism, justice requires the elimination of inequalities between people, but only when the relative harm borne by those on the wrong end of the inequality is a matter of luck. Albertsen and Thaysen understand luck egalitarianism to entail that, when one is responsible for befalling a harm, one has no claim in justice to the mitigation of that harm. On this view, they suggest, Crozier’s claim to treatment would be similar to that of a reckless Alpine skier who is injured in a skiing accident, and would be weaker than that of those with a normal claim to treatment—those who bear no responsibility for their illness. Both of these implications seem implausible, however. Intuitively, Crozier’s claim to treatment is no weaker than those with normal claims, and certainly stronger than the reckless skier’s. Can luck egalitarianism be ‘adjusted or interpreted’ so as to avoid these implausible implications? The authors suggest that it can. Their innovation is to distinguish heroic doctors like Crozier from reckless skiers by invoking a distinction between being responsible for creating harm and for incurring harm. Their thought is that one creates harm when one brings harm on oneself without also preventing at least as much harm to others. Thus, the harm that ….
Christiane Bailey and Chloë Taylor (Editorial Introduction) Sue Donaldson (Stirring the Pot - A short play in six scenes) Ralph Acampora (La diversification de la recherche en éthique animale et en études animales) Eva Giraud (Veganism as Affirmative Biopolitics: Moving Towards a Posthumanist Ethics?) Leonard Lawlor (The Flipside of Violence, or Beyond the Thought of Good Enough) Kelly Struthers Montford (The “Present Referent”: Nonhuman Animal Sacrifice and the Constitution of Dominant Albertan Identity) James Stanescu (Beyond Biopolitics: Animal Studies, Factory Farms, and the Advent of Deading Life) Ian Werkheiser (Domination and Consumption: An Examination of Veganism, Anarchism, and Ecofeminism) Cynthia Willett (Water and Wing Give Wonder: Trans-Species Cosmopolitanism) Corey Lee Wrenn (Nonhuman Animal Rights, Alternative Food Systems, and the Non-Profit Industrial Complex) Emily R. Douglas (Eat or Be Eaten: A Feminist Phenomenology of Women as Food) Gary Steiner’s Animals and the Limits of Postmodernism (New York: Columbia University Press, 2013) Chloë Taylor (“Postmodern” Critical Animal Theory: A Defense) Patrick Llored (La déconstruction derridienne peut-elle fonder une communauté politique et morale entre vivants humains et non humains?) Jan Dutkiewicz (“Postmodernism,” Politics, and Pigs) Gary Steiner (Response to Commentators).
The rosy dawn of my title refers to that optimistic time when the logical concept of a natural kind originated in Victorian England. The scholastic twilight refers to the present state of affairs. I devote more space to dawn than twilight, because one basic problem was there from the start, and by now those origins have been forgotten. Philosophers have learned many things about classification from the tradition of natural kinds. But now it is in disarray and is unlikely to be put back together again. My argument is less founded on objections to the numerous theories now in circulation, than on the sheer proliferation of incompatible views. There no longer exists what Bertrand Russell called ‘the doctrine of natural kinds’—one doctrine. Instead we have a slew of distinct analyses directed at unrelated projects.
ABSTRACT In this wide-ranging interview Professor Douglas V. Porpora discusses a number of issues. First, how he became a Critical Realist through his early work on the concept of structure. Second, drawing on his Reconstructing Sociology, his take on the current state of American sociology. This leads to discussion of the broader range of his work as part of Margaret Archer’s various Centre for Social Ontology projects, and on moral-macro reasoning and the concept of truth in political discourse.
How is a person's freedom related to his or her preferences? Liberal theorists of negative freedom have generally taken the view that the desire of a person to do or not do something is irrelevant to the question of whether he is free to do it. Supporters of the “pure negative” conception of freedom have advocated this view in its starkest form: they maintain that a person is unfree to Φ if and only if he is prevented from Φ-ing by the conduct or dispositions of some other person. This definition of freedom is value-neutral in the sense that no reference is made to preferences over options or indeed to any other indicators of the values of options, either in the characterization of “Φ-ing” itself or in the characterization of the way in which Φ-ing can be constrained.
If “perfectionism” in ethics refers to those normative theories that treat the fulfillment or realization of human nature as central to an account of both goodness and moral obligation, in what sense is “human flourishing” a perfectionist notion? How much of what we take “human flourishing” to signify is the result of our understanding of human nature? Is the content of this concept simply read off an examination of our nature? Is there no place for diversity and individuality? Is the belief that the content of such a normative concept can be determined by an appeal to human nature merely the result of epistemological naiveté? What is the exact character of the connection between human flourishing and human nature? These questions are the ultimate concern of this essay, but to appreciate the answers that will be offered it is necessary to understand what is meant by “human flourishing.” “Human flourishing” is a relatively recent term in ethics. It seems to have developed in the last two decades because the traditional translation of the Greek term eudaimonia as “happiness” failed to communicate clearly that eudaimonia was an objective good, not merely a subjective good.
Introduction: The many faces of human nature / Agustín Fuentes and Aku Visala Chapter 1. Off human nature / Jonathan Marks. Response I. On your marks... get set, we’re off human nature / James M. Calcagno ; Response II. Rethinking human nature : comments on Jonathan Marks’s anti-essentialism / Phillip R. Sloan ; Response III. Off human nature and on human culture : the importance of the concept of culture to science and society / Robert Sussman and Linda Sussman Chapter 2. "To human" is a verb / Tim Ingold. Response I. Free and easy wandering : humans, humane education, and designing in harmony with the nature of the way / Susan D. Blum ; Response II. On human natures : anthropological and Jewish musings / Richard Sosis ; Response III. The humanifying adventure : a response to Tim Ingold / Markus Mühling ; Response IV. The ontogenesis of human moral becoming / Darcia Narvaez Chapter 3. Recognizing the complexity of personhood : complex emergent developmental linguistic relational neurophysiologicalism / Warren Brown and Brad D. Strawn. Response I. "Self-organizing personhood" and many loose ends / Lluis Oviedo ; Response II. A last hurrah for dualism? / Kelly James Clark ; Response III. Why the foundational question about human nature is open and empirical / Carl Gillett Chapter 4. Human origins and the emergence of a distinctively human imagination : theology and the archaeology of personhood / J. Wentzel van Huyssteen. Response I. Constructing the face, creating the collective : Neolithic mediation of personhood / Ian Kuijt ; Response II. Imago Dei and the glabrous ape / Douglas Hedley Chapter 5. What is human nature for? / Grant Ramsey. Response I. The difficulties of forsaking normativity / Neil Arner ; Response II. Some remarks on human nature and naturalism / Aku Visala Epilogues. Putting evolutionary theory to work in investigating human nature / Agustín Fuentes ; Moving us forward? / Celia Deane-Drummond.
Discussions of Karl Popper's falsificationist philosophy of science appear regularly in the recent literature on economic methodology. In this literature, there seem to be two fundamental points of agreement about Popper. First, most economists take Popper's falsificationist method of bold conjecture and severe test to be the correct characterization of scientific conduct in the physical sciences. Second, most economists admit that economic theory fails miserably when judged by these same falsificationist standards. As Latsis states, “the development of economic analysis would look a dismal affair through falsificationist spectacles.”
Douglas proposes a new ideal in which values serve an essential function throughout scientific inquiry, but where the role values play is constrained at key points, protecting the integrity and objectivity of science.
In this article, I argue that Brad Hooker's rule-consequentialism implausibly implies that what earthlings are morally required to sacrifice for the sake of helping their less fortunate brethren depends on whether or not other people exist on some distant planet even when these others would be too far away for earthlings to affect.
This 1983 book is a lively and clearly written introduction to the philosophy of natural science, organized around the central theme of scientific realism. It has two parts. 'Representing' deals with the different philosophical accounts of scientific objectivity and the reality of scientific entities. The views of Kuhn, Feyerabend, Lakatos, Putnam, van Fraassen, and others, are all considered. 'Intervening' presents the first sustained treatment of experimental science for many years and uses it to give a new direction to debates about realism. Hacking illustrates how experimentation often has a life independent of theory. He argues that although the philosophical problems of scientific realism cannot be resolved when put in terms of theory alone, a sound philosophy of experiment provides compelling grounds for a realistic attitude. A great many scientific examples are described in both parts of the book, which also includes lucid expositions of recent high energy physics and a remarkable chapter on the microscope in cell biology.
In this important new study Ian Hacking continues the enquiry into the origins and development of certain characteristic modes of contemporary thought undertaken in such previous works as his best-selling Emergence of Probability. Professor Hacking shows how by the late nineteenth century it became possible to think of statistical patterns as explanatory in themselves, and to regard the world as not necessarily deterministic in character. Combining detailed scientific historical research with characteristic philosophic breadth and verve, The Taming of Chance brings out the relations among philosophy, the physical sciences, mathematics and the development of social institutions, and provides a unique and authoritative analysis of the "probabilization" of the Western world.
Commonsense Consequentialism is a book about morality, rationality, and the interconnections between the two. In it, Douglas W. Portmore defends a version of consequentialism that both comports with our commonsense moral intuitions and shares with other consequentialist theories the same compelling teleological conception of practical reasons. Broadly construed, consequentialism is the view that an act's deontic status is determined by how its outcome ranks relative to those of the available alternatives on some evaluative ranking. Portmore argues that outcomes should be ranked, not according to their impersonal value, but according to how much reason the relevant agent has to desire that each outcome obtains and that, when outcomes are ranked in this way, we arrive at a version of consequentialism that can better account for our commonsense moral intuitions than even many forms of deontology can. What's more, Portmore argues that we should accept this version of consequentialism, because we should accept both that an agent can be morally required to do only what she has most reason to do and that what she has most reason to do is to perform the act that would produce the outcome that she has most reason to want to obtain. Although the primary aim of the book is to defend a particular moral theory, Portmore defends this theory as part of a coherent whole concerning our commonsense views about the nature and substance of both morality and rationality. Thus, it will be of interest not only to those working on consequentialism and other areas of normative ethics, but also to those working in metaethics. Beyond offering an account of morality, Portmore offers accounts of practical reasons, practical rationality, and the objective/subjective obligation distinction.
Moral philosophy has been increasingly concerned with the nature of emotion and its ethical significance. Almost no attention, however, has been paid to disgust, in spite of its evident connections to taboos, exclusionary policies, and severe forms of moral, political, and aesthetic condemnation. This dissertation offers a theory of revulsion. On the basis of this account, it also gives us a way of thinking about intimate or tactile features of moral agency, which play a vital role in maintaining those various practices. The analysis begins with a detailed portrait of the expansive and curious realm of disgust and derives from the portrait a set of desiderata by which to build a theory of revulsion. The views of the three primary theorists of disgust are evaluated: the anomaly thesis of the anthropologist Mary Douglas, the animal reminder theory of the empirical psychologist Paul Rozin, and the generative lowliness account of the social historian William Ian Miller. None of these satisfies the desiderata, and they also fail for reasons internal to their own viewpoints. The works of Julia Kristeva on abjection and Noel Carroll on art-horror are also considered. Both the portrait and the set of desiderata direct attention upon not just objects commonly deemed to be impure, but pollution processes and the wider emotional dynamics with which they are integrated. Due consideration of these various factors reveals that disgust is generally about carnal disvalue, or somatic unwholesomeness, and paradigmatically about filthiness and its smear. Disgust is so concerned because its phenomenology functions as a foretaste of unhealthiness. And it is aided in its task by a physiognomic structure calibrated to various organic signs taken to indicate deeper somatic defilement. One of the special features of such carnal disvalue is the facility with which it extends to larger and more abstract realms of disvalue, like the morally or aesthetically repulsive.
These reflections on revulsion suggest that our mortality is as much about the organic or carnal character of living and dying as it is about the cessation of life.
A rational defense of the criminal law must provide a comprehensive theory of culpability. A comprehensive theory of culpability must resolve several difficult issues; in this article I will focus on only one. The general problem arises from the lack of a systematic account of relative culpability. An account of relative culpability would identify and defend a set of considerations to assess whether, why, under what circumstances, and to what extent persons who perform a criminal act with a given culpable state are more or less blameworthy than persons who perform that act with a different culpable state.
Historical records show that there was no real concept of probability in Europe before the mid-seventeenth century, although the use of dice and other randomizing objects was commonplace. Ian Hacking presents a philosophical critique of early ideas about probability, induction, and statistical inference and the growth of this new family of ideas in the fifteenth, sixteenth, and seventeenth centuries. Hacking invokes a wide intellectual framework involving the growth of science, economics, and the theology of the period. He argues that the transformations that made it possible for probability concepts to emerge have constrained all subsequent development of probability theory and determine the space within which philosophical debate on the subject is still conducted. First published in 1975, this edition includes an introduction that contextualizes his book in light of developing philosophical trends. Ian Hacking is the winner of the Holberg International Memorial Prize 2009.
Develops a logical analysis of dialogue in which two or more parties attempt to advance their own interests. It includes a classification of the major types of dialogues and a discussion of several important informal fallacies.
From the time of its clearest origins with Pascal, the theory of probabilities seemed to offer means by which the study of human affairs might be reduced to the same kind of mathematical discipline that was already being achieved in the study of nature. Condorcet is to a great extent merely representative of the philosophers of the seventeenth and eighteenth centuries who were led on by the prospect of developing moral and political sciences on the pattern of the natural sciences, specifically physics. The development of economics and the social sciences, from the eighteenth century onwards, may be said in part to have fulfilled and in a manner to have perpetuated these ambitions. In so far as the new sciences have been susceptible of mathematical treatment, this has not been confined to the calculus of probabilities. But there is a temptation at every stage to ascribe fundamental significance and universal applicability to each latest mathematical device that is strikingly useful or illuminating on its first introduction. It is the theory of games that enjoys this position at present, and shapes the common contemporary conception of the very same problems that preoccupied Condorcet.
This book provides a systematic analysis of many common argumentation schemes and a compendium of 96 schemes. The study of these schemes, or forms of argument that capture stereotypical patterns of human reasoning, is at the core of argumentation research. Surveying all aspects of argumentation schemes from the ground up, the book takes the reader from the elementary exposition in the first chapter to the latest state of the art in the research efforts to formalize and classify the schemes, outlined in the last chapter. It provides a systematic and comprehensive account, with notation suitable for computational applications that increasingly make use of argumentation schemes.
Fundamentals of Critical Argumentation presents the basic tools for the identification, analysis, and evaluation of common arguments for beginners. The book teaches by using examples of arguments in dialogues, both in the text itself and in the exercises. Examples of controversial legal, political, and ethical arguments are analyzed. Illustrating the most common kinds of arguments, the book also explains how to evaluate each kind by critical questioning. Douglas Walton shows how arguments can be reasonable under the right dialogue conditions by using critical questions to evaluate them. The book teaches by example, both in the text itself and in exercises, but it is based on methods that have been developed through the author's thirty years of research in argumentation studies.