Two common themes emerge in our writings over the past several decades. Estelle Jorgensen has focused partially and significantly on models and metaphors that undergird music education.1 Iris Yob has examined the role of higher education generally and music education specifically in creating positive social change.2 At times, and against the backdrop of recent writing on music education, social change, and social justice,3 we each have explored topics in the other's area of interest.4 Neither of us, however, has systematically brought together the two themes: building practices on grounding metaphors for developing music education as a means for promoting the common good. In this paper, our conversation explores...
Background Scientific papers are retracted for many reasons including fraud (data fabrication or falsification) or error (plagiarism, scientific mistake, ethical problems). Growing attention to fraud in the lay press suggests that the incidence of fraud is increasing. Methods The reasons for retracting 742 English language research papers retracted from the PubMed database between 2000 and 2010 were evaluated. Reasons for retraction were initially dichotomised as fraud or error and then analysed to determine specific reasons for retraction. Results Error was more common than fraud (73.5% of papers were retracted for error (or an undisclosed reason) vs 26.6% retracted for fraud). Eight reasons for retraction were identified; the most common reason was scientific mistake in 234 papers (31.5%), but 134 papers (18.1%) were retracted for ambiguous reasons. Fabrication (including data plagiarism) was more common than text plagiarism. Total papers retracted per year have increased sharply over the decade (r=0.96; p<0.001), as have retractions specifically for fraud (r=0.89; p<0.001). Journals now reach farther back in time to retract, both for fraud (r=0.87; p<0.001) and for scientific mistakes (r=0.95; p<0.001). Journals often fail to alert the naïve reader; 31.8% of retracted papers were not noted as retracted in any way. Conclusions Levels of misconduct appear to be higher than in the past. This may reflect either a real increase in the incidence of fraud or a greater effort on the part of journals to police the literature. However, research bias is rarely cited as a reason for retraction.
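The yearly trend analysis reported in this abstract (e.g., r=0.96 for total retractions per year) is a plain Pearson correlation between year and count. A minimal sketch follows; the yearly counts are hypothetical illustrations of a sharp rise, not the study's actual data.

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

years = list(range(2000, 2011))
# Hypothetical yearly retraction counts (illustrative only;
# the study's real counts are not reproduced here).
retractions = [10, 12, 18, 25, 33, 45, 60, 80, 105, 140, 180]

print(f"r = {pearson_r(years, retractions):.2f}")
```

A value near 1 indicates the steadily rising trend the authors describe; the p-value reported in the abstract would come from a significance test on r, which this sketch omits.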
Background Papers retracted for fraud (data fabrication or data falsification) may represent a deliberate effort to deceive, a motivation fundamentally different from papers retracted for error. It is hypothesised that fraudulent authors target journals with a high impact factor (IF), have other fraudulent publications, diffuse responsibility across many co-authors, delay retracting fraudulent papers and publish from countries with a weak research infrastructure. Methods All 788 English language research papers retracted from the PubMed database between 2000 and 2010 were evaluated. Data pertinent to each retracted paper were abstracted from the paper and the reasons for retraction were derived from the retraction notice and dichotomised as fraud or error. Data for each retracted article were entered in an Excel spreadsheet for analysis. Results Journal IF was higher for fraudulent papers (p<0.001). Roughly 53% of fraudulent papers were written by a first author who had written other retracted papers (‘repeat offender’), whereas only 18% of erroneous papers were written by a repeat offender (χ2=88.40; p<0.0001). Fraudulent papers had more authors (p<0.001) and were retracted more slowly than erroneous papers (p<0.005). Surprisingly, there was significantly more fraud than error among retracted papers from the USA (χ2=8.71; p<0.05) compared with the rest of the world. Conclusions This study reports evidence consistent with the ‘deliberate fraud’ hypothesis. The results suggest that papers retracted because of data fabrication or falsification represent a calculated effort to deceive. It is inferred that such behaviour is neither naïve, feckless nor inadvertent.
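The repeat-offender comparison in this abstract (χ2=88.40) is a chi-square test on a 2×2 contingency table (fraud vs error retraction × repeat-offender vs one-time first author). A minimal sketch of that computation follows; the cell counts are hypothetical, chosen only to illustrate the table shape, not taken from the study.

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table
    given as [[a, b], [c, d]] (rows: groups, columns: outcomes)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    rows = (a + b, c + d)
    cols = (a + c, b + d)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = rows[i] * cols[j] / n  # expected count under independence
            chi2 += (obs - exp) ** 2 / exp
    return chi2

# Hypothetical counts (not the study's data): rows are fraud vs error
# retractions, columns are repeat-offender vs one-time first author.
table = [[106, 94], [106, 482]]
print(f"chi-square = {chi_square_2x2(table):.2f}")
```

With 1 degree of freedom, a statistic above 3.84 is significant at p<0.05; converting the statistic to the exact p-values the abstract reports would require the chi-square distribution, omitted here.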
This article presents a conceptual investigation into the value impacts and relations of algorithms in the domain of justice and security. As a conceptual investigation, it represents one step in a value sensitive design based methodology. Here, we explicate and analyse the expression of values of accuracy, privacy, fairness and equality, property and ownership, and accountability and transparency in this context. We find that values are sensitive to disvalue if algorithms are designed, implemented or deployed inappropriately or without sufficient consideration for their value impacts, potentially resulting in problems including discrimination and constrained autonomy. Furthermore, we outline a framework of conceptual relations of values indicated by our analysis, and potential value tensions in their implementation and deployment with a view towards supporting future research, and supporting the value sensitive design of algorithms in justice and security.
Leibniz viewed the principle of continuity, the principle that all natural changes are produced by degrees, as a useful heuristic for evaluating the truth of a theory. Since the Cartesian laws of motion entailed discontinuities in the natural order, Leibniz could safely reject them as false. The principle of continuity has similar implications for analyses of Leibniz's theory of consciousness. I briefly survey the three main interpretations of Leibniz's theory of consciousness and argue that the standard account entails a discontinuity that Leibniz could not allow. I argue that the principle of continuity and the textual data favor an interpretation according to which a conscious mental state just is a perception that is distinct to a sufficient degree.
Most moral psychologists have come to accept two types of moral reasoning: Kohlberg's justice and Gilligan's care, but there still seem to be some unresolved issues. By analysing and comparing Kohlberg's statements on some theoretical issues with some of Gilligan's statements in an interview in April 2003, I will look at some key issues in the so-called 'Kohlberg–Gilligan conflict'. Some of the questions raised in this paper are: (1) Does Gilligan reject the idea of developmental morality? (2) Does Gilligan support Kohlberg's stage theory and his claim of universality? (3) Did Kohlberg reject Gilligan's proposal to expand his understanding of moral reasoning? (4) Was Gilligan's theory a critique of or an expansion to Kohlberg's theory? The findings of this analysis suggest that the first question be answered negatively, the second positively, the third negatively and the fourth that Gilligan's theory is an expansion rather than a critique.
Background Clinical papers so flawed that they are eventually retracted may put patients at risk. Patient risk could arise in a retracted primary study or in any secondary study that draws ideas or inspiration from a primary study. Methods To determine how many patients were put at risk, we evaluated 788 retracted English-language papers published from 2000 to 2010, describing new research with humans or freshly derived human material. These primary papers—together with all secondary studies citing them—were evaluated using ISI Web of Knowledge. Excluded from study were 468 basic science papers not studying fresh human material; 88 reviews presenting older data; 22 case reports; 7 papers retracted for journal error and 23 papers unavailable on Web of Knowledge. Overall, 180 retracted primary papers (22.8%) met the inclusion criteria. Subjects enrolled and patients treated in 180 primary studies and 851 secondary studies were combined. Results Retracted papers were cited over 5000 times, with 93% of citations being research related, suggesting that ideas promulgated in retracted papers can influence subsequent research. Over 28 000 subjects were enrolled—and 9189 patients were treated—in 180 retracted primary studies. Over 400 000 subjects were enrolled—and 70 501 patients were treated—in 851 secondary studies which cited a retracted paper. Papers retracted for fraud (n=70) treated more patients per study (p<0.01) than papers retracted for error (n=110). Conclusions Many patients are put at risk by retracted studies. These are conservative estimates, as only patients enrolled in published clinical studies were tallied.
The aim of this study was to explore and interpret the diverse subject positions, or roles, that nurses construct when caring for patients in their own home. Ten interviews were analysed and interpreted using discourse analysis. The findings show that these nurses working in home care constructed two positions: `guest' and `professional'. They had to make a choice between these positions because it was impossible to be both at the same time. An ethics of care and an ethics of justice were present in these positions, both of which create diverse ethical appeals, that is, implicit demands to perform according to a guest or to a professional norm.
In this article, I develop a higher-order interpretation of Leibniz's theory of consciousness according to which memory is constitutive of consciousness. I offer an account of Leibniz's theory of memory on which his theory of consciousness may be based, and I then show that Leibniz could have developed a coherent higher-order account. However, it is not clear whether Leibniz held (or should have held) such an account of consciousness; I sketch an alternative that has at least as many advantages as the higher-order theory. This analysis provides an important antecedent to the contemporary discussions of higher-order theories of consciousness.
Effects of metaphorical framing of political issues on opinion have been studied widely by two approaches: a critical-discourse approach (CDA) and a response-elicitation approach (REA). The current article reports a systematic literature review that examines whether these approaches report converging or diverging effects. We compared CDA and REA on the metaphorical frames that were studied and their reported effects. Results show that the CDA frames are typically more negative, nonfictional, and extreme than REA frames. Reported effects in CDA and REA studies differ in terms of presence, directionality, and strength, with CDA typically reporting strong effects in line with the frame, compared to REA. These differences in effects can be explained by the different frame characteristics. However, differences in the methods applied by CDA and REA could be responsible for these differences as well. In all, we conclude that the research field is fragmented on the impact of metaphors in politics.
Background Medical research so flawed as to be retracted may put patients at risk by influencing treatments. Objective To explore hypotheses that more patients are put at risk if a retracted paper appears in a journal with a high impact factor (IF) so that the paper is widely read; is written by a ‘repeat offender’ author who has produced other retracted research; or is a clinical trial. Methods English language papers (n=788) retracted from the PubMed database between 2000 and 2010 were evaluated. Only those papers retracting research with humans or freshly derived human material were included; 180 retracted primary papers (22.8%) met inclusion criteria. Subjects enrolled and patients treated were tallied, both in the retracted primary studies and in 851 secondary studies that cited a retracted primary paper. Results Retracted papers published in high-IF journals were cited more often (p=0.0004) than those in low-IF journals, but there was no difference between high- and low-IF papers in subjects enrolled or patients treated. Retracted papers published by ‘repeat offender’ authors did not enrol more subjects or treat more patients than papers by one-time offenders, nor was there a difference in number of citations. However, retracted clinical trials treated more patients (p=0.0002) and inspired secondary studies that put more patients at risk (p=0.0019) than did other kinds of medical research. Conclusions If the goal is to minimise risk to patients, the appropriate focus is on clinical trials. Clinical trials form the foundation of evidence-based medicine; hence, the integrity of clinical trials must be protected.
A standard response to the problem of diachronic vagueness is ‘the semantic solution’, which demands an abundant ontology. Although it is known that the abundant ontology does not logically preclude endurantism, their combination is rejected because it necessitates massive coincidence between countless objects. In this paper, I establish that the semantic solution is available not only to perdurantists but also to endurantists by showing that there is no problem with such ubiquitous and principled coincidence.
A study found that women participating in mammography screening were content with the programme and the paternalistic invitations that directly encourage participation and include a pre-specified time of appointment. We argue that this merely reflects that the information presented to the invited women is seriously biased in favour of participation. Women are not informed about the major harms of screening, and the decision to attend has already been made for them by a public authority. This short-circuits informed decision-making and the legislation on informed consent, and violates the autonomy of the women. Screening invitations must present both benefits and harms in a balanced fashion, and should offer, not encourage, participation. It should be stated clearly that the choice not to participate is as sensible as the choice to do so. To allow this to happen, the responsibility for the screening programmes must be separated from the responsibility for the information material.
In the pursuit of a naturalized philosophy of mind, consciousness receives concentrated attention, in part because the phenomena of consciousness seem recalcitrant, difficult to explain in the terms of the natural sciences. But this is not a new phenomenon—efforts to provide a naturalized theory of consciousness originate in Ancient Greek philosophy. This chapter defines the project of naturalism in a way that allows for a common project to be traced through the history of Western Philosophy.
Leibniz explains both activity and sensation in terms of the relative distinctness of perception. This paper argues that the systematic connection between activity and sensation is illuminated by Leibniz’s use of distinctness in analyzing each. Leibnizian sensation involves two levels of activity: on one level, the relative forcefulness of an expression enables certain expressions to stand out against the perceptual field, but in addition to this there is an activity of the mind that enables sensory experience. This connection of mental activity and perceptual distinctness enables us to better appreciate the fundamental role perceptual distinctness plays in Leibniz’s theory of sensation.
Interdisciplinary integration has fundamental limitations. This is not sufficiently realized in science and in philosophy. Concerning scientific theories there are many examples of pseudo-integration which should be unmasked by elementary philosophical analysis. For example, allegedly over-arching theories of stress which are meant to unite biology and psychology, upon analysis, turn out to represent terminological rather than substantive unity. They should be replaced by more specific, local theories. Theories of animal orientation, likewise, have been formulated in unduly general terms. A natural history approach is more suitable for the study of animal orientation. The tendency to formulate overgeneral theories is also present in evolutionary biology. Philosophy of biology can only deal with these matters if it takes a normative turn. Undue emphasis on interdisciplinary integration is a modern variant of the old unity of science ideal. The replacement of the ideal by a better one is an important challenge for the philosophy of science.
Recent work has suggested that conservation efforts such as restoration ecology and invasive species eradication are largely value-driven pursuits. Concurrently, changes to global climate are forcing ecologists to consider if and how collections of species will migrate, and whether or not we should be assisting such movements. Herein, we propose a philosophical framework which addresses these issues by utilizing ecological and evolutionary interrelationships to delineate individual ecological communities. Specifically, our Evolutionary Community Concept (ECC) recognizes unique collections of species that interact and have co-evolved in a given geographic area. We argue this concept has implications for a number of contemporary global conservation issues. Specifically, our framework allows us to establish a biological and science-driven context for making decisions regarding the restoration of systems and the removal of exotic species. The ECC also has implications for how we view shifts in species assemblages due to climate change and it advances our understanding of various ecological concepts, such as resilience.
Developers and designers make all sorts of moral decisions throughout an innovation project. In this article, we describe how teams of developers and designers engaged with ethics in the early phases of innovation based on case studies in the SUBCOP project. For that purpose, Value Sensitive Design (VSD) will be used as a reference. Specifically, we focus on the following two research questions: How can researchers/developers learn about users’ perspectives and values during the innovation process? and How can researchers/developers take into account these values, and related design criteria, in their decision-making during the innovation process? Based on a case study of several innovation processes in this project, we conclude that the researchers/developers involved are able to do something similar to VSD, supported by relatively simple exercises in the project, e.g., meetings with potential end-users and discussions with members of the Ethical Advisory Board of the project. Furthermore, we also found—possibly somewhat counterintuitively—that a commercial, with its focus on understanding and satisfying customers’ needs, can promote VSD.
Roderick Chisholm changed his mind about ordinary objects. Circa 1973-1976, his analysis of them required the positing of two kinds of entities—part-changing ens successiva and non-part-changing, non-scatterable primary objects. This view has been well noted and frequently discussed (e.g., recently in Gallois 1998 and Sider 2001). Less often treated is his later view of ordinary objects (1986-1989), where the two kinds of posited entities change, from ens successiva to modes, and, while retaining primary objects, he now allows them to survive spatial scatter. Also (to my knowledge) not discussed is why he changed his mind. This paper is mostly intended to fill in these gaps, but I also give some additional reasons to prefer Chisholm's later view. Also, I discuss how mereological essentialism can be further defended by how it informs a theory of property-inherence which steers between the excesses of the bare particularists and bundle theorists.
That any filled location of spacetime contains a persisting thing has been defended based on the ‘argument from vagueness.’ It is often assumed that since the epistemicist account of vagueness blocks the argument from vagueness it facilitates a conservative ontology without gerrymandered objects. It doesn't. The epistemic vagueness of ordinary object predicates such as ‘bicycle’ requires that objects that can be described as almost‐but‐not‐quite‐bicycle exist even though they fall outside the predicate's sharp extension. Since the predicates that begin with ‘almost’ are vague as well, epistemicism's ontological backdrop is far from the conservative picture it is thought to enable.
Why, when confronted with policy alternatives that could improve patient care, public health, and the economy, does Congress neglect those goals and tailor legislation to suit the interests of pharmaceutical corporations? In brief, for generations, the pharmaceutical industry has convinced legislators to define policy problems in ways that protect its profit margin. It reinforces this framework by selectively providing information and by targeting campaign contributions to influential legislators and allies. In this way, the industry displaces the public's voice in developing pharmaceutical policy. Unless citizens mobilize to confront the political power of pharmaceutical firms, objectionable industry practices and public policy will not change. Yet we need to refine this analysis. I propose a research agenda to uncover pharmaceutical influence. It develops the theory of dependence corruption to explain how the pharmaceutical industry is able to deflect the broader interests of the general public. It includes empirical studies of lobbying and campaign finance to uncover the means drug firms use to: (1) shape the policy framework adopted and information used to analyze policy; (2) subsidize the work of political allies; and (3) influence congressional voting.
Russell's criticisms force Meinong to adopt a distinction between two types of negation. Logical expositions of Meinong's theory show the distinction is easily drawn in formal terms, but that alone does not justify the distinction intuitively. I criticise Routley's treatment of the distinction and argue that only Terence Parsons' theory retains and preserves the tight network of conceptual connections between the notions of negation, contradiction and impossibility. Hence, Parsons' approach best expresses the Meinongian perspective.
It has recently been argued that the following Rule should be part of any characterization of science: Claims concerning specific disputed facts should be endorsed only if they are sufficiently supported by the application of validated methods of research or discovery, and moreover that acceptance of this Rule should lead one to reject religious belief. This paper argues, first, that the Rule, as stated, should not be accepted as it suffers from a number of problems. And second, that even if the Rule were to be acceptable, it should not lead one to reject religious belief.
Psychological Altruism (PA) is the view that everyone, ultimately, acts altruistically all the time. I defend PA by showing strong prima facie support, and show how a reinterpretive strategy against supposed counterexamples is successful. I go on to show how PA can be argued for in ways which exactly mirror the arguments for an opposing view, Psychological Egoism. This shows that the case for PA is at least as plausible as PE. Since the case for PA is not plausible, neither is that for PE.
In 1710 G. W. Leibniz published Theodicy: Essays on the Goodness of God, the Freedom of Man, and the Origin of Evil. This book, the only one he published in his lifetime, established his reputation more than anything else he wrote. The Theodicy brings together many different strands of Leibniz's own philosophical system, and we get a rare snapshot of how he intended these disparate aspects of his philosophy to come together into a single, overarching account of divine justice in the face of the world's evils. At the same time, the Theodicy is a fascinating window into the context of philosophical theology in the seventeenth century. Leibniz had his finger on the intellectual pulse of his time, and this comes out very clearly in the Theodicy. He engages with all of the major lines of theological dispute of that time, demonstrating the encyclopaedic breadth of his understanding of the issues. Leibniz's Theodicy remains one of the most abiding systematic accounts of how evil is compatible with divine goodness. Any treatment of the problem of evil must, at some point, come to grips with Leibniz's proposed solution. This volume refreshes and deepens our understanding of this great work. Leading scholars present original essays which critically evaluate the Theodicy, providing a window on its historical context and giving close attention to the subtle and enduring philosophical arguments.
Theoretical models for patient-physician communication in clinical practice are frequently described in the literature. Respecting patient autonomy is an ethical problem the physician faces in a medical emergency situation. No theoretical physician-patient model seems to be ideal for solving the communication problem in clinical practice. Theoretical models can at best give guidance to behavior and judgement in emergency situations. In this article the premises of autonomous treatment decisions are discussed. Based on a case report we discuss different genuine efforts the physician can make to uncover treatment refusal and respect patient autonomy in an emergency situation. Autonomy requires competence, and in emergency medicine time does not allow intimate exploration of patient competence and reasons for treatment refusal. We find that the physician must base her decision on a firm theoretical base combined with a practical and realistic view of the patient's situation on a case-by-case basis.
Holistic accounts of meaning normally incorporate a subjective dimension that invites the criticism that they make communication impossible, for speakers are bound to differ in ways the accounts take to be relevant to meaning, and holism generalises any difference over some words to a difference about all, and this seems incompatible with the idea that successful communication requires mutual understanding. I defend holism about meaning from this criticism. I argue that the same combination of properties (subjective origins of value, holism among values, and ultimate publicity of value) is exhibited by monetary value and take the emergence of equilibrium prices as a model for the emergence of public meanings.
Although Caenorhabditis elegans was chosen and modified to be an organism that would facilitate a reductionist program for neurogenetics, recent research has provided evidence for properties that are emergent from the neurons. While neurogenetic advances have been made using C. elegans which may be useful in explaining human neurobiology, there are severe limits on the capacity of C. elegans to explain any significant human behavior.
In this essay several virtues are discussed that are needed in people who work in participatory design (PD). The term PD is used here to refer specifically to an approach in designing information systems with its roots in Scandinavia in the 1970s and 1980s. Through the lens of virtue ethics and based on key texts in PD, the virtues of cooperation, curiosity, creativity, empowerment and reflexivity are discussed. Cooperation helps people in PD projects to engage in cooperative curiosity and cooperative creativity. Curiosity helps them to empathize with others and their experiences, and to engage in joint learning. Creativity helps them to envision, try out and materialize ideas, and to jointly create new products and services. Empowerment helps them to share power and to enable other people to flourish. Moreover, reflexivity helps them to perceive and to modify their own thoughts, feelings and actions. In the spirit of virtue ethics—which focuses on specific people in concrete situations—several examples from one PD project are provided. Virtue ethics is likely to appeal to people in PD projects because it is practice-oriented, provides room for exploration and experimentation, and promotes professional and personal development. In closing, some ideas for practical application, for education and for further research are discussed.