Most writing on informed consent in Africa highlights different cultural and social attributes that influence informed consent practices, especially in research settings. This review presents a composite picture of informed consent in Nigeria using empirical studies, legal and regulatory prescriptions, and clinical experience. It shows that Nigeria, like most other nations in Africa, is a mixture of sociocultural entities and that, notwithstanding the multitude of factors affecting it, informed consent is evolving along a purely Western model. Empirical studies show that 70–95% of Nigerian patients report giving consent for their surgical treatments. Regulatory prescriptions and adjudicated cases in Nigeria follow the Western model of informed consent. However, adversarial legal proceedings, for a multiplicity of reasons, do not play a significant role in enforcing good medical practice in Nigeria. Gender prejudices are evident but not the norm. Individual autonomy is recognized even when decisions are made within the family. Consent practices are influenced by the level of education, the extended family system, urbanization, religious practices, and the health care financing options available. All limitations notwithstanding, consent discussions improved with the increasing level of education of the patients, suggesting that improving physicians' knowledge and increasing patients' awareness and education can override other influences. Nigerian medical schools should restructure their teaching of medical ethics to improve the knowledge and practices of physicians. More research is needed on the preferences of the Nigerian people regarding informed consent so as to adequately train physicians and positively influence physicians' behaviors.
The ethics of conducting research in epidemic situations have yet to account fully for differences in the proportion and acuteness of epidemics, among other factors. While epidemics most often arise from infectious diseases, not all infectious diseases are of epidemic proportions, and not all epidemics occur acutely. These and other variations constrain the generalization of ethical decision-making and impose ethical demands on the individual researcher in a way not previously highlighted. This paper discusses a number of such constraints and impositions. It applies the ethical principles enunciated by Emanuel et al. to the controversial Pfizer study in Nigeria in order to highlight the particular ethical concerns of acute epidemic research and to suggest ways of meeting such challenges. The paper recommends that research during epidemics should be evaluated partly on its own merits in order to determine its ethical appropriateness to the specific situation. Snap decisions to conduct research during acute epidemics should be resisted. Community engagement, public notification, and good information management are needed to promote the ethics of conducting research during acute epidemics. Individual consent is most at risk of being compromised, and every effort should be made to ensure that it is maintained and valid. Use of data safety monitoring boards should be routine. Acute epidemics also present opportunities to enhance the social value of research and maximize its benefits to communities. Ethical research is possible in acute epidemics if the potential challenges are thought of ahead of time and appropriate precautions are taken.
Classical (Bayesian) probability (CP) theory has led to an influential research tradition for modeling cognitive processes. Cognitive scientists have been trained to work with CP principles for so long that it is hard even to imagine alternative ways to formalize probabilities. However, in physics, quantum probability (QP) theory has been the dominant probabilistic approach for nearly 100 years. Could QP theory provide us with any advantages in cognitive modeling as well? Note first that both CP and QP theory share the fundamental assumption that it is possible to model cognition on the basis of formal, probabilistic principles. But why consider a QP approach? The answers are that (1) there are many well-established empirical findings (e.g., from the influential Tversky, Kahneman research tradition) that are hard to reconcile with CP principles; and (2) these same findings have natural and straightforward explanations with quantum principles. In QP theory, probabilistic assessment is often strongly context- and order-dependent, individual states can be superposition states (that are impossible to associate with specific values), and composite systems can be entangled (they cannot be decomposed into their subsystems). All these characteristics appear perplexing from a classical perspective. However, our thesis is that they provide a more accurate and powerful account of certain cognitive processes. We first introduce QP theory and illustrate its application with psychological examples. We then review empirical findings that motivate the use of quantum theory in cognitive theory, but also discuss ways in which QP and CP theories converge. Finally, we consider the implications of a QP theory approach to cognition for human rationality.
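The order-dependence that the abstract above attributes to QP theory can be sketched numerically: a yes/no question is modeled as a projector onto a subspace, and when two projectors do not commute, the probability of answering "yes" to both questions depends on the order in which they are asked. The sketch below is a minimal illustration in NumPy; the state vector and the 45-degree angle between the two question bases are arbitrary illustrative choices, not values drawn from any of the studies discussed.

```python
import numpy as np

# Mental state: a unit vector in a 2-D Hilbert space (illustrative values).
psi = np.array([np.cos(np.pi / 6), np.sin(np.pi / 6)])

# Projector onto "yes" for question A: the first basis axis.
P_A = np.array([[1.0, 0.0], [0.0, 0.0]])

# Projector for question B: an axis rotated 45 degrees, so P_A and P_B
# do not commute -- the two questions are "incompatible" in QP terms.
b = np.array([np.cos(np.pi / 4), np.sin(np.pi / 4)])
P_B = np.outer(b, b)

def seq_prob(first, second, state):
    """Probability of answering 'yes' to both questions, asked in this order."""
    return np.linalg.norm(second @ first @ state) ** 2

p_ab = seq_prob(P_A, P_B, psi)  # A asked first, then B
p_ba = seq_prob(P_B, P_A, psi)  # B asked first, then A

print(round(p_ab, 4), round(p_ba, 4))  # the two orders disagree
```

Because `P_A @ P_B != P_B @ P_A`, the two sequential probabilities differ, which is the formal counterpart of the question-order effects cited against classical probability models.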
This squib attempts to constrain semantic theories of agree wh constructions by broadening the data set and collecting naive speakers’ intuitions. Overall, our data suggest relatively permissive truth-conditions for these constructions. They also suggest a previously undiscussed presupposition for agree wh and indicate that agree wh is not straightforwardly reducible to agree that. Although some accounts suggest differences in truth conditions among different asymmetrical agree with constructions and symmetrical agree constructions, we do not find any indication of such truth-conditional distinctions. In the course of our exploration of the data, we offer a new approach to distinguishing between truth, falsity, and presuppositional failure.
The attempt to employ quantum principles for modeling cognition has enabled the introduction of several new concepts in psychology, such as the uncertainty principle, incompatibility, entanglement, and superposition. For many commentators, this is an exciting opportunity to question existing formal frameworks (notably classical probability theory) and explore what is to be gained by employing these novel conceptual tools. This is not to say that major empirical challenges are not there. For example, can we definitely prove the necessity for quantum, as opposed to classical, models? Can the distinction between compatibility and incompatibility inform our understanding of differences between human and nonhuman cognition? Are quantum models less constrained than classical ones? Does incompatibility arise as a limitation, to avoid the requirements of the principle of unicity, or is it an inherent (or essential?) characteristic of intelligent thought? For everyday judgments, do quantum principles allow more accurate prediction than classical ones? Some questions can be confidently addressed within existing quantum models. A definitive resolution of others will have to await further work. What is clear is that the consideration of quantum cognitive models has enabled a new focus on a range of debates about fundamental aspects of cognition.
Quantum cognition research applies abstract, mathematical principles of quantum theory to inquiries in cognitive science. It differs fundamentally from alternative speculations about quantum brain processes. This topic presents new developments within this research program. In the introduction to this topic, we try to answer three questions: Why apply quantum concepts to human cognition? How is quantum cognitive modeling different from traditional cognitive modeling? What cognitive processes have been modeled using a quantum account? In addition, a brief introduction to quantum probability theory and a concrete example are provided to illustrate how a quantum cognitive model can be developed to explain paradoxical empirical findings in the psychological literature.
Understanding cognitive processes with a formal framework necessitates some limited, internal prescriptive normativism. This is because it is not possible to endorse the psychological relevance of some axioms in a formal framework, but reject that of others. The empirical challenge then becomes identifying the remit of different formal frameworks, an objective consistent with the descriptivism Elqayam & Evans (E&E) advocate.
In this paper I explain the difference between a book and a document according to Levinas. Then I explain why, although he was very reluctant to read "cabalistic documents," he was interested in R. Haïm of Volozin's book, Nefesh HaHaïm, and even praised the French translation of the book as an event worth the attention of Jews, Christians, and Muslims. The main point concerns his understanding of God "from our viewpoint".
In lieu of an abstract, here is a brief excerpt of the content: Interpreting from the Interstices: The Role of Justice in a Liberal Democracy—Lessons from Michael Walzer and Emmanuel Levinas, by Nicholas R. Brown. 1. As anyone who is familiar with more recent theological debate can attest, the appraisal of the liberal democratic tradition has undergone a radical reevaluation in the wake of Stanley Hauerwas’s and Alasdair MacIntyre’s scathing critiques. As a result of their blistering assault, religious ethicists and philosophers now find themselves operating within a discursive milieu that is almost the photo negative of the one they previously inhabited. For what has followed After Virtue and After Christendom is a situation in which compliance with liberal democratic norms is now perceived as actively inveighing against justice rather than as an integral prerequisite to its pursuit. There are cracks, however, beginning to emerge in the MacIntyre/Hauerwas edifice. For what is becoming disputed, and increasingly so, among a growing chorus of religious ethicists and philosophers is whether their critical reading of liberal democracy offers the most helpful or even the most biblical way to think through its own moral dimensions as well as those undergirding its relationship with justice. It is the emergence of these criticisms that forms the basis for this essay. For the thesis that I wish to advance below is that liberal democracy offers religious ethicists and philosophers alike a moral framework and vocabulary from which it is possible to comprehend and enact the normative precepts encapsulated within a biblical understanding of justice.
Accordingly, some aspects of my argument will build upon the rhetorical trajectories that have already been charted by the ethicists and philosophers I mention above. What distinguishes my approach, however, is that I will proceed from a more focused examination of some of the ethical and political undercurrents found within contemporary Jewish thought. More specifically, I want to probe the ethical philosophy of Emmanuel Levinas and the political philosophy of Michael Walzer, for I believe the juxtapositional methodology of interpretation which informs each of their perspectives is illustrative of an interstitial hermeneutic that helps further illuminate the moral compatibility between biblical and democratic accounts of justice. 2. By now, MacIntyre’s and Hauerwas’s critiques of the liberal democratic tradition have been so thoroughly documented, discussed, and dissected that a review of their perspectives cannot help but have a certain pleonastic quality. Probably the most significant and disturbing problem that MacIntyre and Hauerwas see belying the liberal democratic tradition stems from its conception of time and space, or more precisely, its lack thereof. For what they discover upon a more careful probing of its moral and epistemological underpinnings is a pursuit of transcendence not dissimilar to Gnostic metaphysics. In the case of liberalism, however, the existential encumbrances to be excised are not corporeal and carnal in nature, but historical and social. Such conditionalities, surmise liberal theorists, are so shot through with conceptual prejudices that they comprise an interpretative straitjacket that vitiates against the kind of objectivity necessary to engage in a nonparochial process of moral and political discernment.
For it is precisely this ability “to be able to stand back from any and every situation in which one is involved, from any and every characteristic that one may possess, and to pass judgment on it from a purely universal and abstract point of view that is totally detached from all social particularity” which MacIntyre sees as constituting “the essence of moral agency” of modern liberalism (AV 31–32). Therefore, “liberalism is successful,” maintains Hauerwas, “exactly because... [it] provide[s] that philosophical account of society designed to deal with” the moral and political implications such a social and historical denuding portends, namely “a system of rules that will constitute procedures for resolving disputes as they pursue their various interests.” However, what liberalism defines as success MacIntyre and Hauerwas see as anything but. Instead, both judge its “system of rules” to be an insidious prescription for a particularly virulent form of moral nihilism and political bankruptcy. For by stripping moral and political discourse of their historical and social referents, liberalism, ironically and tragically, eviscerates itself of the very heuristic and discursive practices necessary to make those...
When constrained by limited resources, how do we choose axioms of rationality? The target article relies on Bayesian reasoning that encounters serious tractability problems. We propose another axiomatic foundation: quantum probability theory, which provides less complex and more comprehensive descriptions. More generally, defining rationality in terms of axiomatic systems misses a key issue: rationality must be defined by humans facing vague information.
Background: The ARRIVE guidelines are widely endorsed but compliance is limited. We sought to determine whether journal-requested completion of an ARRIVE checklist improves full compliance with the guidelines. Methods: In a randomised controlled trial, manuscripts reporting in vivo animal research submitted to PLOS ONE were randomly allocated either to requested completion of an ARRIVE checklist or to current standard practice. Authors, academic editors, and peer reviewers were blinded to group allocation. Trained reviewers performed outcome adjudication in duplicate by assessing manuscripts against an operationalised version of the ARRIVE guidelines that consists of 108 items. Our primary outcome was the between-group difference in the proportion of manuscripts meeting all ARRIVE guideline checklist subitems. Results: We randomised 1689 manuscripts, of which 1269 were sent for peer review and 762 were accepted for publication. No manuscript in either group achieved full compliance with the ARRIVE checklist. Details of animal husbandry was the only subitem to show improved reporting, with the proportion of compliant manuscripts rising from 52.1% in the control group to 74.1% in the intervention group. Conclusions: These results suggest that altering the editorial process to include requests for a completed ARRIVE checklist is not enough to improve compliance with the ARRIVE guidelines. Other approaches, such as more stringent editorial policies or a targeted approach to key quality items, may promote improvements in reporting.
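As a rough illustration of the primary-outcome arithmetic behind a between-group comparison of proportions like the one above, the sketch below contrasts the reported compliance figures for the animal-husbandry subitem (52.1% vs 74.1%). The group sizes are hypothetical placeholders, not figures from the study, so the normal-approximation confidence interval shown is a sketch of the method only, not a reported result.

```python
import math

# Reported compliance for the animal-husbandry subitem (figures from the
# abstract); the group sizes below are hypothetical, for illustration only.
p_control, p_intervention = 0.521, 0.741
n_control, n_intervention = 380, 382  # assumed placeholder sample sizes

# Between-group difference in proportions (the style of primary outcome used).
diff = p_intervention - p_control

# Normal-approximation 95% CI for the difference of two proportions.
se = math.sqrt(p_control * (1 - p_control) / n_control
               + p_intervention * (1 - p_intervention) / n_intervention)
ci_lo, ci_hi = diff - 1.96 * se, diff + 1.96 * se

print(f"difference = {diff:.3f}, 95% CI ({ci_lo:.3f}, {ci_hi:.3f})")
```

With any reasonably large assumed group sizes, the interval excludes zero, matching the abstract's characterisation of this subitem as the only one showing improvement.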