Altruism and cynicism are two fundamental algorithms of moral decision-making. Both derive from the evolution of cooperative behavior and reciprocal altruism, and from the need to avoid being taken advantage of. Rushton (1986) developed a self-report scale to measure altruism; however, no scale to measure cynicism has been developed for use in ethics research. Following a discussion of reciprocal altruism and cynicism, this article presents an 11-item self-report scale to measure cynicism, developed and validated using a sample of 271 customer-service and sales personnel.
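The abstract reports the sample size but not the reliability statistics. As a generic illustration of the kind of internal-consistency check an 11-item scale would typically undergo, the sketch below computes Cronbach's alpha in Python on randomly generated stand-in responses; the data, and therefore the resulting alpha, are placeholders rather than the study's results.

    # Generic illustration, not the study's analysis: Cronbach's alpha for an
    # 11-item scale, computed on randomly generated stand-in responses from 271
    # hypothetical respondents.
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    rng = np.random.default_rng(0)
    responses = rng.integers(1, 6, size=(271, 11))  # placeholder 5-point Likert data
    print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")

With random data the statistic will sit near zero; by convention, a value of roughly .70 or higher is taken as acceptable evidence of internal consistency for a scale of this length.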
The Ontology for Biomedical Investigations (OBI) is an ontology that provides terms with precisely defined meanings to describe all aspects of how investigations in the biological and medical domains are conducted. OBI re-uses ontologies that provide a representation of biomedical knowledge from the Open Biological and Biomedical Ontologies (OBO) project and adds the ability to describe how this knowledge was derived. We here describe the state of OBI and several applications that are using it, such as adding semantic expressivity to existing databases, building data entry forms, and enabling interoperability between knowledge resources. OBI covers all phases of the investigation process, such as planning, execution and reporting. It represents information and material entities that participate in these processes, as well as roles and functions. Prior to OBI, it was not possible to use a single internally consistent resource that could be applied to multiple types of experiments for these applications. OBI has made this possible by creating terms for entities involved in biological and medical investigations and by importing parts of other biomedical ontologies such as GO, Chemical Entities of Biological Interest (ChEBI) and Phenotype Attribute and Trait Ontology (PATO) without altering their meaning. OBI is being used in a wide range of projects covering genomics, multi-omics, immunology, and catalogs of services. OBI has also spawned other ontologies (Information Artifact Ontology) and methods for importing parts of ontologies (Minimum information to reference an external ontology term (MIREOT)). The OBI project is an open cross-disciplinary collaborative effort, encompassing multiple research communities from around the globe. To date, OBI has created 2366 classes and 40 relations along with textual and formal definitions. The OBI Consortium maintains a web resource providing details on the people, policies, and issues being addressed in association with OBI.
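To make the reuse concrete, here is a minimal sketch (not taken from the paper or OBI's documentation) of loading the OBI release in Python with rdflib and counting its owl:Class terms; it assumes the standard OBO Foundry PURL and network access, and the count will include terms imported from GO, ChEBI, PATO, and other sources alongside OBI's own classes.

    # Minimal sketch: load the OBI release file and count its OWL classes.
    # Assumes the standard OBO Foundry PURL; the full file is large, so parsing
    # may take a while, and the tally covers imported terms as well.
    from rdflib import Graph
    from rdflib.namespace import OWL, RDF

    g = Graph()
    g.parse("http://purl.obolibrary.org/obo/obi.owl", format="xml")  # RDF/XML release

    classes = set(g.subjects(RDF.type, OWL.Class))
    print(f"owl:Class terms in the release file: {len(classes)}")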
Peer review is a widely accepted instrument for raising the quality of science. Peer review limits the enormous unstructured influx of information and the sheer amount of dubious data, which in its absence would plunge science into chaos. In particular, peer review offers the benefit of eliminating papers that suffer from poor craftsmanship or methodological shortcomings, especially in the experimental sciences. However, we believe that peer review is not always appropriate for the evaluation of controversial hypothetical science. We argue that the process of peer review can be prone to bias towards ideas that affirm the prior convictions of reviewers and against innovation and radical new ideas. Innovative hypotheses are thus highly vulnerable to being "filtered out" or made to accord with conventional wisdom by the peer review process. Consequently, having introduced peer review, the Elsevier journal Medical Hypotheses may be unable to continue its tradition as a radical journal allowing discussion of improbable or unconventional ideas. Hence we conclude by asking the publisher to consider re-introducing the system of editorial review to Medical Hypotheses.
1. WHAT IS ARTIFICIAL INTELLIGENCE? One of the fascinating aspects of the field of artificial intelligence (AI) is that the precise nature of its subject ...
Historian James H. Jones published the first edition of Bad Blood, the definitive history of the Tuskegee Syphilis Experiment, in 1981. Its clear-eyed examination of that research and its implications remains a bioethics classic, and the 30-year anniversary of its publication served as the impetus for the reexamination of research ethics that this symposium presents. Recent revelations about the United States Public Health Service study that infected mental patients and prisoners in Guatemala with syphilis in the late 1940s in order to determine the efficacy of treatment represent only one of the many attestations to the persistence of ongoing, critical, and underaddressed issues in research ethics that Bad Blood first explored. Those issues include, but are not limited to: the complex and contested matters of the value of a given research question, the validity of the clinical trial designed to address it, and the priorities of science.
This paper pursues the question: To what extent does the propensity approach to probability contribute to plausible solutions to various anomalies which occur in quantum mechanics? The position I shall defend is that, of the three interpretations — the frequency, the subjective, and the propensity — only the third accommodates the possibility, in principle, of providing a realistic interpretation of ontic indeterminism. If these considerations are correct, then they lend support to Popper's contention that the propensity construction tends to remove (at least some of) the mystery from quantum phenomena.
Perhaps no technological innovation has so dominated the second half of the twentieth century as has the introduction of the programmable computer. It is quite difficult if not impossible to imagine how contemporary affairs—in business and science, communications and transportation, governmental and military activities, for example—could be conducted without the use of computing machines, whose principal contribution has been to relieve us of the necessity for certain kinds of mental exertion. The computer revolution has reduced our mental labors by means of these machines, just as the Industrial Revolution reduced our physical labor by means of other machines.
An approach to inference to the best explanation is developed that integrates a Popperian conception of natural laws together with a modified Hempelian account of explanation, on the one hand, and Hacking's law of likelihood (in its nomic guise), on the other, providing a robust abductivist model of science that appears to overcome the obstacles that confront its inductivist, deductivist, and hypothetico-deductivist alternatives. This philosophy of science clarifies and illuminates some fundamental aspects of ontology and epistemology, especially concerning the relations between frequencies and propensities. Among the most important elements of this conception is the central role of degrees of nomic expectability in explanation, prediction, and inference, for which this investigation provides a theoretical defense.
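For orientation, Hacking's law of likelihood is standardly stated as follows; the "nomic guise" at issue here would read the conditional probabilities as lawful, propensity-based probabilities rather than relative frequencies. This rendering is an editorial gloss rather than a quotation from the paper:

    \[
    e \text{ favors } h_1 \text{ over } h_2 \iff \Pr(e \mid h_1) > \Pr(e \mid h_2),
    \]

with the likelihood ratio \Pr(e \mid h_1)/\Pr(e \mid h_2) measuring the degree of support.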
Technological revolutions are dissected into three stages: the introduction stage, the permeation stage, and the power stage. The information revolution is a primary example of this tripartite model. A hypothesis about ethics is proposed, namely, that ethical problems increase as technological revolutions progress toward and into the power stage. Genetic technology, nanotechnology, and neurotechnology are good candidates for impending technological revolutions. Two reasons favoring their candidacy as revolutionary are their high degree of malleability and their convergence. Assuming the emerging technologies develop into mutually enabling revolutionary technologies, we will need better ethical responses to cope with them. Some suggestions are offered about how our approach to ethics might be improved.
Globalization has increased the need for managers (and future managers) to predict the potential for country corruption. This study examines the relationship between Hofstede's cultural dimensions and how country corruption is perceived. Power distance, individualism, and masculinity were found to explain a significant portion of the variance in perceived corruption. A significant portion of a country's risk, trade flow with the U.S.A., foreign investment, and per capita income was explained by perceived corruption.
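As a hedged illustration of the kind of analysis described (the country data and coefficients below are randomly generated placeholders, not the study's figures), the sketch regresses a perceived-corruption score on the three Hofstede indices by ordinary least squares using Python's statsmodels; the model's R-squared corresponds to the "portion of the variance" language in the abstract.

    # Hedged sketch, not the study's data or exact model: OLS regression of a
    # perceived-corruption score on power distance, individualism, and
    # masculinity. All numbers are randomly generated placeholders.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 50                                  # hypothetical number of countries
    hofstede = rng.normal(size=(n, 3))      # columns: power distance, individualism, masculinity
    true_beta = np.array([0.6, -0.5, 0.3])  # illustrative effect sizes only
    corruption = hofstede @ true_beta + rng.normal(scale=0.5, size=n)

    model = sm.OLS(corruption, sm.add_constant(hofstede)).fit()
    print(model.summary())                  # R-squared = share of variance explained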
The notion of program verification appears to trade upon an equivocation. Algorithms, as logical structures, are appropriate subjects for deductive verification. Programs, as causal models of those structures, are not. The success of program verification as a generally applicable and completely reliable method for guaranteeing program performance is not even a theoretical possibility.
This sequel to the widely read Zen and the Brain continues James Austin's explorations into the key interrelationships between Zen Buddhism and brain research. In Zen-Brain Reflections, Austin, a clinical neurologist, researcher, and Zen practitioner, examines the evolving psychological processes and brain changes associated with the path of long-range meditative training. Austin draws not only on the latest neuroscience research and new neuroimaging studies but also on Zen literature and his personal experience with alternate states of consciousness. Zen-Brain Reflections takes up where the earlier book left off. It addresses such questions as: How do placebos and acupuncture change the brain? Can neuroimaging studies localize the sites where our notions of self arise? How can the latest brain imaging methods monitor meditators more effectively? How do long years of meditative training plus brief enlightened states produce pivotal transformations in the physiology of the brain? In many chapters testable hypotheses suggest ways to correlate normal brain functions and meditative training with the phenomena of extraordinary states of consciousness. After briefly introducing the topic of Zen and describing recent research into meditation, Austin reviews the latest studies on the amygdala, frontotemporal interactions, and paralimbic extensions of the limbic system. He then explores different states of consciousness, both the early superficial absorptions and the later, major "peak experiences." This discussion begins with the states called kensho and satori and includes a fresh analysis of their several different expressions of "oneness." He points beyond the still more advanced states toward that rare ongoing stage of enlightenment that is manifest as "sage wisdom." Finally, with reference to a delayed "moonlight" phase of kensho, Austin envisions novel links between migraines and metaphors, moonlight and mysticism. The Zen perspective on the self and consciousness is an ancient one. Readers will discover how relevant Zen is to the neurosciences, and how each field can illuminate the other.
The idea that human thought requires the execution of mental algorithms provides a foundation for research programs in cognitive science, which are largely based upon the computational conception of language and mentality. Consideration is given to recent work by Penrose, Searle, and Cleland, who supply various grounds for disputing computationalism. These grounds in turn qualify as reasons for preferring a non-computational, semiotic approach, which can account for them as predictable manifestations of a more adequate conception. Thinking does not ordinarily require the execution of mental algorithms, which appears to be at best no more than one rather special kind of thinking.
Luciano Floridi (2003) offers a theory of information as a "strongly semantic" notion, according to which information encapsulates truth, thereby making truth a necessary condition for a sentence to qualify as "information". While Floridi provides an impressive development of this position, the aspects of his approach of greatest philosophical significance are its foundations rather than its formalization. He rejects the conception of information as meaningful data, yet at least three theses that conception entails – that information can be false; that tautologies are information; and that "It is true that ..." is non-redundant – appear to be defensible. This inquiry offers various logical, epistemic, and ordinary-language grounds to demonstrate that an account of his kind is too narrow to be true and that its adoption would hopelessly obscure crucial differences between information, misinformation, and disinformation.
Computer and information ethics, as well as other fields of applied ethics, need ethical theories which coherently unify deontological and consequentialist aspects of ethical analysis. The proposed theory of just consequentialism emphasizes consequences of policies within the constraints of justice. This makes just consequentialism a practical and theoretically sound approach to ethical problems of computer and information ethics.