In this thorough compendium, nineteen accomplished scholars explore the values they find inherent in the world, as well as their nature and relevance, through the thought of Frederick Ferré. These essays, informed by Ferré's insights and drawing on manifold perspectives (ethics, philosophy, theology, and environmental studies), advance an ambitious challenge to current intellectual and scholarly fashions.
There is a growing literature on the concept of e-trust and on the feasibility and advisability of “trusting” artificial agents. In this paper we present an object-oriented model for thinking about trust in both face-to-face and digitally mediated environments. We review important recent contributions to this literature on e-trust in conjunction with presenting our model. We identify three important types of trust interactions and examine trust from the perspective of a software developer. Too often, the primary focus of research in this area has been on the artificial agents and the humans they may encounter after they are deployed. We contend that the humans who design, implement, and deploy the artificial agents are crucial to any discussion of e-trust and to understanding the distinctions among the concepts of trust, e-trust, and face-to-face trust.
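The abstract does not spell out the object-oriented model itself, so the following Python sketch is purely hypothetical: it distinguishes face-to-face trust from digitally mediated e-trust and records the developer as a party to every e-trust interaction, in line with the paper's contention. All class and attribute names are invented for illustration.

# Hypothetical sketch of an object-oriented trust model; all names invented.
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    is_artificial: bool = False

@dataclass
class TrustInteraction:
    trustor: Agent
    trustee: Agent

class FaceToFaceTrust(TrustInteraction):
    """Trust between co-present humans, with no mediating artifact."""

@dataclass
class ETrust(TrustInteraction):
    """Digitally mediated trust; the developer of the mediating
    artifact is treated as an implicit third party."""
    developer: Agent = None

# Example: a user extending e-trust to a deployed artificial agent.
user = Agent("user")
bot = Agent("recommender", is_artificial=True)
dev = Agent("developer")
interaction = ETrust(trustor=user, trustee=bot, developer=dev)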
What is nontrivial digital computation? It is the processing of discrete data through discrete state transitions in accordance with finite instructional information. The motivation for our account is that many previous attempts to answer this question are inadequate, and that this account accords with the common intuition that digital computation is a type of information processing. We use the notion of reachability in a graph to defend this characterization in memory-based systems and underscore the importance of instructional information for digital computation. We argue that our account evaluates positively against adequacy criteria for accounts of computation.
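The appeal to reachability can be made concrete with a small sketch: model a memory-based system as a directed graph whose nodes are discrete states and whose edges are the transitions licensed by a finite transition table (the finite instructional information), then ask which states are reachable from a given start state. The states and table below are invented for illustration.

# Minimal sketch: a memory-based system as a directed state graph.
from collections import deque

# Finite instructional information: each state maps an input symbol
# to a successor state.
transitions = {
    "s0": {"0": "s0", "1": "s1"},
    "s1": {"0": "s2", "1": "s0"},
    "s2": {"0": "s2", "1": "s2"},  # absorbing state
}

def reachable(start):
    """Return every state reachable from `start` via the table."""
    seen, frontier = {start}, deque([start])
    while frontier:
        state = frontier.popleft()
        for successor in transitions[state].values():
            if successor not in seen:
                seen.add(successor)
                frontier.append(successor)
    return seen

print(reachable("s0"))  # {'s0', 's1', 's2'}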
In their important paper “Autonomous Agents”, Floridi and Sanders use “levels of abstraction” to argue that computers are or may soon be moral agents. In this paper we use the same levels of abstraction to illuminate differences between human moral agents and computers. In their paper, Floridi and Sanders contributed definitions of autonomy, moral accountability and responsibility, but they did not explore deeply some essential questions that need to be answered by computer scientists who design artificial agents. One such question is, “Can an artificial agent that changes its own programming become so autonomous that the original designer is no longer responsible for the behavior of the artificial agent?” To explore this question, we distinguish between LoA1 (the user view) and LoA2 (the designer view) by exploring the concepts of unmodifiable, modifiable and fully modifiable tables that control artificial agents. We demonstrate that an unmodifiable table, when viewed at LoA2, distinguishes an artificial agent from a human one. This distinction supports our first counter-claim to Floridi and Sanders, namely, that such an agent is not a moral agent, and the designer bears full responsibility for its behavior. We also demonstrate that even if there is an artificial agent with a fully modifiable table capable of learning* and intentionality* that meets the conditions set by Floridi and Sanders for ascribing moral agency to an artificial agent, the designer retains strong moral responsibility.
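The unmodifiable/modifiable distinction can be illustrated in code. The Python sketch below is hypothetical (the table contents and method names are invented): viewed at LoA2, the designer sees whether the table controlling the agent is frozen at deployment or whether the agent can rewrite its own entries, which is where the question of the designer's continuing responsibility bites.

# Hypothetical illustration of control tables at the designer's view (LoA2).
from types import MappingProxyType

class TableDrivenAgent:
    def __init__(self, table, modifiable=False):
        # An unmodifiable table is frozen read-only at design time.
        self.table = dict(table) if modifiable else MappingProxyType(dict(table))
        self.modifiable = modifiable

    def act(self, percept):
        return self.table.get(percept, "do_nothing")

    def learn(self, percept, new_action):
        """Only a modifiable agent may rewrite its own table."""
        if not self.modifiable:
            raise PermissionError("table fixed by the designer")
        self.table[percept] = new_action

fixed = TableDrivenAgent({"obstacle": "stop"})            # designer fixes all behavior
adaptive = TableDrivenAgent({"obstacle": "stop"}, True)   # agent can change its own rules
adaptive.learn("obstacle", "swerve")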
In this paper we address the question of when a researcher is justified in describing his or her artificial agent as demonstrating ethical decision-making. The paper is motivated by the amount of research being done that attempts to imbue artificial agents with expertise in ethical decision-making. It seems clear that computing systems make decisions, in that they make choices between different options; and there is scholarship in philosophy that addresses the distinction between ethical decision-making and general decision-making. Essentially, the qualitative difference between ethical decisions and general decisions is that ethical decisions must be part of the process of developing ethical expertise within an agent. We use this distinction in examining publicity surrounding a particular experiment in which a simulated robot attempted to safeguard simulated humans from falling into a hole. We conclude that any suggestions that this simulated robot was making ethical decisions were misleading.
This paper is a response to some recent discussions of many-minds interpretations in the philosophical literature. After an introduction to the many-minds idea, the complexity of quantum states for macroscopic objects is stressed. Then it is proposed that a characterization of the physical structure of observers is a proper goal for physical theory. It is argued that an observer cannot be defined merely by the instantaneous structure of a brain, but that the history of the brain's functioning must also be taken into account. Next the nature of probability in many-minds interpretations is discussed and it is suggested that only discrete probability models are needed. The paper concludes with brief comments on issues of actuality and identity over time.
To many who develop and use free software, the GNU General Public License represents an embodiment of the meaning of free software. In this paper we examine the definition and meaning of free software in the context of three events surrounding the GNU General Public License. We use a case involving the GPU software project to establish the importance of Freedom 0 in the meaning of free software. We analyze version 3 of the GNU General Public License and conclude that although a credible case can be made that the added restrictions are consistent with the definition of free software, the case requires subtle arguments. Strong arguments against the added restrictions are less subtle, and may therefore be more convincing to many users and developers. We also analyze the Affero General Public License and conclude that it is inconsistent with the definition of free software.
Incidental findings (IFs) of potential medical significance are seen in approximately 5-8 percent of asymptomatic subjects and 16 percent of symptomatic subjects participating in large computed tomography colonography (CTC) studies, with the incidence varying further by CT acquisition technique. While most CTC research programs have a well-defined plan to detect and disclose IFs, such plans are largely communicated only verbally. Written consent documents should also inform subjects of how IFs of potential medical significance will be detected and reported in CTC research studies.
A civic science curriculum is advocated. We discuss practical mechanisms for (and highlight the possible benefits of) addressing the relationship between scientific knowledge and civic responsibility coextensively with rigorous scientific content. As a strategy, we suggest an in-course treatment of well-known (and relevant) historical and contemporary controversies among scientists over science policy or the uses of science. The scientific content of the course is used to understand the controversy and to inform the debate, while allowing students to see the role of scientists in shaping public perceptions of science and the value of scientific inquiry, discoveries, and technology in society. The examples of the activism of Linus Pauling, Alfred Nobel, and Joseph Rotblat as scientists and engaged citizens are cited. We discuss the role of science professors in informing the social conscience of students and consider ways in which a treatment of the function of science in society may coherently find a meaningful space in a science curriculum at the college level. Strategies for helping students to recognize early the crucial contributions that science can make in informing public policy and global governance are discussed.
This paper analyzes certain technical details of Floridi’s Theory of Strongly Semantic Information. It provides a clarification regarding desirable properties of degrees of informativeness functions by rejecting three of Floridi’s original constraints and proposing a replacement constraint. Finally, the paper briefly explores the notion of quantities of inaccuracy and offers an analysis that mimics Floridi’s analysis of quantities of vacuity.
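For orientation, and recalled from Floridi's 2004 outline of the theory rather than from this paper's text (so treat the details as an assumption): each proposition σ is assigned a degree of discrepancy f(σ) ∈ [−1, 1], with negative values marking inaccuracy and positive values vacuity, and informativeness falls off quadratically with discrepancy:

\[
  \iota(\sigma) \;=\; 1 - f(\sigma)^{2},
  \qquad
  \upsilon(\sigma) \;=\; \int_{0}^{f(\sigma)} \iota(x)\,dx
  \;=\; f(\sigma) - \frac{f(\sigma)^{3}}{3},
\]

where υ(σ) is the quantity of vacuity of a vacuous σ. The quantities of inaccuracy explored in the paper mimic this construction on the negative half of the interval.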
In his long 1957 paper, “The Theory of the Universal Wave Function”, Hugh Everett III made some significant preliminary steps towards the application and generalization of Shannon’s information theory to quantum mechanics. In the course of doing so, he conjectured that, for a given wavefunction on a compound space, the Schmidt decomposition maximises the correlation between subsystem bases. This is proved here.
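For context, the Schmidt decomposition can be stated in standard notation (the notation is not taken from the paper): any pure state |ψ⟩ of a compound system factors as

\[
  |\psi\rangle \;=\; \sum_i \sqrt{p_i}\; |a_i\rangle_A \otimes |b_i\rangle_B,
  \qquad p_i \ge 0, \quad \sum_i p_i = 1,
\]

where \(\{|a_i\rangle_A\}\) and \(\{|b_i\rangle_B\}\) are orthonormal bases of the two subsystems. Everett's conjecture, in these terms, is that among all choices of local bases the Schmidt bases maximise the correlation between the outcomes of the corresponding subsystem measurements.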
This commentary on Fresco's article "Information processing as an account of concrete digital computation" illuminates the two intertwined roles that the definition of the term "information" plays in Fresco's analysis. It provides an analysis of the notion of actualizing control in information processing. The key point made is that not all control information in common computational devices can be processed.
The original development of the formalism of quantum mechanics involved the study of isolated quantum systems in pure states. Such systems fail to capture important aspects of the warm, wet, and noisy physical world which can better be modelled by quantum statistical mechanics and local quantum field theory using mixed states of continuous systems. In this context, we need to be able to compute quantum probabilities given only partial information. Specifically, suppose that B is a set of operators. This set need not be a von Neumann algebra. Simple axioms are proposed which allow us to identify a function which can be interpreted as the probability, per unit trial of the information specified by B, of observing the (mixed) state of the world restricted to B to be σ when we are given ρ – the restriction to B of a prior state. This probability generalizes the idea of a mixed state (ρ) as being a sum of terms (σ) weighted by probabilities. The unique function satisfying the axioms can be defined in terms of the relative entropy. The analogous inference problem in classical probability would be a situation where we have some information about the prior distribution, but not enough to determine it uniquely. In such a situation in quantum theory, because only what we observe should be taken to be specified, it is not appropriate to assume the existence of a fixed, definite, unknown prior state, beyond the set B about which we have information. The theory was developed for the purposes of a fairly radical attack on the interpretation of quantum theory, involving many-worlds ideas and the abstract characterization of observers as finite information-processing structures, but deals with quantum inference problems of broad generality.
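The relative entropy invoked near the end is presumably the standard quantum relative entropy (the abstract does not give the exact form used in the paper):

\[
  S(\sigma \,\|\, \rho) \;=\; \operatorname{tr}\!\bigl[\sigma(\log\sigma - \log\rho)\bigr].
\]

The classical analogue mentioned in the abstract supplies the intuition: by Sanov's theorem, the probability that n independent trials from a prior ρ yield empirical distribution σ decays as \(e^{-n D(\sigma\|\rho)}\), so \(e^{-S(\sigma\|\rho)}\) is a natural reading of a probability per unit trial.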
As software developers design artificial agents (AAs), they often have to wrestle with complex issues, issues that have philosophical and ethical importance. This paper addresses two key questions at the intersection of philosophy and technology: What is deception? And when is it permissible for the developer of a computer artifact to be deceptive in the artifact’s development? While exploring these questions from the perspective of a software developer, we examine the relationship of deception and trust. Are developers using deception to gain our trust? Is trust generated through technological “enchantment” warranted? Next, we investigate more complex questions of how deception that involves AAs differs from deception that only involves humans. Finally, we analyze the role and responsibility of developers in trust situations that involve both humans and AAs.
As exome and genome sequencing move into clinical application, questions surround how to elicit consent and handle potential return of individual genomic results. This study analyzes nine consent forms used in NIH-funded sequencing studies. Content analysis reveals considerable heterogeneity, including in how the forms define results that may be returned, identify potential benefits and risks of return, protect privacy, address placement of results in the medical record, and handle data-sharing. In response to this lack of consensus, we offer recommendations.