Public acceptance is often seen as a key reason why water-recycling technology is rejected. A common assumption is that projects fail because the general public cannot comprehend specialist information about risk, and that if the public were better informed, they would accept change more readily. This article suggests that rhetoric about acceptance is counterproductive in progressing sustainability, as it does not address issues relating to institutional arrangements and reinforces a dichotomy between expert and lay groups. Instead, it is argued that institutional change is needed to build opportunities for constructive public engagement. The failure to implement sustainable water use through recycling can be understood as the result of several factors, including present cost structures for water, institutional conservatism, administrative fragmentation, and inadequate involvement of communities in planning. Achieving sustainable water use through recycling may require better coordination between agencies and integrated government policies.
It seems that the reception of Conrad Hal Waddington’s work never really gained traction in mainstream biology. This paper, offering a transdisciplinary survey of approaches that use his epigenetic landscape images, argues that (i) Waddington’s legacy is much broader than is usually recognized: it is widespread across the life sciences (e.g. stem cell biology, developmental psychology and cultural anthropology). In addition, I show that (ii) Waddington’s images play as yet unrecognized heuristic roles within his own work, especially in model building and theory formation. These different methodological facets envisioned by Waddington are used as a natural framework to analyze and classify the ways epigenetic landscape images are used in post-Waddingtonian ‘landscape approaches’. This evaluation of Waddington’s pictorial legacy reveals highly diverse lines of tradition in the life sciences that are deeply rooted in Waddington’s methodological work.
Our ability to rapidly distinguish new from already stored information is important for behavior and decision making, but the underlying processes remain unclear. Here, we tested the hypothesis that contextual cues lead to a preselection of information and, therefore, faster recognition. Specifically, on the basis of previous modeling work, we hypothesized that recognition time depends on the amount of relevant content stored in long-term memory, i.e., set size, and we explored possible age-related changes in this relationship in older humans. In our paradigm, subjects learned by heart four different word lists written in different colors. On the day of testing, a color cue indicated that a subsequent word would come, with a probability of 50%, either from the corresponding list or from a list of new words. The old/new status of the word had to be indicated via button press. As a main finding, we show in a sample of n = 49 subjects, comprising 26 younger and 23 older humans, that response times increased linearly and logarithmically as a function of set size in both age groups. Conversely, corrected hit rates decreased as a function of set size, with no statistically significant differences between the two age groups. As such, our findings provide empirical evidence that contextual information can lead to a preselection of relevant information stored in long-term memory to promote efficient recognition, possibly through cyclical top-down and bottom-up processing.
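The reported set-size effect on response times can be illustrated with a brief sketch. The snippet below is not the authors' analysis; it simply fits hypothetical mean response times against set size, once linearly and once against log set size, to show how the two functional forms mentioned in the abstract might be compared. All numbers and variable names are illustrative assumptions.

```python
import numpy as np

# Hypothetical set sizes (words per learned list) and mean response times (ms).
# These values are illustrative only, not data from the study.
set_sizes = np.array([10, 20, 40, 80])
mean_rt = np.array([620.0, 655.0, 690.0, 730.0])

def r_squared(y, y_hat):
    """Coefficient of determination for a fitted model."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Linear model: RT = a + b * set_size
b_lin, a_lin = np.polyfit(set_sizes, mean_rt, 1)
rt_lin = a_lin + b_lin * set_sizes

# Logarithmic model: RT = a + b * log(set_size)
b_log, a_log = np.polyfit(np.log(set_sizes), mean_rt, 1)
rt_log = a_log + b_log * np.log(set_sizes)

print(f"linear fit:      R^2 = {r_squared(mean_rt, rt_lin):.3f}")
print(f"logarithmic fit: R^2 = {r_squared(mean_rt, rt_log):.3f}")
```

Comparing fit quality of the two models is one simple way to ask whether recognition time grows with the raw amount of stored content or with its logarithm.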
Studying personal identity, the continuity and sameness of persons across lifetimes, is notoriously difficult, and competing conceptualizations exist within philosophy and psychology. Personal reidentification, linking persons between points in time, is a fundamental step in allocating merit and blame and in assigning rights and privileges. Based on Nozick’s closest continuer theory, we develop a theoretical framework that explicitly invites a meaningful empirical approach and offers a constructive, integrative solution to current disputes about appropriate experiments. Following Nozick, reidentification involves judging continuers on a metric of continuity and choosing the continuer with the highest acceptable value on this metric. We explore both the metric and its implications for personal identity. Since James, academic theories have variously attributed personal identity to the continuity of memories, psychology, bodies, social networks, and possessions. In our experiments, we measured how participants weighted the relative contributions of these five dimensions in hypothetical fission accidents, in which a person was split into two continuers. Participants allocated compensation money or adjudicated inheritance claims and reidentified the original person. Most decided on the basis of the continuity of memory, personality, and psychology, with some consideration given to the body and social relations. Importantly, many participants identified the original with both continuers simultaneously, violating the transitivity of identity relations. We discuss the findings and their relevance for philosophy and psychology and place our approach within the current theoretical and empirical landscape.
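The closest continuer rule described in this abstract can be sketched as a simple decision procedure: score each continuer as a weighted sum of continuity across the five dimensions and reidentify the original with the highest-scoring continuer, provided that score is acceptable. The weights, threshold, and example values below are purely illustrative assumptions, not the authors' experimental parameters or results.

```python
from typing import Dict, List

# Five continuity dimensions discussed in the abstract; the weights are
# purely illustrative, not estimates from the reported experiments.
WEIGHTS = {
    "memory": 0.30,
    "psychology": 0.30,
    "body": 0.15,
    "social_network": 0.15,
    "possessions": 0.10,
}

def continuity_score(continuer: Dict[str, float]) -> float:
    """Weighted continuity of a continuer, each dimension scored in [0, 1]."""
    return sum(WEIGHTS[dim] * continuer.get(dim, 0.0) for dim in WEIGHTS)

def closest_continuer(continuers: Dict[str, Dict[str, float]],
                      threshold: float = 0.5) -> List[str]:
    """Return the continuer(s) with the highest acceptable continuity score.

    Following the closest continuer rule sketched in the abstract: the best
    continuer counts as the original person only if its score exceeds a
    minimal threshold; ties return both candidates.
    """
    scores = {name: continuity_score(c) for name, c in continuers.items()}
    best = max(scores.values())
    if best < threshold:
        return []  # no continuer is close enough to count as the original
    return [name for name, score in scores.items() if score == best]

# Hypothetical fission case: continuer A keeps the memories, B keeps the body.
case = {
    "A": {"memory": 0.9, "psychology": 0.8, "body": 0.1,
          "social_network": 0.5, "possessions": 0.5},
    "B": {"memory": 0.1, "psychology": 0.2, "body": 0.9,
          "social_network": 0.5, "possessions": 0.5},
}
print(closest_continuer(case))  # -> ['A']
```

On this toy weighting, the memory-preserving continuer comes out as the closest continuer; shifting weight toward bodily continuity would change the outcome, which is the kind of trade-off the reported experiments probe.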
The increased presence of medical AI in clinical use raises the ethical question of which standard of explainability is required for an acceptable and responsible implementation of AI-based applications in medical contexts. In this paper, we elaborate on the emerging debate surrounding the standards of explainability for medical AI. For this, we first distinguish several goods that explainability is usually considered to contribute to the use of AI in general and of medical AI in particular. Second, we propose to understand the value of explainability relative to other available norms of explainable decision-making. Third, pointing out that we usually accept heuristics and uses of bounded rationality in physicians' medical decision-making, we argue that the explainability of medical decisions should not be measured against an idealized diagnostic process but according to practical considerations. Fourth, we conclude that the issue of explainability standards can be resolved by relocating it to the AI's certifiability and interpretability.
In this paper we study a notion of a κ-covering set in connection with Bernstein sets and other types of non-measurability. Our results correspond to those obtained by Muthuvel in [7] and by Nowik in [8]. We also consider other types of coverings.
This paper explores the role and resolution of disagreements between physicians and their diagnostic AI-based decision support systems (DSS). With an ever-growing number of applications for these independently operating diagnostic tools, it becomes less and less clear what a physician ought to do when their diagnosis is in faultless conflict with the results of the DSS. The consequences of such uncertainty can ultimately be detrimental to the intended purpose of such machines, e.g. by shifting the burden of proof towards the physician. Thus, we require normative clarity for integrating these machines without disrupting established, trusted, and relied-upon workflows. In reconstructing different causes of conflict between physicians and their AI-based tools, and the challenges of resolving them, we delineate normative conditions for “meaningful disagreements”, inspired by the approach of “meaningful human control” over autonomous systems. These conditions incorporate the potential of DSS to take on more tasks and outline how the moral responsibility of a physician can be preserved in an increasingly automated clinical work environment.
This work presents an annotated text of the most comprehensive and detailed arguments in Erasmus's conflict with the Catholic, conservative, scholastic theologians, the _Declarationes_. It also shows the contrast between the scholastic/logical and the humanist/rhetorical approaches to Scripture and to theological questions.