We demonstrate that Statistical significance (Chow 1996) includes straw man arguments against (1) effect size, (2) meta-analysis, and (3) Bayesianism. We agree with the author that in experimental designs, H0 “is the effect of chance influences on the data-collection procedure . . . it says nothing about the substantive hypothesis or its logical complement” (Chow 1996, p. 41).
Studying knowledge utilization and related processes calls for a conceptual framework. We look at the actors that engage in these processes in a specific field of human activity, and at the interfaces and linkages between them, as a Knowledge and Information System (KIS). Although this KIS perspective originates in agriculture, it can also be applied to other knowledge domains. The evidence gathered shows that for a KIS to be effective, the actors (e.g., researchers, extensionists, and clients) must act synergistically. This inspired us to look for basic KIS principles that indicate opportunities for intervention. This article provides a brief state-of-the-art overview, presents some insights gained to date, and states the main issues for the use of information technology in knowledge management.
Statistical procedures can be applied to episodes in the history of science in order to weight attributes to predict short-term survival of theories; an asymptotic method is used to show that short-term survival is a valid proxy for ultimate survival; and a theoretical argument is made that ultimate survival is a valid proxy for objective truth. While realists will appreciate this last step, instrumentalists do not need it to benefit from the actuarial procedures of cliometric metatheory. Outline: Introduction; A plausible proxy for Peircean consensus; Assessing the validity of theory attributes as predictors of theory survival (3.1 Linear discriminant function, 3.2 Factor analysis, 3.3 Taxometric analysis); Verisimilitude index; Satisfying both instrumentalists and realists; Recapitulation; Implementation of cliometric metatheory. Correspondence about this article may be addressed to Leslie Yonce at pemeehle@umn.edu. This article had been completed by Paul Meehl at the time of his death on 14 February 2003. His wife, Leslie J. Yonce, is grateful to Keith Gunderson (University of Minnesota, Center for Philosophy of Science) and Niels G. Waller (Psychology Department, Vanderbilt University) for advice with some final editing details.
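To make the actuarial step concrete, here is a minimal sketch (not Meehl's own procedure or data) of fitting a linear discriminant function to hypothetical theory-attribute scores in order to predict short-term theory survival; the attribute names, sample sizes, and numbers are illustrative assumptions only.

```python
# Sketch of the actuarial idea: score historical theories on a few attributes
# and fit a linear discriminant function to predict short-term survival.
# All attributes and data below are hypothetical illustrations.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Columns: parsimony, novel-prediction success, breadth of scope (0-1 scales).
X_survived = rng.normal(loc=[0.7, 0.8, 0.6], scale=0.15, size=(40, 3))
X_abandoned = rng.normal(loc=[0.4, 0.3, 0.5], scale=0.15, size=(40, 3))
X = np.vstack([X_survived, X_abandoned])
y = np.array([1] * 40 + [0] * 40)  # 1 = theory survived, 0 = abandoned

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)

print("attribute weights:", lda.coef_[0])
new_theory = np.array([[0.65, 0.70, 0.55]])
print("predicted survival probability:", lda.predict_proba(new_theory)[0, 1])
```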
In his famous 1982 paper, Allen Newell [22, 23] introduced the notion of the knowledge level to indicate a level of analysis, and prediction, of the rational behavior of a cognitive artificial agent. This analysis concerns the investigation of the availability of the agent's knowledge for pursuing its own goals, and is based on the so-called Rationality Principle (an assumption according to which "an agent will use the knowledge it has of its environment to achieve its goals" [22, p. 17]). In Newell's own words: "To treat a system at the knowledge level is to treat it as having some knowledge, some goals, and believing it will do whatever is within its power to attain its goals, in so far as its knowledge indicates" [22, p. 13]. In the last decades, the importance of the knowledge level has been historically and systematically downsized by the research area of cognitive architectures (CAs), whose interests have mainly focused on the analysis and development of the mechanisms and processes governing human and (artificial) cognition. The knowledge level in CAs, however, represents a crucial level of analysis for the development of such artificial general systems and therefore deserves greater research attention [17]. In the following, we will discuss areas of broad agreement and outline the main problematic aspects that should be faced within a Common Model of Cognition [12]. Such aspects, departing from an analysis at the knowledge level, also clearly impact both lower (e.g. representational) and higher (e.g. social) levels.
In this paper I argue that any adequate evolutionary ethical theory needs to account for moral belief as well as for dispositions to behave altruistically. It also needs to be clear whether it is offering us an account of the motivating reasons behind human behaviour or whether it is giving justifying reasons for a particular set of behaviours or, if both, to distinguish them clearly. I also argue that, unless there are some objective moral truths, the evolutionary ethicist cannot offer justifying reasons for a set of behaviours. I use these points to refute Waller's claims that the illusion of objectivity plays a dispensable role in Ruse's theory, that my critique of Ruse's Darwinian metaethics is built on a false dilemma, that there is nothing to be distressed about if morality is not objective, and that ethical beliefs are subject to a kind of causal explanation that undermines their objectivity in a way that scientific beliefs are not.
What are the types of action at issue in the free will and moral responsibility debate? Are the neuroscientists who make claims about free will and moral responsibility studying those types of action? If not, can the existing paradigm in the field be modified to study those types of action? This paper outlines some claims made by neuroscientists about the inefficacy of conscious intentions and the implications of this inefficacy for the existence of free will. It argues that, typically, the types of actions at issue in the philosophical literature require proximal or distal conscious decisions and have the right kind of connection to reasons. It points out that neuroscientists are not studying this class of actions, as their studies focus on simple commanded actions (e.g., flexing a finger) and simple Buridan choices (e.g., pushing the left or right button). Finally, it argues that neuroscience already has the resources to study the types of action relevant for free will and moral responsibility, and it outlines two experiments, focused on skilled actions and moral choices, that could be run using the available technology.
This essay explores some concerns about the quality of informed consent in patients whose autonomy is diminished by fatal illness. It argues that patients with diminished autonomy cannot give free and voluntary consent, and that recruitment of such patients as subjects in human experimentation exploits their vulnerability in a morally objectionable way. Two options are given to overcome this objection: (i) recruit only those patients who desire to contribute to medical knowledge, rather than to gain access to experimental treatment, or (ii) provide prospective subjects the choice to participate in a standard double-blind study or to receive the experimental treatment. Either option would guarantee that patients in desperate conditions are given a more meaningful choice and a richer freedom, and thus a higher quality of informed consent, than under standard randomized trials.
Any complete theory of speaking must take the dialogical function of language use into account. Pickering & Garrod (P&G) make some progress on this point. However, we question whether their interactive alignment model is the optimal approach. In this commentary, we specifically criticize (1) their notion of alignment being implemented through priming, and (2) their claim that self-monitoring can occur at all levels of linguistic representation.
It is widely held that there are important differences between indicative conditionals (e.g. “If the authors are linguists, they have written a linguistics paper”) and subjunctive conditionals (e.g. “If the authors had been linguists, they would have written a linguistics paper”). A central difference is that indicatives and subjunctives convey different stances towards the truth of their antecedents. Indicatives (often) convey neutrality: for example, about whether the authors in question are linguists. Subjunctives (often) convey the falsity of the antecedent: for example, that the authors in question are not linguists. This paper tests prominent accounts of how these different stances are conveyed: whether by presupposition or conversational implicature. Experiment 1 tests the presupposition account by investigating whether the stances project – remain constant – when embedded under operators like negations, possibility modals, and interrogatives, a key characteristic of presuppositions. Experiment 2 tests the conversational-implicature account by investigating whether the stances can be cancelled without producing a contradiction, a key characteristic of implicatures. The results provide evidence that both stances – neutrality about the antecedent in indicatives and the falsity of the antecedent in subjunctives – are conveyed by conversational implicatures.
In this paper, a critical discussion is made of the role of entailments in the so-called New Paradigm of psychology of reasoning based on Bayesian models of rationality (Elqayam & Over, 2013). It is argued that assessments of probabilistic coherence cannot stand on their own, but that they need to be integrated with empirical studies of intuitive entailment judgments. This need is motivated not just by the requirements of probability theory itself, but also by a need to enhance the interdisciplinary integration of the psychology of reasoning with formal semantics in linguistics. The constructive goal of the paper is to introduce a new experimental paradigm, called the Dialogical Entailment task, to supplement current trends in the psychology of reasoning towards investigating knowledge-rich, social reasoning under uncertainty (Oaksford and Chater, 2019). As a case study, this experimental paradigm is applied to reasoning with conditionals and negation operators (e.g. CEM, wide and narrow negation). As part of the investigation, participants’ entailment judgments are evaluated against their probability evaluations to assess participants’ cross-task consistency over two experimental sessions.
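As an illustration of the kind of probabilistic-coherence assessment discussed here, the sketch below implements Adams' p-validity check, under which the uncertainty of a valid inference's conclusion should not exceed the summed uncertainty of its premises. This particular check and the example judgments are illustrative assumptions, not the paper's Dialogical Entailment task.

```python
# Sketch of one standard coherence check from this literature: Adams' p-validity.
# The probability judgments below are made up for illustration.

def p_valid(premise_probs, conclusion_prob):
    """Check whether probability judgments respect the p-validity bound."""
    premise_uncertainty = sum(1.0 - p for p in premise_probs)
    return (1.0 - conclusion_prob) <= premise_uncertainty

# Modus ponens: "if A then B" judged at 0.9, "A" at 0.8, "B" at 0.75.
print(p_valid([0.9, 0.8], 0.75))   # True: within the coherence bound
print(p_valid([0.9, 0.8], 0.60))   # False: conclusion judged too improbable
```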
Until the late 19th century, scientists almost always assumed that the world could be described as a rule-based and hence deterministic system or as a set of such systems. The assumption is maintained in many 20th-century theories, although it has also been doubted because of the breakthrough of statistical theories in thermodynamics (Boltzmann and Gibbs) and other fields, unsolved questions in quantum mechanics, as well as several theories put forward within the social sciences. Until recently it has furthermore been assumed that a rule-based and deterministic system was also predictable if only the rules were known, but this assumption has now been undermined by modern chaos theory, which describes rule-based and deterministic but unpredictable systems, while catastrophe theory delivers a set of types describing various kinds of instability and the conditions for the stability of a given system. Hence the main trait in the theoretical development of 20th-century science can be described as a basic modification and limitation of some of the fundamental and strong assumptions put forward in the previous epochs of modern science. Ironically, the very same process has been one in which the human capacity to intervene in nature has expanded dramatically, mainly with the help of the very same theories, not least because they allow nature to be described and made manipulable on a lower level and a more fine-grained scale. While the overall theoretical consistency between the various theories has gone, the reach of human intervention in nature has increased along quite new dimensions, whether in the area of physics (e.g., energy technologies, chemical technologies, nanotechnologies, etc.), biology (genetic manipulation), or psychology, sociology, and culture (artificial simulations of mental processes, new means of communication implying changes in the social infrastructure and cultural behaviour, etc.). While some of these changes and new conditions can be reflected from within the conceptual framework of rule-based systems, albeit more complex than formerly recognized, others seem to give rise to the question of whether there are “systems”, and relations between different systems, in the world which are not rule-based. For instance, it seems obvious that the notion of instability represents a major conceptual break with former theories of rule-based systems, as the stability of the latter is an axiomatically given property implied in the very notion of rule-based systems, while instability can only be the result of external influence, which should itself be explained as the result of another rule-based system. While there are no difficulties implied concerning the stability of rule-based systems, the notion of unstable states of a system raises the question of how there can be a system at all if there are no invariant stabilising principles. This is the first question I will address, and I shall do so by taking two examples of such systems as my point of departure. The first example will be the computer and the second will be ordinary language.
In both cases I will argue that the stability of these systems (which are both defined by the presence of human intentions) is provided with the help of differently organised redundancy functions, which allow the maintenance of systems in unstable macro-states, the suspension of previous rules, underdetermination and overdetermination, and the generation, emergence, or creation of new rules more or less independent of previous rules, by means of optional recursion to permanently accessible underlying levels, such as the level of binary representation in computers. Since the notion of redundancy is both controversial as such and often avoided, the concept is discussed (as defined in Claude Shannon's mathematical theory of information and in the semiotic framework of A. J. Greimas), leading to a more general definition in which redundancy functions serve to overcome noisy conditions, but at the cost of rule-based stability, determination, and predictability. A second question is how the notion of rule-generating systems relates to the notion of anticipatory systems. It will be argued that rule-generating systems share some features with anticipatory systems and that the former, from a certain viewpoint, can be seen as a subclass of the latter, although anticipative features are not necessarily part of the definition of rule-generating systems. On the other hand, it will be discussed whether anticipatory systems that are not rule-generating systems can exist, and it will be argued that the capacity to anticipate is strongly limited if it is not part of a rule-generating system. Therefore, it is concluded that the most powerful anticipatory systems need to be rule-generating systems.
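As a side note on the Shannon sense of redundancy invoked above, here is a minimal sketch of the standard measure R = 1 - H/H_max for a symbol string, where H is the entropy of the observed symbol distribution and H_max the entropy of a uniform distribution over the same alphabet; the example strings are illustrative and not drawn from the paper.

```python
# Sketch of Shannon redundancy R = 1 - H/H_max for a symbol string.
# The example messages are arbitrary illustrations.
import math
from collections import Counter

def redundancy(message: str) -> float:
    counts = Counter(message)
    n = len(message)
    probs = [c / n for c in counts.values()]
    h = -sum(p * math.log2(p) for p in probs)   # observed entropy
    h_max = math.log2(len(counts))              # entropy of a uniform alphabet
    return 1.0 - h / h_max if h_max > 0 else 1.0

print(redundancy("aaaaabbbbbcccccddddd"))  # 0.0: all four symbols equally likely
print(redundancy("aaaaaaaaaaaaaaaabbcd"))  # ~0.49: heavily skewed, more redundant
```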
Clinical decisions are expected to be based on factual evidence and official values derived from healthcare law and soft laws such as regulations and guidelines. But sometimes personal values instead influence clinical decisions. One way in which personal values may influence medical decision-making is by affecting factual claims or assumptions made by healthcare providers. Such influence, which we call ‘value-impregnation,’ may be concealed from all concerned stakeholders. We suggest as a hypothesis that healthcare providers’ decision making is sometimes affected by value-impregnated factual claims or assumptions. If such claims influence, for example, doctor–patient encounters, this will likely have a negative impact on the provision of correct information to patients and on patients’ influence on decision making regarding their own care. In this paper, we explore the idea that value-impregnated factual claims influence healthcare decisions through a series of medical examples. We suggest that more research is needed to further examine whether healthcare staff’s personal values influence clinical decision-making.
On Anscombe's view, intentional actions are characterized by a specific type of knowledge (practical knowledge) possessed by the agents that perform them. Recently, interest in Anscombean action theory has been renewed. Sarah Paul argues that Anscombean action theory faces a serious problem: It fails to discriminate between an action’s intended aim or purpose and its foreseen side effects. Since Anscombeans conceive practical knowledge as the formal cause of intentional actions, Paul dubs this a problem of “deviant formal causation.” In this paper I will show that Anscombean action theory can escape Paul’s critique by employing a sufficiently developed conception of practical knowledge. It will turn out that Anscombeans can precisely capture the difference between intended aim and foreseen side effect in terms of differences in the agent’s knowledge.
Cervical spinal cord injuries (SCI) often lead to loss of motor function in both hands and legs, limiting autonomy and quality of life. While it has been shown that unilateral hand function can be restored after SCI using a hybrid electroencephalography/electrooculography (EEG/EOG) brain/neural hand exoskeleton (B/NHE), it remained unclear whether such a hybrid paradigm could also be used for operating two hand exoskeletons, e.g., in the context of bimanual tasks such as eating with fork and knife. To test whether EEG/EOG signals allow for fluent and reliable as well as safe and user-friendly bilateral B/NHE control, eight healthy participants and four chronic tetraplegics performed a complex sequence of EEG-controlled bilateral grasping and EOG-controlled releasing motions of two exoskeletons visually presented on a screen. A novel EOG command, performed by prolonged horizontal eye movements to the left or right, was introduced as a reliable switch to activate either the left or right exoskeleton. Fluent EEG control was defined as an average “time to initialize” (TTI) grasping motions below 3 s. Reliable EEG control was assumed when classification accuracy exceeded 80%. Safety was defined as a “time to stop” (TTS) all unintended grasping motions within 2 s. After the experiment, tetraplegics were asked to rate the user-friendliness of bilateral B/NHE control using Likert scales. Average TTI and accuracy of EEG-controlled operations were 2.14 ± 0.66 s and 85.89 ± 15.81% across healthy participants and 1.90 ± 0.97 s and 81.25 ± 16.99% across tetraplegics. Except for one tetraplegic, all participants met the safety requirements. With 88 ± 11% of the maximum achievable score, tetraplegics rated the control paradigm as user-friendly and reliable. These results suggest that hybrid EEG/EOG B/NHE control of two assistive devices is feasible and safe, paving the way to test this paradigm in larger clinical trials performing bimanual tasks in everyday life environments.
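A minimal sketch of how the stated fluency, reliability, and safety criteria could be checked for a single participant; the function name and the per-trial numbers are hypothetical illustrations, not the study's analysis code or data.

```python
# Sketch of the fluency / reliability / safety criteria stated above,
# applied to made-up per-trial measurements.

def evaluate_control(tti_seconds, accuracy_percent, tts_seconds):
    """Return whether a participant meets the stated B/NHE control criteria."""
    fluent = sum(tti_seconds) / len(tti_seconds) < 3.0   # mean TTI below 3 s
    reliable = accuracy_percent > 80.0                   # classification accuracy above 80 %
    safe = all(t <= 2.0 for t in tts_seconds)            # every unintended grasp stopped within 2 s
    return {"fluent": fluent, "reliable": reliable, "safe": safe}

# Hypothetical participant: grasp-initiation times, classification accuracy,
# and times to stop unintended grasping motions.
print(evaluate_control([1.8, 2.2, 2.5, 1.9], 85.9, [1.2, 1.7]))
```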
This commentary discusses whether abstract metrical frames are stored. For stress-assigning languages (e.g., Dutch and English), which have a dominant stress pattern, metrical frames are stored only for words that deviate from the default stress pattern. The majority of the words in these languages are produced without retrieving any independent syllabic or metrical frame.
1. Introduction : humanity's urge to understand -- 2. Elements of scientific thinking : skepticism, careful reasoning, and exhaustive evaluation are all vital. Science is universal -- Maintaining a critical attitude. Reasonable skepticism -- Respect for the truth -- Reasoning. Deduction -- Induction -- Paradigm shifts -- Evaluating scientific hypotheses. Ockham's razor -- Quantitative evaluation -- Verification by others -- Statistics : correlation and causation -- Statistics : the indeterminacy of the small -- Careful definition -- Science at the frontier. When good theories become ugly -- Stuff that just does not fit -- 3. Christopher Columbus and the discovery of the "Indies" : it can be disastrous to stubbornly refuse to recognize that you have falsified your own hypothesis -- 4. Antoine Lavoisier and Joseph Priestley both test the befuddling phlogiston theory : junking a confusing hypothesis may be necessary to clear the way for new and productive science -- 5. Michael Faraday discovers electromagnetic induction but fails to unify electromagnetism and gravitation : it is usually productive to simplify and consolidate your hypotheses -- 6. Wilhelm Röntgen intended to study cathode rays but ended up discovering X-rays : listen carefully when Mother Nature whispers in your ear : she may be leading you to a Nobel Prize -- 7. Max Planck, the first superhero of quantum theory, saves the universe from the ultraviolet catastrophe : assemble two flawed hypotheses about a key phenomenon into a model that fits experiment exactly and people will listen to you even if you must revolutionize physics -- 8. Albert Einstein attacks the problem "Are atoms real?" from every angle : solving a centuries-old riddle in seven different ways can finally resolve it -- 9. Niels Bohr models the hydrogen atom as a quantized system with compelling exactness, but his later career proves that collaboration and developing new talent can become more significant than the groundbreaking research of any individual -- 10. Conclusions, status of science, and lessons for our time. Conclusions from our biographies -- What thought processes lead to innovation? -- Is the scientist an outsider? -- The status of the modern scientific enterprise -- Lessons for our time -- Can the scientific method be applied to public policy? -- Why so little interest in science? -- Knowledge is never complete.
Skepticism about moral responsibility, or what is more commonly referred to as moral responsibility skepticism, refers to a family of views that all take seriously the possibility that human beings are never morally responsible for their actions in a particular but pervasive sense. This sense is typically set apart by the notion of basic desert and is defined in terms of the control in action needed for an agent to be truly deserving of blame and praise. Some moral responsibility skeptics wholly reject this notion of moral responsibility because they believe it to be incoherent or impossible. Others maintain that, though possible, our best philosophical and scientific theories about the world provide strong and compelling reasons for adopting skepticism about moral responsibility. What all varieties of moral responsibility skepticism share, however, is the belief that the justification needed to ground basic desert moral responsibility and the practices associated with it—such as backward-looking praise and blame, punishment and reward (including retributive punishment), and the reactive attitudes of resentment and indignation—is not met. Versions of moral responsibility skepticism have historically been defended by Spinoza, Voltaire, Diderot, d’Holbach, Priestley, Schopenhauer, Nietzsche, Clarence Darrow, B.F. Skinner, and Paul Edwards, and more recently by Galen Strawson, Derk Pereboom, Bruce Waller, Neil Levy, Tamler Sommers, and Gregg D. Caruso.
Critics of these views tend to focus both on the arguments for skepticism about moral responsibility and on the implications of such views. They worry that adopting such a view would have dire consequences for our interpersonal relationships, society, morality, meaning, and the law. They fear, for instance, that relinquishing belief in moral responsibility would undermine morality, leave us unable to adequately deal with criminal behavior, increase anti-social conduct, and destroy meaning in life. Optimistic skeptics, however, respond by arguing that life without free will and basic desert moral responsibility would not be as destructive as many people believe. These optimistic skeptics argue that prospects of finding meaning in life or of sustaining good interpersonal relationships, for instance, would not be threatened. They further maintain that morality and moral judgments would remain intact. And although retributivism and severe punishment, such as the death penalty, would be ruled out, they argue that the imposition of sanctions could serve purposes other than the punishment of the guilty—e.g., it can also be justified by its role in incapacitating, rehabilitating, and deterring offenders.
The article examines the development of physics research in Ukraine through the example of the Ukrainian Institute of Physics and Technology (UIPT). Founded on the initiative of the eminent physicist Abram Ioffe, the UIPT gradually became one of the world’s leading research institutions. During 1928–1938, many important events took place at the institute that became markers for the development of physics in Ukraine and the USSR as well as in the world. An experiment on the fission of the atomic nucleus using artificially accelerated protons confirmed the validity of the intention to reorient research towards nuclear physics. The involvement of foreign specialists in the work of the UIPT contributed to the informal consolidation of scientific thinking in physics. Outstanding physicists such as Boris Podolskyi, Oleksandr Weisberg, Konrad Weiselberg, Friedrich Houtermans, Laszlo Tisza, Fritz Lange, Victor Weisskopf, George Placzek, Paul Dirac, Georgii Gamov, Niels Bohr, Paul Ehrenfest, and others worked here for longer or shorter periods. Niels Bohr, Ivar Waller, Milton S. Plesset, Evan J. Williams, and Leon Rosenfeld gave reports at the theoretical conferences of the UIPT. As a result, in the late 1920s and during the 1930s, an informal society of physicists from around the world formed in Kharkiv. This consolidation of talented scientists brought together traditions, centuries of experience, and practical knowledge in the field from many scientific schools around the world.
Free will skepticism maintains that what we do, and the way we are, is ultimately the result of factors beyond our control, and because of this we are never morally responsible for our actions in the basic desert sense—the sense that would make us truly deserving of praise and blame. In recent years, a number of contemporary philosophers have advanced and defended versions of free will skepticism, including Derk Pereboom (2001, 2014), Galen Strawson (2010), Neil Levy (2011), Bruce Waller (2011, 2015), and myself (Caruso 2012, 2013, forthcoming). Critics, however, often complain that adopting such views would have dire consequences for ourselves, society, morality, meaning, and the law. They fear, for instance, that relinquishing belief in free will and basic desert moral responsibility would leave us unable to adequately deal with criminal behavior, increase anti-social conduct, and undermine meaning in life.
In response, free will skeptics argue that life without free will and basic desert moral responsibility would not be as destructive as many people believe (see, e.g., Pereboom 2001, 2014; Waller 2011, 2015; Caruso 2016, forthcoming). According to optimistic skeptics, prospects of finding meaning in life or of sustaining good interpersonal relationships, for instance, would not be threatened. And although retributivism and severe punishment, such as the death penalty, would be ruled out, incapacitation and rehabilitation programs would still be justified (see Pereboom 2001, 2013, 2014; Levy 2012; Caruso 2016; Pereboom and Caruso, forthcoming). In this paper, I attempt to extend this general optimism about the practical implications of free will skepticism to the question of creativity.
In Section I, I spell out the question of creativity and explain why it’s relevant to the problem of free will. In Section II, I identify three different conceptions of creativity and explain the practical concerns critics have with free will skepticism. In Section III, I distinguish between three different conceptions of moral responsibility and argue that at least two of them are consistent with free will skepticism. I further contend that forward-looking accounts of moral responsibility, which are perfectly consistent with free will skepticism, can justify calling agents to account for immoral behavior as well as providing encouragement for creative activities, since these are important for moral and creative formation and development. I conclude in Section IV by arguing that relinquishing belief in free will and basic desert would not mean the death of creativity or our sense of achievement, since important and realistic conceptions of both remain in place.
v. 1. Atomic theory and the description of nature -- v. 2. Essays 1932-1957 on atomic physics and human knowledge -- v. 3. Essays 1958-1962 on atomic physics and human knowledge -- v. 4. Causality and complementarity.
The indeterminism of quantum mechanics is considered an immediate corollary of the theorems on the absence of hidden variables in it, first of all the Kochen–Specker theorem. The basic postulate of quantum mechanics formulated by Niels Bohr, according to which it studies the system of an investigated microscopic quantum entity and the macroscopic apparatus, described by the smooth equations of classical mechanics, through the readings of the latter, implies the absence of hidden variables, and thus quantum indeterminism, as a necessary condition of quantum mechanics. Consequently, the objectivity of quantum mechanics, and even its possibility and ability to study its objects as they are by themselves, implies quantum indeterminism. The so-called free-will theorems in quantum mechanics elucidate that the “valuable commodity” of free will is not a privilege of experimenters and human beings but is shared by anything in the physical universe, once the experimenter is granted to possess free will. The analogous idea that, e.g., an electron might possess free will to “decide” what to do scandalized Einstein and forced him to exclaim (in a 1924 letter to Max Born) that he would rather be a shoemaker or a croupier than a physicist if this were true. Anyway, many experiments have confirmed the absence of hidden variables, and thus quantum indeterminism, in virtue of the objectivity and completeness of quantum mechanics. Once quantum mechanics is complete and thus an objective science, one can ask what this would mean in relation to classical physics and its objectivity. In fact, classical physics divides disjunctively what possesses free will from what does not. Properly, all physical objects belong to the latter area according to it, and their “behavior” is necessary and deterministic. All possible decisions, on the contrary, are concentrated in the experimenters (or human beings in general), i.e. in the former domain, which does not intersect the latter. One may say that the cost of the determinism and unambiguous laws of classical physics is the indeterminism and free will of the experimenters and researchers (human beings), who therefore necessarily fall outside the scope and objectivity of classical physics. This is what is meant by the “deterministic subjectivity of classical physics” as opposed to the “indeterminist objectivity of quantum mechanics”.
Although highly disputed, critical realism (in Ian G. Barbour’s style) is widely known as a tool to relate science and religion. Sympathising with an even more stringent hermeneutical approach, Andreas Losch has argued for a modification of critical realism into so-called constructive-critical realism, to give the humanities, with their constructive role of the subject, due weight in any discussion of how to bridge the apparent gulf between the disciplines. So far, his constructive-critical realism has mainly been developed theologically. This paper evaluates whether constructive-critical realism is suitable as a philosophy of both science and religion and an appropriate basis for the science and religion discourse. In his original account of the critical realist philosophy of science, Barbour discussed and modified agreement with data, coherence, scope, and fertility as criteria for good science, and for religion as well. The article discusses, for each of the criteria, how far Barbour does justice to the relevant concept, both in science and religion, and asks how the criteria might be modified for a perhaps more sustainable bridge between science and religion, drawing on the idea of constructive-critical realism. Niels Henrik Gregersen’s contextual coherence theory plays a significant role in this regard. The conclusion suggests a deeper meaning of the fertility criterion, embracing ethical fruitfulness as well. As constructive-critical realism fully acknowledges the importance of the role of the knower in the process of knowing, it leads us from pure epistemology into ethics. Contribution: (1) The science and religion debate, inspired by critical realism, is identified as a mainly theological discourse about the influence of science on religion; (2) the analysis of truth criteria in Losch’s constructive-critical version of realism proposes an emphasis on correspondence in science and coherence in the humanities; and (3) the deeper meaning of the criterion of fertility in this philosophical stance is highlighted, including ethical fruitfulness.