We revisit the analogy suggested by Madelung between a non-relativistic time-dependent quantum particle and a fluid system that is pseudo-barotropic, irrotational, and inviscid. We first discuss the hydrodynamical properties of the Madelung description in general, and extract a pressure-like term from the Bohm potential. We show that the existence of a pressure gradient force in the fluid description does not violate Ehrenfest's theorem, since its expectation value is zero. We also point out that incompressibility of the fluid implies conservation of density along a fluid parcel trajectory, and in 1D this immediately yields the non-spreading property of wave packets, as the sum of the Bohm potential and an exterior potential must be either constant or linear in space. Next we give the hydrodynamic description a thermodynamic counterpart, taking the classical behavior of an adiabatic barotropic flow as a reference. We show that while the Bohm potential is not a positive definite quantity, as is expected of internal energy, its expectation value is proportional to the Fisher information, whose integrand is positive definite. Moreover, this integrand is exactly equal to half of the square of the imaginary part of the momentum, just as the integrand of the kinetic energy is equal to half of the square of the real part of the momentum. This suggests a relation between the Fisher information and the thermodynamic-like internal energy of the Madelung fluid. Furthermore, it provides a physical linkage between the inverse of the Fisher information and the measure of disorder in quantum systems: in spontaneous adiabatic gas expansion the amount of disorder increases while the internal energy decreases.
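The proportionality described above can be made explicit with a brief sketch of the Madelung transform; the notation, normalization, and sign conventions below are mine and may differ from the authors':

```latex
% Madelung transform and Bohm potential (particle mass m):
\psi=\sqrt{\rho}\,e^{iS/\hbar},\qquad
Q=-\frac{\hbar^{2}}{2m}\,\frac{\nabla^{2}\sqrt{\rho}}{\sqrt{\rho}}
% Integration by parts relates <Q> to the Fisher information I_F:
\langle Q\rangle=\int \rho\,Q\,\mathrm{d}^{3}x
=\frac{\hbar^{2}}{8m}\int\frac{|\nabla\rho|^{2}}{\rho}\,\mathrm{d}^{3}x
=\frac{\hbar^{2}}{8m}\,I_{F}
% Local momentum field p(x) = -i\hbar\,\nabla\psi/\psi:
\operatorname{Re}p=\nabla S,\qquad
\operatorname{Im}p=-\frac{\hbar}{2}\,\frac{\nabla\rho}{\rho},\qquad
\langle Q\rangle=\Big\langle\frac{(\operatorname{Im}p)^{2}}{2m}\Big\rangle
```

On this sketch the internal-energy-like term stands to Im p exactly as the kinetic energy, with integrand (Re p)^2/2m, stands to Re p.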
We argue that artificial networks are explainable and offer a novel theory of interpretability. Two sets of conceptual questions are prominent in theoretical engagements with artificial neural networks, especially in the context of medical artificial intelligence. First: are networks explainable, and if so, what does it mean to explain the output of a network? Second: what does it mean for a network to be interpretable? We argue that accounts of “explanation” tailored specifically to neural networks have ineffectively reinvented the wheel. In response to the first question, we show how four familiar accounts of explanation apply to neural networks as they would to any scientific phenomenon. We diagnose the confusion about explaining neural networks within the machine learning literature as an equivocation on “explainability,” “understandability,” and “interpretability.” To remedy this, we distinguish between these notions, and answer the second question by offering a theory and typology of interpretation in machine learning. Interpretation is something one does to an explanation with the aim of producing another, more understandable, explanation. As with explanation, there are various concepts and methods involved in interpretation: Total or Partial, Global or Local, and Approximative or Isomorphic. Our account of “interpretability” is consistent with uses in the machine learning literature, in keeping with the philosophy of explanation and understanding, and pays special attention to medical artificial intelligence systems.
Both left libertarians, who support the redistribution of income and wealth through taxation, and right libertarians, who oppose redistributive taxation, share an important view: that, looming catastrophes aside, the state must never redistribute any part of our body or our person without our consent. Cécile Fabre rejects that view. For her, just as the undeservedly poor have a just claim to money from their fellow citizens in order to lead a minimally flourishing life, the undeservedly ‘medically poor’ have a just claim to help from fellow citizens in order to lead such a life. Such obligatory help may in principle involve even the supply of body parts for transplantation. The state ought to exact such resources from the medically rich whenever doing so would secure the prospect of a minimally flourishing life to the medically poor without denying that prospect to anyone else. Fabre criticizes Ronald Dworkin's belief in ‘a prophylactic line that comes close to making the body inviolate, that is, making body parts not parts of social resources at all’. For her, ‘Duties to help ... do not stop at material resources: they involve the body ... in invasive ways’.
Law, Economics, and Morality examines the possibility of combining economic methodology and deontological morality through explicit and direct incorporation of moral constraints into economic models. Economic analysis of law is a powerful analytical methodology. However, as a purely consequentialist approach, which determines the desirability of acts and rules solely by assessing the goodness of their outcomes, standard cost-benefit analysis is normatively objectionable. Moderate deontology prioritizes such values as autonomy, basic liberties, truth-telling, and promise-keeping over the promotion of good outcomes. It holds that there are constraints on promoting the good. Such constraints may be overridden only if enough good is at stake. While moderate deontology conforms to prevailing moral intuitions and legal doctrines, it is arguably lacking in methodological rigor and precision. Eyal Zamir and Barak Medina argue that the normative flaws of economic analysis can be rectified without relinquishing its methodological advantages and that moral constraints can be formalized so as to make their analysis more rigorous. They discuss various substantive and methodological choices involved in modeling deontological constraints. Zamir and Medina propose to determine the permissibility of any act or rule infringing a deontological constraint by means of mathematical threshold functions. Law, Economics, and Morality presents the general structure of threshold functions, analyzes their elements and addresses possible objections to this proposal. It then illustrates the implementation of constrained CBA in several legal fields, including contract law, freedom of speech, antidiscrimination law, the fight against terrorism, and legal paternalism.
What it means for an action to have moral worth, and what is required for this to be the case, is the subject of continued controversy. Some argue that an agent performs a morally worthy action if and only if they do it because the action is morally right. Others argue that a morally worthy action is that which an agent performs because of features that make the action right. These theorists, though they oppose one another, share something important in common. They focus almost exclusively on the moral worth of right actions. But there is a negatively valenced counterpart that attaches to wrong actions, which we will call moral counterworth. In this paper, we explore the moral counterworth of wrong actions in order to shed new light on the nature of moral worth. Contrary to theorists in both camps, we argue that more than one kind of motivation can affect the moral worth of actions.
In _Differentiating the Pearl from the Fish-Eye_, Eyal Aviv offers an account of Ouyang Jingwu, a revolutionary Buddhist thinker and educator. The book surveys the life and career of Ouyang and his influence on modern Chinese intellectual history.
This volume offers a holistic, empirically grounded examination of the factors which influence educational leaders' ethical judgments in their day-to-day work in schools. Drawing on a range of quantitative studies, the text utilizes organizational psychology to explore multiple ethical paradigms. It considers social aspects including ethnicity, gender, hegemony-minority relations, and leadership styles which influence and drive ethical judgment patterns employed by educators and principals. The book ultimately demonstrates the Ethical Perspectives Instrument (EPI) as an effective tool for the assessment of various ethical viewpoints and their interactions, suitable for application to diverse cultures and socio-educational circumstances. An important study of leaders' ethics and preparation in handling marginalized populations, this book will be valuable for academics, researchers, and graduate students working in the fields of educational leadership, organizational psychology, and the sociology of education.
We examine whether the "evidence of evidence is evidence" principle is true. We distinguish several different versions of the principle and evaluate recent attacks on some of those versions. We argue that, whatever the merits of those attacks, they leave the more important rendition of the principle untouched. That version is, however, also subject to new kinds of counterexamples. We end by suggesting how to formulate a better version of the principle that takes into account those new counterexamples.
Suppose we learn that we have a poor track record in forming beliefs rationally, or that a brilliant colleague thinks that we believe P irrationally. Does such input require us to revise those beliefs whose rationality is in question? When we gain information suggesting that our beliefs are irrational, we are in one of two general cases. In the first case we made no error, and our beliefs are rational. In that case the input to the contrary is misleading. In the second case we indeed believe irrationally, and our original evidence already requires us to fix our mistake. In that case the input to that effect is normatively superfluous. Thus, we know that information suggesting that our beliefs are irrational is either misleading or superfluous. This, I submit, renders the input incapable of justifying belief revision, despite our not knowing which of the two kinds it is.
This is a lively discussion between two perceptive philosophical thinkers as comfortable with vulnerable intimacy and abstract ideas as they are savvy with the aesthetics of oppression and the many neurotic loops of fear-based escape routes from the Real. With a deep concern for finding the best ways to build a healthy and sane society, their integrating of East-West, Indigenous, and ecological knowledges brings forward a synthesis of ideas to be reckoned with. Dr. Fisher, founder of The Fearology Institute, and Luke Barnesmoore, a doctoral student in the Geography department at the University of British Columbia, caress the contours of fear and fearlessness and the importance of admitting how much fear exists in almost all places humans dwell in contemporary urban societies. If we are to avoid the worst catastrophes of the crises we face on the planet in the very near future, Fisher and Barnesmoore are sure that fear is going to be a major player in the outcomes.
The Madelung equations map the non-relativistic time-dependent Schrödinger equation into hydrodynamic equations of a virtual fluid. While the von Neumann entropy remains constant, we demonstrate that an increase of the Shannon entropy, associated with this Madelung fluid, is proportional to the expectation value of its velocity divergence. Hence, the Shannon entropy may grow due to an expansion of the Madelung fluid. These effects result from the interference between solutions of the Schrödinger equation. Growth of the Shannon entropy due to expansion is common in diffusive processes. However, in the latter the process is irreversible while the processes in the Madelung fluid are always reversible. The relations between interference, compressibility and variation of the Shannon entropy are then examined in several simple examples. Furthermore, we demonstrate that for classical diffusive processes, the “force” accelerating diffusion has the form of the positive gradient of the quantum Bohm potential. Expressing then the diffusion coefficient in terms of the Planck constant reveals the lower bound given by the Heisenberg uncertainty principle in terms of the product between the gas mean free path and the Brownian momentum.
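The claimed proportionality follows from a short computation, sketched here in my own notation (boundary terms are assumed to vanish):

```latex
% Shannon entropy of the Madelung density \rho:
S_{\mathrm{Sh}}=-\int\rho\ln\rho\,\mathrm{d}^{3}x
% Continuity equation: \partial_{t}\rho+\nabla\cdot(\rho u)=0.
% Differentiating and integrating by parts twice:
\frac{\mathrm{d}S_{\mathrm{Sh}}}{\mathrm{d}t}
=\int\nabla\cdot(\rho u)\,(\ln\rho+1)\,\mathrm{d}^{3}x
=-\int u\cdot\nabla\rho\,\mathrm{d}^{3}x
=\int\rho\,\nabla\cdot u\,\mathrm{d}^{3}x
=\langle\nabla\cdot u\rangle
```

A positive expected velocity divergence (expansion) thus raises the Shannon entropy, while the underlying Schrödinger dynamics remains reversible.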
The Self-Intimation thesis has it that whatever justificatory status a proposition has, i.e., whether or not we are justified in believing it, we are justified in believing that it has that status. The Infallibility thesis has it that whatever justificatory status we are justified in believing that a proposition has, the proposition in fact has that status. Jointly, Self-Intimation and Infallibility imply that the justificatory status of a proposition closely aligns with the justification we have about that justificatory status. Self-Intimation has two noteworthy implications. First, assuming that we never have sufficient justification for a proposition and for its negation, we can derive Infallibility from Self-Intimation. Interestingly, there seems to be no equivalently simple way to derive Self-Intimation from Infallibility. This asymmetry provides reason for thinking that bottom-level justification rather than top-level justification drives the explanation for why the levels of justification align. Second, Self-Intimation suggests a counterintuitive treatment of information concerning what justificatory status a proposition has. It follows from Self-Intimation that we always have justification for the truth about whether a proposition is justified for us, and therefore, that higher-order evidence could change what we should believe on this matter only by misleading us. This permits forming beliefs about whether a proposition is justified for us without regard to higher-order evidence, and thus reveals a reason for thinking that top-level justification is evidentially inert.
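The derivation of Infallibility from Self-Intimation mentioned above can be reconstructed along the following lines; J is a justification operator and the formalization is my own, not necessarily the author's:

```latex
\text{(SI)}\quad J\varphi\rightarrow JJ\varphi,\qquad \neg J\varphi\rightarrow J\neg J\varphi
\text{(NC)}\quad \neg(J\psi\wedge J\neg\psi)
% Suppose JJ\varphi but \neg J\varphi. By (SI), J\neg J\varphi.
% Taking \psi=J\varphi gives J\psi\wedge J\neg\psi, contradicting (NC).
% Hence JJ\varphi\rightarrow J\varphi, i.e., Infallibility.
```

The converse direction has no analogous two-step route, which is the asymmetry the abstract exploits.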
Which inequalities in longevity and health among individuals, groups, and nations are unfair? And what priority should health policy attach to narrowing them? These essays by philosophers, economists, epidemiologists, and physicians attempt to determine how health inequalities should be conceptualized, measured, ranked, and evaluated.
How are we to think of Beckett's fiction? Lyrical, inventive, uncompromising, beautifully precise, an immense achievement: is it really an art that proclaims the disintegration of language and of the imagination, as traditional readings conclude? Eyal Amiran's study demonstrates that Beckett's work does not embody the failure of synthetic vision. Beckett's fiction transposes a large intertextual logic from the Western metaphysics it is said to disown, and so takes its place in a literary and philosophical tradition that extends from Plato to Joyce and Yeats. At the same time, it develops as a serial narrative, from the early novels to the late short fictions, to unravel the very story that its metaphysical tradition tells.
Should conciliating with disagreeing peers be considered sufficient for reaching rational beliefs? Thomas Kelly argues that when taken this way, Conciliationism lets those who enter into a disagreement with an irrational belief reach a rational belief all too easily. Three kinds of responses defending Conciliationism are found in the literature. One response has it that conciliation is required only of agents who have a rational belief as they enter into a disagreement. This response yields a requirement that no one should follow. If the need to conciliate applies only to already rational agents, then an agent must conciliate only when her peer is the one who is irrational. A second response views conciliation as merely necessary for having a rational belief. This alone does little to address the central question of what is rational to believe when facing a disagreeing peer. Attempts to develop the response either reduce to the first response, or deem necessary an unnecessary doxastic revision, or imply that rational dilemmas obtain in cases where intuitively there are none. A third response tells us to weigh what our pre-disagreement evidence supports against the evidence from the disagreement itself. This invites epistemic akrasia.
The Global Burden of Disease Study is one of the largest-scale research collaborations in global health, producing critical data for researchers, policy-makers, and health workers about more than 350 diseases, injuries, and risk factors. Such an undertaking is, of course, extremely complex from an empirical perspective. But it also raises complex ethical and philosophical questions. In this volume, a group of leading philosophers, economists, epidemiologists, and policy scholars identify and discuss these philosophical questions. Better appreciating the philosophical dimensions of a study like the GBD can make possible a more sophisticated interpretation of its results, and it can improve epidemiological studies in the future, so that they are better suited to produce results that help us to improve global health.
Revised to reflect the current status of scientific and professional theory, practices, and debate across all facets of ethical decision making, this latest edition of Celia B. Fisher's acclaimed book demystifies the American Psychological Association's (APA) Ethical Principles of Psychologists and Code of Conduct. The Fifth Edition explains and puts into practical perspective the format, choice of wording, aspirational principles, and enforceability of the code. Providing in-depth discussions of the foundation and application of each ethical standard to the broad spectrum of scientific, teaching, and professional roles of psychologists, this unique guide helps practitioners effectively use ethical principles and standards to morally conduct their work activities, avoid ethical violations, and, most importantly, preserve and protect the fundamental rights and welfare of those whom they serve. This edition retains and expands upon the critical content of the previous editions to help readers apply the Ethics Code to contemporary social issues in the conduct of responsible psychological science and practice.
Zionism emerged at the end of the nineteenth century in response to a rise in anti-Semitism in Europe and to the crisis of modern Jewish identity. This novel, national revolution aimed to unite a scattered community, defined mainly by shared texts and literary tradition, into a vibrant political entity destined for the Holy Land. However, Zionism was about much more than a national political ideology and practice. By tracing its origins in the context of a European history of ideas and by considering the writings of key Jewish and Hebrew writers and thinkers from the nineteenth and twentieth centuries, the book offers an entirely new philosophical perspective on Zionism as a unique movement based on intellectual boldness and belief in human action. In contradistinction to the studies of history and ideology that dominate the field, this book also offers a new way of reflecting upon contemporary Israeli politics.
I first support Alec Fisher's thesis that premises and conclusions in arguments can be unasserted, arguing in its favor that only it preserves our intuition that it is at least possible for two arguments to share the same premises and the same conclusion although not everything that is asserted in the one is also asserted in the other, and then answering two objections that might be raised against it. I then draw from Professor Fisher's thesis the consequence that in suppositional arguments the falsity (or unacceptability) of a supposition does not tell unfavorably in the evaluation of the argument, because the falsity (or unacceptability) of a (nonredundant) premise counts against an argument if and only if that premise is asserted. Finally, I observe that, despite the fact that they are neither expressed nor even alluded to, implicit assumptions in arguments are always asserted, unless the conclusion, but none of the explicit premises, is unasserted. Hence, apart from an exceptional case of the kind just mentioned, the falsity (or unacceptability) of implicit assumptions always counts against an argument.
The question of how we apply knowledge from biomedical science to medical and public health practice has been the subject of heated debates about generalizability and related concepts, such as applicability and inductive inference. In this essay, I interpret the term from the perspective of two causal models: determinism and indeterminism. I suggest that theories of generalizability can be formulated on the basis of both models and take the form of testable but unverifiable hypotheses, an attribute that is common to all scientific theories. Nonetheless, there is one noteworthy difference between the two models: determinism allows one to rationalize a decision to treat a certain kind of patient but only indirectly a decision to treat any particular patient, whereas indeterminism accommodates both types of decisions.
In certain judgmental situations where a “correct” decision is presumed to exist, optimal decision making requires evaluation of the decision-makers’ capabilities and the selection of the appropriate aggregation rule. The major and so far unresolved difficulty is the former requirement. This article presents the optimal aggregation rule that simultaneously satisfies these two interdependent requirements. In our setting, some record of the voters’ past decisions is available, but the correct decisions are not known. We observe that any arbitrary evaluation of the decision-makers’ capabilities as probabilities yields some optimal aggregation rule that, in turn, yields a maximum-likelihood estimation of decisional skills. Thus, a skill-evaluation equilibrium can be defined as an evaluation of decisional skills that yields itself as a maximum-likelihood estimation of decisional skills. We show that such an equilibrium exists and offer a procedure for finding one. The obtained equilibrium is locally optimal and is shown empirically to be, in general, globally optimal in terms of the correctness of the resulting collective decisions. Interestingly, under minimally competent (almost symmetric) skill distributions that allow unskilled decision makers, the optimal rule considerably outperforms the common simple majority rule (SMR). Furthermore, a sufficient record of past decisions ensures that the collective probability of making a correct decision converges to 1, as opposed to an accuracy of about 0.7 under SMR. Our proposed optimal voting procedure relaxes the fundamental (and sometimes unrealistic) assumptions of Condorcet’s celebrated theorem and its extensions, such as sufficiently high decision-making quality, skill homogeneity, or the existence of a sufficiently large group of decision makers.
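The fixed-point procedure described above can be illustrated with a small reconstruction. This is a sketch under my own assumptions (log-odds weighting as the optimal rule for given skills, agreement rates as the maximum-likelihood skill re-estimate, and an arbitrary initial guess), not the authors' exact algorithm:

```python
import numpy as np

def skill_equilibrium(votes, iters=100, eps=1e-4):
    """Search for a skill-evaluation equilibrium over binary (+1/-1) ballots.

    votes: (n_voters, n_issues) array of +1/-1 votes; correct answers unknown.
    Alternates between (a) the log-odds weighted rule that is optimal for the
    current skill estimates and (b) re-estimating each voter's skill as their
    agreement rate with that rule's outcomes, until the estimates stabilise.
    """
    n_voters, _ = votes.shape
    p = np.full(n_voters, 0.6)                 # arbitrary initial competence guess
    for _ in range(iters):
        weights = np.log(p / (1.0 - p))        # optimal weights for skills p
        outcomes = np.sign(weights @ votes)    # weighted-majority decisions
        outcomes[outcomes == 0] = 1            # break ties arbitrarily
        p_new = (votes == outcomes).mean(axis=1)  # ML re-estimate of skills
        p_new = np.clip(p_new, 0.01, 0.99)     # keep the weights finite
        if np.max(np.abs(p_new - p)) < eps:    # fixed point reached
            p = p_new
            break
        p = p_new
    return p, outcomes
```

A fixed point of this iteration is a skill evaluation that yields itself back as the maximum-likelihood estimate under its own optimal rule, i.e., a skill-evaluation equilibrium in the abstract's sense.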
This paper provides a philosophical analysis of the ongoing controversy surrounding R.A. Fisher's famous fundamental theorem of natural selection. The difference between the traditional and modern interpretations of the theorem is explained. I argue that proponents of the modern interpretation have captured Fisher's intended meaning correctly and shown that the theorem is mathematically correct, pace the traditional consensus. However, whether the theorem has any real biological significance remains an unresolved issue. I argue that the answer depends on whether we accept Fisher's non-standard notion of environmental change, on which the theorem rests; arguments for and against this notion are explored. I suggest that there is a close link between Fisher's fundamental theorem and the modern gene's eye view of evolution.
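For orientation, one common modern (Price–Ewens-style) statement of the theorem is the following; the notation is mine, not the paper's:

```latex
% Partial change in mean fitness attributable to natural selection,
% holding the "environment" (in Fisher's special sense) fixed:
\Delta_{\mathrm{NS}}\,\bar{w}\;=\;\frac{\sigma^{2}_{A}(w)}{\bar{w}}
% where \sigma^{2}_{A}(w) is the additive genetic variance in fitness
% and \bar{w} is the mean fitness.
```

On the modern reading the theorem concerns only this partial change, which is why it can be exactly true while the total change in mean fitness behaves quite differently.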
Donald C. Williams was a key figure in the development of analytic philosophy. This book will be the definitive source for his highly original work, which did much to bring metaphysics back into fashion. It presents six classic papers and six previously unpublished, revealing his full philosophical vision for the first time.
Framing effects occur when people respond differently to the same information, just because it is conveyed in different words. For example, in the classic ‘Disease Problem’ introduced by Amos Tversky and Daniel Kahneman, people’s choices between alternative interventions depend on whether these are described positively, in terms of the number of people who will be saved, or negatively, in terms of the corresponding number who will die. In this paper, I discuss an account of framing effects based on ‘fuzzy-trace theory’. The central claim of this account is that people represent the numbers in framing problems in a ‘gist-like’ way, as ‘some’, and that this creates a categorical contrast between ‘some’ people being saved and ‘no’ people being saved. I argue that fuzzy-trace theory’s gist-like representation, ‘some’, must have the semantics of ‘some and possibly all’, not ‘some but not all’. I show how this commits fuzzy-trace theory to a modest version of a rival ‘lower bounding hypothesis’, according to which lower-bounded interpretations of quantities contribute to framing effects by rendering the alternative descriptions extensionally inequivalent. As a result, fuzzy-trace theory is incoherent as it stands. Making sense of it requires dropping, or refining, the claim that decision-makers perceive alternatively framed options as extensionally equivalent, and the related claim that framing effects are irrational. I end by suggesting that, whereas the modest lower bounding hypothesis is well supported, there is currently less evidence for the core element of the fuzzy-trace account.
This text meets the requirements of the OCR AS specification for critical thinking. Alec Fisher shows students how they can develop a range of creative and critical thinking skills that are transferable to other subjects and contexts.
When a belief is self-fulfilling, having it guarantees its truth. When a belief is self-defeating, having it guarantees its falsity. These are the cases of “self-impacting” beliefs to be examined below. Scenarios of self-defeating beliefs can yield apparently dilemmatic situations in which we seem to lack sufficient reason to have any belief whatsoever. Scenarios of self-fulfilling beliefs can yield apparently dilemmatic situations in which we seem to lack reason to have any one belief over another. Both scenarios have been used independently to challenge Evidentialism, on which what we may rationally believe is all and only what fits our current evidence. Here we tie the two scenarios together and explore what a knowledge-sensitive evidentialist approach to one implies for the other.
Is it ever rational to suspend judgment about whether a particular doxastic attitude of ours is rational? An agent who suspends about whether her attitude is rational has serious doubts that it is. These doubts place a special burden on the agent, namely, to justify maintaining her chosen attitude over others. A dilemma arises. Providing justification for maintaining the chosen attitude would commit the agent to considering the attitude rational, contrary to her suspension on the matter. Alternatively, in the absence of such justification, the attitude would be arbitrary by the agent’s own lights, and therefore irrational from the agent’s own perspective. So, suspending about whether an attitude of ours is rational does not cohere with considering it rationally preferable to other attitudes, and leads to a more familiar form of epistemic akrasia otherwise.