This engaging work of comparative philosophy puts the Chinese and American philosophical traditions into a mutually informative and transformative dialogue, on the way to developing a new form of Confucian pragmatism.
We present experimental evidence that supports the thesis (Stefánsson and Bradley in Philos Sci 82:602–625, 2015, and Br J Philos Sci 70:77–102, 2019; Bradley in Decision theory with a human face, Cambridge University Press, Cambridge, 2017; Goldschmidt and Nissan-Rozen in Synthese 198:7553–7575, 2021) that people might positively or negatively desire risky prospects conditional on only some of the prospects’ outcomes obtaining. We argue that this evidence has important normative implications for the central debate in normative decision theory between two general approaches to rationalizing several common patterns of preference which are ruled out as irrational by orthodox decision theory, namely the re-individuation approach and the non-expected utility approach.
Many philosophers in the field of meta-ethics believe that rational degrees of confidence in moral judgments should have a probabilistic structure, in the same way as rational degrees of belief do. The current paper examines this position, termed “moral Bayesianism,” from an empirical point of view. To this end, we assessed the extent to which degrees of moral judgment obey the third axiom of the probability calculus, if $P(A \cap B) = 0$ then $P(A \cup B) = P(A) + P(B)$, known as finite additivity, as compared to degrees of belief on the one hand and degrees of desire on the other. Results generally converged to show that degrees of moral judgment are more similar to degrees of belief than to degrees of desire in this respect. This supports the adoption of a Bayesian approach to the study of moral judgments. To further support moral Bayesianism, we also demonstrated its predictive power. Finally, we discuss the relevance of our results to the meta-ethical debate between moral cognitivists and moral non-cognitivists.
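In schematic terms (with hypothetical numbers supplied here for illustration, not taken from the study): if a participant regards two moral propositions A and B as mutually exclusive, so that $P(A \cap B) = 0$, and assigns degree $P(A) = 0.3$ to the one and $P(B) = 0.5$ to the other, then finite additivity requires the degree assigned to their disjunction to be $P(A \cup B) = P(A) + P(B) = 0.8$; systematic deviation from this sum would tell against a probabilistic structure for moral judgment.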
Karl Popper is one of this century's most influential philosophers, but his life in fin-de-siècle and interwar Vienna, and his exile in New Zealand during World War II, have so far remained shrouded in mystery. This 2001 intellectual biography recovers the legacy of the young Popper: the progressive, cosmopolitan, Viennese socialist who combated fascism, revolutionized the philosophy of science, and envisioned the Open Society. Malachi Hacohen delves into the archives and draws a compelling portrait of the philosopher, the assimilated Jewish intelligentsia, and the vanished culture of Red Vienna, which was decimated by Nazism. Hacohen's adventurous biography restores Popper's works to their original Central European contexts and, at the same time, shows that they have urgent messages for contemporary politics and philosophy.
In the wake of the COVID-19 pandemic, the status and mechanisms of leadership, and the challenges medical workers face in reconciling family and work, have caused widespread concern. In the post-pandemic era, based on role theory and the stressor-detachment model, this paper seeks to open the “black box” of negative effects that can be caused by leadership, to investigate the mechanism and boundary conditions of those negative effects, and to explore factors that reduce them. We recruited 1,010 Chinese medical workers fighting COVID-19 on the frontline. Our results showed a significant negative correlation between empowering leadership and work–family conflict; this relationship was fully mediated by role stress, while psychological detachment moderated the relationship between role stress and work–family conflict. Moreover, psychological detachment moderated the mediating effect of empowering leadership on work–family conflict through role stress: higher levels of psychological detachment were associated with a weaker effect of role stress on medical workers' work–family conflict. This study has important theoretical significance and practical value for revealing the negative effects and mechanisms of empowering leadership and for helping medical workers better manage work–family relations.
John Forrester’s book Thinking in Cases does not provide one ultimate definition of what it means to ‘think in cases’, but rather several alternatives: a ‘style of reasoning’, ‘paradigms’ or ‘exemplars’, and ‘language games’, to mention only a few. But for Forrester, the stories behind each of the figures who suggested these different models for thinking are as important as the models themselves. In other words, the question for Forrester is not only what ‘thinking in cases’ is, but also who might be considered a ‘thinker in cases’. Who could serve as a case study for such a thinker? The major candidates that Forrester considers in his book to be ‘thinkers in cases’ are Kuhn, Foucault, Freud, and Winnicott. In what follows, I will argue that one name is missing from this list, as well as from Forrester’s book more generally: Michael Balint. This name is missing not only because Balint was a great ‘thinker in cases’, but also because we have some reason to believe that Forrester himself thought so and wished to add him to the list. Forrester, I will argue, found in Balint an exemplar of a thinker in cases who combined elements from Winnicott’s psychoanalytic theory and Foucault’s philosophy of the case-based sciences.
This article explores the socio-ethical concerns raised by the familial searching of forensic databases in criminal investigations, from the perspective of family and kinship studies. It discusses the broader implications of this kinship-based perspective for wider debates about identity, privacy and genetic databases.
The goal of this paper is a comprehensive analysis of basic reasoning patterns that are characteristic of vague predicates. The analysis leads to rigorous reconstructions of the phenomena within formal systems. Two basic features are dealt with. One is tolerance: the insensitivity of predicates to small changes in the objects of predication (a walking distance increased by one increment is still a walking distance). The other is the existence of borderline cases. The paper shows why these should be treated as different, though related, phenomena. Tolerance is formally reconstructed within a proposed framework of contextual logic, leading to a solution of the Sorites paradox. Borderline vagueness is reconstructed using certain modality operators; the set-up provides an analysis of higher order vagueness and a derivation of scales of degrees for the property in question.
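To make the tolerance pattern concrete, here is the generic Sorites schema (a standard reconstruction, not the paper's own contextual formalism): let $W(n)$ say that a distance of $n$ increments is a walking distance. From the base case $W(1)$ and the tolerance principle $\forall n\,(W(n) \rightarrow W(n+1))$, repeated modus ponens yields $W(k)$ for every $k$ -- including values of $k$ at which the predicate plainly fails. Blocking this derivation without simply abandoning tolerance is the task a contextual treatment must accomplish.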
If we try to evaluate the sentence on line 1 we find ourselves going in an unending cycle. For this reason alone we may conclude that the sentence is not true. Moreover we are driven to this conclusion by an elementary argument: If the sentence is true then what it asserts is true, but what it asserts is that the sentence on line 1 is not true. Consequently the sentence on line 1 is not true. But when we write this true conclusion on line 2 we find ourselves repeating the very same sentence. It seems that we are unable to deny the truth of the sentence on line 1 without asserting it at the same time.
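Schematically (a standard rendering of the argument, with notation supplied here): let $\lambda$ be the sentence on line 1, which asserts $\neg T(\lambda)$. If $T(\lambda)$, then what $\lambda$ asserts holds, i.e. $\neg T(\lambda)$; so $T(\lambda) \rightarrow \neg T(\lambda)$, and hence $\neg T(\lambda)$. But $\neg T(\lambda)$ is exactly what $\lambda$ says, so in writing down this conclusion we assert $\lambda$ itself.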
In his classic book The Foundations of Statistics, Savage developed a formal system of rational decision making. The system is based on (i) a set of possible states of the world, (ii) a set of consequences, (iii) a set of acts, which are functions from states to consequences, and (iv) a preference relation over the acts, which represents the preferences of an idealized rational agent. The goal and the culmination of the enterprise is a representation theorem: any preference relation that satisfies certain arguably acceptable postulates determines a (finitely additive) probability distribution over the states and a utility assignment to the consequences, such that the preferences among acts are determined by their expected utilities. Additional problematic assumptions are however required in Savage's proofs. First, there is a Boolean algebra of events (sets of states) which determines the richness of the set of acts. The probabilities are assigned to members of this algebra. Savage's proof requires that this be a σ-algebra (i.e., closed under countable unions and intersections), which makes for an extremely rich preference relation. On Savage's view we should not require subjective probabilities to be σ-additive. He therefore finds the insistence on a σ-algebra peculiar and is unhappy with it. But he sees no way of avoiding it. Second, the assignment of utilities requires the constant act assumption: for every consequence there is a constant act, which produces that consequence in every state. This assumption is known to be highly counterintuitive. The present work contains two mathematical results. The first, and the more difficult one, shows that the σ-algebra assumption can be dropped. The second states that, as long as utilities are assigned to finite gambles only, the constant act assumption can be replaced by the more plausible and much weaker assumption that there are at least two non-equivalent constant acts. The second result also employs a novel way of deriving utilities in Savage-style systems -- without appealing to von Neumann-Morgenstern lotteries. The paper discusses the notion of “idealized agent” that underlies Savage's approach, and argues that the simplified system, which is adequate for all the actual purposes for which the system is designed, involves a more realistic notion of an idealized agent.
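In symbols, the representation the theorem delivers reads as follows (a standard formulation, with notation supplied here rather than quoted from the paper): there exist a finitely additive probability $P$ on the algebra of events and a utility function $u$ on consequences such that for all acts $f$ and $g$, $f \preceq g$ if and only if $EU(f) \le EU(g)$, where, for an act $f$ taking the value $c_i$ on the event $E_i$ ($i = 1, \ldots, n$), $EU(f) = \sum_{i=1}^{n} P(E_i)\, u(c_i)$.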
Consistent with numerous electrophysiological studies, we recently reported that conscious perception is associated with a widely distributed modulation of the P3 component. We also showed that correct objective performance in the absence of subjective awareness is associated with a spatially more restricted modulation of the P3. The relatively late occurrence of the P3, along with the lack of control for post-perceptual processes, suggests that this component might reflect processes related to stimulus evaluation or confidence rather than to visual awareness or objective performance. The main aim of the current study was to test this hypothesis. While EEG was recorded, participants performed a forced-choice localization task and reported their subjective perception of the target on a 3-level scale that also indexed their confidence. The results showed that our previous findings are replicated when confidence is controlled for.
In this paper we study the question: assuming MA + ¬CH, does Sacks forcing or Laver forcing collapse cardinals? We show that this question is equivalent to the question of what is the additivity of Marczewski's ideal s0. We give a proof that it is consistent that Sacks forcing collapses cardinals. On the other hand we show that Laver forcing does not collapse cardinals.
Naïve realism is the view that veridical experiences are fundamentally relations of acquaintance to external objects and their features, and multidisjunctivism is the conjunction of naïve realism and the view that hallucinatory experiences don’t share a common fundamental kind. Multidisjunctivism allegedly removes the screening-off worry over naïve realism, and the relevant literature suggests that multidisjunctivism is one of the naïve realist responses to the worry. The present paper argues that the multidisjunctive solution is implicitly changing the subject, so the impression that the multidisjunctivist is addressing the screening-off problem is illusory.
The semantic paradoxes, whose paradigm is the Liar, played a crucial role at a crucial juncture in the development of modern logic. In his seminal 1908 paper, Russell outlined a system, soon to become that of the Principia Mathematica, whose main goal was the solution of the logical paradoxes, both semantic and set-theoretic. Russell did not distinguish between the two, and his theory of types was designed to solve both kinds in the same uniform way. Set theorists, however, were content to treat only the set-theoretic paradoxes, putting aside the semantic ones as a non-mathematical concern. This separation was explicitly proposed, eighteen years after Russell’s paper, by Ramsey, though he, like Russell, advocated a system that addresses both kinds. Since then, the semantic paradoxes have been viewed within the perspective of the theory of truth, where they have occupied a respectable niche, but one of rather specialized interest.
In this work we give a complete answer as to the possible implications between some natural properties of Lebesgue measure and the Baire property. For this we prove general preservation theorems for forcing notions. Thus we answer a decade-old problem of J. Baumgartner and answer the last three open questions of the Kunen-Miller chart about measure and category. Explicitly, in \S1: (i) We prove that if we add a Laver real, then the old reals have outer measure one. (ii) We prove a preservation theorem for countable-support forcing notions, and using this theorem we prove (iii) If we add ω 2 Laver reals, then the old reals have outer measure one. From this we obtain (iv) $\operatorname{Cons}(\mathrm{ZF}) \Rightarrow \operatorname{Cons}(\mathrm{ZFC} + \neg B(m) + \neg U(m) + U(c))$ . In \S2: (i) We prove a preservation theorem, for the finite support forcing notion, of the property " $F \subseteq ^\omega\omega$ is an unbounded family." (ii) We introduce a new forcing notion making the old reals a meager set but the old members of ω ω remain an unbounded family. Using this we prove (iii) $\operatorname{Cons}(\mathrm{ZF}) \Rightarrow \operatorname{Cons}(\mathrm{ZFC} + U(m) + \neg B(c) + \neg U(c) + C(c))$ . In \S3: (i) We prove a preservation theorem, for the finite support forcing notion, of a property which implies "the union of the old measure zero sets is not a measure zero set," and using this theorem we prove (ii) $\operatorname{Cons}(\mathrm{ZF}) \Rightarrow \operatorname{Cons}(\mathrm{ZFC} + \neg U(m) + C(m) + \neg C(c))$.
There are three sections in this paper. The first is a philosophical discussion of the general problem of reasoning under limited deductive capacity. The second sketches a rigorous way of assigning probabilities to statements in pure arithmetic; motivated by the preceding discussion, it can nonetheless be read separately. The third is a philosophical discussion that highlights the shifting contextual character of subjective probabilities and beliefs.
The Cambridge Malting House, an experimental school, serves here as a case study for investigating the tensions within 1920s liberal elites between their desire to abandon some Victorian and Edwardian sets of values in favour of more democratic ones, and at the same time their insistence on preserving themselves as an integral part of the English upper class. Susan Isaacs, the manager of the Malting House, provided the parents – some of whom were the most famous scientists and intellectuals of their age – with an opportunity to fulfil their ‘fantasy’ of bringing up children in total freedom. In retrospect, however, she deeply criticized those from their milieu for not fully understanding the real socio-cultural implications of their ideological decision to make independence and freedom the core values in their children’s education. Thus, 1920s progressive education is a paradigmatic case study of the cultural and ideological inner contradictions within liberal thought in the interwar era. The article also shows how psychoanalysis – which attracted many progressive educators – played a crucial role in providing liberals of all sorts with a new language to articulate their political visions, but, at the same time, exposed the limits of the liberal discourse as a whole.
Non-standard models were introduced by Skolem, first for set theory, then for Peano arithmetic. In the former, Skolem found support for an anti-realist view of absolutely uncountable sets. But in the latter he saw evidence for the impossibility of capturing the intended interpretation by purely deductive methods. In the history of mathematics the concept of a non-standard model is new. An analysis of some major innovations – the discovery of irrationals, the use of negative and complex numbers, the modern concept of function, and non-Euclidean geometry – reveals them as essentially different from the introduction of non-standard models. Yet non-Euclidean geometry, which is discussed at some length, is relevant to the present concern; for it raises the issue of intended interpretation. The standard model of natural numbers is the best candidate for an intended interpretation that cannot be captured by a deductive system. Next, I suggest, is the concept of a well-ordered set, and then, perhaps, the concept of a constructible set. One may have doubts about a realistic conception of the standard natural numbers, but such doubts cannot gain support from non-standard models. Attempts to utilize non-standard models for an anti-realist position in mathematics, which appeal to meaning-as-use, or to arguments of the kind proposed by Putnam, fail through irrelevance, or lead to incoherence. Robinson’s skepticism, on the other hand, is a coherent position, though one that gives up on providing a detailed philosophical account. The last section enumerates various uses of non-standard models.
Most legal systems deny civilians a right to compensation for losses they sustain during belligerent activities. Arguments for recognising such a right are usually divorced, to various degrees, from the moral and legal underpinnings of the notion of inflicting a wrongful loss under either international humanitarian law or domestic tort law. My aim in this article is to advance a novel account of states’ tortious liability for belligerent wrongdoing, drawing on both international humanitarian law and corrective justice approaches to domestic tort law. Structuring my account on both frameworks, I argue that some of the losses that states inflict during war are private law wrongs that establish a claim of compensation in tort. Only in cases where the in bello principles are observed can losses to person and property be justified and non-wrongful. Otherwise, they constitute wrongs, which those who inflict them have duties of corrective justice to repair.
In a recent paper S. McCall adds another link to a chain of attempts to enlist Gödel’s incompleteness result as an argument for the thesis that human reasoning cannot be construed as being carried out by a computer. McCall’s paper is undermined by a technical oversight. My concern however is not with the technical point. The argument from Gödel’s result to the no-computer thesis can be made without following McCall’s route; it is then straighter and more forceful. Yet the argument fails in an interesting and revealing way. And it leaves a remainder: if some computer does in fact simulate all our mathematical reasoning, then, in principle, we cannot fully grasp how it works. Gödel’s result also points out a certain essential limitation of self-reflection. The resulting picture parallels, not accidentally, Davidson’s view of psychology, as a science that in principle must remain “imprecise”, not fully spelt out. What is intended here by “fully grasp”, and how all this is related to self-reflection, will become clear at the end of this comment.
Techniques for resolving some types of inherited mitochondrial diseases have recently been the subject of scientific research, ethical scrutiny, media coverage and regulatory initiatives in the UK. Building on research using eggs from a variety of providers, scientists hope to eradicate maternally transmitted mutations in mitochondrial DNA by transferring the nuclear DNA of a fertilised egg, created by an intending mother at risk of transmitting mitochondrial disease, and her male partner, into an enucleated egg provided by another woman. In this article we examine how egg providers for mitochondrial research and therapy have been represented in stakeholder debates. A systematic review of key documents and parliamentary debates shows that the balance of consideration tilts heavily towards therapeutic egg providers; research egg providers have been ignored and rendered invisible. However, mapping the various designations of therapeutic egg providers shows that their role is so heavily camouflaged that they have only an absent presence in discussions. We explore this puzzling ambivalence towards egg providers whose contributions are necessary to the success of current mitochondrial research and proposed therapies. We suggest that labels that diminish the contributions of egg providers serve certain governance objectives in managing possible future claims about, and by, therapeutic egg providers. We demonstrate that the social positioning of research egg providers is entangled within that of therapeutic egg providers, which means that the former can also never receive their due recognition. This article contributes to the wider literature on the governance of new technological interventions.
The technique of minimizing information (infomin) has been commonly employed as a general method for both choosing and updating a subjective probability function. We argue that, in a wide class of cases, the use of infomin methods fails to cohere with our standard conception of rational degrees of belief. We introduce the notion of a deceptive updating method and argue that non-deceptiveness is a necessary condition for rational coherence. Infomin has been criticized on the grounds that there are no higher order probabilities that ‘support’ it, but the appeal to higher order probabilities is a substantial assumption that some might reject. Our elementary arguments from deceptiveness do not rely on this assumption. While deceptiveness implies lack of higher order support, the converse does not, in general, hold, which indicates that deceptiveness is a more objectionable property. We offer a new proof of the claim that infomin updating of any strictly-positive prior with respect to conditional-probability constraints is deceptive. In the case of expected-value constraints, infomin updating of the uniform prior is deceptive for some random variables but not for others. We establish both a necessary condition and a sufficient condition (which extends the scope of the phenomenon beyond cases previously considered) for deceptiveness in this setting. Along the way, we clarify the relation which obtains between the strong notion of higher order support, in which the higher order probability is defined over the full space of first order probabilities, and the apparently weaker notion, in which it is defined over some smaller parameter space. We show that under certain natural assumptions, the two are equivalent. Finally, we offer an interpretation of Jaynes, according to which his own appeal to infomin methods avoids the incoherencies discussed in this paper.
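For concreteness, here is a minimal numerical sketch of infomin updating under an expected-value constraint (the prior, the random variable, and the target mean below are hypothetical; the exponential-tilting form is the standard KL-minimizing solution, supplied here for illustration rather than taken from the paper):

    import numpy as np

    def tilt(prior, x, t):
        """Exponentially tilt `prior` by t*x (numerically stabilised)."""
        z = t * x
        w = prior * np.exp(z - z.max())
        return w / w.sum()

    def infomin_update(prior, x, target, lo=-50.0, hi=50.0, tol=1e-12):
        """KL-minimizing (infomin) update of `prior` subject to E[X] = target.

        The minimizer has the form q_i proportional to p_i * exp(t * x_i);
        since the induced expectation is monotone increasing in t, we locate
        the multiplier t by bisection.
        """
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if tilt(prior, x, mid) @ x < target:
                lo = mid
            else:
                hi = mid
        return tilt(prior, x, lo)

    # Hypothetical example: uniform prior over a six-sided die,
    # updated on the constraint that the expected value is 4.5.
    prior = np.ones(6) / 6
    x = np.arange(1.0, 7.0)
    q = infomin_update(prior, x, 4.5)
    print(q.round(4), round(float(q @ x), 4))

Updates of exactly this kind -- the uniform prior revised under an expected-value constraint -- are the ones whose deceptiveness the paper shows to depend on the random variable in question.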
We trace self-reference phenomena to the possibility of naming functions by names that belong to the domain over which the functions are defined. A naming system is a structure of the form (D, type, {·}), where D is a non-empty set; for every a ∈ D that is a name of a k-ary function, {a}: D^k → D is the function named by a; and type(a) is the type of a, which tells us whether a is a name and, if it is, the arity of the named function. Under quite general conditions we get a fixed point theorem, whose special cases include the fixed point theorem underlying Gödel's proof, Kleene's recursion theorem and many other theorems of this nature, including the solution to simultaneous fixed point equations. Partial functions are accommodated by including “undefined” values; we investigate different systems arising out of different ways of dealing with them. Many-sorted naming systems are suggested as a natural approach to general computability with many data types over arbitrary structures. The first part of the paper is a historical reconstruction of the way Gödel probably derived his proof from Cantor's diagonalization, through the semantic version of Richard's paradox. The incompleteness proof -- including the fixed point construction -- results from a natural line of thought, thereby dispelling the appearance of a “magic trick”. The analysis goes on to show how Kleene's recursion theorem is obtained along the same lines.
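The diagonal construction underlying these fixed point theorems can be made tangible with a self-reproducing program (a generic Python illustration, not an example from the paper): a template applied to its own quotation reproduces itself, just as the Gödel/Kleene fixed point is obtained by applying a diagonal function to its own name.

    # A quine: the program is a fixed point of the map "source code -> output".
    # The diagonal trick: the template s is applied to its own quotation.
    s = 's = {!r}\nprint(s.format(s))'
    print(s.format(s))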
The research examined the adjustment process to retirement among teachers during the initial stages of retirement. The focus on the specific sector of teachers is drawn from a theoretical foundation...
Savage's framework of subjective preference among acts provides a paradigmatic derivation of rational subjective probabilities within a more general theory of rational decisions. The system is based on a set of possible states of the world, and on acts, which are functions that assign to each state a consequence. The representation theorem states that the given preference between acts is determined by their expected utilities, based on uniquely determined probabilities (assigned to sets of states) and numeric utilities assigned to consequences. Savage's derivation, however, rests on a well-known, highly problematic assumption not included among his postulates: for any consequence of an act in some state, there is a "constant act" which has that consequence in all states. This ability to transfer consequences from state to state is, in many cases, miraculous -- including simple scenarios suggested by Savage as natural cases for applying his theory. We propose a simplification of the system, which yields the representation theorem without the constant act assumption. We need only postulates P1-P6. This is done at the cost of reducing the set of acts included in the setup. The reduction excludes certain theoretical infinitary scenarios, but includes the scenarios that should be handled by a system that models human decisions.
This paper is based on linked qualitative studies of the donation of human embryos to stem cell research carried out in the United Kingdom, Switzerland, and China. All three studies used semi-structured interview protocols to allow an in-depth examination of donors’ and non-donors’ rationales for their donation decisions, with the aim of gaining information on contextual and other factors that play a role in donor decisions and identifying how these relate to factors that are more usually included in evaluations made by theoretical ethics. Our findings have implications for one factor that has previously been suggested as being of ethical concern: the role of gratitude. Our empirical work shows no evidence that interpersonal gratitude is an important factor, but it does support the existence of a solidarity-based desire to “give something back” to medical research. Thus, we use empirical data to expand and refine the conceptual basis of bioethical theorizing about the IVF–stem cell interface.
We define the ideal with the property that a real omits all Borel sets in the ideal that are coded in a transitive model if and only if it is an amoeba real over this model. We investigate some other properties of this ideal. Strolling through the "amoeba forest" we gain, as an application, a modification of the proof of the inequality between the additivities of Lebesgue measure and Baire category.
Simple locative sentences show a variety of pseudo-quantificational interpretations. Some locatives give the impression of universal quantification over parts of objects, others involve existential quantification, and yet others cannot be characterized by either of these quantificational terms. This behavior is explained by virtually all semantic theories of locatives. What has not been previously observed is that similar quantificational variability is also exhibited by locative sentences containing indefinites with the ‘a’ article. This phenomenon is especially problematic for traditional existential treatments of indefinites. We propose a solution where indefinites denote properties and are assigned locations similarly to other spatial descriptions. This Property-Eigenspace Hypothesis accounts for the correlation between the interpretations of locative indefinites and the pseudo-quantificational effects with simple entity-denoting NPs. Thereby, the proposal opens up a new empirical domain for property-based theories of indefinites, with implications for the analysis of collective descriptions, generics, negative polarity items and part–whole structure.