In a recent article in this journal, Andrew Johnson seeks to defend the “New Atheism” against several objections. We provide a philosophical assessment of his defense of contemporary atheistic arguments that are said to amount to bifurcation fallacies. This point of discussion leads to our critical discussion of the presumption of atheism and the epistemic justification of atheism.
In this article we analyse the role that artificial intelligence (AI) could play, and is playing, to combat global climate change. We identify two crucial opportunities that AI offers in this domain: it can help improve and expand current understanding of climate change, and it can contribute to combating the climate crisis effectively. However, the development of AI also raises two sets of problems when considering climate change: the possible exacerbation of social and ethical challenges already associated with AI, and the contribution to climate change of the greenhouse gases emitted by training data- and computation-intensive AI systems. We assess the carbon footprint of AI research, and the factors that influence AI’s greenhouse gas (GHG) emissions in this domain. We find that the carbon footprint of AI research may be significant and highlight the need for more evidence concerning the trade-off between the GHG emissions generated by AI research and the energy and resource efficiency gains that AI can offer. In light of our analysis, we argue that leveraging the opportunities offered by AI for global climate change whilst limiting its risks is a gambit which requires responsive, evidence-based and effective governance to become a winning strategy. We conclude by identifying the European Union as being especially well placed to play a leading role in this policy response and provide 13 recommendations designed to identify and harness the opportunities of AI for combating climate change, while reducing its impact on the environment.
Initiatives relying on artificial intelligence (AI) to deliver socially beneficial outcomes—AI for social good (AI4SG)—are on the rise. However, existing attempts to understand and foster AI4SG initiatives have so far been limited by the lack of normative analyses and a shortage of empirical evidence. In this Perspective, we address these limitations by providing a definition of AI4SG and by advocating the use of the United Nations’ Sustainable Development Goals (SDGs) as a benchmark for tracing the scope and spread of AI4SG. We introduce a database of AI4SG projects gathered using this benchmark, and discuss several key insights, including the extent to which different SDGs are being addressed. This analysis makes possible the identification of pressing problems that, if left unaddressed, risk hampering the effectiveness of AI4SG initiatives.
Technologies to rapidly alert people when they have been in contact with someone carrying the coronavirus SARS-CoV-2 are part of a strategy to bring the pandemic under control. Currently, at least 47 contact-tracing apps are available globally. They are already in use in Australia, South Korea and Singapore, for instance. And many other governments are testing or considering them. Here we set out 16 questions to assess whether — and to what extent — a contact-tracing app is ethically justifiable.
In a series of papers, Donald Davidson (1984, 1986, 1991) developed a powerful argument against the claim that linguistic conventions provide any explanatory purchase on an account of linguistic meaning and communication. This argument, as I shall develop it, turns on cases of what I call lexical innovation: cases in which a speaker uses a sentence containing a novel expression-meaning pair, but nevertheless successfully communicates her intended meaning to her audience. I will argue that cases of lexical innovation motivate a dynamic conception of linguistic conventions according to which background linguistic conventions may be rapidly expanded to incorporate new word meanings or shifted to revise the meanings of words already in circulation. I argue that this dynamic account of conventions resolves the problem raised by cases of lexical innovation, and that it does so in a way that is preferable to the alternatives offered by those who—like Davidson—deny important explanatory roles for linguistic conventions.
An important objection to the "higher-order" theory of consciousness turns on the possibility of higher-order misrepresentation. I argue that the objection fails because it illicitly assumes a characterization of consciousness explicitly rejected by HO theory. This in turn raises the question of what justifies an initial characterization of the data a theory of consciousness must explain. I distinguish between intrinsic and extrinsic characterizations of consciousness, and I propose several desiderata a successful characterization of consciousness must meet. I then defend the particular extrinsic characterization of the HO theory, the "transitivity principle," against its intrinsic rivals, thereby showing that the misrepresentation objection conclusively falls short.
Reformulating a scientific theory often leads to a significantly different way of understanding the world. Nevertheless, accounts of both theoretical equivalence and scientific understanding have neglected this important aspect of scientific theorizing. This essay provides a positive account of how reformulating theories changes our understanding. My account simultaneously addresses a serious challenge facing existing accounts of scientific understanding. These accounts have failed to characterize understanding in a way that goes beyond the epistemology of scientific explanation. By focusing on cases where we have differences in understanding without differences in explanation, I show that understanding cannot be reduced to explanation.
ABSTRACT What is to be done when parents disagree about whether to raise their children as vegans? Three positions have recently emerged. Marcus William Hunt has argued that parents should seek a compromise. I have argued that there should be no compromise on animal rights, but there may be room for compromise over some ‘unusual’ sources of non-vegan, but animal-rights-respecting, food. Carlo Alvaro has argued that both Hunt and I are wrong; veganism is like religion, and there should be no compromise on religion, meaning there should be no compromise on veganism. This means that even my minimal-compromise approach should be rejected. This paper critiques Alvaro’s zero-compromise veganism, demonstrating that his case against Hunt’s position is undermotivated, and his case against my position rests upon misunderstandings. If vegans wish to reject Hunt’s pro-compromise position, they should favour a rightist approach, not Alvaro’s zero-compromise approach.
That AI will have a major impact on society is no longer in question. Current debate turns instead on how far this impact will be positive or negative, for whom, in which ways, in which places, and on what timescale. In order to frame these questions in a more substantive way, in this prolegomena we introduce what we consider the four core opportunities for society offered by the use of AI, four associated risks which could emerge from its overuse or misuse, and the opportunity costs associated with its underuse. We then offer a high-level view of the emerging advantages for organisations of taking an ethical approach to developing and deploying AI. Finally, we introduce a set of five principles which should guide the development and deployment of AI technologies. The development of laws, policies and best practices for seizing the opportunities and minimizing the risks posed by AI technologies would benefit from building on ethical frameworks such as the one offered here.
The same-order representation theory of consciousness holds that conscious mental states represent both the world and themselves. This complex representational structure is posited in part to avoid a powerful objection to the more traditional higher-order representation theory of consciousness. The objection contends that the higher-order theory fails to account for the intimate relationship that holds between conscious states and our awareness of them--the theory 'divides the phenomenal labor' in an illicit fashion. This 'failure of intimacy' is exposed by the possibility of misrepresentation by higher-order states. In this paper, I argue that despite appearances, the same-order theory fails to avoid the objection, and thus also has troubles with intimacy.
The idea of artificial intelligence for social good (AI4SG) is gaining traction within information societies in general and the AI community in particular. It has the potential to tackle social problems through the development of AI-based solutions. Yet, to date, there is only limited understanding of what makes AI socially good in theory, what counts as AI4SG in practice, and how to reproduce its initial successes in terms of policies. This article addresses this gap by identifying seven ethical factors that are essential for future AI4SG initiatives. The analysis is supported by 27 case examples of AI4SG projects. Some of these factors are almost entirely novel to AI, while the significance of other factors is heightened by the use of AI. From each of these factors, corresponding best practices are formulated which, subject to context and balance, may serve as preliminary guidelines to ensure that well-designed AI is more likely to serve the social good.
Artificial Intelligence (AI) is already having a major impact on society. As a result, many organizations have launched a wide range of initiatives to establish ethical principles for the adoption of socially beneficial AI. Unfortunately, the sheer volume of proposed principles threatens to overwhelm and confuse. How might this problem of ‘principle proliferation’ be solved? In this paper, we report the results of a fine-grained analysis of several of the highest-profile sets of ethical principles for AI. We assess whether these principles converge upon a set of agreed-upon principles, or diverge, with significant disagreement over what constitutes ‘ethical AI.’ Our analysis finds a high degree of overlap among the sets of principles we analyze. We then identify an overarching framework consisting of five core principles for ethical AI. Four of them are core principles commonly used in bioethics: beneficence, non-maleficence, autonomy, and justice. On the basis of our comparative analysis, we argue that a new principle is needed in addition: explicability, understood as incorporating both the epistemological sense of intelligibility (as an answer to the question ‘how does it work?’) and the ethical sense of accountability (as an answer to the question ‘who is responsible for the way it works?’). In the ensuing discussion, we note the limitations and assess the implications of this ethical framework for future efforts to create laws, rules, technical standards, and best practices for ethical AI in a wide range of contexts.
This article reports the findings of AI4People, an Atomium-EISMD initiative designed to lay the foundations for a “Good AI Society”. We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations—to assess, to develop, to incentivise, and to support good AI—which in some cases may be undertaken directly by national or supranational policy makers, and in others may be led by other stakeholders. If adopted, these recommendations would serve as a firm foundation for the establishment of a Good AI Society.
An argument is usually said to be valid iff it is truth-preserving—iff it cannot be that all its premises are true and its conclusion false. But imperatives (it is normally thought) are not truth-apt. They are not in the business of saying how the world is, and therefore cannot either succeed or fail in doing so. To solve this problem, we need to find a new criterion of validity, and I aim to propose such a criterion.
Since 2016, there has been an explosion of academic work and journalism that fixes its subject matter using the terms ‘fake news’ and ‘post-truth’. In this paper, I argue that this terminology is not up to scratch, and that academics and journalists ought to completely stop using the terms ‘fake news’ and ‘post-truth’. I set out three arguments for abandonment. First, that ‘fake news’ and ‘post-truth’ do not have stable public meanings, entailing that they are either nonsense, context-sensitive, or contested. Secondly, that these terms are unnecessary, because we already have a rich vocabulary for thinking about epistemic dysfunction. Thirdly, I observe that ‘fake news’ and ‘post-truth’ have propagandistic uses, meaning that using them legitimates anti-democratic propaganda, and runs the risk of smuggling bad ideology into conversations.
At the outset of the Republic, Polemarchus advances the bold thesis that “justice is the art which gives benefit to friends and injury to enemies”. This thesis is quickly rejected, and what follows is a long tradition of neglecting the ethics of enmity. The parallel issue of how friendship affects the moral sphere has, by contrast, been greatly illuminated by discussions both ancient and contemporary. This article connects this existing work to the less explored topic of the normative significance of our negative relationships. I explain how negative partiality should be conceptualized through reference to the positive analogue, and argue that at least some forms of negative partiality are justified. I further explore the connection between positive and negative relationships by showing how both are justified by ongoing histories of encounter. However, I also argue that these relationships are in some important ways asymmetrical.
Despite its potential for radically reducing the harm inflicted on nonhuman animals in the pursuit of food, there are a number of objections grounded in animal ethics to the development of in vitro meat. In this paper, I defend the possibility against three such concerns. I suggest that worries about reinforcing ideas of flesh as food and worries about the use of nonhuman animals in the production of in vitro meat can be overcome through appropriate safeguards and a fuller understanding of the interests that nonhuman animals actually possess. Worries about the technology reifying speciesist hierarchies of value are more troublesome, however. In response to this final challenge, I suggest that we should be open not just to the production of in vitro nonhuman flesh, but also in vitro human flesh. This leads to a consideration of the ethics of cannibalism. The paper ultimately defends the position that cannibalism simpliciter is not morally problematic, though a great many practices typically associated with it are. The consumption of in vitro human flesh, however, is able to avoid these problematic practices, and so should be considered permissible. I conclude that animal ethicists and vegans should be willing to cautiously embrace the production of in vitro flesh.
The following quotation, from Frank Jackson, is the beginning of a typical exposition of the debate between those metaphysicians who believe in temporal parts, and those who do not: The dispute between three-dimensionalism and four-dimensionalism, or more precisely, that part of the dispute we will be concerned with, concerns what persistence, and correlatively, what change, comes to. Three-dimensionalism holds that an object exists at a time by being wholly present at that time, and, accordingly, that it persists if it is wholly present at more than one time. For short, it persists by enduring. Four-dimensionalism holds that an object exists at a time by having a temporal part at that time, and it persists if it has distinct temporal parts at more than one time. For short, it persists by perduring. In the light of these comments, some readers will perhaps find the question that forms the title of this paper a little puzzling. They may have learned to use the terms ‘four-dimensionalism’, ‘perdurantism’, and ‘belief in temporal parts’ interchangeably; or perhaps even to define one in terms of the other. Such a usage, however, is inapposite. We might imagine a Flatland-like world of two spatial dimensions and one temporal, whose philosophers are divided between a theory of persistence on which they persist by having temporal parts, and a theory on which they persist by being wholly located in each of several times. This is just the same issue we face, but at least the label ‘four-dimensionalism’ seems inapposite: the four-dimensionalist Flatlanders believe in only three dimensions!
In this paper, I explore two contrasting conceptions of the social character of language. The first takes language to be grounded in social convention. The second, famously developed by Donald Davidson, takes language to be grounded in a social relation called triangulation. I aim both to clarify and to evaluate these two conceptions of language. First, I propose that Davidson’s triangulation-based story can be understood as the result of relaxing core features of conventionalism pertaining to both common-interest and diachronic stability—specifically, Davidson does not require uses of language to be self-perpetuating, in the way required by conventionalism, in order to be bona fide components of linguistic systems. Second, I argue that Davidson’s objections to conventionalism from language innovation and language variation fail, and that certain kinds of negative data in language use require an appeal to diachronic social relations. However, I also argue that recent work on communication in the a…
Cappelen and Dever present a forceful challenge to the standard view that perspective, and in particular the perspective of the first person, is a philosophically deep aspect of the world. Their goal is not to show that we need to explain indexical and other perspectival phenomena in different ways, but to show that the entire topic is an illusion.
ABSTRACT This paper proposes a novel answer to the Special Composition Question. In some respects it agrees with brutalism about composition; in others with universalism. The main novel feature of this answer is the insight I think it gives into what the debate over the Special Composition Question is about.
In his two recent books on ontology, Universals: An Opinionated Introduction, and A World of States of Affairs, David Armstrong gives a new argument against nominalism. That argument seems, on the face of it, to be similar to another argument that he used much earlier against Rylean behaviourism: the Truthmaker Argument, stemming from a certain plausible premise, the Truthmaker Principle. Other authors have traced the history of the truthmaker principle, its appearance in the work of Aristotle, Bradley, and even Husserl. But that is not my task — in this paper I argue that Armstrong’s new argument is not logically analogous to the old, and, in particular, that it is quite possible to be a thoroughgoing nominalist, and hold a truthmaker principle.
Nevertheless, any competent speaker will know what it means. What explains our ability to understand sentences we have never before encountered? One natural hypothesis is that those novel sentences are built up out of familiar parts, put together in familiar ways. This hypothesis requires the backing hypothesis that English has a compositional semantic theory.
Scholars studying the origins and evolution of language are also interested in the general issue of the evolution of cognition. Language is not an isolated capability of the individual, but has intrinsic relationships with many other behavioral, cognitive, and social abilities. By understanding the mechanisms underlying the evolution of linguistic abilities, it is possible to understand the evolution of cognitive abilities. Cognitivism, one of the current approaches in psychology and cognitive science, proposes that symbol systems capture mental phenomena, and attributes cognitive validity to them. Therefore, in the same way that language is considered the prototype of cognitive abilities, a symbol system has become the prototype for studying language and cognitive systems. Symbol systems are advantageous as they are easily studied through computer simulation (a computer program is a symbol system itself), and this is why language is often studied using computational models.
Animal rights positions face the ‘predator problem’: the suggestion that if the rights of nonhuman animals are to be protected, then we are obliged to interfere in natural ecosystems to protect prey from predators. Generally, rather than embracing this conclusion, animal ethicists have rejected it, basing their rejection on a number of different arguments. This paper considers but challenges three such arguments, before defending a fourth possibility. Rejected are Peter Singer’s suggestion that interference will lead to more harm than good, Sue Donaldson and Will Kymlicka’s suggestion that respect for nonhuman sovereignty necessitates non-interference in normal circumstances, and Alasdair Cochrane’s solution based on the claim that predators cannot survive without killing prey. The possibility defended builds upon Tom Regan’s suggestion that predators, as moral patients but not moral agents, cannot violate the rights of their prey, and so the rights of the prey, while they do exist, do not call for intervention. This idea is developed by a consideration of how moral agents can be more or less responsible for a given event, and defended against criticisms offered by thinkers including Alasdair Cochrane and Dale Jamieson.
Interpreters of Robert Nozick’s political philosophy fall into two broad groups concerning his application of the ‘Lockean proviso’. Some read his argument in an undemanding way: individual instances of ownership which make people worse off than they would have been in a world without any ownership are unjust. Others read the argument in a demanding way: individual instances of ownership which make people worse off than they would have been in a world without that particular ownership are unjust. While I argue that the former reading is correct as an interpretive matter, I suggest that this reading is nonetheless highly demanding. In particular, I argue that it is demanding when it is expanded to include the protection of nonhuman animals; if such beings are rights bearers, as more and more academics are beginning to suggest, then there is no nonarbitrary reason to exclude them from the protection of the proviso.
The buyer–supplier relationship is the nexus of the economic partnership of many commercial transactions and is founded upon the reciprocal trust of the two parties that participate in this economic exchange. In this article, we identify how six ethical elements play a key role in framing the buyer–supplier relationship, incorporating a model articulated by Hosmer (The ethics of management, McGraw-Hill, New York, 2008). We explain how trust is a behavior, the relinquishing of personal control in the expectant hope that the other party will honor the duties of a psychological contract. Presenting information about six factors of organizational trustworthiness, we offer insights about the relationship between ethics and trust in the buyer–supplier relationship.
As tends to be the way with philosophical positions, there are at least as many two-dimensionalisms as there are two-dimensionalists. But painting with a broad brush, there are core epistemological and metaphysical commitments which underlie the two-dimensionalist project, commitments for which I have no sympathies. What follows is a sketch of three significant points of disagreement.
I want to join Dummett in saying that the reality of the past (and, by analogy, the reality of the future) is an issue of realism versus anti-realism (Dummett 1969). If you affirm the reality of the past, you are a realist about the past. If you deny the reality of the past, you are an anti-realist about the past. (And likewise, in each case, for the future.) It makes sense to think of these issues by analogy with realism about the external world, unobservable objects, mathematical objects, universals, and so on. These are all properly described as ontological issues.
This paper examines the success of corporate communication in voluntary sustainability reporting. Existing studies have focused on the perspective of the communicators but lack an understanding of the perspective of information recipients to clearly evaluate this interactive communication process. This paper looks at the issue of a credibility gap perceived by external stakeholders when they doubt the authenticity of communicated information due to the reporting company’s governance structure. The paper uses family businesses to exemplify the emergence of such a gap when outsiders become concerned about the potential agency problem of the integrated ownership and management controlled by a few members of the same family. Following source credibility theory, these concerns raise a credibility gap associated with a family firm’s trustworthiness and goodwill, even if the family has the expertise to carry out sustainability reporting. The findings of two experimental studies indicate that family businesses suffer a greater credibility gap than non-family businesses. An external and independent assurance service can mitigate such gaps, especially when the service is comprehensive and targets family businesses. The paper provides a more complete view of corporate communication by evaluating the interaction between the communicating company and the information recipients.
The possibility of “clean milk”—dairy produced without the need for cows—has been championed by several charities, companies, and individuals. One can ask how those critical of the contemporary dairy industry, including especially vegans and others sympathetic to animal rights, should respond to this prospect. In this paper, I explore three kinds of challenges that such people may have to clean milk: first, that producing clean milk fails to respect animals; second, that humans should not consume dairy products; and third, that the creation of clean milk would affirm human superiority over cows. None of these challenges, I argue, gives us reason to reject clean milk. I thus conclude that the prospect is one that animal activists should both welcome and embrace.
Cognitivism about imperatives is the thesis that sentences in the imperative mood are truth-apt: that they have truth values and truth conditions. This allows cognitivists to give a simple and powerful account of consequence relations between imperatives. I argue that this account of imperative consequence has counterexamples that cast doubt on cognitivism itself.
Alan Baker’s enhanced indispensability argument (EIA) supports mathematical platonism through the explanatory role of mathematics in science. Busch and Morrison defend nominalism by denying that scientific realists use inference to the best explanation (IBE) to directly establish ontological claims. In response to Busch and Morrison, I argue that nominalists can rebut the EIA while still accepting Baker’s form of IBE. Nominalists can plausibly require that defenders of the EIA establish the indispensability of a particular mathematical entity. Next, I argue that IBE cannot establish that any particular mathematical entity is indispensable. Mathematical entities do not compete with each other in the way physical unobservables do. This lack of competition enables alternative formulations of scientific explanations that use different, but compatible, mathematical entities. The compatibility of these explanations prevents IBE from establishing platonism.
Most theories of learning would predict a gradual acquisition and refinement of skills as learning progresses, and while some highlight exponential growth, this fails to explain why natural cognitive development typically progresses in stages. Models that do span multiple developmental stages typically have parameters to “switch” between stages. We argue that by taking an embodied view, the interaction between learning mechanisms, the resulting behavior of the agent, and the opportunities for learning that the environment provides can account for the stage-wise development of cognitive abilities. We summarize work relevant to this hypothesis and suggest two simple mechanisms that account for some developmental transitions: neural readiness focuses on changes in the neural substrate resulting from ongoing learning, and perceptual readiness focuses on the perceptual requirements for learning new tasks. Previous work has demonstrated these mechanisms in replications of a wide variety of infant language experiments, spanning multiple developmental stages. Here we piece this work together as a single model of ongoing learning with no parameter changes at all. The model, an instance of the Epigenetic Robotics Architecture embodied on the iCub humanoid robot, exhibits ongoing multi-stage development while learning pre-linguistic and then basic language skills.
In the tripartite psychology of the Republic, Plato characterizes the “spirited” part of the soul as the “ally of reason”: like the auxiliaries of the just city, whose distinctive job is to support the policies and judgments passed down by the rulers, spirit’s distinctive “job” in the soul is to support and defend the practical decisions and commands of the reasoning part. This is to include not only defense against external enemies who might interfere with those commands, but also, and most importantly, defense against unruly appetites within the individual’s own soul. Spirit, according to this picture, is by nature reason’s faithful auxiliary in the soul, while appetite is always a potential enemy to be watched.
This paper discusses the handicapped child case and some other variants of Derek Parfit's non-identity problem (Parfit, 1984). The case is widely held to show that there is harmless wrongdoing, and that a moral system which tries to reduce wrongdoing directly to harm (“person-affecting morality”) is inadequate. I show that the argument for this does not depend (as some have implied it does) on Kripkean necessity of origin. I distinguish the case from other variants (“wrongful life cases”) of the non-identity problem which do not bear directly on person-affecting morality as I understand it. And finally, I describe a respect in which the handicapped child case is puzzling and counter-intuitive, even on the supposition that it is a case of harmless wrongdoing. I conclude that the case is “hard”: it will take more than the rejection of person-affecting morality to remove its puzzling character.
It has been widely believed since the nineteenth century that modern science provides a serious challenge to religion, but there is less agreement as to the reason. One main complication is that whenever there has been broad consensus for a scientific theory that challenges traditional religious doctrines, one finds religious believers endorsing the theory or even formulating it. As a result, atheists who argue for the incompatibility of science and religion often go beyond the religious implications of individual scientific theories, arguing that the sciences taken together provide a comprehensive challenge to religious belief. Scientific theories, on this view, can be integrated to form a general vision of humans and our place in nature, one that excludes the existence of supernatural phenomena to which many religious traditions refer. The most common name given to this general vision is the scientific worldview. The purpose of my paper is to argue that the relation of a worldview to science is more complex and ambiguous than this position allows, drawing upon recent work in the history and philosophy of science. While there are other ways to complicate the picture, this paper will focus on differing views that scientists and philosophers have on the proper scope and limits of scientific inquiry. I will identify two different types of science—Baconian and Cartesian—that have different ambitions with respect to scientific theories, and thus different answers about the possibility of a scientific worldview. The paper will conclude by showing how their differing intuitions about scientific inquiry are evident in contemporary debates about reductionism, drawing upon the work of two physicists, Steven Weinberg and John Polkinghorne. History is more complex than this simple schema allows, of course, but these types provide a useful first approximation of the ambiguities of modern science.
I argue that Colin Cheyne and Charles Pigden's recent attempt to find truthmakers for negative truths fails. Though Cheyne and Pigden are correct in their treatment of some of the truths they set out to find truthmakers for (such as 'There is no hippopotamus in S223' and 'Theaetetus is not flying') they over-generalize when they apply the same treatment to 'There are no unicorns'. In my view, this difficulty is ineliminable: not every truth has a truthmaker.