We show how to formalize approximate counting via hash functions in subsystems of bounded arithmetic, using variants of the weak pigeonhole principle. We discuss several applications, including a proof of the tournament principle, and an improvement on the known relationship of the collapse of the bounded arithmetic hierarchy to the collapse of the polynomial-time hierarchy.
This paper is an expanded written version of my reply to Rosanna Keefe’s paper ‘Modelling higher-order vagueness: columns, borderlines and boundaries’ (Keefe 2015), which in turn is a reply to my paper ‘Columnar higher-order vagueness, or Vagueness is higher-order vagueness’ (Bobzien 2015). Both papers were presented at the Joint Session of the Aristotelian Society and the Mind Association in July 2015. At the Joint Session meeting, there was insufficient time to present all of my points in response to Keefe’s paper. In addition, the audio of the session, which is available online, becomes inaudible at the beginning of my reply to Keefe’s comments due to a technical defect. The following is a full version of my remarks.
Many philosophers, I suspect, are partial to supervaluational theories of vagueness. And with good reason. Their rivals all seem to promise metaphysical mysteries concerning hitherto unnoticed, and perhaps unnoticeable, sharp boundaries around our concepts, or radical revision in our logical practices. And not only have philosophers been so tempted. The texts are a little unclear, but it seems several economists can be read as adopting supervaluational solutions to the difficulties raised by vagueness in economic concepts. Given its popularity and plausibility, supervaluationism deserves a book-length defence. Yet this is the first such book in the philosophical canon.
Most expressions in natural language are vague. But what is the best semantic treatment of terms like 'heap', 'red' and 'child'? And what is the logic of arguments involving this kind of vague expression? These questions are receiving increasing philosophical attention, and in this book, first published in 2000, Rosanna Keefe explores the questions of what we should want from an account of vagueness and how we should assess rival theories. Her discussion ranges widely and comprehensively over the main theories of vagueness and their supporting arguments, and she offers a powerful and original defence of a form of supervaluationism, a theory that requires almost no deviation from standard logic yet can accommodate the lack of sharp boundaries to vague predicates and deal with the paradoxes of vagueness in a methodologically satisfying way. Her study will be of particular interest to readers in philosophy of language and of mind, philosophical logic, epistemology and metaphysics.
We describe a deep generative model in which the lowest layer represents the word-count vector of a document and the top layer represents a learned binary code for that document. The top two layers of the generative model form an undirected associative memory and the remaining layers form a belief net with directed, top-down connections. We present efficient learning and inference procedures for this type of generative model and show that it allows more accurate and much faster retrieval than latent semantic analysis. By using our method as a filter for a much slower method called TF-IDF we achieve higher accuracy than TF-IDF alone and save several orders of magnitude in retrieval time. By using short binary codes as addresses, we can perform retrieval on very large document sets in a time that is independent of the size of the document set using only one word of memory to describe each document.
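The address-based retrieval the abstract describes can be sketched as follows. This is a minimal illustration of the idea only: the encoder here is a stand-in (random hyperplanes on toy count vectors), not the learned deep generative model, and all names (`code`, `retrieve`, `CODE_BITS`) are hypothetical. The point it shows is that once each document's binary code is used as a memory address, lookup cost depends on the code length and search radius, not on the corpus size.

```python
import numpy as np

CODE_BITS = 8  # length of the short binary code (the "address")

rng = np.random.default_rng(0)
planes = rng.normal(size=(CODE_BITS, 5))  # stand-in encoder: random projections

def code(vec):
    """Map a word-count vector to an integer binary code."""
    bits = (planes @ vec) > 0
    return int("".join("1" if b else "0" for b in bits), 2)

# Build the index: each code is an address holding the ids that hash to it.
docs = {i: rng.poisson(1.0, size=5) for i in range(1000)}
table = {}
for doc_id, vec in docs.items():
    table.setdefault(code(vec), set()).add(doc_id)

def retrieve(query_vec, radius=1):
    """Candidates whose codes lie within `radius` bit-flips of the query's code.
    Work done depends only on CODE_BITS and radius, not on len(docs)."""
    q = code(query_vec)
    hits = set(table.get(q, ()))
    if radius >= 1:
        for b in range(CODE_BITS):  # probe all addresses one bit-flip away
            hits |= table.get(q ^ (1 << b), set())
    return hits
```

In the full scheme these candidates would then be re-ranked by the slower, more accurate method (the filtering role TF-IDF plays in the abstract).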
A method for sharing private data through personalized searches is described. This method enables users to retrieve access-controlled private data as well as publicly available data by submitting a single query to a conventional search engine. Seamless integration of the method into current search services through a prototype on the Mozilla Firefox web browser, without any changes to existing search functions, such as crawling, indexing, and matching, is also described. Evaluations showed that the additional storage requirement is only 10% and that the system implementing the method responds in 10 s with 1,000 dummy results for anonymization.
Vagueness is currently the subject of vigorous debate in the philosophy of logic and language. Vague terms -- such as 'tall', 'red', 'bald', and 'tadpole' -- have borderline cases; and they lack well-defined extensions. The phenomenon of vagueness poses a fundamental challenge to classical logic and semantics, which assume that propositions are either true or false and that extensions are determinate. This anthology collects for the first time the most important papers in the field. After a substantial introduction that surveys the field, the essays form four groups, starting with some historically notable pieces. The 1970s saw an explosion of interest in vagueness, and the second group of essays reprints classic papers from this period. The following group of papers represents the best recent work on the logic and semantics of vagueness. The essays in the final group are contributions to the continuing debate about vague objects and vague identity.
Logical Pluralists maintain that there is more than one genuine/true logical consequence relation. This paper seeks to understand what the position could amount to and some of the challenges faced by its formulation and defence. I consider in detail Beall and Restall’s Logical Pluralism—which seeks to accommodate radically different logics by stressing the way that they each fit a general form, the Generalised Tarski Thesis (GTT)—arguing against the claim that different instances of GTT are admissible precisifications of logical consequence. I then consider what it is to endorse a logic within a pluralist framework and criticise the options Beall and Restall entertain. A case study involving many-valued logics is examined. I next turn to issues of the applications of different logics and questions of which logic a pluralist should use in particular contexts. A dilemma regarding the applicability of admissible logics is tackled and it is argued that application is a red herring in relation to both understanding and defending a plausible form of logical pluralism. In the final section, I consider other ways to be and not to be a logical pluralist by examining analogous positions in debates over religious pluralism: this, I maintain, illustrates further limitations and challenges for a very general logical pluralism. Certain less wide-ranging pluralist positions are more plausible in both cases, I suggest, but assessment of those positions needs to be undertaken on a case-by-case basis.
In this paper I offer a critique of the recent popular strategy of giving a contextualist account of vagueness. Such accounts maintain that truth-values of vague sentences can change with changes of context induced by confronting different entities (e.g. different pairs through a sorites series). I claim that appealing to context does not help in solving the sorites paradox, nor does it give us new insights into vagueness per se. Furthermore, the contextual variation to which the contextualist is committed is problematic in various ways. For example, it yields the consequence that much of our everyday (non-soritical) reasoning is fallacious, and it renders us ignorant of what we and others have said.
What the world needs now is another theory of vagueness. Not because the old theories are useless. Quite the contrary, the old theories provide many of the materials we need to construct the truest theory of vagueness ever seen. The theory shall be similar in motivation to supervaluationism, but more akin to many-valued theories in conceptualisation. What I take from the many-valued theories is the idea that some sentences can be truer than others. But I say very different things about the ordering over sentences this relation generates. I say it is not a linear ordering, so it cannot be represented by the real numbers. I also argue that since there is higher-order vagueness, any mapping between sentences and mathematical objects is bound to be inappropriate. This is no cause for regret; we can say all we want to say by using the comparative ‘truer than’ without mapping it onto some mathematical objects. From supervaluationism I take the idea that we can keep classical logic without keeping the familiar bivalent semantics for classical logic. But my preservation of classical logic is more comprehensive than is normally permitted by supervaluationism, for I preserve classical inference rules as well as classical sequents. And I do this without relying on the concept of acceptable precisifications as an unexplained explainer. The world does not need another guide to varieties of theories of vagueness, especially since Timothy Williamson (1994) and Rosanna Keefe (2000) have already provided quite good guides. I assume throughout familiarity with popular theories of vagueness.
This paper examines people's reasoning about identity continuity and its relation to previous research on how people value one-of-a-kind artifacts, such as artwork. We propose that judgments about the continuity of artworks are related to judgments about the continuity of individual persons because art objects are seen as physical extensions of their creators. We report a reanalysis of previous data and the results of two new empirical studies that test this hypothesis. The first study demonstrates that the mere categorization of an object as “art” versus “a tool” changes people's intuitions about the persistence of those objects over time. In a second study, we examine some conditions that may lead artworks to be thought of as different from other artifacts. These observations inform both current understanding of what makes some objects one-of-a-kind as well as broader questions regarding how people intuitively think about the persistence of human agents.
Taking a series of colour patches, starting with one that clearly looks red, and making each so similar in colour to the previous one that it looks the same as it, we appear to be able to show that a yellow patch looks red. I ask whether phenomenal sorites paradoxes, such as this, are subject to a unique kind of solution that is unavailable in relation to other sorites paradoxes. I argue that they do not need such a solution, nor do they succumb to one. In particular, I reject the claim made by Fara and Raffman that ‘looks the same’ is a transitive relation, which would allow us to solve phenomenal sorites paradoxes by denying the possibility of the required kind of sorites series.
According to columnar higher-order vagueness, all orders of vagueness coincide: any borderline case is a borderline borderline case, and a third-order borderline case, etc. Bobzien has worked out many details of such a theory and models it with a modal logic closely related to S4. I take up a range of questions about the framework and argue that it is not suitable for modelling the structure of vagueness and higher-order vagueness.
In 1966, a team made up of Brazilian and foreign scientists spent a week carefully recording the body temperature and other clinical signs and symptoms of 110 Tiriyó Indigenous people in their communities along the Brazil-Suriname border. Led by the Yale University virologist and immunologist Francis Black, the researchers faced an "epidemic" with a special profile, distinct from those most common in Indigenous populations, which usually resulted in widespread illness, the collapse of subsistence activities, hunger, and as a rule, elevated mortality. Rather, what was happening with the Tiriyó was a planned event, controlled and carefully...
Evans's influential argument against vague objects (Analysis, 1978) has a parallel directed against contingent identity. I argue that Noonan failed in his attempt to accept Evans's argument but save contingent identity by establishing a disanalogy between the two arguments (The Philosophical Quarterly, 1991). Instead, I suggest an alternative way to block the argument against contingent identity and argue that its analogue provides a satisfactory response to Evans's original argument.
This paper asks whether a good philosophical account of something can ever be circular. It explores the kind of circumstances in which an account of F might involve F itself while still serving the functions of and meeting the requirements on a philosophical account. The paper discusses two criteria for acceptable circularity, based on ideas from Humberstone 1997. And it illustrates the surprisingly wide variety of kinds of accounts in which circularity need not be bad.
The concept of authenticity plays an important role in how people reason about objects, other people, and themselves. However, despite a great deal of academic interest in this concept, to date, the precise meaning of the term ‘authenticity’ has remained somewhat elusive. This paper reviews the various definitions of authenticity that have been proposed in the literature and identifies areas of convergence. We then outline a novel framework that organizes the existing definitions of authenticity along two key dimensions: describing the type of entity that is evaluated and describing the source of information that is consulted. We argue that this convergence across a number of papers, and more importantly, across a number of domains, reflects significant progress in articulating the meaning of authenticity. We conclude by suggesting new avenues for research in this area, with particular attention toward psychological process.
A theory of vagueness gives a model of vague language and of reasoning within the language. Among the models that have been offered are Degree Theorists’ numerical models that assign values between 0 and 1 to sentences, rather than simply modelling sentences as true or false. In this paper, I ask whether we can benefit from employing a rich, well-understood numerical framework, while ignoring those aspects of it that impute a level of mathematical precision that is not present in the modelled phenomenon of vagueness. Can we ignore apparent implications for the phenomena by pointing out that it is just a model and that the unwanted features are mere artefacts? I explore the distinction between representors and artefacts and criticise the strategy of appealing to features as mere artefacts in defence of a theory. I focus largely on theories using numerical resources, but also consider other, related theories and strategies, including theories appealing to non-linear structures.
Degree theories of vagueness build on the observation that vague predicates such as 'tall' and 'red' come in degrees. They employ an infinite-valued logic, where the truth values correspond to degrees of truth and are typically represented by the real numbers in the interval [0,1]. In this paper, the success with which the numerical assignments of such theories can capture the phenomenon of vagueness is assessed by drawing an analogy with the measurement of various physical quantities using real numbers. I argue that degree theories of vagueness are undermined by the failure of the necessary connectedness principle. Moreover, the semantics for the connectives entail that there must be a uniquely correct numerical assignment for the sentences, and this is implausible. Different senses of 'coming in degrees' are then distinguished; I argue that a confusion between them could be the source of the degree theorist's error, and the distinction illuminates the problem cases described earlier in the paper.
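The degree-functional "semantics for the connectives" the abstract refers to can be made concrete with a small sketch. The rules below (min for conjunction, max for disjunction, 1 − x for negation) are the most common choice in the degree-theoretic literature, offered here only as one candidate system, not necessarily the specific semantics this paper targets:

```python
# Minimal degree-functional connectives on [0, 1]
# (the common min / max / 1 - x system; degree theorists differ,
# especially over the conditional, which is omitted here).

def neg(a):
    """NOT: degree of 'not p' is 1 minus the degree of p."""
    return 1.0 - a

def conj(a, b):
    """AND: the minimum of the two degrees."""
    return min(a, b)

def disj(a, b):
    """OR: the maximum of the two degrees."""
    return max(a, b)

# A borderline case: if 'x is tall' has degree 0.6, then
# 'x is tall and x is not tall' gets degree 0.4 rather than 0,
# unlike in classical two-valued logic.
tall = 0.6
contradiction_degree = conj(tall, neg(tall))
```

Note how these functions force the point the abstract presses: the output degree is fully determined by the input degrees, so the theory needs a uniquely correct numerical assignment for each atomic sentence to fix the values of all compounds.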
This paper explores several different accounts of validity within the supervaluationist framework that coincide in the absence of the D operator but differ once that operator is introduced. It argues that the alternatives have different advantages and suggests a form of pluralism about notions of validity within the supervaluationist framework.
Michael Tye responds to the problem of higher-order vagueness for his trivalent semantics by maintaining that truth-value predicates are “vaguely vague”: it’s indeterminate, on his view, whether they have borderline cases and therefore indeterminate whether every sentence is true, false, or indefinite. Rosanna Keefe objects (1) that Tye’s argument for this claim tacitly assumes that every sentence is true, false, or indefinite, and (2) that the conclusion is in any case not viable. I argue – contra (1) – that Tye’s argument needn’t make that assumption. A version of her objection is in fact better directed against other arguments Tye advances, though Tye can absorb this criticism without abandoning his position’s core. On the other hand, Keefe’s second objection does hit the mark: embracing ‘vaguely vague’ truth-value predicates undermines Tye’s ability to support validity claims needed to defend his position. To see this, however, we must develop Keefe’s remarks further than she does.
If you keep removing single grains of sand from a heap, when is it no longer a heap? From discussions of the heap paradox in classical Greece, to modern formal approaches like fuzzy logic, Timothy Williamson traces the history of the problem of vagueness. He argues that standard logic and formal semantics apply even to vague languages and defends the controversial, realist view that vagueness is a form of ignorance - there really is a grain of sand whose removal turns a heap into a non-heap, but we can never know exactly which one it is.
A number of recent accounts of vague terms postulate a kind of context-sensitivity, one that kicks in after the usual ‘external’ contextual factors like comparison class are established and held fixed. In a recent paper, ‘Vagueness without Context Change’ (pp. 275–92), Rosanna Keefe criticizes all such accounts. The arguments are variations on considerations that have been brought against context-sensitive accounts of knowledge, predicates of personal taste, epistemic modals, and the like. The issues are well known and there is a variety of options available in reply. More important, the arguments rely on an overly narrow conception of context-sensitivity, suggesting that one size fits all. If Keefe’s arguments were cogent, they would tell against the context-sensitivity of just about any expression, beyond the typical indexicals, including the variation of vague terms with comparison class. However, the criticisms raised by Keefe do highlight certain questions that must be answered by an advo...
Rosanna Keefe (‘Vagueness by Numbers’, Mind 107 (1998): 565–79) argues that theories of vagueness based upon fuzzy logic and set theory rest on a confusion: once we have assigned a number to an object to represent (for example) its *height*, there is no distinct purpose left to be served by assigning a number to the object to represent its *degree of tallness*; she claims that “any numbers assigned in an attempt to capture the vagueness of ‘tall’ do no more than serve as another measure of height.” In this paper I defend fuzzy theories of vagueness against Keefe's attack. I show that the numbers that we assign to objects to measure (for example) heights serve a quite distinct purpose from the numbers that fuzzy theories of vagueness assign to objects to measure degrees of tallness: the two sorts of assignment are both *formally* and *conceptually* distinct; the fuzzy approach to vagueness is well-motivated and free of confusion.
It is argued that “human-centredness” will be an important characteristic of systems that learn tasks from human users, as the difficulties in inductive inference rule out learning without human assistance. The aim of “programming by example” is to create systems that learn how to perform tasks from their human users by being shown examples of what is to be done. Just as the user creates a learning environment for the system, so the system provides a teaching opportunity for the user, and emphasis is placed as much on facilitating successful teaching as on incorporating techniques of machine learning. If systems can “learn” repetitive tasks, their users will have the power to decide for themselves which parts of their jobs should be automated, and teach the system how to do them — reducing their dependence on intermediaries such as system designers and programmers. This paper presents principles for programming by example derived from experience in creating four prototype learners: for technical drawing, text editing, office tasks, and robot assembly. A teaching metaphor (a) enables the user to demonstrate a task by performing it manually, (b) helps to explain the learner's limited capabilities in terms of a persona, and (c) allows users to attribute intentionality. Tasks are represented procedurally, and augmented with constraints. Suitable mechanisms for attention focusing are necessary in order to control inductive search. Hidden features of a task should be made explicit so that the learner need not embark on the huge search entailed by hypothesizing missing steps.
In this paper I will argue that the distinction between biological life and political life, as found in Hannah Arendt’s reading of Aristotle and later repeated and elaborated by Giorgio Agamben under the headings of zoē and bios, is in fact a fertile point of entry to, and the only viable route toward, grasping what constitutes the political as such for Aristotle. By hashing out the conceptual steps necessary for the establishment of what can be called a “political community”, I seek to illuminate how the distinction upon which much of Arendt’s and Agamben’s work rests does indeed play a vital role in the work of Aristotle. By clarifying the nature of a “political community” according to Aristotle, this paper thus seeks to make possible a proper assessment of the thought of both Arendt and Agamben.
A framework of degrees of belief, or credences, is often advocated to model our uncertainty about how things are or will turn out. It has also been employed in relation to the kind of uncertainty or indefiniteness that arises due to vagueness, such as when we consider “a is F” in a case where a is borderline F. How should we understand degrees of belief when we take into account both these phenomena? Can the right kind of theory of the semantics of vagueness help us answer this? Nicholas J.J. Smith defends a unified account, according to which “degree of belief is expected truth-value”; this builds on his Degree Theory of vagueness that offers an account of the semantics and logic of vagueness in terms of degrees of truth. I argue that his account fails. Degree theories of vagueness do not help us understand degrees of belief and, I argue, we shouldn’t expect a theory of vagueness to yield a detailed uniform story about this. The route from the semantics to psychological states needn’t be straightforward or uniform even before we attempt to combine vagueness with probabilistic uncertainty.