Color adjectives have played a central role in work on language typology and variation, but there has been relatively little investigation of their meanings by researchers in formal semantics. This is surprising given that color terms have been at the center of debates in the philosophy of language over foundational questions, in particular whether the idea of a compositional, truth-conditional theory of natural language semantics is even coherent. The challenge presented by color terms is articulated in detail in the work of Charles Travis. Travis argues that structurally isomorphic sentences containing color adjectives can shift truth value from context to context depending on how they are used, in the absence of effects of vagueness or ambiguity/polysemy, and concludes that a deterministic mapping from structures to truth conditions is impossible. The goal of this paper is to provide a linguistic perspective on this issue, which we believe defuses Travis's challenge. We provide empirical arguments that color adjectives are in fact ambiguous between gradable and nongradable interpretations, and that this simple ambiguity, together with independently motivated options concerning scalar dimension within the gradable reading, accounts for the Travis facts in a simpler, more constrained, and thus ultimately more successful fashion than recent contextualist analyses such as those in Szabó (Perspectives on Semantics, Pragmatics and Discourse: A Festschrift for Ferenc Kiefer, 2001) or Rothschild and Segal (Mind & Language, 2009).
PRINCIPLES OF PHILOSOPHY. Chapter 1: Lessons from the History of Philosophy. §1. Nominalism. 15. Very early in my studies of logic, before I had really been ...
In this paper, we consider how certain longstanding philosophical questions about mental representation may be answered on the assumption that cognitive and perceptual systems implement hierarchical generative models, such as those discussed within the prediction error minimization (PEM) framework. We build on existing treatments of representation via structural resemblance, such as those in Gładziejewski (2016: 559–582) and Gładziejewski and Miłkowski, to argue for a representationalist interpretation of the PEM framework. We further motivate the proposed approach to content by arguing that it is consistent with approaches implicit in theories of unsupervised learning in neural networks. In the course of this discussion, we argue that the structural representation proposal, properly understood, has more in common with functional-role than with causal/informational or teleosemantic theories. In the remainder of the paper, we describe the PEM framework for approximate Bayesian inference in some detail, and discuss how structural representations might arise within the proposed Bayesian hierarchies. After explicating the notion of variational inference, we define a subjectively accessible measure of misrepresentation for hierarchical Bayesian networks by appeal to the Kullback–Leibler divergence between posterior generative and approximate recognition densities, and discuss a related measure of objective misrepresentation in terms of correspondence with the facts.
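The divergence-based measure of misrepresentation can be made concrete via the standard variational identity; the notation below is assumed for illustration rather than quoted from the paper. For sensory data $x$, an approximate recognition density $q(z)$ over hidden causes $z$, and the generative model's posterior $p(z \mid x)$, we have $\log p(x) = \mathcal{L}(q) + D_{\mathrm{KL}}\bigl(q(z)\,\|\,p(z \mid x)\bigr)$, where $\mathcal{L}(q)$ is the evidence lower bound and $D_{\mathrm{KL}}(q\,\|\,p) = \int q(z)\,\log\frac{q(z)}{p(z \mid x)}\,dz \ge 0$, with equality just in case $q$ matches the posterior. Because $\log p(x)$ is fixed for given data, raising the computable bound $\mathcal{L}(q)$ is equivalent to shrinking the divergence, which is one sense in which such a measure can count as subjectively accessible to the system itself.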
In this paper, I argue that theories of perception that appeal to Helmholtz's idea of unconscious inference ("Helmholtzian" theories) should be taken literally, i.e. that the inferences appealed to in such theories are inferences in the full sense of the term, as employed elsewhere in philosophy and in ordinary discourse. In the course of the argument, I consider constraints on inference based on the idea that inference is a deliberate action, and on the idea that inferences depend on the syntactic structure of representations. I argue that inference is a personal-level but sometimes unconscious process that cannot in general be distinguished from association on the basis of the structures of the representations over which it is defined. I also critique arguments against representationalist interpretations of Helmholtzian theories, and argue against the view that perceptual inference is encapsulated in a module.
This chapter focuses on what’s novel in the perspective that the prediction error minimization (PEM) framework affords on the cognitive-scientific project of explaining intelligence by appeal to internal representations. It shows how truth-conditional and resemblance-based approaches to representation in generative models may be integrated. The PEM framework in cognitive science is an approach to cognition and perception centered on a simple idea: organisms represent the world by constantly predicting their own internal states. PEM theories often stress the hierarchical structure of the generative models they posit. The novel explanatory power of the PEM account derives largely from the way in which pairs of generative and recognition models interact. “Predictive coding” refers to an encoding strategy in which predicted portions of an input signal are subtracted from the actual signal received, so that only the difference between the two is passed as output to the next stage of information processing.
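The encoding strategy described in the last sentence is easy to state in code. A minimal sketch, assuming a one-dimensional signal and a generic predictor function; the names and the toy "previous sample" predictor are illustrative, not taken from the chapter:

    import numpy as np

    def predictive_coding_stage(signal, predict):
        # One stage of predictive coding: subtract the predicted portion of
        # the input from the signal actually received, and pass only the
        # residual (the prediction error) on to the next stage.
        residual = np.empty_like(signal)
        for t in range(len(signal)):
            residual[t] = signal[t] - predict(signal[:t])
        return residual

    # Toy predictor: "the next sample equals the previous one".
    signal = np.array([3.0, 3.1, 3.2, 5.0, 5.1])
    errors = predictive_coding_stage(
        signal, lambda past: past[-1] if len(past) else 0.0)
    # errors == [3.0, 0.1, 0.1, 1.8, 0.1] (up to float rounding): smooth
    # stretches of signal compress to near-zero residuals; only surprises
    # cost bandwidth.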
Charles Darwin's On the Origin of Species is unquestionably one of the chief landmarks in biology. The Origin (as it is widely known) was literally only an abstract of the manuscript Darwin had originally intended to complete and publish as the formal presentation of his views on evolution. Compared with the Origin, his original long manuscript work on Natural Selection, which is presented here and made available for the first time in printed form, has more abundant examples and illustrations of Darwin's argument, plus an extensive citation of sources.
Time is absolute in standard quantum theory and dynamical in general relativity. The combination of both theories into a theory of quantum gravity therefore leads to a “problem of time”. In my essay, I investigate those consequences for the concept of time that may be drawn without a detailed knowledge of quantum gravity. The only assumptions are the experimentally supported universality of the linear structure of quantum theory and the recovery of general relativity in the classical limit. Among the consequences are the fundamental timelessness of quantum gravity, the approximate nature of a semiclassical time, and the correlation of entropy with the size of the Universe.
On 27th December 1831, HMS Beagle set out from Plymouth under the command of Captain Robert Fitzroy on a voyage that lasted nearly five years. The purpose of the trip was to complete a survey of the southern coasts of South America, and afterwards to circumnavigate the globe. The ship's geologist and naturalist was Charles Darwin. Darwin kept a diary throughout the voyage in which he recorded his daily activities, not only on board the ship but also during the several long journeys that he made on horseback in Patagonia and Chile. His entries tell the story of one of the most important scientific journeys ever made, with matchless immediacy and vivid descriptiveness.
I argue that quantum theory can, and in fact must, be applied to the Universe as a whole. After a general introduction, I discuss two concepts that are essential for my chain of arguments: the universality of quantum theory and the emergence of classical behaviors by decoherence. A further motivation is given by the open problem of quantum gravity. I then present the main ingredients of quantum cosmology and discuss their relevance for the interpretation of quantum theory. I end with some brief epistemological remarks.
This is the first comprehensive evaluation of Charles Taylor's work and a major contribution to leading questions in philosophy and the human sciences as they face an increasingly pluralistic age. Charles Taylor is one of the most influential contemporary moral and political philosophers: in an era of specialisation he is one of the few thinkers who have developed a comprehensive philosophy which speaks to the conditions of the modern world in a way that is compelling to specialists in various disciplines. This collection of specially commissioned essays brings together twelve distinguished scholars from a variety of fields to discuss Taylor's work critically. The topics range from the history of philosophy to truth, modernity and postmodernity, theism, interpretation, the human sciences, liberalism, pluralism and difference. Taylor responds to all the contributions and re-articulates his own views.
Prominent philosophers have argued that contradictions contain either too much or too little information to be useful. We dispute this with what we call the “Paradox of the Two Firefighters.” Suppose you are awakened in your hotel room by a fire alarm. You open the door. You see three possible ways out: left, right, straight ahead. You see two firefighters. One says there is exactly one safe route and it is to your left. The other says there is exactly one safe route and it is to your right. While the two firefighters are giving you contradictory information, they are also both giving you the perhaps useful information that there is a safe way out and it is not straight ahead. We give two analyses. The first uses the “Opinion Tetrahedron,” introduced by Dunn as a generalization of Audun Jøsang’s “Opinion Triangle.” The Opinion Tetrahedron in effect embeds the values of the “Belnap-Dunn 4-valued Logic” into a context of subjective probability generalized to allow for degrees of belief, disbelief, and two kinds of uncertainty—that in which the reasoner has too little information and that in which the reasoner has too much information. Jøsang had only a single value for uncertainty. We also present an alternative solution, again based on subjective probability but of a more standard type. This solution builds upon “linear opinion pooling.” Kiefer had already developed apparatus for assessing risk using expert opinion, and this influences the second solution. Finally, we discuss how these solutions might apply to “Big Data” and the World Wide Web.
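Linear opinion pooling, the device behind the second solution, is simple enough to sketch: the pooled probability of each outcome is a weighted average of the experts' probabilities. A minimal illustration; the encoding of the firefighters' reports and all names are ours, not the paper's:

    def linear_opinion_pool(opinions, weights):
        # Weighted average of expert probability assignments over the same
        # outcome space; the weights must sum to 1.
        assert abs(sum(weights) - 1.0) < 1e-9
        return {outcome: sum(w * p[outcome] for w, p in zip(weights, opinions))
                for outcome in opinions[0]}

    # The two firefighters, equally weighted (a toy encoding):
    ff1 = {"left": 1.0, "right": 0.0, "ahead": 0.0}
    ff2 = {"left": 0.0, "right": 1.0, "ahead": 0.0}
    pooled = linear_opinion_pool([ff1, ff2], [0.5, 0.5])
    # pooled == {"left": 0.5, "right": 0.5, "ahead": 0.0}: the contradiction
    # washes out to 50/50 on left vs. right, but the shared content survives:
    # probability 0 that the safe route is straight ahead.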
The compact and, with ${\sim}4\times10^{6}\,M_{\odot}$, very massive object located at the center of the Milky Way is currently the very best candidate for a supermassive black hole (SMBH) in our immediate vicinity. The strongest evidence for this is provided by measurements of stellar orbits, variable X-ray emission, and strongly variable polarized near-infrared emission from the location of the radio source Sagittarius A* (SgrA*) in the middle of the central stellar cluster. Simultaneous near-infrared and X-ray observations of SgrA* have revealed insights into the emission mechanisms responsible for the powerful near-infrared and X-ray flares from within a few tens to one hundred Schwarzschild radii of such a putative SMBH. If SgrA* is indeed an SMBH it will, in projection onto the sky, have the largest event horizon and will certainly be the first and most important target for the very long baseline interferometry observations currently being prepared by the Event Horizon Telescope. These observations, in combination with the infrared interferometry experiment GRAVITY at the Very Large Telescope Interferometer and other experiments across the electromagnetic spectrum, might yield proof of the presence of a black hole at the center of the Milky Way. The large body of evidence continues to discriminate the identification of SgrA* as an SMBH from alternative possibilities. It is, however, unclear when the ever-mounting evidence for SgrA* being associated with an SMBH will suffice as a convincing proof. Additional compelling evidence may come from future gravitational wave observatories. This manuscript reviews the observational facts, theoretical grounds, and conceptual aspects of the case for SgrA* being a black hole. We treat theory and observations in the framework of the philosophical discussions about “realism and underdetermination”, as this line of argument allows us to describe the situation in observational astrophysics with respect to supermassive black holes. Questions concerning the existence of supermassive black holes, and in particular SgrA*, are discussed using causation as an indispensable element. We show that the results of our investigation are convincingly mapped out by this combination of concepts.
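The claim about the largest projected event horizon follows from simple arithmetic on standard values, which are assumed here rather than taken from the review:

    # Back-of-envelope angular size of SgrA*'s horizon (standard values).
    G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8            # speed of light, m/s
    M_SUN = 1.989e30       # solar mass, kg
    PC = 3.086e16          # parsec, m

    M = 4.0e6 * M_SUN      # SgrA* mass, roughly 4 million solar masses
    D = 8.0e3 * PC         # distance to the Galactic Center, roughly 8 kpc

    r_s = 2 * G * M / c**2           # Schwarzschild radius, ~1.2e10 m (~0.08 AU)
    theta_rad = 2 * r_s / D          # angular diameter of the Schwarzschild sphere
    theta_muas = theta_rad * 206265 * 1e6   # radians -> microarcseconds
    print(round(theta_muas, 1))      # ~20 microarcseconds; the lensed "shadow"
                                     # is larger by a factor of about 2.6 (~50 muas)

No other known black hole candidate combines mass and proximity to give a larger angular scale, which is why SgrA* is the prime very long baseline interferometry target.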
The least element 0 of a finite meet semi-distributive lattice is a meet of meet-prime elements. We investigate conditions under which the least element of an algebraic, meet semi-distributive lattice is a meet of meet-prime elements. For example, this is true if the lattice has only countably many compact elements, or if $|L|<2^{\aleph_{0}}$, or if L is in the variety generated by a finite meet semi-distributive lattice. We give an example of an algebraic, meet semi-distributive lattice that has no meet-prime element or join-prime element. This lattice L has $|L|=|L_{c}|=2^{\aleph_{0}}$, where $L_{c}$ is the set of compact elements of L.
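The finite case stated in the first sentence can be checked by brute force. A minimal sketch for finite lattices; the encoding and the worked example are illustrative, with meet-primality tested directly from the definition:

    from itertools import product
    from math import gcd

    def is_meet_prime(p, elems, meet, leq, top):
        # p is meet-prime iff p != top and, whenever meet(a, b) <= p,
        # already a <= p or b <= p.
        if p == top:
            return False
        return all(leq(a, p) or leq(b, p)
                   for a, b in product(elems, repeat=2)
                   if leq(meet(a, b), p))

    def zero_is_meet_of_meet_primes(elems, meet, leq, bottom, top):
        # Finite case of the result: the least element is the meet of
        # all meet-prime elements.
        acc = top
        for p in elems:
            if is_meet_prime(p, elems, meet, leq, top):
                acc = meet(acc, p)
        return acc == bottom

    # Toy example: divisors of 12 under divisibility (meet = gcd), a finite
    # distributive, hence meet semi-distributive, lattice.
    divisors = [1, 2, 3, 4, 6, 12]
    print(zero_is_meet_of_meet_primes(
        divisors, meet=gcd, leq=lambda a, b: b % a == 0, bottom=1, top=12))
    # True: the meet-primes here are 3, 4 and 6, and gcd(3, 4, 6) = 1.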
The Peirce Edition contains large sections of previously unpublished material in addition to selected published works. Each volume includes a brief historical and biographical introduction, extensive editorial and textual notes, and a full chronological list of all of Peirce’s writings, published and unpublished, during the period covered.
The law tends to think that there is no difficulty about identifying humans. When someone is born, her name is entered into a statutory register. She is ‘X’ in the eyes of the law. At some point, ‘X’ will die and her name will be recorded in another register. If anyone suggested that the second X was not the same as the first, the suggestion would be met with bewilderment. During X's lifetime, the civil law assumed that the X who entered into a contract was the same person who breached it. The criminal law assumed that X, at the age of 80, was liable for criminal offences ‘she’ committed at the age of 18. This accords with the way we talk. ‘She's not herself today’, we say; or ‘When he killed his wife he wasn't in his right mind’. The intuition has high authority: ‘To thine own self be true’, urged Polonius. It sounds as if we believe in souls – immutable, core essences that constitute our real selves. Medicine conspires in the belief. If you become mentally ill, a psychiatrist will seek to get you back to your right mind. The Mental Capacity Act 2005 states that when a patient loses capacity the only lawful interventions will be interventions which are in that patient's best interests, and that in determining what those interests are the decision-maker must have ….
Charles Taylor’s idea of “deep diversity” has played a major role in the debates around multiculturalism in Canada and around the world. Originally, the idea was meant to account for how the different national communities within Canada – those of the English-speaking Canadians, the French-speaking Quebeckers, and the Aboriginals – conceive of their belonging to the country in different ways. But Taylor conceives of these differences strictly in terms of irreducibility; that is, he fails to see that they also exist in such a way that the country cannot be said to form a unified whole. After giving an account of the philosophical as well as religious reasons behind his position, the chapter goes on to describe some of its political implications.