From classical mechanics to quantum field theory, the physical facts at one point in space are held to be independent of those at other points in space. I propose that we can usefully challenge this orthodoxy in order to explain otherwise puzzling correlations at both cosmological and microscopic scales.
This book gives a comprehensive overview of central themes of finite model theory (expressive power, descriptive complexity, and zero-one laws) together with selected applications relating to database theory and artificial intelligence, especially constraint databases and constraint satisfaction problems. The final chapter provides a concise modern introduction to modal logic, emphasizing the continuity in spirit and technique with finite model theory. This underlying spirit involves the use of various fragments of and hierarchies within first-order, second-order, fixed-point, and infinitary logics to gain insight into phenomena in complexity theory and combinatorics. The book emphasizes the use of combinatorial games, such as extensions and refinements of the Ehrenfeucht–Fraïssé pebble game, as a powerful way to analyze the expressive power of such logics, and illustrates how deep notions from model theory and combinatorics, such as o-minimality and treewidth, arise naturally in the application of finite model theory to database theory and AI. Students of logic and computer science will find here the tools necessary to embark on research into finite model theory, and all readers will experience the excitement of a vibrant area of the application of logic to computer science.
Special relativity is said to prohibit faster-than-light (superluminal) signaling, yet controversy regularly arises as to whether this or that physical phenomenon violates the prohibition. I argue that the controversy is a result of a lack of clarity as to what it means to ‘signal’, and I propose a criterion. I show that according to this criterion, superluminal signaling is not prohibited by special relativity.
Anthropic arguments in multiverse cosmology and string theory rely on the weak anthropic principle (WAP). We show that the principle, though ultimately a tautology, is nevertheless ambiguous. It can be reformulated in one of two unambiguous ways, which we refer to as WAP_1 and WAP_2. We show that WAP_2, the version most commonly used in anthropic reasoning, makes no physical predictions unless supplemented by a further assumption of "typicality", and we argue that this assumption is both misguided and unjustified. WAP_1, however, requires no such supplementation; it directly implies that any theory that assigns a non-zero probability to our universe predicts that we will observe our universe with probability one. We argue, therefore, that WAP_1 is preferable, and note that it has the benefit of avoiding the inductive overreach characteristic of much anthropic reasoning.
This paper examines some common measures of complexity, structure, and information, with an eye toward understanding the extent to which complexity or information‐content may be regarded as objective properties of individual objects. A form of contextual objectivity is proposed which renders the measures objective, and which largely resolves the puzzle of Maxwell's Demon.
Special relativity is said to prohibit faster-than-light (superluminal) signalling, yet controversy regularly arises as to whether this or that physical phenomenon violates the prohibition. I argue that the controversy is a result of a lack of clarity as to what it means to 'signal', and I propose a criterion. I show that although we have no reason to think that one can send signals faster than light, this is not prohibited by special relativity.
In this paper we consider a naive conception of what a quantum theory of gravity might entail: a quantum-mechanically fluctuating gravitational field at each spacetime point. We argue that this idea is problematic both conceptually and technically.
Gauge theories are theories that are invariant under a characteristic group of "gauge" transformations. General relativity is invariant under transformations of the diffeomorphism group. This has prompted many philosophers and physicists to treat general relativity as a gauge theory, and diffeomorphisms as gauge transformations. I argue that this approach is misguided.
General relativity is commonly thought to imply the existence of a unique metric structure for space-time. A simple example is presented of a general relativistic theory with ambiguous metric structure. Brans-Dicke theory is then presented as a further example of a space-time theory in which the metric structure is ambiguous. Other examples of theories with ambiguous metrical structure are mentioned. Finally, it is suggested that several new and interesting philosophical questions arise from the sorts of theories discussed.
David Albert and Barry Loewer have proposed a new interpretation of quantum mechanics which they call the Many Minds interpretation, according to which there are infinitely many minds associated with a given (physical) state of a brain. This interpretation is related to the family of many worlds interpretations insofar as it assumes strictly unitary (Schrödinger) time-evolution of quantum-mechanical systems (no reduction of the wave-packet). The Many Minds interpretation itself is principally motivated by an argument which purports to show that the assumption of unitary evolution, along with some common sense assumptions about mental states (specifically, beliefs), leads to a certain nonphysicalism, in which there is a many-to-one correspondence between minds and brains. In this paper, I critically examine this motivating argument, and show that it depends on a mistaken assumption regarding the correspondence between projection operators and yes/no questions.
A criterion of adequacy is proposed for theories of relevant consequence. According to the criterion, scientists whose deductive reasoning is limited to some proposed subset of the standard consequence relation must not thereby suffer a reduction in scientific competence. A simple theory of relevant consequence is introduced and shown to satisfy the criterion with respect to a formally defined paradigm of empirical inquiry.
A paradigm of scientific discovery is defined within a first-order logical framework. It is shown that within this paradigm there exists a formal scientist that is Turing computable and universal in the sense that it solves every problem that any scientist can solve. It is also shown that universal scientists exist for no regular logics that extend first-order logic and satisfy the Löwenheim-Skolem condition.
A model of idealized scientific inquiry is presented in which scientists are required to infer the nature of the structure that makes true the data they examine. A necessary and sufficient condition is presented for scientific success within this paradigm.
Alternative models of idealized scientific inquiry are investigated and compared. Particular attention is devoted to paradigms in which a scientist is required to determine the truth of a given sentence in the structure giving rise to his data.
This paper provides a mathematical model of scientific discovery. It is shown in the context of this model that any discovery problem that can be solved by a computable scientist can be solved by a computable scientist all of whose conjectures are finitely axiomatizable theories.
To be pertinent to democratic practice, collective choice functions need not apply to all possible constellations of individual preference, but only to those that are humanly possible in an appropriate sense. The present paper develops a theory of humanly possible preference within the context of the mathematical theory of learning. The theory of preference is then exploited in an attempt to resolve Arrow's voting paradox through restriction of the domain of majoritarian choice functions.