In this introduction we discuss the motivation behind the workshop “Towards a New Epistemology of Mathematics” of which this special issue constitutes the proceedings. We elaborate on historical and empirical aspects of the desired new epistemology, connect it to the public image of mathematics, and give a summary and an introduction to the contributions to this issue.
We give characterizations for the sentences "Every $\Sigma^1_2$-set is measurable" and "Every $\Delta^1_2$-set is measurable" for various notions of measurability derived from well-known forcing partial orderings.
We describe the solution of the Limit Rule Problem of Revision Theory and discuss the philosophical consequences of the fact that the truth set of Revision Theory is a complete $\Pi^1_2$ set.
Maddy gave a semi-formal account of restrictiveness by defining a formal notion based on a class of interpretations and explaining how to handle false positives and false negatives. Recently, Hamkins pointed out some structural issues with Maddy's definition. We look at Maddy's formal definitions from the point of view of an abstract interpretation relation. We consider various candidates for this interpretation relation, including one that is close to Maddy's original notion but fixes the issues raised by Hamkins. Our work brings to light additional structural issues, which we also discuss.
Gupta-Belnap-style circular definitions use all real numbers as possible starting points of revision sequences. In that sense they are boldface definitions. We discuss lightface versions of circular definitions and boldface versions of inductive definitions.
We argue that mathematical knowledge is context dependent. Our main argument is that, on pain of distorting mathematical practice, one must analyse the notion of having available a proof, which supplies justification in mathematics, in a context-dependent way.
The distinction between data and phenomena introduced by Bogen and Woodward (Philosophical Review 97(3):303–352, 1988) was meant to help account for scientific practice, especially in relation to scientific theory testing. Their article and the subsequent discussion are primarily viewed as internal to the philosophy of science. We shall argue that the data/phenomena distinction can be used much more broadly in modelling processes in philosophy.
Hamkins and Löwe proved that the modal logic of forcing is S4.2. In this paper, we consider its modal companion, the intermediate logic KC, and relate it to the fatal Heyting algebra $H_{\mathrm{ZFC}}$ of forcing-persistent sentences. This Heyting algebra is equationally generic for the class of fatal Heyting algebras. Motivated by these results, we further analyse the class of fatal Heyting algebras.
We look at bimodal logics interpreted by cartesian products of topological spaces and discuss the validity of certain bimodal formulae in products of so-called cardinal spaces. This solves an open problem of van Benthem et al.
Maddy's notion of restrictiveness has many problematic aspects, one of them being that it is almost impossible to show that a theory is not restrictive. In this note the author addresses a crucial question of Martin Goldstern (Vienna) and points to some directions of future research.
We generalize Solovay's unfolding technique for infinite games and use an Unfolding Theorem to give a uniform method to prove that all analytic sets are in the $\sigma$-algebras of measurability connected with well-known forcing notions.
We investigate Turing cones as sets of reals and examine the relationship between Turing cones, measures, Baire category, and special sets of reals. Using these methods, we show that Martin's proof of Turing Determinacy (every determined Turing-closed set either contains a Turing cone or is disjoint from one) does not go through when “determined” is replaced with “Blackwell determined”. This answers a question of Tony Martin.
The theory of infinite games with slightly imperfect information has been developed for games with finitely and countably many moves. In this paper, we shift the discussion to games with uncountably many possible moves, introducing the axiom of real Blackwell determinacy ${\mathsf{Bl-AD}_\mathbb{R}}$ (as an analogue of the axiom of real determinacy ${\mathsf{AD}_\mathbb{R}}$). We prove that the consistency strength of ${\mathsf{Bl-AD}_\mathbb{R}}$ is strictly greater than that of AD.
We work under the assumption of the Axiom of Determinacy and associate a measure to each cardinal $\kappa < \aleph_{\varepsilon_0}$ via a recursive definition of a canonical measure assignment. We give algorithmic applications of the existence of such a canonical measure assignment (computation of cofinalities, computation of the Kleinberg sequences associated to the normal ultrafilters on all projective ordinals).
We do not believe that logic is the sole answer to deep and intriguing questions about human behaviour, but we think that it might be a useful tool in simulating and understanding it to a certain degree and in specifically restricted areas of application. We do not aim to resolve the question of what rational behaviour in games with mistaken and changing beliefs is. Rather, we develop a formal and abstract framework that allows us to reason about behaviour in games with mistaken and changing beliefs, leaving aside normative questions concerning whether the agents are behaving “rationally”; we focus on what agents do in a game. In this paper, we are not concerned with the reasoning process of the economic agent; rather, our intended application is artificial agents, e.g., autonomous agents interacting with a human user or with each other as part of a computer game or in a virtual world. We give a story of mistaken beliefs that is a typical example of the situation in which we should want our formal setting to be applied. Then we give the definitions for our formal system and show how to use this setting to obtain a backward induction solution. We then apply our semantics to the story related earlier and give an analysis of it. Our final section contains a discussion of related work and future projects. We discuss the advantages of our approach over existing approaches and indicate how it can be connected to the existing literature.
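The backward induction solution mentioned in this abstract can be illustrated, in a purely schematic way, by the following sketch. It operates on a toy zero-sum game tree rather than the paper's formal framework; the representation of nodes and the example tree are our own assumptions, not taken from the paper.

```python
def backward_induction(node):
    """Compute the backward induction value of a finite perfect-information,
    zero-sum game tree, from the perspective of player 1 (the maximiser).

    A node is either an int (a terminal payoff) or a pair
    (player, subtrees), where player is 1 or 2 and subtrees is a
    non-empty list of child nodes.  This encoding is illustrative only.
    """
    if isinstance(node, int):
        return node  # terminal node: its payoff is its value
    player, subtrees = node
    values = [backward_induction(t) for t in subtrees]
    # Player 1 maximises the payoff; player 2 minimises it.
    return max(values) if player == 1 else min(values)


# Toy example: player 1 moves first, player 2 responds.
tree = (1, [(2, [3, 5]),   # left branch: player 2 can force payoff 3
            (2, [4, 1])])  # right branch: player 2 can force payoff 1
```

Here player 1's best choice is the left branch, since player 2 minimises within each branch, so the value of the game is 3.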
We define a parametrised choice principle PCP which is equivalent to the Axiom of Determinacy. PCP describes the difference between these two axioms and could serve as a means of proving Martin's conjecture on the equivalence of these axioms.
This collection of papers from the workshop serves as the initial volume in the new series Texts in Logics and Games, touching on research in logic, mathematics, computer science, and game theory.