I identify two reasons for believing in the objectivity of mathematical knowledge: apparent objectivity and applications in science. Focusing on arithmetic, I analyze how well platonism and cognitive nativism explain these two reasons. After establishing that both theories run into difficulties, I present an alternative epistemological account that combines the theoretical frameworks of enculturation and cumulative cultural evolution. I show that this account can explain why arithmetical knowledge appears to be objective and has scientific applications. Finally, I argue that, while this account is compatible with platonist metaphysics, it does not require postulating mind-independent mathematical objects.
Beck presents an outline of the bootstrapping of integer concepts, with the purpose of explicating Carey's account. According to that theory, integer concepts are acquired through a process of inductive and analogical reasoning based on the object tracking system (OTS), which allows individuating objects in a parallel fashion. Discussing the bootstrapping theory, Beck dismisses what he calls the "deviant-interpretation challenge"—the possibility that the bootstrapped integer sequence does not follow a linear progression after some point—as being general to any account of inductive learning. While Carey's and Beck's accounts focus on the OTS, in this paper I want to reconsider the importance of another empirically well-established cognitive core system for treating numerosities, namely the approximate number system (ANS). Since the ANS-based account offers a potential alternative for integer concept acquisition, I show that it provides a good reason to revisit the deviant-interpretation challenge. Finally, I present a hybrid OTS-ANS model as the foundation of integer concept acquisition and the framework of enculturation as a solution to the challenge.
The basic human ability to treat quantitative information can be divided into two parts. With proto-arithmetical ability, based on the core cognitive abilities for subitizing and estimation, numerosities can be treated only in a limited and/or approximate manner. With arithmetical ability, numerosities are processed (counted, operated on) systematically in a discrete, linear, and unbounded manner. In this paper, I study the theory of enculturation as presented by Menary (2015) as a possible explanation of how we make the move from proto-arithmetical ability to arithmetic proper. I argue that enculturation based on neural reuse provides a theoretically sound and fruitful framework for explaining this development. However, I show that a comprehensive explanation must be based on valid theoretical distinctions and involve several stages in the development of arithmetical knowledge. I provide an account that meets these challenges and thus leads to a better understanding of the subject of enculturation.
Recent years have seen an explosion of empirical data concerning arithmetical cognition. In this paper that data is taken to be philosophically important, and an outline for an empirically feasible epistemological theory of arithmetic is presented. The epistemological theory is based on the empirically well-supported hypothesis that our arithmetical ability is built on a proto-arithmetical ability to categorize observations in terms of quantities, which we have already as infants and share with many nonhuman animals. It is argued here that arithmetical knowledge developed in such a way cannot be totally conceptual in the sense relevant to the philosophy of arithmetic, but neither can arithmetic be understood as empirical. Rather, we need to develop a contextual a priori notion of arithmetical knowledge that preserves the special mathematical characteristics without ignoring the roots of arithmetical cognition. Such a contextual a priori theory is shown not to require any ontologically problematic assumptions, in addition to fitting well within a standard framework of general epistemology.
In this paper I study the development of arithmetical cognition with a focus on metaphorical thinking. In an approach building on Lakoff and Núñez, I propose one particular conceptual metaphor, the Process → Object Metaphor, as a key element in understanding the development of mathematical thinking.
Hutto and Myin have proposed an account of radically enactive (or embodied) cognition (REC) as an explanation of cognitive phenomena, one that does not include mental representations or mental content in basic minds. Recently, Zahidi and Myin have presented an account of arithmetical cognition that is consistent with the REC view. In this paper, I first evaluate the feasibility of that account by focusing on the evolutionarily developed proto-arithmetical abilities and whether empirical data on them support the radical enactivist view. I argue that although more research is needed, it is at least possible to develop the REC position consistently with state-of-the-art empirical research on the development of arithmetical cognition. After this, I move the focus to the question of whether the radical enactivist account can explain the objectivity of arithmetical knowledge. Against the realist view suggested by Hutto, I argue that objectivity is best explained by analyzing the way universal proto-arithmetical abilities determine the development of arithmetical cognition.
Linnebo (2018) argues that abstract objects like numbers are "thin" because they are only required to be referents of singular terms in abstraction principles, such as Hume's principle. Since the existence claims they specify are made by analytic truths (the abstraction principles), their existence does not make any substantial demands on the world; however, as Linnebo notes, there is a potential counter-argument concerning infinite regress against introducing objects this way. Against this, he argues that vicious regress is avoided in the account of arithmetic based on Hume's principle because we specify numbers in terms of the concept of equinumerosity, or its ordinal equivalent. But far from being only a matter for philosophy, this implies a distinct empirical prediction: in cognitive development, the principle of equinumerosity is prior to number concepts. However, by analysing and expanding on Carey's (2009) bootstrapping theory, I argue in this paper that there are good reasons to think that the development could go the other way around: possessing numerosity concepts may precede grasping the principle of equinumerosity. I propose that this analysis of early numerical cognition can also help us understand what numbers as thin objects are like, moving away from platonist interpretations.
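For orientation, Hume's principle and the second-order definition of equinumerosity it relies on can be stated as follows; the notation is the standard one from the neo-Fregean literature, not taken from the abstract itself:

```latex
% Hume's principle: the number of Fs is identical to the number
% of Gs if and only if F and G are equinumerous.
\[
  \#F = \#G \;\longleftrightarrow\; F \approx G
\]
% Equinumerosity (F \approx G) is definable in pure second-order
% logic as the existence of a relation R pairing the Fs and the
% Gs one-to-one:
\[
  F \approx G \;:\equiv\; \exists R\, \bigl[\, \forall x\, (Fx \rightarrow \exists! y\, (Gy \wedge Rxy))
  \;\wedge\; \forall y\, (Gy \rightarrow \exists! x\, (Fx \wedge Rxy)) \,\bigr]
\]
```

Because the right-hand side mentions only equinumerosity and not numbers themselves, it is this definitional order that the abstract's empirical prediction concerns: whether grasping equinumerosity really does precede number concepts in development.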
Marr’s seminal distinction between computational, algorithmic, and implementational levels of analysis has inspired research in cognitive science for more than 30 years. According to a widely used paradigm, the modelling of cognitive processes should mainly operate on the computational level and be targeted at the idealised competence, rather than the actual performance, of cognisers in a specific domain. In this paper, we explore how this paradigm can be adopted and revised to understand mathematical problem solving. The computational-level approach applies methods from computational complexity theory and focuses on optimal strategies for completing cognitive tasks. However, human cognitive capacities in mathematical problem solving are essentially characterised by processes that are computationally sub-optimal, because they initially add to the computational complexity of the solutions. Yet these solutions can be optimal for human cognisers, given the acquisition and enactment of mathematical practices. Here we present diagrams and the spatial manipulation of symbols as two examples of problem solving strategies that can be computationally sub-optimal but humanly optimal. These aspects need to be taken into account when analysing competence in mathematical problem solving. Empirically informed considerations on enculturation can help identify, explore, and model the cognitive processes involved in problem solving tasks. The enculturation account of mathematical problem solving strongly suggests that computational-level analyses need to be complemented by considerations on the algorithmic and implementational levels. The emerging research strategy can help develop algorithms that model what we call enculturated cognitive optimality in an empirically plausible and ecologically valid way.
In this paper I develop a philosophical account of actual mathematical infinity that does not demand ontologically or epistemologically problematic assumptions. The account is based on a simple metaphor in which we think of indefinitely continuing processes as defining objects. It is shown that such a metaphor is valid in terms of mathematical practice, as well as in line with empirical data on arithmetical cognition.
Following Marr’s famous three-level distinction between explanations in cognitive science, it is often accepted that the focus in modeling cognitive tasks should be on the computational level rather than the algorithmic level. When it comes to mathematical problem solving, this approach suggests that the complexity of the task of solving a problem can be characterized by the computational complexity of that problem. In this paper, I argue that human cognizers use heuristic and didactic tools and thus engage in cognitive processes that make their problem solving algorithms computationally suboptimal, in contrast with the optimal algorithms studied in the computational approach. Therefore, in order to accurately model the human cognitive tasks involved in mathematical problem solving, we need to expand our methodology to also include aspects relevant to the algorithmic level. This allows us to study algorithms that are cognitively optimal for human problem solvers. Since problem solving methods are not universal, I propose that they should be studied in the framework of enculturation, which can explain the expected cultural variance in the humanly optimal algorithms. While mathematical problem solving is used as the case study, the considerations in this paper concern the modeling of cognitive tasks in general.
In computational complexity theory, decision problems are divided into complexity classes based on the amount of computational resources required for algorithms to solve them. In theoretical computer science, it is commonly accepted that only functions for solving problems in the complexity class P, solvable by a deterministic Turing machine in polynomial time, are tractable. In cognitive science and philosophy, this tractability criterion has been used to argue that only functions in P can feasibly work as computational models of human cognitive capacities. One interesting area of computational complexity theory is descriptive complexity, which connects the expressive strength of systems of logic with the computational complexity classes. In descriptive complexity theory, it is established that only first-order systems are connected to P, or one of its subclasses. Consequently, second-order systems of logic are considered to be computationally intractable, and may therefore seem to be unfit to model human cognitive capacities. This would be problematic when we think of the role of logic as the foundation of mathematics. In order to express many important mathematical concepts and systematically prove theorems involving them, we need a system of logic stronger than classical first-order logic. But if such a system is considered intractable, it means that the logical foundation of mathematics can be prohibitively complex for human cognition. In this paper I argue, however, that this problem is the result of an unjustified direct use of computational complexity classes in cognitive modelling. Placing my account in the recent literature on the topic, I argue that the problem can be solved by considering computational complexity for humanly relevant problem solving algorithms and input sizes.
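The closing point about humanly relevant input sizes can be illustrated with a minimal numerical sketch. The step-count functions below are my own toy examples, not taken from the paper: asymptotic class membership compares algorithms in the limit, while the inputs a human problem solver actually faces are small.

```python
def exponential_steps(n: int) -> int:
    """Step count of a hypothetical algorithm outside P (exponential time)."""
    return 2 ** n

def polynomial_steps(n: int) -> int:
    """Step count of a hypothetical algorithm in P (degree-10 polynomial time)."""
    return n ** 10

# At a humanly relevant input size, the asymptotically "intractable"
# algorithm needs far fewer steps than the "tractable" one:
small_n = 10
assert exponential_steps(small_n) < polynomial_steps(small_n)  # 1,024 vs 10^10

# Only in the limit does the ordering reverse, which is all that
# membership in a complexity class records:
large_n = 100
assert exponential_steps(large_n) > polynomial_steps(large_n)  # ~1.3*10^30 vs 10^20
```

The sketch shows why a direct reading of complexity classes can mislead in cognitive modelling: for the input sizes relevant to human problem solving, class membership alone does not determine which algorithm is the more feasible one.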
One of the most fundamental questions in the philosophy of mathematics concerns the relation between truth and formal proof. The position according to which the two concepts are the same is called deflationism, and the opposing viewpoint substantialism. In an important result of mathematical logic, Kurt Gödel proved in his first incompleteness theorem that all consistent formal systems containing arithmetic include sentences that can neither be proved nor disproved within that system. However, such undecidable Gödel sentences can be established to be true once we expand the formal system with Alfred Tarski's semantical theory of truth, as shown by Stewart Shapiro and Jeffrey Ketland in their semantical arguments for the substantiality of truth. According to them, in Gödel sentences we have an explicit case of true but unprovable sentences, and hence deflationism is refuted.

Against that, Neil Tennant has shown that instead of Tarskian truth we can expand the formal system with a soundness principle, according to which all provable sentences are assertable, and the assertability of Gödel sentences follows. This way, the relevant question is not whether we can establish the truth of Gödel sentences, but whether Tarskian truth is a more plausible expansion than a soundness principle. In this work I argue that this problem is best approached once we think of mathematics as the full human phenomenon, and not just as consisting of formal systems. When pre-formal mathematical thinking is included in our account, we see that Tarskian truth is in fact not an expansion at all. I claim that what proof is to formal mathematics, truth is to pre-formal thinking, and the Tarskian account of semantical truth mirrors this relation accurately.
However, the introduction of pre-formal mathematics is vulnerable to the deflationist counterargument that, while existing in practice, pre-formal thinking could still be philosophically superfluous if it does not refer to anything objective. Against this, I argue that all truly deflationist philosophical theories lead to the arbitrariness of mathematics. In all other philosophical accounts of mathematics there is room for a referent of pre-formal mathematics, and the expansion to Tarskian truth can be made naturally. Hence, if we reject the arbitrariness of mathematics, I argue in this work, we must accept the substantiality of truth. Related subjects such as neo-Fregeanism will also be covered, and shown not to remove the need for Tarskian truth.

The only remaining route for the deflationist is to change the underlying logic so that our formal languages can include their own truth predicates, which Tarski showed to be impossible for classical first-order languages. With such logics we would have no need to expand the formal systems, and the above argument would fail. Of the alternative approaches, in this work I focus mostly on the Independence Friendly (IF) logic of Jaakko Hintikka and Gabriel Sandu. Hintikka has claimed that an IF language can include its own adequate truth predicate. I argue that while this is indeed the case, we cannot recognize the truth predicate as such within the same IF language, and the need for Tarskian truth remains. In addition to IF logic, second-order logic and Saul Kripke's approach using Kleenean logic will also be shown to fail in a similar fashion.
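The undecidable sentences and the competing expansions discussed above have standard formal presentations, reproduced here for orientation; the notation is the textbook one, not taken from the text itself:

```latex
% By the diagonal lemma, a consistent formal system F containing
% arithmetic yields a sentence G provably equivalent (in F) to
% the claim of its own unprovability in F:
\[
  F \vdash \; G \longleftrightarrow \neg\, \mathrm{Prov}_F(\ulcorner G \urcorner)
\]
% If F is consistent, F proves neither G nor its negation, so
% establishing G's truth requires expanding F. One option is a
% Tarskian truth predicate (Shapiro, Ketland); the other is a
% soundness (reflection) principle of the kind Tennant invokes:
\[
  \mathrm{Prov}_F(\ulcorner \varphi \urcorner) \rightarrow \varphi
\]
```

The dispute summarized in the abstract is precisely over which of these two expansions is the more plausible addition to the formal system.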
How is knowledge of geometry developed and acquired? This central question in the philosophy of mathematics has received very different answers. Spelke and colleagues argue for a "core cognitivist", nativist, view according to which geometric cognition is in an important way shaped by genetically determined abilities for shape recognition and orientation. Against the nativist position, Ferreirós and García-Pérez have argued for a "culturalist" account that takes geometric cognition to be fundamentally a culturally developed phenomenon. In this paper, I argue that, when understood as moderate versions supported by state-of-the-art research, the nativist and culturalist views are in fact possible to reconcile. While Ferreirós and García-Pérez present the work of Spelke and colleagues as implying that geometric cognition is genetically determined, I argue that they fail to appreciate the role that Spelke and colleagues see for cultural factors. On this basis, I provide theoretical and terminological clarifications and show that moderate versions of the nativist and culturalist views are in fact consistent with each other. I then propose a unifying theoretical framework for future study that can integrate the two accounts in ontogeny by moving beyond the crude nature (nativism) vs. nurture (culturalism) dichotomy.
In Pantsar (2014), an outline for an empirically feasible epistemological theory of arithmetic is presented. According to that theory, arithmetical knowledge is based on biological primitives but, in the resulting empirical context, develops an essentially a priori character. Such a contextual a priori theory of arithmetical knowledge can explain two of the three characteristics that are usually associated with mathematical knowledge: that it appears to be a priori and objective. In this paper it is argued that it can also explain the third one: why arithmetical knowledge appears to be necessary. A Kripkean analysis of necessity is used as an example to show that a proper analysis of the relevant possible worlds can explain arithmetical necessity in a sufficiently strong form.
Why would we want to develop artificial human-like arithmetical intelligence, when computers already outperform humans in arithmetical calculations? Aside from arithmetic consisting of much more than mere calculations, one suggested reason is that AI research can help us explain the development of human arithmetical cognition. Here I argue that this question needs to be studied already in the context of basic, non-symbolic, numerical cognition. Analyzing recent machine learning research on artificial neural networks, I show how AI studies could potentially shed light on the development of human numerical abilities, from the proto-arithmetical abilities of subitizing and estimating to counting procedures. Although the current results are far from conclusive and much more work is needed, I argue that AI research should be included in the interdisciplinary toolbox when we try to explain the development and character of numerical cognition and arithmetical intelligence. This makes it relevant also for the epistemology of mathematics.
In this paper, I study how mathematicians are presented in Western popular culture. I identify five stereotypes that I test on the best-known modern movies and television shows containing a significant amount of mathematics or important mathematician characters: (1) Mathematics is highly valued as an intellectual pursuit. (2) Little attention is given to the mathematical content. (3) Mathematical practice is portrayed in an unrealistic way. (4) Mathematicians are asocial and unable to enjoy normal life. (5) Higher mathematics is ...
One main challenge for non-platonist philosophy of mathematics is to account for the apparent objectivity of mathematical knowledge. Cole and Feferman have proposed accounts that aim to explain objectivity through the intersubjectivity of mathematical knowledge. In this paper, focusing on arithmetic, I argue that these accounts as such cannot explain the apparent objectivity of mathematical knowledge. However, with support from recent progress in the empirical study of the development of arithmetical cognition, a stronger argument can be provided. I show that since the development of arithmetic is (partly) determined by biologically evolved proto-arithmetical abilities, arithmetical knowledge can be understood as maximally intersubjective. This maximal intersubjectivity, I argue, can lead to the experience of objectivity, thus providing a solution to the problem of reconciling non-platonist philosophy of mathematics with the (apparent) objectivity of mathematical knowledge.
In this article I study the interpretations of Kurt Gödel's incompleteness theorems in philosophy. The subject covers a vast range of different interpretations, from artificial intelligence to physics and even poetry. I show that, under critical examination, all radical applications of the incompleteness theorems are mistaken.
In early analytic philosophy, one of the most central questions concerned the status of arithmetical objects. Frege argued against the popular conception that we arrive at natural numbers through a psychological process of abstraction. Instead, he wanted to show that arithmetical truths can be derived from the truths of logic, thus eliminating all psychological components. Meanwhile, Dedekind and Peano developed axiomatic systems of arithmetic. The differences between the logicist and axiomatic approaches turned out to be philosophical as well as mathematical. In this paper, I argue that Dedekind's approach can be seen as a precursor to modern structuralism and, as such, enjoys many advantages over Frege's logicism. I also show that, from a modern perspective, Frege's criticism of abstraction and psychologism is one-sided and fails against the psychological processes that modern research suggests to be at the heart of numerical cognition. The approach here is twofold. First, through historical analysis, I try to build a clear image of what Frege's and Dedekind's views on arithmetic were. Then, I consider those views from the perspective of modern philosophy of mathematics, and in particular the empirical study of arithmetical cognition. I aim to show that there is nothing to suggest that the axiomatic Dedekindian approach could not provide a perfectly adequate basis for the philosophy of arithmetic.
In the new millennium there have been important empirical developments in the philosophy of mathematics. One of these is the so-called "Empirical Philosophy of Mathematics" (EPM) of Buldt, Löwe, Müller and Müller-Hill, which aims to complement the methodology of the philosophy of mathematics with empirical work. Among other things, this includes surveys of mathematicians, which EPM takes to yield philosophically important results. In this paper I take a critical look at the sociological part of EPM as a case study of sociological approaches to the philosophy of mathematics, focusing on the most concrete development of EPM so far: a questionnaire-based study by Müller-Hill. I argue that the study has many problems and that the EPM conclusion that mathematical knowledge is context-dependent is unwarranted by the evidence. In addition, I consider the general justification and criteria for introducing sociological methods into the philosophy of mathematics. While surveys can give us important data about the philosophical views of mathematicians, there is no reason to believe that mathematicians have privileged access to philosophical questions concerning mathematics. In order to be philosophically relevant in the way EPM claims, the philosophical views of mathematicians cannot be assessed without considering the argumentation behind them.